This post is a follow-up to a post from last Friday night, entitled Calling Balls and Strikes Against Catchers. Given the timing, you might have missed that post, so you should read through it for some background. Or you can skip reading through it, since I’m about to give you a quick summary. Within that post, I presented some evidence, based on 2012 PITCHf/x data, that catchers were given more generous strike zones while batting than non-catchers. That is, umpires called fewer strikes on catchers than you’d expect, and the difference in rates for catchers and for non-catchers came out to about one strike per 100 called pitches.
I pursued it off a tip in the comments, and I found the results to be of some interest. However, there were also some potential sources of error. I looked only at 2012, and I didn't even look at 2012's complete picture, limiting myself instead to regulars and semi-regulars. I decided this was worth digging into a little deeper, so I called on Dark Overlord David Appelman to supply me with a more complete dataset. What's presented below is far more thorough, and therefore, far more acceptable.
I left behind 2012, and chose to focus instead on the entire reliable PITCHf/x Era, stretching from 2012 back to 2008. I asked Appelman for pitch and strike data for all players at all positions, because I figured, as long as I’m checking the catchers, I might as well check everyone. So we’ve got results for catchers, first basemen, second basemen, and so on and so forth, with pitchers and designated hitters included.
I've explained the formula for figuring out expected strikes many times before. Based on available plate-discipline data, we've got the rate of pitches in the strike zone, and we've got the rate of swings at pitches out of the strike zone. A pitch in the zone should wind up a strike no matter what, and a pitch out of the zone should wind up a strike only if the batter chases, so those two rates give you an expected-strikes total. From there, you can find the difference between expected strikes and actual strikes. Then I prefer to put that difference over a denominator of 1,000 called pitches.
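For the curious, the method can be sketched in a few lines of Python. Every number below is hypothetical, just to show the mechanics; nothing here comes from the actual PITCHf/x data:

```python
def expected_strikes(pitches, zone_rate, o_swing_rate):
    """Expected strikes: every pitch in the zone should be a strike
    (called or swung at), plus out-of-zone pitches the batter chased."""
    in_zone = pitches * zone_rate
    out_of_zone = pitches * (1 - zone_rate)
    return in_zone + out_of_zone * o_swing_rate

def strike_diff_per_1000_called(actual_strikes, pitches, zone_rate,
                                o_swing_rate, called_pitches):
    """Actual minus expected strikes, scaled per 1,000 called pitches."""
    diff = actual_strikes - expected_strikes(pitches, zone_rate, o_swing_rate)
    return 1000.0 * diff / called_pitches

# Hypothetical batter: 2,000 pitches seen, 45% in the zone, 30% chase
# rate, 1,200 called pitches, 1,180 actual strikes.
print(round(strike_diff_per_1000_called(1180, 2000, 0.45, 0.30, 1200), 1))
```

A negative result means the batter got fewer strikes than expected, i.e., a more generous zone.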
Between 2008 and 2012, the league average was about 20 fewer strikes than expected per 1,000 called pitches. This would read as "-20". The league average doesn't come out to zero because human umpires are imperfect, and the PITCHf/x strike zones are imperfect, too. So once I got a column of results, I subtracted that -20 baseline from each result, yielding an adjusted result above or below zero. This does nothing to change the actual differences between positions. I can't tell if I've explained this well enough yet, but you probably follow, so I'm just going to get to the table. Here we find the adjusted difference between actual strikes and expected strikes, per 1,000 called pitches, broken down by position, covering 2008-2012:
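Concretely, the adjustment just shifts every position by the same constant. A toy example in Python, with made-up raw values (the real table's numbers are not reproduced here):

```python
# League average, 2008-2012: roughly -20 strikes vs. expected per 1,000
# called pitches. Subtracting that baseline centers the league on zero.
LEAGUE_AVG = -20.0

# Made-up raw results by position, for illustration only.
raw = {"C": -22.0, "3B": -22.0, "1B": -21.0, "P": 18.0}
adjusted = {pos: value - LEAGUE_AVG for pos, value in raw.items()}

print(adjusted)
# Gaps between positions are untouched; only the baseline moves.
print(adjusted["P"] - adjusted["C"] == raw["P"] - raw["C"])  # True
```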
There's less to be found with catchers. Catchers have still been given slightly more generous strike zones while batting, but the difference is smaller than it looked last Friday night, and catchers come out looking the same here as third basemen. They're not markedly different from first basemen or second basemen. There might be some effect here, but it would be small, and we haven't adjusted for player height, player stance, or count. As the count gets more pitcher-friendly, the strike zone shrinks; as the count gets more hitter-friendly, the strike zone expands. The result we see for catchers doesn't strike me as being meaningful enough to pursue further. So, nuts to you, last Friday night.
But now look at the very bottom of the table. For pitchers, over the last five years, we have a sample of more than 100,000 pitches, and more than 56,000 called pitches. Nearly 75,000 of those pitches have been strikes. The expected strikes total, however, is below 74,000, such that we get a big difference. Setting the average at zero, pitchers have gotten one extra strike called on them per 25 called pitches. This isn’t enough to turn good-hitting pitchers into bad-hitting pitchers, but this is something I didn’t expect to find. I came into this examining catchers, and now I’m most interested in the result for their battery mates.
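Using just the rounded figures quoted above (the exact totals differ slightly), the back-of-the-envelope math lands in the same neighborhood:

```python
# Rounded figures for pitchers batting, 2008-2012, as quoted in the text.
actual_strikes = 75000    # "nearly 75,000"
expected = 74000          # "below 74,000"
called_pitches = 56000    # "more than 56,000"
league_avg = -20.0        # baseline, strikes vs. expected per 1,000 called

raw_diff = 1000.0 * (actual_strikes - expected) / called_pitches
adjusted = raw_diff - league_avg

print(round(adjusted, 1))  # roughly 38 extra strikes per 1,000 called,
                           # i.e., on the order of one per 25 called pitches
```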
A factor driving this could be the PITCHf/x strike zones not properly adjusting to pitcher batting stances. That’s just a guess on my part, but pitchers tend to be pretty upright, as you can see with Madison Bumgarner below in a shot from last year’s World Series:
If the zone isn't big enough to reflect reality, we might get the results we're seeing. But that might not explain everything, or anything. Keep in mind that pitchers often find themselves behind in the count when they're batting, so their effective strike zones should be a little smaller, not bigger. Maybe what we're observing is something systematic with PITCHf/x, or maybe it turns out umpires call bigger strike zones when pitchers are batting than they do when non-pitchers are batting. In that event, we could come up with any number of explanations, like "does it matter, they're all terrible, just get on with it." Umpires might feel like they're cheating batting pitchers less, since batting pitchers suck. Who's going to complain about a borderline strike when it's Tommy Hanson in the box?
We haven't proven anything, but by expanding the data sample, we've shifted our focus from catchers at the plate to pitchers at the plate. Instead of thinking catchers might be given more generous strike zones, now we have reason to believe pitchers might be given more disadvantageous strike zones when they're batting, against other pitchers, against whom they could take advantage of the strike zone right back when they're on the mound. It's all very complicated, and it might be nothing, but as always, it's probably worth a closer look yet. Somebody has to stand up for the pitchers at the plate.