As long as there are humans in charge of the strike zone, there are going to be inconsistencies. And as long as there are inconsistencies, there are going to be suspicions of bias.
Sometimes these suspicions are specific and a little paranoid; some fans believe umpires are biased against their favorite teams. Other times they're general and accepted without hard proof. For example, there's a common belief that pitchers considered aces get a bigger strike zone, that they're given the benefit of the doubt around the borders, and that this is simply part of the game, and there's nothing to be done about it, really.
But is this really a part of the game?
We can accept it, or we can investigate it, and we might as well investigate it before we decide whether or not to accept it. This wouldn't have been an option in the '90s, when people believed Greg Maddux and Tom Glavine were getting calls. But we've had PITCHf/x data for the past several years, and we can make use of it toward this end. What strike zones do aces get, relative to non-aces?
We'll cover the 2008-13 window. First, we must define an ace. This is subjective, but I'm going with a threshold of at least five Wins Above Replacement, based on runs allowed. In other words, an ace-level season is a season worth at least five WAR as a starting pitcher within the six-year window. This gives me a sample of 99 pitcher-seasons, or about 17 a year, which sounds fine to me. These seasons will be compared against other, inferior starting pitcher-seasons of at least 50 innings each.
The next step is more complicated, involving a little math. For every pitcher's season, we have to figure out what strike zone he pitched to. It turns out this is actually pretty simple. Over on FanGraphs, we offer PITCHf/x-based plate-discipline data. You can see the rate of pitches thrown in the strike zone, and you can see the rate of swings at pitches out of the strike zone. FanGraphs also offers raw strike and pitch totals. From all this information, one can calculate an "expected strike" total.
This can then be compared to the actual strike total. If a pitcher got more actual strikes than expected strikes, it can be said he pitched to a more favorable zone. If a pitcher got fewer actual strikes than expected strikes, it can be said he pitched to a less favorable zone. The theory here is that ace-level pitchers end up with more actual strikes than expected strikes, because they get more calls off the edges.
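The arithmetic behind the expected-strike total can be sketched like this. This is a simplified model, not the exact FanGraphs calculation: it treats every in-zone pitch as an expected strike and counts an out-of-zone pitch as an expected strike only when the batter chases it. The function name and the sample numbers are hypothetical.

```python
def expected_strikes(pitches, zone_rate, o_swing_rate):
    """Estimate a pitcher's strike total from plate-discipline rates.

    Simplifying assumption: a pitch in the zone produces a strike
    (called, swung at, or fouled), while a pitch out of the zone
    produces a strike only if the batter swings at it.
    """
    in_zone = pitches * zone_rate
    out_of_zone = pitches - in_zone
    return in_zone + out_of_zone * o_swing_rate

# Hypothetical pitcher-season: 3,000 pitches, 50% in the zone,
# 30% chase rate on pitches out of the zone
print(expected_strikes(3000, 0.50, 0.30))  # 1500 + 450 = 1950.0
```

Under this sketch, the gap between a pitcher's actual strike total and this expected total is what captures how favorable his zone was.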
For every pitcher-season, I calculated the difference between actual strikes and expected strikes, per 200 innings. So what do we find from the resulting data?
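The per-200-innings scaling described above might look like this; the function name and the sample figures are hypothetical, chosen only to illustrate the rate stat.

```python
def extra_strikes_per_200ip(actual_strikes, expected_strikes, innings):
    """Actual minus expected strikes, scaled to a 200-inning season.

    A positive number means the pitcher got more strikes than his
    plate-discipline rates would predict, i.e. a more favorable zone.
    """
    return (actual_strikes - expected_strikes) / innings * 200

# Hypothetical: 2,000 actual strikes vs. 1,950 expected over 220 innings
print(round(extra_strikes_per_200ip(2000, 1950, 220), 1))  # 45.5
```

Scaling to a fixed innings total puts 160-inning seasons and 240-inning seasons on the same footing before comparing aces to the rest of the pool.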