Baseball fans have been treated to incredible starting pitching performances in recent years, with several ace staffs leading their teams to regular-season and postseason success. Initially, I set out to examine the number of innings pitched by AL starting rotations because I expected that there would be a big disparity from team to team. And more specifically, I thought that the percentage of innings pitched by a team’s starting rotation would correlate positively to either its W-L record, or more likely, its Pythagorean W-L record.
I gathered five years of data (2009 – 2013 seasons) and calculated the Starting Pitcher Innings Pitched Percentage (SP IP%). This number is simply the number of innings a team’s starters pitched divided by the total innings the team pitched. If a starter was used in relief, those innings didn’t count. I only looked at AL teams, because I assumed that NL starting pitchers could be pulled from games prematurely for tactical, pinch-hitting purposes, while AL starters were likely to stay in games as long as they weren’t giving up runs, fatigued, or injured.
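For concreteness, the metric can be sketched as a short function. The figures below are illustrative, not drawn from the study's data.

```python
def sp_ip_pct(starter_innings, total_innings):
    """Starting Pitcher Innings Pitched Percentage (SP IP%).

    starter_innings: innings thrown by the team's pitchers while starting
    (innings a starter throws in relief are excluded).
    total_innings: all innings the team's pitching staff threw.
    """
    return 100.0 * starter_innings / total_innings

# Hypothetical team whose starters threw 960 of 1,450 total innings:
print(round(sp_ip_pct(960, 1450), 2))  # 66.21
```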
Two things struck me about the results:
1. There was little correlation between a team’s SP IP% and its W-L record, or between its SP IP% and its Pythagorean W-L record
2. The data showed little variance and was normally distributed
I looked at 71 AL team seasons from 2009 – 2013 and found that, on average, AL teams used starting pitchers for 66.8% of innings, with a standard deviation of 2.83%. The data followed a rather normal distribution, with teams’ SP IP% breaking down as follows:
| Standard Deviations | # of Teams | % of Total Teams |
|---------------------|------------|------------------|
| -2 or lower         | 2          | 2.82%            |
| -2 to -1            | 10         | 14.08%           |
| -1 to 0             | 22         | 30.99%           |
| 0 to 1              | 26         | 36.62%           |
| 1 to 2              | 10         | 14.08%           |
| 2 or higher         | 1          | 1.41%            |
Over two-thirds of the teams (48 of 71) fell within the range of 63.6 to 69.2 SP IP%, which is much less variance than I expected to find. And only three seasons fell outside the range of two standard deviations from the mean: two outliers on the negative end and one on the positive end. Those teams are:
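The bucketing behind the table is just a z-score computation. Here is a sketch using the standard library; the sample values are hypothetical, not the study's actual 71 team-seasons.

```python
import statistics
from collections import Counter

def sd_buckets(values):
    """Count values by how many standard deviations they sit from the mean."""
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)  # population SD, treating the sample as a census
    buckets = Counter()
    for v in values:
        z = (v - mean) / sd
        if z < -2:
            buckets["-2 or lower"] += 1
        elif z < -1:
            buckets["-2 to -1"] += 1
        elif z < 0:
            buckets["-1 to 0"] += 1
        elif z < 1:
            buckets["0 to 1"] += 1
        elif z < 2:
            buckets["1 to 2"] += 1
        else:
            buckets["2 or higher"] += 1
    return buckets

# Hypothetical SP IP% values for nine team-seasons (illustrative only):
sample = [60.1, 63.5, 65.0, 66.2, 66.8, 67.4, 68.9, 70.3, 73.0]
print(sd_buckets(sample))
```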
2013 Minnesota Twins: 60.06 SP IP%
2013 Chicago White Sox: 60.25 SP IP%
2011 Tampa Bay Rays: 73.02 SP IP%
Taken at the extremes, these numbers show a huge gap in the number of innings the teams got out of their starters. Minnesota, for example, got only 871 innings out of its starters in 2013, while the 2011 Tampa Bay Rays got 1,058 innings in a season with fewer overall innings pitched. Another way of conceptualizing it: Minnesota starters averaged just over 5 1/3 innings of each nine-inning game in 2013, while the 2011 Rays starters averaged nearly 6 2/3 innings. But when the sample is viewed as a whole, the number of innings is quite close, as seen on this graph of SP IP% for the last five years:
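The per-game framing is just SP IP% applied to a nine-inning game. A quick check of the two extremes, using the figures quoted above:

```python
def innings_per_nine(sp_ip_pct):
    """Average starter innings per nine-inning game implied by a given SP IP%."""
    return 9 * sp_ip_pct / 100.0

print(round(innings_per_nine(60.06), 2))  # 2013 Twins: 5.41, just over 5 1/3
print(round(innings_per_nine(73.02), 2))  # 2011 Rays: 6.57, nearly 6 2/3
```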
The correlation between SP IP% and team success (measured via W-L or Pythagorean W-L record) was minimal; the Pearson correlation coefficients were .1692 and .1625, respectively. Team victories depend on too many variables to isolate a connection between team success and SP IP%, and a runs scored/runs allowed formula for calculating W-L record was barely an improvement over the traditional W-L measurement. Teams like the Seattle Mariners exemplify the problem: their starters threw an above-average number of innings in most of the years in the study, but the team rarely finished with a winning record.
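The correlation itself is a standard Pearson r. A minimal sketch of the computation, with made-up team values rather than the study's data:

```python
import statistics

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical SP IP% and win totals for five teams (illustrative only):
sp_ip = [63.9, 65.5, 66.8, 68.1, 70.2]
wins = [71, 90, 78, 85, 83]
print(round(pearson_r(sp_ip, wins), 4))
```

A value near zero, as the study found, means SP IP% explains almost none of the variation in wins.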
What I did find, to my surprise, was a relatively narrow range of SP IP% over the last five years, with teams distributed normally around an average of roughly 66% of innings. In the future, it might be helpful to expand the sample, or to look at a historic era to see how the SP IP% workload has changed over time. The relative consistency of SP IP% over five seasons and across teams could make this metric useful for future studies of pitching workloads, even though these particular correlations proved weak.