In general, the literature has suggested that, when comparing two similar offenses, the more consistent offense is preferable throughout the season. The reason has to do with the potential advantage a team gains when it doesn’t “waste runs” in blowout victories: the more evenly a team distributes its runs across games, the better its chances of winning more of them.
I decided to take my new volatility (VOL) metric and apply it to team-level offense to see if it conformed to this general consensus*.
To determine a team’s overall offensive VOL, I used the same approach as I did with individual hitters — with two slight tweaks:
VOL = STD(RS/G)/Yearly_(RS/G)^.67
VOL = volatility
STD(RS/G) = the standard deviation of a team’s runs scored per game
Yearly_(RS/G)^.67 = a team’s seasonal runs scored per game, raised to the .67-power
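The formula can be expressed as a short Python sketch. The function name and the sample data below are illustrative, not from the original analysis:

```python
import statistics

def team_vol(runs_per_game):
    """VOL = STD(RS/G) / (seasonal RS/G)^0.67
    Lower values indicate a more consistent game-to-game offense."""
    avg_rs = statistics.mean(runs_per_game)              # seasonal RS/G
    return statistics.stdev(runs_per_game) / avg_rs ** 0.67

# Two hypothetical teams with the same average offense (4.5 RS/G):
steady = [4, 5, 4, 5, 4, 5]
streaky = [0, 9, 1, 8, 0, 9]
# The steadier run distribution produces the lower (better) VOL.
```

Because both hypothetical teams score the same total, only the spread of their game-to-game output separates their VOL scores.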
The correlation between team VOL and the number of wins above or below their expected wins from 2002 to 2012** was -.34.
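For reference, that figure is an ordinary Pearson correlation computed over team-seasons. A minimal sketch, with toy data rather than the actual 2002-2012 values:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: as VOL rises, wins over expectation tend to fall,
# so r comes out negative (mirroring the -.34 result above).
vol = [0.80, 0.95, 1.05, 1.20]
wins_vs_expected = [3, 1, -1, -2]
```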
To get a better sense of the overall impact, I grouped teams into four buckets based on their VOL scores’ rankings (relative to other teams in each season) and then calculated the average wins above/below expected for each bucket. Here are the results:
| VOL Rank | Avg Wins +/- Expected Wins |
Teams that ranked first through eighth in VOL for a given season (lower VOL equating to a more consistent offense) beat their expected win total by an average of two wins and were 1.6 times as likely to beat their expected wins as teams that finished outside the top eight in VOL (64% vs. 40%). Compare them to teams ranked 25 through 30 and the overall difference is plus-four wins.
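The bucketing described above can be sketched as follows. The data shape is hypothetical; each season’s teams are ranked by VOL, lowest (most consistent) first:

```python
def bucket_wins_vs_expected(seasons):
    """seasons: {year: [(team, vol, wins_minus_expected), ...]}
    Ranks teams by VOL within each season (rank 1 = most consistent),
    then averages wins above/below expectation per rank bucket."""
    buckets = {"1-8": [], "9-16": [], "17-24": [], "25-30": []}
    for teams in seasons.values():
        ranked = sorted(teams, key=lambda t: t[1])   # ascending VOL
        for rank, (_, _, diff) in enumerate(ranked, start=1):
            if rank <= 8:
                buckets["1-8"].append(diff)
            elif rank <= 16:
                buckets["9-16"].append(diff)
            elif rank <= 24:
                buckets["17-24"].append(diff)
            else:
                buckets["25-30"].append(diff)
    return {k: sum(v) / len(v) for k, v in buckets.items() if v}
```

Ranking within each season (rather than pooling all years) keeps the buckets comparable across different run environments.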
If we look at the top- and bottom-20 teams since 2002 — in terms of wins over/under expectations — the relationship is even clearer:
| Year | Team | Actual Wins | W-L% | Over/Under Expected Wins | VOL | VOL Rank |
The average VOL rank of the top-20 teams since 2002 was 8.5, with 14 of the 20 finishing in the top 10 for VOL. The bottom 20 came in at 19.6, with only two teams ranking in the top 10.
But how did teams fare in 2012 when it came to beating win expectations and the volatility of their lineups?
Here are the top- and bottom-five teams from this past season:
| Team | Actual Wins | W-L% | Expected Wins | Over/Under Expected Wins | VOL | VOL Rank |
Baltimore, Cincinnati, San Francisco and Washington each ranked in the top 10 in offensive consistency, while none of the five worst teams cracked the top 10. The bottom three teams posted the 23rd-, 24th- and 27th-ranked offenses by VOL.
As has been stated previously, VOL isn’t a silver bullet. At the end of the day, a team’s success is mostly determined by its run differential. Putting together a highly consistent offense at the expense of scoring more runs doesn’t make sense.
To illustrate this point, I looked at teams with high-ranked (1-8) and low-ranked (24-30) offenses and compared their average win totals based on whether those offenses were high-volatility (a VOL ranking of 24-30) or low-volatility (1-8):
| Runs Scored Rank | Avg Wins (VOL Rank 1-8) | Avg Wins (VOL Rank 24-30) |
Poor offenses won about the same number of games, regardless of the volatility. But elite offenses won eight more games on average when they were also elite in terms of their consistency (93), compared to their highly inconsistent counterparts (85). (The results were similar when I compared poor and elite run-prevention teams.)
So what have we learned so far?
First, it appears there’s a real difference in how players distribute their offensive performance throughout a season, and that difference shows some year-to-year relationship. Second, the degree to which a team’s offensive production is consistent appears to affect whether it can beat its expected record.
Both findings are still preliminary, but they suggest the next question: How does hitter volatility combine to determine overall run-scoring volatility? That’s a much trickier question, but I will hopefully have something on it in the near future.
*I am still looking at the great feedback from colleagues and commenters about the new metric. For now, I decided to run this quick test with the existing metric.
**I derived expected wins using the Pythagenpat approach.
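For reference, a minimal sketch of that calculation. The 0.287 exponent constant is one commonly used value for Pythagenpat; the exact constant used in the original analysis may differ:

```python
def pythagenpat_expected_wins(runs_scored, runs_allowed, games=162):
    """Pythagenpat: a Pythagorean win expectation whose exponent
    scales with the run environment (total runs per game)."""
    exp = ((runs_scored + runs_allowed) / games) ** 0.287
    win_pct = runs_scored ** exp / (runs_scored ** exp + runs_allowed ** exp)
    return win_pct * games

# A team that scores exactly as many runs as it allows projects to .500 (81 wins).
```

Expected wins from this function, subtracted from actual wins, gives the over/under figure used throughout the post.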