Does Consistent Play Help a Team Win?

One of the many insights to come from Bill James is that a team’s winning percentage can be estimated quite well from nothing more than the runs it scores and the runs it allows. And while James’ Pythagorean Expectation cannot account for all of the variation in team performance, it does a remarkably good job.

One thing the Pythagorean Expectation does not account for is that teams may distribute their runs differently from game to game over the course of a season. Two teams with identical run differentials could end up with significantly different records. Here’s a short example:

Assume two teams, A and B, each with a run differential of 0 (both score and allow 29 runs) over a 10-game series against each other. The Pythagorean Expectation tells us that both teams should finish 5-5. In this scenario, however, Team B wins 6 of the 10 games:

Game      1    2    3    4    5    6    7    8    9   10
Team A   10    1    1    1    1    2    2    3    3    5
Team B    1    4    4    3    3    5    3    1    1    4
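
To verify the arithmetic in this toy example, here is a minimal Python sketch. It uses James’ classic Pythagorean formula, RS^2 / (RS^2 + RA^2), with the original exponent of 2; nothing here comes from the post beyond the run totals in the table above.

```python
# Toy example from the table above: identical run totals, different records.
team_a = [10, 1, 1, 1, 1, 2, 2, 3, 3, 5]   # Team A's runs in games 1-10
team_b = [1, 4, 4, 3, 3, 5, 3, 1, 1, 4]    # Team B's runs in games 1-10

rs, ra = sum(team_a), sum(team_b)           # 29 runs scored and allowed apiece
pythag_win_pct = rs ** 2 / (rs ** 2 + ra ** 2)

wins_a = sum(a > b for a, b in zip(team_a, team_b))
wins_b = len(team_a) - wins_a               # no ties in this example

print(f"Pythagorean record for both teams: "
      f"{pythag_win_pct * 10:.0f}-{10 - pythag_win_pct * 10:.0f}")
print(f"Actual record: Team A {wins_a}-{wins_b}, Team B {wins_b}-{wins_a}")
```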

The difference lies in how the two teams distributed their runs: Team B scored its runs far more consistently than Team A. Of course, this example only covers 10 games against a single opponent. Does a team’s consistency really affect its chances of winning over the course of 162 games?

Looking at teams over the past six seasons, the short answer is that less volatility leads to more wins, and the effect is much stronger for run prevention than for run scoring. The more consistent a team is in its game-to-game runs allowed, the more wins it should expect. At the same time, greater volatility, again particularly in run prevention, leads to more wins than a team’s Pythagorean Expectation predicts: the less consistent teams were in their game-by-game run prevention, the more they outperformed their expected win totals.

The topic of team run distribution as well as player consistency or volatility has been dealt with a number of times in the past.

One of the earliest looks at the distribution of team runs was done by Keith Woolner at Baseball Prospectus (see here and here). Tom Tango followed up on this with his own model for predicting runs per game distributions.

Sal Baxamusa took this research a step further at The Hardball Times and began teasing out the implications of different run distributions, i.e. whether team performance changes depending on how a team distributes its runs. In a 2007 article, Baxamusa showed that teams gained little additional advantage from scoring more than five runs in a game. Therefore, the less variation teams had around their run scoring (i.e. the more often they scored in the three-to-six-run range), the higher their likely winning percentage.

Baxamusa suggested that while run scoring should be consistent, run prevention should be less so. The logic was based on some work by David Gassko: for top-line starters you want a more consistent pitcher, but with lesser arms you want more volatility. Why? Imagine two pitchers who both average 5 runs allowed per start. Over 30 starts, Pitcher A gives up exactly 5 runs every time out, while Pitcher B gives up 10 runs in 15 starts and 0 runs in the other 15. Pitcher A needs at least six runs of support to win any of his starts, so he is likely to finish well under .500; Pitcher B should win nearly all 15 of his scoreless starts and lose nearly all of his blowups, leaving him around 15-15 and with the better record.
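
To make that logic concrete, here is a rough simulation of the two hypothetical pitchers. The run-support distribution (Poisson with a mean of 4.5 runs per game) and the coin-flip treatment of ties are my assumptions for illustration; they are not figures from Gassko’s study or from this example.

```python
# Compare a perfectly consistent 5-runs-allowed pitcher with a volatile one
# who alternates shutouts and 10-run blowups. Run support is assumed Poisson
# with mean 4.5 (an illustrative assumption, not a figure from the article).
import math
import random

random.seed(42)
RUN_SUPPORT_MEAN = 4.5
SIMS = 100_000

def poisson(lam: float) -> int:
    """Draw a Poisson-distributed run total (Knuth's algorithm, no dependencies)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= random.random()
        if p <= threshold:
            return k - 1

def expected_wins(runs_allowed_options, starts=30):
    """Average wins over `starts` starts, counting ties as half a win."""
    wins = 0.0
    for _ in range(SIMS):
        allowed = random.choice(runs_allowed_options)
        scored = poisson(RUN_SUPPORT_MEAN)
        if scored > allowed:
            wins += 1.0
        elif scored == allowed:
            wins += 0.5   # crude stand-in for extra innings
    return starts * wins / SIMS

print(f"Consistent pitcher (5 runs every start): ~{expected_wins([5]):.1f} wins")
print(f"Volatile pitcher (0 or 10 runs):         ~{expected_wins([0, 10]):.1f} wins")
```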

Gassko’s research began to bridge the gap between the value of team-level consistency and what it means for roster construction based on individual consistency. Our own Eric Seidman looked at individual pitcher consistency in a pair of articles at Baseball Prospectus back in 2010, and I looked at the issue for both hitters and pitchers at Beyond the Box Score last year.

For this article, I wanted to look at whether being more or less consistent gave teams an advantage in terms of their winning percentage.

Here are the details.

Methodology:
To start teasing this idea out, I calculated, for each team from 2006 through 2011, the game-by-game difference between its runs scored and its seasonal average runs scored. I did the same for runs allowed. Because teams have different averages, I used the coefficient of variation, simply the standard deviation of a set of numbers divided by its mean. This controls for the fact that teams that score more runs, or allow more runs, will naturally show larger raw standard deviations.

Runs Scored Volatility (RS_Vol) = the coefficient of variation of a team’s game-by-game runs scored: the standard deviation of the differences between each game’s runs scored and the seasonal average, divided by that seasonal average

Runs Allowed Volatility (RA_Vol) = the coefficient of variation of a team’s game-by-game runs allowed: the standard deviation of the differences between each game’s runs allowed and the seasonal average, divided by that seasonal average
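
As a concrete sketch, the two metrics could be computed from a game log like this. The data frame and its column names (season, team, runs_scored, runs_allowed) are hypothetical; only the calculation follows the definitions above.

```python
# Compute RS_Vol and RA_Vol per team-season as the coefficient of variation:
# the standard deviation of game-to-game runs divided by the seasonal average.
import pandas as pd

def volatility(games: pd.DataFrame) -> pd.DataFrame:
    grouped = games.groupby(["season", "team"])
    out = pd.DataFrame({
        "RS_Vol": grouped["runs_scored"].std() / grouped["runs_scored"].mean(),
        "RA_Vol": grouped["runs_allowed"].std() / grouped["runs_allowed"].mean(),
    })
    return out.reset_index()

# Example with a toy game log for two hypothetical teams:
games = pd.DataFrame({
    "season": [2011] * 6,
    "team": ["A", "A", "A", "B", "B", "B"],
    "runs_scored": [2, 5, 8, 4, 5, 6],
    "runs_allowed": [3, 3, 9, 4, 5, 6],
})
print(volatility(games))
```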

Results:

Correlation       Team Wins    Actual Wins - Expected Wins
RS_Vol                -0.37                           0.16
RA_Vol                -0.71                           0.85

Overall, RS_Vol had a negative relationship with team wins: the more consistent a team’s run scoring, game to game, the higher its win total. The relationship was the same for RA_Vol, just much stronger. This runs a bit counter to some earlier thinking on team consistency, which suggested that you would rather have a consistent offense but more volatility in your run prevention.
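
For reference, here is a sketch of how those correlations could be reproduced from a team-season table. The data frame and its column names (wins, expected_wins, RS_Vol, RA_Vol) are hypothetical; the actual numbers above come from 2006-2011 team game logs.

```python
# Pearson correlations of RS_Vol and RA_Vol with actual wins and with
# wins above Pythagorean expectation, mirroring the table above.
import pandas as pd

def volatility_correlations(teams: pd.DataFrame) -> pd.DataFrame:
    above_expected = teams["wins"] - teams["expected_wins"]
    vol = teams[["RS_Vol", "RA_Vol"]]
    return pd.DataFrame({
        "Team Wins": vol.corrwith(teams["wins"]),
        "Actual Wins - Expected Wins": vol.corrwith(above_expected),
    }).round(2)
```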

I also looked at the average wins for teams with different combinations of volatility. Teams that were below average in both run scoring and run prevention volatility (i.e. less volatile, more consistent) won 93 games on average. Even teams that were below average only in RA_Vol averaged 84 wins, which further illustrates the advantage of consistency in run prevention (at least, that’s what this initial analysis suggests).

RS_Vol           RA_Vol           Average Wins
Below Average    Below Average              93
Above Average    Below Average              84
Below Average    Above Average              79
Above Average    Above Average              68
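
Here is a sketch of the grouping behind this table, using the same hypothetical team-season data frame as in the earlier snippets. Whether the original cutoffs were league averages within each season or across all six seasons is my assumption here.

```python
# Classify each team-season as above or below the league-average RS_Vol and
# RA_Vol for that season, then average actual wins within the four buckets.
import numpy as np
import pandas as pd

def wins_by_volatility_bucket(teams: pd.DataFrame) -> pd.Series:
    league = teams.groupby("season")[["RS_Vol", "RA_Vol"]].transform("mean")
    buckets = pd.DataFrame({
        "RS_Vol": np.where(teams["RS_Vol"] < league["RS_Vol"],
                           "Below Average", "Above Average"),
        "RA_Vol": np.where(teams["RA_Vol"] < league["RA_Vol"],
                           "Below Average", "Above Average"),
        "wins": teams["wins"].to_numpy(),
    })
    return buckets.groupby(["RS_Vol", "RA_Vol"])["wins"].mean().round(0)
```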

However, the relationship looks much different when we ask whether teams under- or over-performed their expected wins.

RS_Vol has a positive relationship with wins above expected (simply actual wins minus expected wins): the more volatile a team’s offense, the more it tends to exceed its expected wins. RA_Vol was also positively correlated, but to a much greater degree (.85). So when a team dramatically outperforms its Pythagorean Expectation, it is most likely because it was wildly inconsistent, game to game, at preventing runs.

What to make of all this? I think it leads to some interesting questions, most of which deal with roster construction. Given the larger impact of run-prevention consistency on wins, should an above-average team focus on building a generally consistent offense (particularly at the top and heart of the lineup)? Should a below-average team look to bring in players, particularly pitchers, who are highly volatile at preventing runs? What about the rotation? Should you build around consistent 1-3 starters and highly volatile 4-5 starters?

As I found in earlier research, volatility is generally not a characteristic that is stable year to year, so building around it is difficult. However, over time players do seem to display a “skill” for consistency. In this way, volatility is like Clutch: highly variable year to year, but identifiable after a large number of observations.

There is lots of work still to be done in this area. I plan to dig into these questions further, as will other authors here at FanGraphs.

Peer review is always welcome, so please do send along your own thoughts, theories, or links to similar work.

(Big hat tip to Jeff Zimmerman and Eric Seidman for help with this post and to Noah Isaacs for inspiring the research)

Bill leads Predictive Modeling and Data Science consulting at Gallup. In his free time, he writes for The Hardball Times, speaks about baseball research and analytics, has consulted for a Major League Baseball team, and has appeared on MLB Network's Clubhouse Confidential as well as several MLB-produced documentaries. He is also the creator of the baseballr package for the R programming language. Along with Jeff Zimmerman, he won the 2013 SABR Analytics Research Award for Contemporary Analysis. Follow him on Twitter @BillPetti.

54 Comments
RMR
12 years ago

It seems to me, as a Reds fan at least, that this usually comes up in the context of: Fan/Announcer complains that player X strikes out too much and insists that either he should be more consistent individually or that the team would be better off with a more consistent player. Of course, the obvious trade-off here is that the guy you have is more productive/talented overall, so you live with the inconsistency.

But if more consistent is better than less, where’s the break-even point? What’s the offsetting amount of overall production/talent you can give up and still be better off with the more consistent player? Is a consistent 2.5 WAR player actually helping your team more than an inconsistent 3.0 WAR player?

And perhaps more fundamentally, how should consistency be defined at the player level?

Baltar
12 years ago
Reply to  RMR

This is a good comment, and I greatly look forward to Petti’s further studies.
Remember, however, that when a manager, announcer, fan or other baseball type calls for a player to be consistent, he is really calling for the player to be consistently good, not for the player to perform closer to his average.