- FanGraphs Baseball - http://www.fangraphs.com/blogs -

# We’re Going Streaking (Again)

In the first part of this piece, I established a framework for evaluating streakiness, using David Wright’s consistent performance in 2007 and his streaky performance in 2010 as examples. Now that we have a methodology for assessing the streakiness of players, we can extend it to all of them. I repeated the process I applied to Wright for each of the 1,545 player-seasons of 500 or more PA dating back to 2001. To save processing time, I ran only 1,000 simulations per player, rather than the 10,000 I ran for Wright in the first part (the difference between the calculations taking days and taking weeks). While this reduces our precision slightly, the distributions are nearly identical:
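The simulation loop described above can be sketched in a few lines of Python. This is a minimal reconstruction, not the actual code used for the study: the 25-PA moving-average window, the max-minus-min spread measure, and the per-PA wOBA inputs are all illustrative assumptions.

```python
import random

def rolling_spread(wobas, window=25):
    """Max minus min of a moving-window average of per-PA wOBA values."""
    total = sum(wobas[:window])
    lo = hi = total
    for i in range(window, len(wobas)):
        total += wobas[i] - wobas[i - window]
        lo, hi = min(lo, total), max(hi, total)
    return (hi - lo) / window

def streakiness(wobas, n_sims=1000, window=25, seed=0):
    """Share of shuffled seasons whose hot/cold swings are smaller than
    the real season's: 1.0 = streakier than every simulated season,
    0.0 = steadier than every simulated season."""
    rng = random.Random(seed)
    observed = rolling_spread(wobas, window)
    shuffled = list(wobas)
    below = 0
    for _ in range(n_sims):
        rng.shuffle(shuffled)  # destroy any real hot/cold ordering
        if rolling_spread(shuffled, window) < observed:
            below += 1
    return below / n_sims
```

On this scale, a score like Kotsay’s .987 would mean the real season swung more widely than 98.7% of its shuffled versions.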

So, let’s just get right to the red meat. Here are the five most and five least streaky players in every year from 2001 to 2010:

So what do we see? Looking at 2010, there are some great names at both the top and bottom of the list. I don’t think anyone would be disappointed to have the streaky Carlos Gonzalez or the un-streaky Joey Votto on their team. And certainly, no one would take 2010’s super-steady Skip Schumaker over either one. Looking back at earlier years, Mark Kotsay looks pretty consistent. Not listed here, however, is Kotsay’s 2004, in which he posted a true streakiness of .987. Chone Figgins also goes from among the most consistent players in 2005 to the streakiest in 2007. Neither of the two David Wright seasons we looked at earlier makes the list, but his concussion-marred 2009 season was the streakiest in the league that year. So it seems that Wright is not the only one whose streakiness jumps around from season to season.

The surprising jumps we see from Wright, Figgins, and Kotsay, it turns out, are not a fluke. One way to assess the extent to which a statistic represents an inherent skill, as opposed to randomness, is to calculate the correlation coefficient across seasons. The correlation coefficient, represented by the letter r, tells you how closely related two variables are—in this case, how reliably you can predict a player’s performance in a given season from what he did the year before. A correlation coefficient close to 1 suggests a strong relationship, and one close to 0 suggests no relationship at all. Negative relationships are also possible, but shouldn’t be relevant here. A high correlation across seasons is a good indication that what you’re looking at is related to a player’s actual skill. Strikeout rates for pitchers, for example, tend to correlate across seasons at around 0.7 or 0.8. Voros McCracken’s groundbreaking work on BABIP found that it correlates across seasons at only about 0.3, which led to an increased emphasis on strikeout and walk rates over ERA. With streakiness, the correlation coefficient is -0.014, which is not statistically different from zero (p = 0.667). Here is a scatterplot of player streakiness, with the x axis reflecting a player’s streakiness in one season, and the y axis showing his streakiness the next season:

In this sample, there were 938 players who had 500 plate appearances in consecutive seasons (a player can count more than once, e.g. playing from 2001-2003 means that both the pair from 2001-2002 and the pair from 2002-2003 are included). Each blue dot represents a player’s streakiness in two consecutive seasons. How far the dot is to the right indicates how streaky he was in the first season, and how far the dot is toward the top indicates how streaky he was the next season. This is just a sea of randomness. Clearly, there is no relationship at all between the two. Compare that to the scatterplot for a true skill like batter contact rate (Balls In Play)/(Balls In Play + SO), which has an extremely strong year-to-year correlation (r = 0.893):

We can see clearly that a high contact rate in one year almost guarantees a high contact rate the next year. At most, players might shift by about 10% from year to year, but a high-contact player will almost never become a whiff artist, nor will a strikeout king close every hole in his swing. This means that if we know a player’s contact rate in one year, we can make an accurate guess about what it will be the next.
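For concreteness, the contact-rate formula used above is trivial to compute; the stat line in the example is hypothetical:

```python
def contact_rate(balls_in_play, strikeouts):
    """Contact rate as defined above: BIP / (BIP + SO)."""
    return balls_in_play / (balls_in_play + strikeouts)

# A hypothetical high-contact season: 520 balls in play, 40 strikeouts.
print(round(contact_rate(520, 40), 3))  # 0.929
```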

With streakiness, however, it is quite the opposite: Knowing a player’s streakiness in one season effectively gives us no ability at all to predict his streakiness in the next. In fact, even knowing a player’s streakiness in three consecutive seasons gives us no ability to say anything about the fourth. Streakiness also appears random within a given season: correlation between streakiness from one month to the next (minimum 100 PA) is r = 0.013, which is, again, not statistically different from zero (N = 3,844, p = 0.413). In short, if we believe our methodology—which I personally have no reason to doubt, although I’m open to suggestions—streakiness among hitters appears to be completely random.
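The year-to-year comparison is easy to reproduce on synthetic data. The sketch below is not the study’s actual code—the shared-“talent” model, the noise levels, and the 938-pair sample size are illustrative assumptions—but it shows why a persistent skill produces a high r while pure randomness produces an r near zero.

```python
import math
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

rng = random.Random(1)

# A persistent skill: both seasons share a player's underlying talent.
talent = [rng.gauss(0.85, 0.05) for _ in range(938)]
skill_y1 = [t + rng.gauss(0, 0.02) for t in talent]
skill_y2 = [t + rng.gauss(0, 0.02) for t in talent]

# Pure noise: two independent draws with no shared component at all.
noise_y1 = [rng.random() for _ in range(938)]
noise_y2 = [rng.random() for _ in range(938)]

print(round(pearson_r(skill_y1, skill_y2), 3))  # strong, well above 0.8
print(round(pearson_r(noise_y1, noise_y2), 3))  # near zero
```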

While streakiness may be random for individual hitters, there is reason to think that streakiness overall is not. Here’s a histogram of the total distribution:

For those unfamiliar with histograms, this simply cuts the range of streakiness scores into 20 equal bins (i.e., 0.00-0.05, 0.05-0.10, etc.) and displays the number of players who fall into each bin. If streakiness were truly random, we would expect a uniform distribution, with roughly the same number of players in each bin and the bars forming a flat horizontal line. What we see, however, is a greater proportion of players in the top half of the distribution than in the lower half. This means that, on the whole, more players appear streaky than appear unstreaky. Moreover, this shift toward the streaky end of the spectrum, while not extreme (mean = 0.537, median = 0.566), is highly statistically significant (p < 0.0001, using both parametric and nonparametric methods). This suggests that players may tend to be streaky on the whole, even if individual players are not. As commenter Lee noted yesterday, though, this may also be partly a function of the non-random nature of the schedule—park effects and opposing pitchers undoubtedly play their part as well.
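The binning and the significance check can be sketched as follows. The sign test here is one simple nonparametric option; the article doesn’t say which parametric and nonparametric tests were actually used, so this choice is an assumption.

```python
import math

def histogram(scores, bins=20):
    """Counts of streakiness scores in equal-width bins over [0, 1]."""
    counts = [0] * bins
    for s in scores:
        counts[min(int(s * bins), bins - 1)] += 1
    return counts

def sign_test_p(scores, null=0.5):
    """Two-sided exact sign test: if scores were uniform on [0, 1],
    the count above `null` would follow Binomial(n, 0.5)."""
    above = sum(s > null for s in scores)
    n = len(scores)
    k = max(above, n - above)
    tail = sum(math.comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

scores = [0.1, 0.4, 0.6, 0.7, 0.8, 0.9]  # toy example, mostly "streaky"
print(histogram(scores, bins=2))  # [2, 4]
```

A test of this kind, applied to all 1,545 scores, is what drives a p-value below 0.0001 when well over half the scores sit above 0.5.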

I should add a couple of things about the methodology. Although I did not start this research knowing whether I would find a strong measure of streakiness, I did set out to find something that would be useful for identifying streaky players. I was, to be honest, completely shocked by the utter absence of a relationship between a player’s streakiness in one year and his streakiness in the next. In an effort to find something, I tried this study several different ways. I tried increasing the size of the moving-average window. I tried different measures of streakiness, such as the difference between a player’s maximum moving wOBA and his minimum, the variance in moving wOBA, or even strikeout and home-run rates instead of wOBA. I tried adjusting for luck on balls in play, giving extra credit for line drives over pop-ups. I also tried it with pitchers, albeit in fewer ways (the calculations take much longer to run, for a variety of reasons), using xFIP, which effectively gives a pitcher’s luck-and-defense-independent ERA, with fifteen-day windows for relievers and twenty-five-day windows for starters. Again, the correlation was basically zero. No matter how I sliced it, the results came back the same: no relationship from one year to the next.

Furthermore, streakiness showed no relationship at all with any conventional statistic (batting average, on-base percentage, slugging, wOBA, BABIP, walk rate, or strikeout rate), suggesting true randomness. The one statistically significant relationship was a weak negative correlation between streakiness and plate appearances (r = -0.061, p = 0.016). It is tempting to conclude that better players (who play more) are less streaky, but that is unlikely. Since streakiness shows no relationship with any other hitting statistic, any relationship with plate appearances is, if anything, a function of strategic usage in response to streakiness. My best guess is that when a player has a streaky season, he is more likely to have a prolonged cold stretch and spend some time on the bench. But given the fairly large size of the sample and the weakness of the correlation, it may also just be a fluke.

So what, ultimately, can we take away from all of this? Although the analysis is complicated, the lessons it teaches us are straightforward. Streaky seasons undoubtedly exist, but it appears that there is no such thing as a streaky or unstreaky player. Rather, the truth seems to be that all players are streaky players. Being human, they have their ups and downs, and they are inherently streakier than random chance would dictate. They are not dice, and they are not random number generators. If Murray Chass ever read FanGraphs, I’m sure he’d be thrilled to hear that. But, again, there is no evidence whatsoever to suggest that a player who is especially streaky in one season will continue to be so in the next. Is this the final word on this issue? Almost certainly not. But right now there’s just no reason to believe that a player’s inherent streakiness, even if it exists, will have any greater impact on his performance than random chance. So, perhaps the next time you hear another owner in your fantasy league complain about how streaky David Wright is, you can offer Skip Schumaker one-for-one, and see what happens.

A Google Doc containing the results of this study has been made available for your perusal. The information used here was obtained free of charge from and is copyrighted by Retrosheet. Interested parties may contact Retrosheet at “www.retrosheet.org”.