Archive for January, 2018

Adding to the K-vs.-Clutch Dilemma

Several researchers have recently been doing fascinating work on the relationship between strikeouts and performance in clutch and high-leverage situations. Good work has been published, and there has even been good discussion in the comment sections of the respective articles. Before talking about anything related to clutch performance, though, a few things need to be settled first.

What is clutch?

The stat called ‘Clutch’ has rightly been called into question recently. The main issue is whether it measures what it is intended to measure. Clutch captures a player’s performance in high-leverage situations relative to his performance in lower-leverage situations. If someone is notably poor in important PAs compared to his performance in lower-leverage spots, Clutch will let us know. However, if someone is a .310 hitter in all situations, that hitter is very good, but Clutch is not really going to tell us much.

I think the topic has been popularized partly because of Aaron Judge, who had a notoriously low Clutch number last season. Many have blamed his proneness to striking out, which could very well be a factor in the situational performance gap. However, Judge helped his team win last year despite his record-setting strikeout pace. Still, Judge wasn’t even top 40 in WPA last year, but then again neither were a lot of good players. So are high-strikeout guys really worse off in high-leverage spots? The rationale for sending a strong contact hitter to the plate in a high-leverage, game-changing spot is intuitively obvious, but all else equal, is someone like Ichiro really better in those situations than someone like Judge?

Many have been comparing Clutch with other stats. To be honest, I can’t find much of a statistical relationship between anything and Clutch, so I am opting for a different route. We know that a player’s high-leverage PAs matter far more to his team than low-leverage ones, by about a factor of 10. If we assume WPA is the best way of measuring a player’s impact on his team’s chances of winning, i.e. of coming through in leverage spots, then we can tackle the clutch problem in the traditional sense of the word.

WPA is not perfect; no statistic is. Many factors play into a player’s potential WPA: place in the batting order and strength of teammates, among others. But as a measure of performance in high leverage, it works quite well.

Examining the correlation matrix between WPA and several other variables tells us some interesting things.

*K = K% and BB = BB%

We assume that a more skilled hitter will be better able to perform in high-leverage situations than a less skilled one. What we see is that K% has a negative relationship with WPA, but not a strong one, and not as strong as the positive relationship BB% has. Looking at statistics like wOBA, K%, and BB% alongside WPA can be tricky, because players with good wRC+ numbers can also strike out a lot; see Mike Trout a few years back. Those same players can also walk a lot. I like this correlation matrix because it also shows the relationship between stats like wOBA and K%, which are negatively correlated, but only thinly. The relationships between stats like these will never be perfect; again, productive hitters can still strike out a lot, and those same hitters can also walk a lot. This helps support the idea that a walk is more valuable than a strikeout is detrimental.
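For readers who want to reproduce this kind of matrix, here is a minimal sketch in Python using numpy. The player-season values below are invented for illustration; they are not the article’s actual sample.

```python
import numpy as np

# Toy rows are player-seasons; columns are WPA, wOBA, K%, BB%.
# Values are made up to illustrate the computation only.
data = np.array([
    [ 2.1, 0.380, 0.18, 0.12],
    [ 0.5, 0.320, 0.25, 0.08],
    [-0.8, 0.290, 0.30, 0.05],
    [ 1.4, 0.350, 0.22, 0.10],
    [-1.2, 0.275, 0.28, 0.04],
])
cols = ["WPA", "wOBA", "K%", "BB%"]

# np.corrcoef treats rows as variables, so transpose first.
corr = np.corrcoef(data.T)

# Pairwise Pearson correlations of WPA with each stat.
wpa_corr = {c: round(float(corr[0, i]), 3) for i, c in enumerate(cols)}
```

On real data the same two lines (`np.corrcoef` on the transposed matrix, then reading off the WPA row) give the full matrix discussed here.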

I’ll add a few more variables to the correlation matrix without trying to make it too messy.

We see again that WPA and wOBA show the strongest relationship. The matrix also suggests we can debunk the myth that ground-ball-heavy contact hitters perform better in high-leverage situations.

So why do we judge players like Judge (no pun intended) so harshly for their proneness to striking out when, overall, they are very productive hitters who still produce runs for their teams? The answer is that we probably shouldn’t. But it wouldn’t be right to stop there.

So how exactly should we value strikeouts? A commenter on a recent article mentioned that when measuring Clutch against K% and BB%, he or she found a statistically significant negative relationship between K% and Clutch. However, that statistical significance goes away when also controlling for batting average. Interestingly, I found the same to be true when using WPA as the dependent variable, except with wOBA in place of batting average.

To test this further, I use ordinary least squares (OLS) linear regression to test WPA against several variables. I run several models based mainly on prior studies that suggest relationships between high-leverage performance and other variables. Before I go into the models, I need to say a little more about the data.

More about the data:

I wanted a large sample of recent data, so I use a reference period encompassing the 2007-2017 seasons. I use position players with at least 200 PAs for each year in which they appear in the data, which allows me to capture players with significant playing time beyond just starters. This also gives me a fairly normal distribution of the data. The summary statistics are shown below.

There aren’t really any abnormalities in the data to discuss. I find the standard deviations of the variables especially interesting, and they will help with my analysis. All in all, I get a fairly normal distribution, which is what I am going for. The only problems I found with observations straying far from the mean were with ISO and wOBA. To account for this, I square both variables, which I found produces the most normal adjustment of any transformation. The squared wOBA and ISO variables are what I use in the models.
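One quick way to sanity-check a transformation like this is to compare sample skewness before and after. This is a sketch with made-up wOBA-like values, not the article’s data; whether squaring actually reduces skew depends on the sample, which is the point of checking.

```python
import numpy as np

def skewness(x):
    """Sample skewness: the mean cubed z-score. Roughly 0 for
    symmetric data, negative for a long left tail, positive for
    a long right tail."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))

# Hypothetical wOBA-like values; compare skew before and after squaring.
woba = np.array([0.250, 0.290, 0.310, 0.320, 0.330, 0.345, 0.360, 0.420])
before, after = skewness(woba), skewness(woba ** 2)
```

Whichever transformation drives the skewness closest to zero is the most "normal adjustment" in the sense used above.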

I use multiple regression and probability techniques to try to shed light on the relationship between strikeouts and high leverage performance. First I use an OLS linear regression model with a few different specifications. These specifications can be found below.
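An OLS specification of this shape can be fit with nothing more than numpy. Everything below is synthetic: the data-generating coefficients are invented for the sketch and are not the article’s estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Synthetic player-seasons; the coefficients in the data-generating
# process are invented for illustration, not the article's results.
woba = rng.normal(0.320, 0.030, n)
bb_pct = rng.normal(0.08, 0.03, n)
k_pct = rng.normal(0.20, 0.05, n)
wpa = -4 + 35 * woba**2 + 5 * bb_pct - 1.0 * k_pct + rng.normal(0, 0.2, n)

# OLS of WPA on squared wOBA, BB%, and K% (intercept in column 0).
X = np.column_stack([np.ones(n), woba**2, bb_pct, k_pct])
beta, *_ = np.linalg.lstsq(X, wpa, rcond=None)

# R^2 of the fit
resid = wpa - X @ beta
r2 = 1 - (resid @ resid) / ((wpa - wpa.mean()) @ (wpa - wpa.mean()))
```

A library like statsmodels would add standard errors and p-values on top of the same design matrix; the point here is only the shape of the regression.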

For the first equation, I find that wOBA, BB%, and K% all have statistically significant relationships with WPA at the one-percent level. That is not exactly groundbreaking, but it gives us a better idea of the magnitudes of the relationships. The results of the first regression are below.

I find that these three variables alone account for about 60% of the variance in WPA. Per the model, a one-percentage-point increase in K% corresponds to about a 1.14-point decrease in WPA, while a one-percentage-point increase in walk rate has a larger effect in the other direction, corresponding to about a 5-point increase. Also per the model, a one-point increase in squared wOBA corresponds to about a 35.50-point increase in WPA. These interpretations, however, are tricky and do not mean much on their own. Since WPA usually runs on a scale of about -3 to +6, looking at point changes does not tell us anything tangible, but it does give a sense of magnitude.

To account for this, I convert the coefficients into standardized betas, changes measured in standard deviations, so we can compare apples to apples on a level field. The betas of the variables are shown below.

Unsurprisingly, wOBA has the greatest effect on WPA, while K% has the smallest. All else equal, a one-standard-deviation increase in K% corresponds with just a 0.04-standard-deviation decrease in WPA. A one-standard-deviation increase in BB% has a slightly larger upward effect on WPA than K% has a downward one. The standard deviations of these variables are not very big, so the movement increments will be small, but we can at least compare the variables on equal footing in terms of magnitude.
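The conversion from a raw coefficient to a standardized beta is a one-liner: scale by the ratio of the predictor’s standard deviation to the outcome’s. The numbers plugged in below are illustrative, not the article’s.

```python
import numpy as np

def standardized_beta(coef, x, y):
    """Raw OLS coefficient -> standardized beta: the SD change in y
    per one-SD change in x, all else equal."""
    return float(coef * np.std(x) / np.std(y))

# Illustrative values (not the article's): a raw K% coefficient with
# hypothetical spreads for K% and WPA.
rng = np.random.default_rng(7)
k_pct = rng.normal(0.20, 0.05, 1000)
wpa = rng.normal(0.0, 1.4, 1000)
beta_k = standardized_beta(-1.14, k_pct, wpa)
```

Because both standard deviations are positive, the standardized beta always keeps the sign of the raw coefficient; only its magnitude becomes comparable across variables.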

We go back to the fact that good hitters still sometimes strike out a good portion of the time. We like to think that strikeout hitters are all power hitters, but Mike Trout was not that when he won his MVP while striking out more than anyone in the league. Not completely gone are the days when the only hitters allowed to strike out were the ones hitting 40+ round-trippers a year. I’m not necessarily trying to argue one way or another, but getting comfortable with high-strikeout yet productive players could take some getting used to. We value pitchers who can rack up strikeouts because strikeouts eliminate batted-ball variance, but high-K pitchers and high-K batters are not mirror images. For a hitter, simply putting the ball in play is not quite enough in MLB; for a pitcher, eliminating batted-ball variance through strikeouts is important.

Speaking of batted-ball variance, we can account for it in the models. I add ISO, Hard%, GB%, and FB%. I would have liked to add launch angle, but I do not have time to match that data right now; it would likely improve the sample. Instead I account for exit velocity with Hard%. I do not include Soft% or Med% because preliminary tests showed no statistical significance; the same goes for LD%, which was a bit surprising. I am mainly looking at how the K% coefficient changes while controlling for these new variables, and whether I can account for any more of the variance in the model.

When controlling for the new variables, K% shows a stronger negative relationship. We find that, contrary to some popular belief, ground balls are negatively correlated with WPA, though not as strongly as fly balls. wOBA and BB% show the strongest positive relationships with WPA. Hard% shows a positive relationship with WPA but is only significant at the 10% level. This model accounts for about 65% of the variation in WPA.

Batted-ball profiling for WPA is still a little tricky. Running F-tests on GB% and FB%, I find that the two are jointly significant in the model. However, when controlling for season-to-season variance, GB% and FB% are not significant and don’t help the model. I think it’s likely that extreme fly-ball hitters, all else equal, will not be as strong in high-leverage situations. Kris Bryant seems to fit the profile of a guy who constantly puts the ball in the air yet struggled in high-leverage spots last year. On the opposite end of the spectrum, extreme ground-ball hitters were not WPA magicians either. It is likely that, over the entire sample, FB and GB rates play a part, but at the individual-season level, the variance in these rates doesn’t tell us much.
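The joint F-test used here compares the residual sums of squares of the model with and without the tested variables. A minimal sketch, with hypothetical RSS numbers rather than the article’s:

```python
def f_stat(rss_restricted, rss_full, q, n, k_full):
    """F statistic for the joint null that the q extra coefficients in
    the full model are zero. n = observations; k_full = parameters
    (including the intercept) in the full model."""
    return ((rss_restricted - rss_full) / q) / (rss_full / (n - k_full))

# Hypothetical example: dropping GB% and FB% (q = 2) raises the RSS
# from 100 to 120 with n = 105 and 5 full-model parameters.
f = f_stat(120.0, 100.0, q=2, n=105, k_full=5)  # 10.0
```

The resulting statistic is compared against an F distribution with (q, n - k_full) degrees of freedom to decide joint significance.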

The explanation may be as simple as this: MLB fielders are good. Batted-ball variance is very real, but simply making contact, all else equal, does not add much more to your team’s chances of winning than striking out does. Don’t get me wrong, putting a ball in play is always better, but putting the ball in play in itself is not much more helpful. In addition, striking out a lot could suggest mechanical issues with a player’s swing, timing issues, etc., though that should not be a blanket generalization. Mike Trout (I like mentioning Trout, but many others fit this profile) may strike out a lot (not so much anymore), but he also has a great, controlled swing and hits the ball at optimal launch angles and exit speeds, making him good at performing in high-leverage situations.

Perhaps the shift has hurt the ability of extreme pull hitters to produce, to the point where it hurts their WPA. A better test would probably be to look at platoon splits to see if extreme pull lefties are hurt more than extreme pull righties, since lefties get shifted much more often. The next explanation is more of an opinion gathered from my playing days and could easily be debated: the ability to use the whole field is the sign of a better-rounded hitter. Being an extreme pull hitter often means you lock yourself into one approach, one swing, and one pitch. I have no statistical evidence to back that up; it is just what I have gathered on the field. I think it is good to sometimes throw the eye test into statistical analysis to keep the study grounded.

It seems that performance in high-leverage situations is more about mentality and the ability to adjust one’s approach to the situation. The overall conclusion I draw is that K% is detrimental to one’s ability to perform in high-leverage situations, but not by much. There are good hitters who strike out a fair amount, but those good hitters are still good hitters, as demonstrated by the strong relationship between stats like wOBA and WPA. Yes, Aaron Judge struck out a lot last season and had a big dip in relative high-leverage performance, as his Clutch metric shows, but the other 29 teams all wish they had him. Even when looking at BB/K ratio, the leaders at the very top also show the highest WPAs, but the leaders beyond that do not follow suit.

To see the relationship between K% and WPA more visually, below is a scatter plot comparing the two metrics with a line of best fit.

Looking at the scatter plot of WPA vs. K%, we can see a slight downward relationship, but the data is mostly scattered around the means, helping confirm my conclusion above. There are not as many high-K guys with high WPAs as there are high-K guys with lower WPAs, but that doesn’t tell us much, because there are obviously more average and below-average players than above-average ones. I’ll let you guess the player who had an over-30% K rate yet a WPA well over 5.
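The line of best fit in a plot like that is just a degree-one least-squares polynomial. Here is a sketch on invented K%/WPA pairs, not the article’s data:

```python
import numpy as np

# Toy K% / WPA pairs, invented for illustration.
k_pct = np.array([0.12, 0.15, 0.18, 0.22, 0.25, 0.28, 0.31])
wpa = np.array([1.2, 0.8, 1.5, 0.2, -0.3, 0.4, -0.9])

# Slope and intercept of the least-squares line WPA ~ K%.
slope, intercept = np.polyfit(k_pct, wpa, 1)
```

Feeding `slope * k_pct + intercept` to a plotting library alongside the raw scatter reproduces the chart; a negative slope is the "slight downward relationship" described above.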

I know the matrix graph is a little overwhelming, but we can see that K% does not show a strong visual relationship with much of anything. We see a slight upward tick in the slope when measuring K% against ISO, but the points are still predominantly scattered around the means. We also see a slight downward tick in the slope of GB% against K%. Besides the obvious strong relationship between wOBA and WPA, BB% does show a positive visual relationship with WPA. The fact that ISO shows a relationship with both K% and WPA is interesting; perhaps ISO helps capture the batted-ball quality I have been trying to account for. The 2s after wOBA and ISO indicate their squared variables.

It seems that no single trait makes a hitter good in high-leverage situations. Exceptionally well-rounded hitters, such as Joey Votto and Mike Trout, seem to be consistently ahead of everyone else in high-leverage situations. Even so, they are not exactly the same types of hitters, though both walk a lot and make quality contact with the baseball. I believe that performance in high-leverage situations comes down to mentality and the ability to keep a solid approach in the face of pressure. The Clutch metric itself is probably better for looking at how batters deal with pressure, but players know what is high leverage and what is not and respond accordingly.

Interestingly enough, though I won’t go into much detail here, I took O-Swing% and Z-Swing% and measured them both independently against WPA as well as in the full model. I found that O-Swing%’s effect on WPA is statistically distinguishable from zero while Z-Swing%’s is not. O-Swing%, of course, showed a negative relationship with WPA. Disciplined batters who can lay off pitches outside the zone, thereby recognizing good ones, are indeed poised to do better in big spots (if that is not stating the obvious). I don’t think anyone will pinpoint the exact qualities of a good situational hitter, but the best pure hitters will have the edge in WPA, even if they are prone to striking out.

Using Statcast Data to Predict Future Results


Using Statcast data, we are able to quantify and analyze baseball in ways that were until recently immeasurable. In particular, with data points such as Exit Velocity (EV) and Launch Angle (LA), we can estimate an offensive player’s true level of production and use that information to predict future performance. By “true level of production,” I mean the outcomes a batter should have experienced, based on how he hit the ball throughout the season, rather than the outcomes he actually experienced. Now that we better understand the roles EV and LA play in the outcome of batted balls, we can use tools like Statcast to better comprehend performance and better predict future results.

Batted Ball Outcomes

Having read several related posts and projection models, particularly Andrew Perpetua’s xStats and Baseball Info Solutions’ Defense-Independent Batting Statistics (DIBS), I sought to visualize the effect that EV and LA have on batted balls. For those unfamiliar with the Statcast measurements, EV is the speed of the ball off the bat in MPH, while LA is the vertical trajectory of the batted ball in degrees (°), with 0° being parallel to the ground.

The following graph shows how EV and LA together can explain batted-ball outcomes, and allows us to identify pockets and trends among different ball-in-play (BIP) types.


The following two density graphs show the density of batted-ball outcomes by EV and LA, independent of one another.

As expected, the peaks in density are located where we noticed pockets in Graph 1. Home runs tend to peak at 105 MPH and roughly 25°, outs and singles are more evenly distributed, and doubles and triples fall somewhere in between, with peaks around 100 MPH and 19°. These graphs substantiate the understanding that hitting the ball hard and in the air correlates with a higher likelihood of extra-base hits. I found it particularly interesting that triples resembled doubles more than any other batted-ball outcome with regard to EV and LA densities. Triples are often the byproduct of variables such as larger outfields, defensive misplays, and batter sprint speed, three factors not taken into account in this project.

Expected Results

My original objective was to create a table of expected production for the 2017 season using data from 2017 BIP. Through trial and error, I shifted my focus toward using this methodology to understand how much expected stats built on EV/LA can help in predicting future results. With Statcast implemented in all 30 Major League ballparks beginning in 2015, I gathered data on all BIP from 2015 and 2016 from Baseball Savant’s Statcast search database. In addition, I created customized batting tables on FanGraphs for the individual 2015, 2016, and 2017 seasons for all players with a plate appearance (PA).

After cleaning the abundance of Statcast data I had downloaded, I assigned each BIP a value of 0 or 1 (No Hit or Hit, respectively) and a value of 1, 2, 3, or 4 for Single/Double/Triple/Home Run, respectively. Comparing hits and total bases to the FanGraphs statistics for each individual, I made sure all BIP were accounted for and matched their real-life counting statistics. I then bucketed the data by EV and LA in increments of 3 MPH and 3°, along with bat side (L/R) and landing location of the batted ball (Pull, Middle, Opposite), using Bill Petti’s horizontal spray angle equation. While projection tools often take into account age, park factors, and other variables, my intention was to isolate the impact of these four data points and see how much information this newly quantifiable batted-ball data can give us.
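The bucketing step can be sketched with plain Python. The function below is a hypothetical helper, not the author’s code; it simply floors EV and LA to 3-unit edges and carries the bat side and spray location along as part of the key.

```python
import math

def bucket(ev_mph, la_deg, stand, loc, ev_width=3, la_width=3):
    """Assign a batted ball to an (EV, LA, bat side, spray location)
    bucket; EV/LA edges are floored to the nearest multiple of the width."""
    ev_lo = math.floor(ev_mph / ev_width) * ev_width
    la_lo = math.floor(la_deg / la_width) * la_width
    return (ev_lo, la_lo, stand, loc)

# Example: a 103.4 MPH, 22.7-degree opposite-field ball from a RHB lands
# in the 102-105 MPH / 21-24 degree bucket used as an example below.
key = bucket(103.4, 22.7, "R", "Opposite")  # (102, 21, 'R', 'Opposite')
```

Flooring negative launch angles works too: a -5.2° ball falls in the -6° to -3° bucket.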

By calculating Batting Average (BA) and Slugging Percentage (SLG) for every bucket, we can more accurately represent a player’s true production by substituting these averages for the actual outcomes of similar batted balls. For instance, a ball hit the opposite way by a RHB in 2015 and 2016 at between 102 and 105 MPH and between 21° and 24° was worth a .878 BA and a 2.624 SLG, the values I substitute for any batted ball in this bucket.
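The per-bucket averages come from a simple group-and-divide pass over the BIP records. The records below are toy values standing in for the real Statcast rows:

```python
from collections import defaultdict

# Toy BIP records: (bucket key, hit flag 0/1, total bases 0-4).
# Keys and values are invented for illustration.
bip = [
    (("R", "Opposite", 102, 21), 1, 2),
    (("R", "Opposite", 102, 21), 1, 4),
    (("R", "Opposite", 102, 21), 0, 0),
    (("L", "Pull", 90, 3), 0, 0),
    (("L", "Pull", 90, 3), 1, 1),
]

totals = defaultdict(lambda: [0, 0, 0])  # hits, total bases, BIP count
for key, hit, tb in bip:
    totals[key][0] += hit
    totals[key][1] += tb
    totals[key][2] += 1

# Per-bucket (BA, SLG) per BIP: the values substituted for each
# batted ball that falls in the bucket.
bucket_stats = {k: (h / n, tb / n) for k, (h, tb, n) in totals.items()}
```

Summing a batter’s substituted bucket values over his BIP and dividing by his at-bats then yields xBA and xSLG, as described next.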

While a player’s skills may be unchanged, his opportunity in one season can be tremendously different from the next, affecting his counting statistics. With a wide range of factors that can change playing time, from injuries to trades to position battles, rate statistics are steadier year to year than counting statistics. Rate statistics such as BA and SLG typically correlate better because they are insulated from the variability and uncertainty of playing time, on which counting statistics depend heavily. Totaling the BA and SLG for each batter’s BIP from the 2015 and 2016 seasons, I divided by his respective at-bats for that year to determine his expected BA (xBA) and SLG (xSLG).

Year-to-Year Correlation Rates for BA/SLG/xBA/xSLG to Next-Season BA/SLG, 2015 to 2016 and 2016 to 2017 (min. 200 AB per season)

[table of correlation rates]
While the correlation rates for xBA and xSLG are not dramatically stronger season to season than their BA and SLG counterparts, we are seeing positive steps toward predicting future performance. What stands out is the decline in the SLG and xSLG correlations from 2015/2016 to 2016/2017, and my suspicion is that batters are beginning to use Statcast data themselves. It is widely known that a “fly-ball revolution” has been taking place, and many players have embraced it by changing their swings and trying to elevate and drive the ball more than ever. With a new MLB home-run record set in 2017, I would not be surprised to see the correlation rates jump back up next season, now that the trend has been identified and the batted-ball data should reflect it.

By turning singles, doubles, triples, and home runs into rate statistics per BIP, we can put aside playing-time variables and apply these rates to actual opportunities. Similar to calculating xBA and xSLG, I created a matrix of expected BIP rates (xBIP%) for each possible BIP outcome (x1B%, x2B%, x3B%, xHR%, xOut%). In other words, for each EV/LA/Stand/Location bucket, I calculated the percentage of batted balls in that bucket producing each outcome (e.g., 99-102 MPH / 18-21° / RHB / Middle: x1B% = 0.012, x2B% = 0.373, x3B% = 0.069, xHR% = 0.007, xOut% = 0.536), then summed the expected outcomes for each batter, giving his expected batting line for that season.

Using this information, I wanted to find the actual and expected rates per BIP for each possible outcome (actual = 1B/BIP, expected = x1B/BIP, etc.) and apply them to the next season’s BIP totals. For example, by taking the 2B/BIP and x2B/BIP for 2015 and multiplying by 2016 BIP, I can find the correlation rates for actual and expected results, regardless of opportunity and playing time in either season. Below are the correlations from 2015 to 2016 and 2016 to 2017, with both actual and expected rates applied to the BIP from the following season.

Correlation Rates for Actual and Expected Batted-Ball Outcomes, 2015 to 2016 and 2016 to 2017 (min. 200 BIP per season)

[table of correlation rates]
Looking at the table above, the expected statistics have a higher correlation with the following season’s production than a player’s actual stats do. The lone area where actual stats prevail in the year-to-year correlations is projecting triples, which should come as no surprise. Two noticeable factors this study neglects are park factors and batter sprint speed. Triples, more than any other batted-ball outcome, rely on these two factors, as expansive power alleys and elite speed can easily turn doubles into triples.
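The rate-application step behind those correlations can be sketched in a few lines. All of the rates, BIP totals, and home-run counts below are hypothetical stand-ins for the real player data:

```python
import numpy as np

# Hypothetical season-one x2B/BIP rates and season-two opportunities
# for five batters, with their actual season-two home runs.
x2b_rate_s1 = np.array([0.060, 0.045, 0.075, 0.050, 0.080])
bip_s2 = np.array([420, 380, 450, 300, 410])
hr_s2 = np.array([28, 15, 35, 12, 38])

# Project season-one rates onto season-two opportunities, then
# correlate against the season-two outcome being predicted.
projected_2b = x2b_rate_s1 * bip_s2
r = float(np.corrcoef(projected_2b, hr_s2)[0, 1])
```

Swapping in actual 2B/BIP for x2B/BIP, or a different outcome column for `hr_s2`, reproduces each cell of the correlation tables.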

One interesting area where this projection tool flourishes is x2B/BIP to home runs in the following season. By taking x2B/BIP, multiplying by the following season’s BIP, and then correlating against home runs in that second season, we see a tremendous jump from the actual rate in season one to the expected rate in season one.

Correlation Rates of 2B/x2B to HR in the Following Season, 2015 to 2016 and 2016 to 2017 (min. 200 BIP per season)

2B -> HR: [values]
x2B -> HR: [values]

With this information, we can continue to understand underlying skills and more accurately project future offensive production. By continuing to add variables to tools like this, such as age, speed, and park factors, as many projection models have done, we can incrementally gain a better understanding of the question at hand. This research attempted to show the effect EV/LA/Stand/Location have on batted balls and how that data can help us find tendencies, underlying skills, and, namely, competitive advantages.

With strong correlation rates between xBIP% and the next season’s actual results, it is exciting to find another area of baseball that gives us the information and ability to better understand players and their abilities. With Statcast, we are looking to build a better comprehension of what has happened and how we can use it to know what will happen, and it appears that we can.

Remembering April 2016 Odubel Herrera

Ah, Odubel Herrera. When you read his name, what’s the first thing that pops into your mind? Is it the lapses on the basepaths? Is it the absolute joy he displays on the baseball field? Is it his stellar defense in center field? Is it his free-swinging approach at the plate? Is it the BAT FLIPS? No matter what Odubel Herrera brings to your mind, I doubt your first thought is of a disciplined hitter with an advanced approach at the plate. But, what if it was?

Let’s take a look back to April of 2016. Odubel Herrera was starting his second big-league season, coming off a very productive rookie year in which he played excellent defense in center field, sprayed hits all around the field, and, quite frankly, was one of the few reasons to watch the 2015 Phillies. However, Odubel walked at a well below-average rate of 5.2% and struck out more than average, at 24%. He didn’t hit for much power, and his stat line was greatly aided by an unsustainable .387 BABIP. I believed, probably rightfully so, that if Odubel failed to make changes at the plate, his offensive value would suffer. With that said, what Herrera did at the plate in April 2016 was one of the most unexpected transformations I’ve ever seen from a major-league player.

So, what changed for Odubel in April of 2016? To get an idea of how different he was at the plate, take a look at this chart, comparing his career plate-discipline numbers to his April 2016 plate-discipline numbers.


Odubel Herrera’s Plate Discipline Metrics

Metric        April 2016    Career
O-Swing%      21.1          36.3
Z-Swing%      65.4          68.9
O-Contact%    66.0          64.3
Z-Contact%    84.1          85.5
SwStr%        8.6           11.7


Wow. While these rates come from a fairly small sample of 104 plate appearances, the differences between his April 2016 numbers and his career numbers are absolutely stunning. His O-Contact% and Z-Contact% were within two percentage points of his career numbers, and his Z-Swing% was only three and a half points lower.

Meanwhile, Herrera’s O-Swing% in April 2016 was over 15 points lower than his career rate. For reference, the 2017 players with the most similar O-Swing rates to April 2016 Odubel Herrera were Anthony Rendon, Brett Gardner, and Chase Utley. As you would expect, those are three extremely disciplined hitters with good control of the strike zone; they ranked 12th, 13th, and 14th in O-Swing% among hitters with 300 or more plate appearances. Very impressive.

When we look at the 2017 players with O-Swing rates similar to Odubel’s career number, we see a much different trio: Tommy Joseph, Darwin Barney, and Jose Iglesias. Not exactly a group of fearsome hitters. They ranked 251st, 252nd, and 253rd out of 287 hitters with at least 300 plate appearances. Not so impressive.

Could Herrera’s improved discipline stats simply have been a product of being pitched outside the zone more often? No. Throughout his career, Odubel has seen 41.9% of pitches inside the zone. In April 2016, he was actually pitched inside the zone more often, with 48.9% of pitches in the strike zone.

Did Herrera make a conscious decision to be more selective at the plate? It looks like that may be the case. I found a few articles about Herrera’s improved plate discipline in April 2016, including this one from the Morning Call. The article speaks of Odubel Herrera and his father having an issue with his relatively high strikeout rate during his rookie season. Unhappy with his high strikeout rate, Herrera came into 2016 planning to display a more patient approach at the plate. That, he certainly did.

So, what changed after April of 2016? Odubel Herrera quickly regressed toward his career-average O-Swing%. He finished 2017 with a hideous 40% O-Swing rate, one of the worst marks in baseball. He walked only 5.5% of the time and struck out 22.4% of the time, rates similar to those of his rookie season. He put up an even 100 wRC+, which is actually pretty impressive for someone who swings at so many bad pitches. But if Herrera ever wants to be more than a league-average to slightly above-average hitter, he’ll need his plate-discipline metrics to look more like they did in April 2016 than they have throughout his career.

Odubel Herrera walked an incredible 22.1% of the time in April 2016, while striking out only 17.3% of the time. For one month, Odubel Herrera borrowed Joey Votto’s eyes at the plate. Since then, he’s looked a lot more like Odubel Herrera. Will he ever look like Joey Votto again? For a player as unique and ever-changing as Odubel Herrera, I don’t want to rule it out. This is what’s fun about baseball. In a few months, we’ll get to see.

The 2017 BABIP All-Star Team

Ah, BABIP, the stat of luck. For those wondering, BABIP stands for Batting Average on Balls In Play: basically, a player’s batting average excluding home runs and strikeouts. Because so much of what happens to a ball in play is out of the hitter’s control, it’s often viewed as a stat of luck.
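For the curious, the standard formula subtracts home runs and strikeouts from the at-bat denominator and adds back sacrifice flies. A quick sketch with an invented batting line:

```python
def babip(h, hr, ab, k, sf):
    """BABIP = (H - HR) / (AB - K - HR + SF): batting average on
    balls put in play that stay in the park."""
    return (h - hr) / (ab - k - hr + sf)

# Illustrative line (not a real player): 150 H, 20 HR, 500 AB, 100 K, 5 SF
example = round(babip(150, 20, 500, 100, 5), 3)  # 0.338
```

League-average BABIP typically sits around .300, which is the baseline the season marks below should be judged against.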

So who was lucky in 2017? Who are the 2017 BABIP All-Stars? Here are the qualified (unless noted otherwise) BABIP leaders at each position.

Catcher: Alex Avila, .382 BABIP *Min 300 PA

Whoa, .382! Yeah, that’s not going to happen again, at least not over 300 plate appearances. Alex Avila had a nice bounce-back in 2017, his best season since his career year in 2011. But what do we make of it, considering his sky-high BABIP? Well, for starters, Avila had the second-highest hard-hit rate among all players with at least 300 plate appearances, behind only J.D. Martinez. Yes, Alex Avila’s ridiculous 48.7% hard-hit rate was better than Aaron Judge’s, Giancarlo Stanton’s, Joey Gallo’s, Miguel Sano’s, everyone’s but Martinez’s (which was 49%, if you’re wondering). A high hard-hit rate does generally correlate with a higher BABIP, but we have no reason to believe he’ll even sniff a 40% hard-hit rate again, and with limited speed it’s hard to imagine his BABIP anywhere near .382.

2018 Expectations: .320

First Base: Trey Mancini, .352 BABIP

2017 was Trey Mancini’s first full big-league season, so we have to look back at his minor-league numbers for comparison. A .352 BABIP seems pretty high for a lumbering first baseman, but Mancini actually posted high BABIPs regularly in the minors. He held a BABIP above .344 in five different stints of 52+ games at various minor-league levels, including a .400 BABIP over 84 games at AA in 2015. Even in his largest sample, 125 games at AAA in 2016, he posted a .351 BABIP!

He holds a decent hard-hit rate at 34.1% and was able to avoid a lot of infield fly balls. So, while .352 may seem high, I'd expect Mancini to consistently post an above-average BABIP. I do anticipate his norm being a little lower – around .335 – but overall I don't think this is an out-of-the-ordinary BABIP.

2018 Expectations: .335

Second Base: Jose Altuve, .370 BABIP

Look, Jose Altuve is one of the best in the game, and a perennial first-round pick in fantasy baseball. There’s no questioning his talent, but a .370 BABIP should be viewed as really high for any player. And for Altuve, this was the highest mark of his career, although not by much. Altuve achieved a BABIP of .360 back in 2014 and hit the .347 mark in 2016.

Altuve is a high-contact hitter with a lot of speed. His BABIP will generally run higher than most players', but .370 is pushing it. I'd peg his expectations at .340-.350 for 2018.

2018 Expectations: .345

Third Base: Chase Headley, .341 BABIP

A .341 BABIP is quite a bit higher than Chase Headley's career BABIP of .328, but not that extreme. His career high, albeit in only 113 games, was .368 back in 2011. But what really stands out to me here is his .303 BABIP in 2016. Headley's 2016 and 2017 seasons were nearly identical when you dig into the numbers. Similar hard-hit rates, strikeout and walk rates, and an identical ISO. Even down to the infield fly-ball percentage, the stats show a very similar season, but the BABIP results were very different. So what gives?

Well, BABIP is generally viewed as luck, and I think this is a case where Headley had some bad luck in 2016 and some good luck in 2017. I'd put his BABIP expectations below even his career mark, somewhere around .320.

2018 Expectations: .320

Shortstop: Tim Beckham, .365 BABIP

I feel like Tim Beckham has been in the game for years, but 2017 was really his first full season in the bigs. A former first overall draft pick, Beckham finally started to break out last year. His strikeout rate continues to be an issue, but he showed promise in several areas. We don’t have great data to compare his BABIP to, but Beckham has good speed and hits it hard when he makes contact. One of the best numbers to support a high BABIP is his extremely low infield fly-ball percentage, 3.7%. Regardless, a .365 BABIP isn’t going to happen again. I think FanGraphs’ projections of .330 nails it right on the head.

2018 Expectations: .330

Left Field: Tommy Pham, .368 BABIP

Tommy Pham, what a season! Where did this come from, what the baseball Tommy? Well, Pham had shown strong signs in recent years at AAA, but struggled mightily with strikeouts in 2016. Wow, what a difference some vision correction can make! For those unaware, in 2008 Pham was diagnosed with a degenerative eye condition, which has recently been treated. There are numerous articles on this, but here is one from the St. Louis Post-Dispatch to check out. So what do we do here? Well, while Pham did strike out a ghastly 38.8% of the time in 2016, he still maintained a strong BABIP of .342. His hard-hit rate remains strong and he has a nice line-drive rate. And let's not forget, Pham does have some wheels, too.

There’s not a great answer for this one, but we have to expect a dip in 2018. Numbers are supportive of a higher BABIP, but not at .368.

2018 Expectations: .340

Center Field: Charlie Blackmon, .371 BABIP

This guy just keeps getting better. Sure, Charlie Blackmon enjoys the Coors Field effect, but his numbers are still very impressive. I’m going to make this one simple. Blackmon is a great player with speed and has increased his hard-hit rate by almost 5%, but even Coors Field won’t help him to a BABIP of .371 again. I do, however, believe he can repeat his mark from 2016, .350.

2018 Expectations: .350

Right Field: Avisail Garcia, .392 BABIP

I’ve actually written about Avisail Garcia in more detail elsewhere, but to summarize – this isn’t going to happen again. This was the highest BABIP by a qualified hitter since 2013, and Garcia has never been anywhere close to this in his big-league career. Yes, he has shown improvements in numerous ways, but expect this BABIP to come crashing down to earth and landing at around .320.

2018 Expectations: .320

Designated Hitter: Domingo Santana, .363 BABIP

Did you know Domingo Santana had a .359 BABIP in 2016? Right off hand, it would seem .363 isn’t too far off expectations for the young slugger who is finally showing his potential. A .363 BABIP shouldn’t be expected for anyone, but I have a hard time arguing against it for Santana. Take a look at some of his AAA BABIP totals – 2014: .408 in 120 games, 2015: .429 in 75 games with the Astros and .467 in 20 games with the Brewers. Crazy! He has good speed and hits the ball hard. Did you know he had the second highest line-drive rate of all qualified hitters in 2017 at 27.4%?

2018 Expectations: .345

And just for fun – Pitcher: Robbie Ray, .433 BABIP *Min 50 PA

Who doesn’t like to talk about pitcher hitting stats! With a qualifier of 50 minimum plate appearances, Robbie Ray takes the cake for pitchers with a whopping .433 BABIP. What else do we even need to say here?

2018 Expectations: It doesn’t matter

Stars and Scrubs Forever

This post was originally from my website, and one image is courtesy of


Every offseason, each team’s GM and front office has a choice to make: should we stock up on depth, or go sign the big fish on the free agent market? Recently, as Travis Sawchik of FanGraphs pointed out, teams have been trending towards the depth route, but when it comes to free agent hitters, teams are far better off allocating their money towards just a few stars. Here’s why:


I. Depth-based teams perform no better than Stars and Scrubs teams

Back in 2014, Jonah Keri and Neil Paine from FiveThirtyEight did some research (they, in turn, cite FanGraphs) to show that the way a roster is constructed has little effect on how it performs. Here is the chart they produced based on the data they found:

paine-out-of-sample-war.png

On their chart’s x-axis, the data shows how balanced a team is, while on the y-axis, the chart displays how well the team performed. While the article makes sure to note that at the highest extremes, depth works, there is not an overall trend to be found. The teams who had the most total contributions from the sum of their players did the best, whether that came concentrated on a few superstars or it came from every individual. And, when one thinks about it, it makes sense that neither strategy would be perfect. Banking on a few players seems to come with risks of health, but at the same time if they can stay healthy, those stronger players may be more consistent. Jonah and Neil also make an interesting point with regards to the trade deadline and further roster building after its base: It’s far easier and cheaper to replace a scrub at second base or left field with an average player than to replace an average player with a star.

So, to be clear, there is little correlation between how a team spreads out its roster and how well it does in a season. Both approaches have advantages and disadvantages, which turn out to be pretty equal, as shown by the data. The battle then becomes about value, which I wrote a little about with regard to the current free agent class. Between two teams that get equal contributions from the sum of their players, which roster construction type is cheaper? With the exception of an especially greedy owner, the team that chooses the more cost-efficient makeup should be able to afford an extra player for the same price, pushing it just past its competitor.


II. Stars and Scrubs is a more cost-efficient method of roster construction than Depth

To find this information, I built a Python program that looks at tabular data from FanGraphs and MLB Trade Rumors. Along the x-axis of my program's graph (below) is the WAR of various position players in their contract years, and along the y-axis is the average annual value of the contract they proceeded to sign. Using a polynomial regression model, I made a curve of best fit (in red), which should show about how much it would cost annually to sign a player of each WAR value.

salary vs war graph.png
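For those curious, the core of that fitting step looks something like this: a minimal sketch with made-up data points standing in for the actual FanGraphs/MLB Trade Rumors scrape.

```python
import numpy as np

# Illustrative contract-year data: WAR and contract AAV in $MM.
# These points are invented to mimic the shape described in the text,
# not the real scraped dataset.
contract_year_war = np.array([0.5, 1.0, 1.6, 2.0, 2.5, 3.0, 3.3, 4.0, 4.5, 5.0, 5.5, 6.0])
contract_aav_mm = np.array([2.0, 4.0, 6.7, 9.0, 11.0, 12.9, 14.4, 17.0, 18.5, 19.5, 20.1, 21.0])

# Cubic curve of best fit: AAV as a function of contract-year WAR (the red curve).
curve = np.poly1d(np.polyfit(contract_year_war, contract_aav_mm, deg=3))

# Its derivative is the marginal cost of one extra win (the blue curve).
marginal_cost = curve.deriv()

def cost_per_win(war):
    """The green curve: contract AAV divided by WAR, in $MM per win."""
    return curve(war) / war
```

With any data shaped like the article describes, `cost_per_win` comes out cheaper for a star than for an above-average regular, which is the whole argument.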

The basic red curve takes on a cube-root-like shape, steep in the middle and flattening out at either end. That means it costs more money to tack on an extra share of a win to an average player than to a great or a poor player. That concept is best illustrated by the blue graph (the red line's derivative), which peaks at a 2.51-win player, just above average (2.0), meaning each extra part of a win is most expensive to add for players with a WAR between 2 and 3.

The green money line, however, is the most important, and you don't need calculus to understand it. Let's zoom in a little.

cost per win zoom.png

On the x-axis is the total WAR that a free agent accumulated last season, and on the y-axis is the amount of money that each of those wins costs (contract AAV divided by the WAR contribution). The math says that as a player's WAR approaches zero, their price per win approaches infinity, but we'll assume that a team can get a replacement-level player for the MLB minimum wage, around $500,000. The lesson there is simply that buying a player with a WAR under 1.0 is a bad idea (but does buying a player with a negative WAR earn you money per win?). A 1.0-WAR player starts out as a rip-off per win, but the price falls quickly from there. A 1.6-WAR player represents the local minimum in cost per win, at only $4.18MM. The price of a win then starts to rise again for average and above-average players, hitting a local maximum of $4.35MM per win for a 3.3-WAR player. But then, as foreshadowed by the plateauing of the red curve and the decrease in the blue curve, the green curve begins to drop. By the time it hits a 5.5-WAR player, a win costs only $3.66MM, which is as far as the data will take the line without overfitting the smaller sample up top.

The local minimum at 1.6 WAR is important for a team that only has money for maybe one very minor investment (namely, do not invest in a below-great player worth much more or much less than 1.6 WAR, because teams can always promote or claim 0.0-WAR players for minimum wage), but the ever-decreasing price tag per win of the best players is the most important part. To be a top-hitting team in 2017, the nine players in your lineup needed to total around 27 WAR for the season — on average 3 WAR per player. To build this kind of roster on pure depth, that is, with every player equal, each player would command an average annual value of $12.9 million, for a total cost of $116.1MM. However, a team that builds its 27 WAR with five 5.5-WAR hitters and four replacement-level hitters will only spend $102.5MM. If it wants to spend the same amount of money as the first team, it could add an extra 3.25-WAR bat, making its team superior (that's the difference between the Cardinals' and Mets' offense, or the Diamondbacks' and Braves' offense) to its depth-based counterpart.
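The arithmetic behind that comparison is simple enough to sketch, using the per-win prices quoted above ($12.9MM AAV for a 3-WAR bat, roughly $3.66MM per win at 5.5 WAR, and the ~$0.5MM league minimum for replacement-level players):

```python
def payroll(players):
    """Total payroll for a list of (war, cost_per_win_in_$MM) pairs."""
    return sum(war * cost_per_win for war, cost_per_win in players)

# Nine average (3-WAR) hitters at $12.9MM AAV, i.e. ~$4.3MM per win:
depth_team = payroll([(3.0, 4.3)] * 9)  # $116.1MM

# Five 5.5-WAR stars at ~$3.66MM per win, plus four replacement-level
# hitters at the ~$0.5MM league minimum. (The article's $102.5MM figure
# uses a less-rounded per-win price, so this lands slightly higher.)
stars_team = payroll([(5.5, 3.66)] * 5) + 4 * 0.5
```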

If you exclude the ability to add replacement level players for minimum, a big advantage for more extreme stars and scrubs teams is keeping payroll down. Here are the total payrolls of various 27-WAR roster constructions, with the deeper ones at the top and the shallower ones at the bottom:

Lineup Makeup Payroll
9x 3 WAR $116.1MM
4x 3.5 WAR, 4x 2.5 WAR, 1x 3 WAR $117.7MM
4x 4 WAR, 4x 2 WAR, 1x 3 WAR $116.5MM
4x 4.5 WAR, 4x 1.5 WAR, 1x 3 WAR $103.7MM
4x 5 WAR, 4x 1 WAR, 1x 3 WAR $103.3MM
4x 5.5 WAR, 4x 0.5 WAR, 1x 3 WAR $105.3MM


There’s a sudden drop-off in payroll once a team gets below a certain amount of depth, which coincides with both the part of the green graph at the end that becomes a really steep downhill and the part of the small valley in the beginning of the curve. If it didn’t already seem clear, this should answer up any questions. A stars and scrubs roster provides much more value for a team than a depth-based one, allowing them additional payroll space to add better players. The FiveThirtyEight data from Part I showed that roster makeup does not affect team record, and that team talent was decided purely based on how good the sum of the players are. By saving money through a stars and scrubs construction, a team can add more good players, therefore adding to that sum, and becoming the better team.


III. Conclusion

The collected data shows a lot, but it's far from perfect. For starters, I only focus on WAR, which is a terrific statistic but by no means tell-all (I've written about the topic in the past). Additionally, I only look at FanGraphs' fWAR, which is only 1/3 of the WAR story. Furthermore, the method assumes that free agents will replicate their previous season during the years of their contract (or at least that teams price them as if they will), ignoring aging curves. Anyone who follows baseball at all knows this is far from the truth. Teams know free agents are incredibly risky commodities, and the suggestion that a team would consider building a roster entirely out of free agents is kind of ridiculous. This is especially true for superstar free agents, who will require a longer commitment than average ones. The best method of player acquisition for value and talent has been, is, and will probably always be player development. That said, a made-up model of teams acquiring only free agents works well enough to represent the more realistic case, when a team might have to decide whether to allocate a small part of the budget to a few hitters or to only one hitter. Finally, the study only looks at hitters; an analysis of pitchers would need a whole new article.

At first, the suggestion that the best teams should be superstar-driven is a little depressing. It's fun to watch stars play, but part of the beauty of the game is that everyone in the lineup has the same chance to make a contribution. But one could also look at the findings in a much more positive light. Rebuilding teams don't need every single prospect around the diamond to work out. Having just a few players break out in superstar fashion (e.g. the 2017 Yankees, who continue to add more superstar power) can make a team instantly competitive. Signing just one or two big free agents (teams are shying away, but J.D. Martinez plus Eric Hosmer could turn any franchise around if they continue to grind after signing) can turn a mediocre roster into a World Series contender. It's all very good for the parity of the game. The power of just one or two stars can light up a whole team.

Is It Time to Rethink Hitter’s Counts?

Hitters have always lived by the idea of working the count in their favor, not only to get closer to a walk, but to force the pitcher to be more predictable. Limit the pitcher to just throwing you a fastball, and give yourself a better chance at guessing correctly. Pitchers do not want to walk people and will throw their fastball much more predictably as they fall behind in the count.

Take Clayton Kershaw, for example. As Jeff Sullivan pointed out in an excellent piece, Kershaw strongly avoids his curveball in hitter's counts. A pitch he throws roughly 17 percent of the time has been almost nonexistent in hitter's counts. For any hitter, getting to a friendly count against Kershaw means he does not have to worry about seeing the curveball. Take a look at how he used all of his pitches, by count, in 2017.

via Baseball Savant

Get yourself in a hitter-friendly count and sit fastball. Of course, it is easier said than done to hit Kershaw, but it has led me to wonder whether it is right to keep throwing so many fastballs in counts where hitters are anticipating fastballs.

To start, I pulled the results for off-speed and fastball usage in hitter’s counts for all pitchers in 2017 (min 50 off-speed and fastballs each in hitter’s counts). Just to try and get a sense as to whether there was any relation, I first took a look at off-speed usage in hitter’s counts vs xwOBA.
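The bucketing involved can be sketched roughly like this. To be clear, this is a guess at the mechanics: the Statcast pitch-type codes, the decision to lump cutters in with fastballs, and the exclusion of two-strike counts are all my assumptions, not the article's stated definitions.

```python
from collections import defaultdict

# Statcast-style pitch type codes; grouping the cutter (FC) with
# fastballs is an assumption on my part.
FASTBALLS = {"FF", "FT", "SI", "FC"}

def is_hitters_count(balls, strikes):
    # Batter ahead in the count; two-strike counts are excluded here
    # since 3-2 changes the calculus (definitions vary).
    return balls > strikes and strikes < 2

def avg_xwoba_by_group(pitches):
    """pitches: iterable of dicts with pitch_type, balls, strikes, xwoba."""
    totals = defaultdict(lambda: [0.0, 0])
    for p in pitches:
        if not is_hitters_count(p["balls"], p["strikes"]):
            continue
        group = "fastball" if p["pitch_type"] in FASTBALLS else "offspeed"
        totals[group][0] += p["xwoba"]
        totals[group][1] += 1
    return {g: total / n for g, (total, n) in totals.items()}
```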

Nothing to really find here; a lot of randomness. What about fastball usage in hitter’s counts?

There is a small relationship here, but not too much to glean from this, even from the guys who have bigger (faster) fastballs.

But pay attention to the y-axis for both plots: the fastball group is centered higher than the off-speed group. It is not something small, either.

– Avg xwOBA on off-speed (hitter's count): 0.387
– Avg xwOBA on fastballs (hitter's count): 0.437

Much of the concern here, I am sure, revolves around the belief that pitchers throw fastballs in these counts because there is a significantly higher chance of throwing a strike with a fastball than with an off-speed pitch. Well, that simply is not the case.

– Zone% off-speed (hitter’s count): 52.1 percent
– Zone% fastballs (hitter’s count): 58.2 percent

We see only a six-percentage-point difference here. There is a lot that goes into throwing off-speed pitches for strikes, but the gap is smaller than I would have thought. You might assume some pitchers would not have this much command of their off-speed stuff, but these are big-league pitchers.

So, we have pitchers who can throw off-speed pitches in the zone nearly as often as they do fastballs when hitters are ahead in the count. How have hitters fared against those pitches in the zone?

This is from the same two groupings of pitchers (min 50 off-speed and fastballs in hitter’s counts), so there is some overlap for some players. But, I hope you can see the off-speed grouping is centered a little more left than the fastball groupings. For these players, the average off-speed exit velocity was roughly two MPH lower than the average fastball exit velocity (82.4 vs 85 MPH). The league average sees a similar split as well (88.1 vs 90.7 MPH). To put this velocity gap in perspective, among the 387 pitchers who threw at least 750 pitches in 2017, the standard deviation of exit velocity was 1.56 mph.

One thing I have neglected so far is pitch location. Oftentimes, it's hard enough for hitters to adjust and hit a pitch they weren't expecting, so I could be reading too much into pitch type alone. As mentioned above, the exit velocities for off-speed pitches in these counts were lower than for fastballs (roughly 2 MPH slower). To get a sense of how pitchers have done this, it's important to look at where they've located these pitches. I started by sorting for batted balls hit 85 MPH or less in hitter's counts.

It’s important to note that both of the concentrated groupings are located in the zone. Most importantly, but not so surprisingly, the majority of these pitches come on the lower, outside part of the plate. However, to generate weak contact, you’d expect pitchers to be more fine in their location. This is still pretty dependent on fitting pitches on the lower third, but it’s getting weak contact on pitches in the zone.

The pitch groupings themselves are hazy and hard to really discern. To get a more definite look at what damage is being done, it's best to look at xwOBA by zone for the different pitcher-vs.-hitter matchup combos.

I’ll be talking about these different pitch zones as they are shown in PitchF/x and Baseball Savant data.


I apologize for not being able to present this information in a more palatable fashion, but I hope you can see that it's more than just throwing off-speed low and away.

There’s a lot to digest in this. Depending on pitcher handedness, there are 50 and 80-point swings in xwOBA for pitches thrown in the same location. There are some issues with a lack of data, but only for the corner zones.

There are a few caveats to all of this. There was nothing direct about throwing more off-speed pitches in hitter's counts that led to better results; there is a smaller sample of off-speed pitches thrown versus fastballs in hitter's counts, and sequencing is always a factor that is hard to build in. And maybe it is not fair to lump together all counts where the hitter is ahead; 1-0 certainly is not the same as 3-0. But enough of the general convention still seems to be in place today: even a pitcher like Clayton Kershaw becomes more predictable and narrows his arsenal after falling behind 1-0.

But it is time for pitchers to expand their arsenals and use their off-speed pitches more often in hitter’s counts. The league as a whole is throwing offspeed pitches 29% of the time when down in the count, and that number has been gradually increasing season after season. Pitchers can certainly throw their off-speed pitches in the zone nearly as often as they can their fastballs, and to better results as well. Much of the hitter’s advantage when the count is in his favor is that he has a better idea as to what pitch is coming. Given the skill of MLB pitchers, it is an advantage that very well could be taken away to favorable results.

(all data via Baseball Savant)

Pitch Velocity and Injury: Is Throwing Less Hard Worth It?

Is throwing hard worth the DL time? Someone presented this interesting question to me on Twitter (thanks Aaron!), and I felt it was worth a full article. It certainly appears as though hard-throwing pitchers see more DL time, but at the same time, it also appears as though throwing harder is worth more in terms of on-field value. To properly answer this question, I can break it down into three sub-questions: 1. Is throwing hard worth more? 2. Are pitchers who throw hard more prone to injuries, and are they injured for longer? 3. If both of these effects exist, what is the trade-off point? Is there some magical MPH range which optimizes health and value?

If I can establish definitive answers to 1 and 2, we might have a chance at answering question 3. Let’s dive in.

Throwing hard is simultaneously better and not better

There definitely exists a popular notion that throwing harder is worth more — it is one of the most important tools used in grading prospects, and pitchers are now actively training to try to increase their velocity, in hopes that it makes them more valuable.

But that doesn’t mean that pitching harder makes a pitcher more valuable. I took a look at MLB pitchers’ average pitch velocities for four of the most common pitches — the fastball, slider, curveball, and change-up — and took a look at their value as a function of velocity, using PitchF/x pitch values per 100 pitches.

There’s a big outlier that affects the framing of the data — my guess is that Sam Gaviglio threw a single pitch that was classified as a fastball and that one pitch was hit for a home run, hence the extrapolated run value for that pitch looks silly — but the trend is still visible. There exists a very weak, positive correlation between fastball velocity and pitch value.

It’s more of the same for sliders…


…and surprisingly, even change-ups! It seems counter-intuitive, seeing as change-ups are considered valuable not for being fast, but for instead being slow and messing up hitters’ timing. While this is true, pitch values do not exist in a vacuum and must be interpreted in context. For a pitcher with a 97 MPH fastball and a 90 MPH changeup, that changeup is about equal in value to the changeup of a pitcher with a 95 MPH fastball and 88 MPH changeup, though the former pitcher is more valuable overall by virtue of throwing harder.

Indeed, if I plot a pitcher's average pitch speed across all of their pitches, a similar trend emerges — a weak, positive correlation. To get each pitcher's overall average velocity, I weighted the velocity of each pitch type by how frequently they threw it — this approximates a simple average over every individual pitch. I weighted their value per 100 pitches in the same manner.
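The weighting step looks something like this (a sketch with a made-up pitch mix):

```python
def usage_weighted_avg(pitch_mix):
    """pitch_mix: list of (usage_fraction, per_pitch_type_stat) pairs.

    Weighting each pitch type's average by its usage approximates a
    simple average over every individual pitch thrown.
    """
    total_usage = sum(usage for usage, _ in pitch_mix)
    return sum(usage * stat for usage, stat in pitch_mix) / total_usage

# Hypothetical pitcher: 55% fastballs at 93, 25% sliders at 84, 20% curves at 78.
mix = [(0.55, 93.0), (0.25, 84.0), (0.20, 78.0)]
overall_velocity = usage_weighted_avg(mix)  # 87.75 MPH
```

The same function works for the value-per-100-pitches weighting by swapping velocities for pitch values.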

Based on this very rough approximation, we can use a linear regression to estimate how many runs per 100 pitches each additional 1 MPH is worth for a given pitch type.

Pitch Run Values and Velocity
Pitch Type Runs per 100 Pitches per MPH R²
Fastball 0.1915 0.04317
Slider 0.04101 0.002705
Curveball 0.07368 0.003819
Changeup 0.07852 0.004071
All Pitches 0.0709 0.06076

Across all pitches, it appears as though 1 MPH on your pitch is worth about .0709 runs per 100 pitches, which is close to the values for curveballs and changeups. What stands out the most is that for a fastball, 1 MPH is worth .1915 runs per 100 pitches, more than double that of the next pitch! And, among individual pitches, fastballs unsurprisingly have the best correlation between value and velocity.

I would be remiss, however, if I failed to mention that the correlation is still extremely weak for the fastball, as it is with all pitches and with velocity in general. Simply put, velocity is but a single tool in a pitcher's arsenal, and pitchers can be effective without it (Bartolo Colon, 2015-2016) and ineffective with it (Jose Urena, 2015-2017). Movement, spin, placement, and sequencing are all important tools, and the most effective pitchers have mastery over all of these. This is why there exists only an extremely weak correlation between velocity and pitch value, and the gains of throwing faster are marginal at best — if you throw 3,000 pitches and average 89 MPH across all of them, you'd gain only about 2.1 runs total by throwing 1 MPH faster on every pitch.
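As a sanity check, that back-of-the-envelope figure is just the all-pitch regression slope from the table above scaled by pitch count:

```python
ALL_PITCH_SLOPE = 0.0709  # runs per 100 pitches per MPH, from the regression table

def runs_gained(total_pitches, extra_mph, slope=ALL_PITCH_SLOPE):
    """Marginal runs from adding velocity across every pitch thrown."""
    return slope * (total_pitches / 100) * extra_mph

# 3,000 pitches, 1 MPH faster on everything: about 2.1 runs.
gain = runs_gained(3000, 1)
```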

Not only that, but pitch values can vary wildly from season to season. To see evidence of this, look at Aroldis Chapman’s fastball value from season to season.

Aroldis Chapman
vFA (pfx) wFA/C (pfx) Year
100.0 1.71 2017
100.4 2.51 2016
99.4 1.19 2015
100.2 1.74 2014
98.4 0.99 2013
98.0 2.07 2012
98.1 0.79 2011

Aroldis Chapman’s average fastball velocity, while consistently the fastest in the league, sees a lot of variability in value. Sure, it was most valuable when at its fastest — but it was comparably valuable at its slowest! It’s still roughly the same pitch throughout Chapman’s career, but its value has varied wildly, partly due to other pitch characteristics, and partly due to the context of pitch values.

But for our purposes, we now have a very rough quantification of the value of 1 MPH — 0.2 runs per 100 pitches per MPH for fastballs, and about 0.07 runs per 100 pitches per MPH for every other pitch.

Ouch, oof, owie, my arm

For the second part of this analysis, we need to examine whether pitchers who throw harder are at higher risk of injury and tend to be injured for longer than pitchers who throw slower. Again, this feels like common sense, but it is really just a popular notion — the strains, wears, and tears of throwing harder should result in more frequent and more severe injuries, but that is only our perception. We should not take this notion for granted, and should instead look empirically at whether evidence exists for it.

I looked at 2017 pitchers and grouped them by average pitch velocity, then examined how many of them hit the DL at some point during the season.

Whoa! 80.0% of pitchers who threw 95 MPH or harder on average hit the DL at some point in 2017, compared to 29.6% of pitchers who threw 93-95, which looks like a massive difference. It's not nearly as significant as the chart makes it appear, however, as there were only five pitchers who fell into that bucket this season (Aroldis Chapman, Brian Ellington, Enny Romero, Trevor Rosenthal, and Zach Britton), and four of them (Chapman, Romero, Rosenthal, and Britton) hit the DL in 2017. The year before, only one of that group hit the DL, when Romero made a brief 15-day-DL appearance for a strained back.

A brief aside: What's curious about this chart is that pitchers with lower average velocity tended to hit the DL more frequently than pitchers who threw harder. Part of this is small-sample-size bias, as there were only 10 pitchers who averaged less than 81 MPH across all of their pitches, but part of it is age: Eno Sarris noted that pitch velocity never peaks in MLB players, but only declines steadily over the course of their careers. And being older puts players at greater risk of injury, especially pitchers. Indeed, most of the pitchers at the lower end of the average velocity table are older pitchers, like Bronson Arroyo, Rich Hill, and Jered Weaver. These pitchers are more prone to injury not because they throw less hard; they throw less hard and are prone to injury because they are old.

So where are we left then with regards to the effect of pitch velocity and injury? It looks inconclusive with 2017’s data alone. Had we performed our analysis with 2016’s data, we would have found a significantly lower rate of DL times for pitchers throwing 95+, as only two of five pitchers who averaged 95+ in 2016 hit the DL at any point in 2016. Perhaps we should expand our analysis.

It’s almost inevitable that I have to link back to Jeff Zimmerman’s THT piece on the relationship between fastball velocity and injury. Zimmerman looks at the increasing velocity of pitchers league-wide and the trend of increased DL time for pitchers from 2002-2014 (a much larger sample size than the 2017 sample size that I’ve been working with) and also looks at individual pitchers’ FB velocity and their disabled list time. Below is part of a table from Zimmerman’s THT article that I found particularly illuminating.

FB Velocity and DL Trips
MPH Count DL trip chance for next season Avg days
> 96 101 27.7% 73
93 to 96 1,031 20.6% 70
>93 1,132 21.2% 70
90 to 93 2,308 15.2% 70
87 to 90 1,655 11.2% 60
< 87 511 11.9% 80

From this table, it appears as though pitchers who throw 96+ are almost twice as likely to land on the DL after a given season as pitchers who throw 90-93. (Zimmerman noted that throwing hard doesn't appear to hurt in the season that you throw hard — rather, the season after. This explains why the DL rate for pitchers who averaged 95+ MPH on all of their pitches spiked from 40% in 2016 to 80% in 2017.) Pitchers who throw 96+ also appear to stay on the DL slightly longer than pitchers who throw 90-96 MPH, who are in turn at slightly greater risk than pitchers who throw 87-90 MPH. The risk appears to increase dramatically for pitchers who throw less than 87, on the basis of age, as discussed above.

Expected Value of Pitching Harder

With Zimmerman’s findings, we are now prepared to make our evaluation on the trade-offs of throwing harder and the injury risks involved. None of this is exact by any stretch of the imagination, but we can treat it as a rough, back-of-the-napkin calculation to get an idea if the original premise of “pitching less hard to avoid injury” holds true.

We know that by pitching 1 MPH faster using his fastball, a pitcher would add .2 runs per 100 pitches on average. We can also estimate that a starting pitcher throws an average of 17 pitches per day while healthy (85 pitches per start with starts every five days) and a relief pitcher throws an average of 7 pitches per day while healthy (22 pitches per outing while pitching every three days). An average pitcher throws ~55% fastballs, so starters throw an average of 9.3 fastballs per day and relievers throw an average of 4 fastballs per day. Finally, we know the likelihood of being injured in the season after throwing so hard and how long those injuries last on average. So we can treat this as an expected value problem!

Expected value is a term in statistics that combines probability and payoff. Think about it in terms of a raffle. If I buy a $2 ticket for a raffle for a prize that is worth $100, is it worth my $2 if the odds of me winning the prize are 1/100? How about 1/25? To determine the expected value, I simply multiply what I stand to gain (the $100) by the odds of me gaining it (1/100, or 1/25), yielding my expected return ($1 for 1/100 odds, or $4 for 1/25). If the value of the return is greater than my investment, it's a smart idea! If not, I stand to lose money (so I would lose $1 on average if my odds were 1/100, but I would gain $2 on average if the odds were 1/25).
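In code, the raffle example is just one multiplication and one subtraction (using the $2 ticket and $100 prize from above):

```python
def expected_profit(ticket_price, prize, win_prob):
    """Expected net return from buying one raffle ticket."""
    return prize * win_prob - ticket_price

# $2 ticket, $100 prize:
long_odds = expected_profit(2, 100, 1 / 100)  # loses about $1 on average
short_odds = expected_profit(2, 100, 1 / 25)  # gains about $2 on average
```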

We can calculate the expected return of pitching faster using our linear approximation of pitch value as a function of velocity: Value = 0.1915 * Velocity – 17.8951 (runs per 100 pitches). We can also approximate how many days a player will miss: 46 days if their average fastball velocity is below 96, or 64 days if it is above 96. Multiplying the expected time missed by the probability of missing time yields an expected number of days lost, and multiplying that by the expected run value of each pitch tells us how much value each player stands to miss out on. So what do we get?
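As a rough sketch, the calculation chains together as below. This is my reconstruction, not the author's exact spreadsheet; since the table that follows was built from rounded inputs, the code lands near, but not exactly on, its values:

```python
def pitch_value_per_pitch(velo):
    # The article's linear fit gives runs per 100 pitches; convert to runs per pitch
    return (0.1915 * velo - 17.8951) / 100

def expected_lost_value(velo, dl_chance, dl_days, fb_per_day):
    # Fastball run value a pitcher expects to forfeit to next season's DL stint.
    # Negative = value lost; positive = the stint only "costs" negative-value pitches.
    return -pitch_value_per_pitch(velo) * dl_chance * dl_days * fb_per_day

# A starter at 97 MPH (27.7% DL chance, 64-day average stint, ~9.35 FB/day):
expected_lost_value(97, 0.277, 64, 9.35)  # ≈ -1.13 runs, near the table's -1.28
```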

FB Velocity and Expected Lost Value
vFA xwFA/P DL trip chance for next season Expected Value  (RP) Expected Value (SP)
85 -0.016 0.119 +0.46 +1.07
86 -0.014 0.119 +0.41 +0.95
87 -0.012 0.119 +0.35 +0.82
88 -0.010 0.112 +0.28 +0.65
89 -0.009 0.112 +0.23 +0.53
90 -0.007 0.112 +0.18 +0.41
91 -0.005 0.152 +0.20 +0.46
92 -0.003 0.152 +0.12 +0.27
93 -0.001 0.152 +0.04 +0.08
94 0.001 0.206 -0.06 -0.14
95 0.003 0.206 -0.17 -0.40
96 0.005 0.206 -0.28 -0.66
97 0.007 0.277 -0.55 -1.28
98 0.009 0.212 -0.54 -1.25
99 0.011 0.277 -0.86 -2.00
100 0.013 0.277 -1.02 -2.36

So, in a very rough approximation, an SP could expect to lose 1-2 runs off their next season’s total while pitching above 96, and a relief pitcher could expect to lose .5-1 runs in the same span.

Is this significant? Not particularly. Fastballs are generally worth -20 to 20 runs per season, so 1-2 runs is a comparatively small disadvantage, all other factors notwithstanding. Then consider the inherent unreliability of pitch values (year-to-year correlation is less than .25), and the importance of these trade-offs seems negligible (never mind that the approximations used to derive these conclusions are even less reliable than pitch values!).

Of course, there’s something to be said for career-long health gained by throwing less hard, but that is beyond the scope of this article. Ultimately, in the short run, there does not appear to be an advantageous trade-off in which pitchers simply throw less hard and are rewarded with significantly better health.

Using History and Steamer to Predict the Comeback Player of the Year Award

While the race for the Comeback Player of the Year (CPOTY) award is nowhere near as fierce or publicly anticipated as the races for major awards like MVP, Cy Young, or Rookie of the Year, it’s still an award rich with history that recognizes some of MLB’s best bounceback seasons. Here, we’ll look at the history of the award, and use some of the trends in the historical data to identify some candidates for the award this upcoming season.

In 1965, the Sporting News gave out its first set of CPOTY awards to Pirates pitcher Vern Law and Tigers first baseman Norm Cash. The award was created to recognize a player who “re-emerged on the baseball field during a given season,” although this ambiguous definition has led to some questionable selections (notably Ruben Sierra over Juan Gonzalez in 2001) and debate over what it truly means. The award is given annually to one player in each league, typically to either a player returning from injury or one coming off a down season who returns to a level of success previously achieved in their career. The award has been given by two bodies throughout its history: the Sporting News presented it from 1965 to 2006, while MLB has given out its own version since 2005. Over the life of the award, 106 total player seasons have been recognized, and a few players have won twice.

Looking at a handful of trends within this sample allows us to identify what characteristics of player seasons correlate with winning the award, and therefore may allow us to formulate decent guesses as to what players might have a strong chance to contend for the award in the coming seasons. Some of the more important characteristics of CPOTY award winners include (but aren’t necessarily limited to) performance (both past and in the winning season), whether the player was injured in the season preceding their comeback, the player’s position, and team success. Let’s dig in and look at these trends to construct an ideal profile for a Comeback Player of the Year favorite, then look at what players might fit the bill in the upcoming season.


For the sake of simplicity, we’ll divide the performance category into three sections: past success (two seasons prior to the comeback season), the down season (the season immediately prior to the comeback year), and the comeback year itself. While this isn’t perfect, this division will allow us to easily view the swings in performance that are associated with the award and look for current players that fit that mold. To examine a player’s performance, I looked at WAR for each of the seasons in question because it is a good general guide for player value and, as a counting stat, encompasses not only ability but also playing time to a degree. For the purposes of this award, a counting stat like WAR is more useful than a rate stat like wRC+ or UZR/150 because some winners won the award following a solid but injury-plagued season. Performance was considered both across the three season groups (2 years prior, 1 year prior, and year of) and in terms of the differences between them (2 years prior vs. 1 year prior, and 1 year prior vs. year of). Below is a box-and-whisker plot showing the distributions of the three datasets, with WAR on the Y-axis:

[Box-and-whisker plot: WAR distributions for the 2-years-prior, year-prior, and comeback-year groups]

As might be expected, the comeback season group yielded the most value of the three groups, followed by the past success season and then the down season. For the past success season, the middle 50% of values fell between approximately 0.5 WAR and 3.0 WAR, meaning that these seasons typically produced solid but rarely spectacular results. The middle 50% of values for the down season group fell between about 0 WAR and 1.5 WAR, meaning that most seasons in this group produced relatively middling or less value. It is also notable that the median is much closer to the lower quartile (0 WAR) than the higher quartile, and this skewing is because many of these down seasons saw players miss most or all of their season, leading to a significant number of players accumulating near 0 WAR in their down season. Finally, the middle 50% of bounceback seasons saw WAR values between 2.0 WAR and 5.0 WAR, meaning that most winners produced at least above average if not significantly above average value in their comeback season. The following table also shows the mean and median values for the three datasets (also broken down by certain time periods):

WAR Breakdown      2 Yrs Prior   Yr Prior   Yr Of
Average (Total)    2.09          0.78       3.55
Median (Total)     2.05          0.35       3.35
Avg (Since '85)    2.07          0.43       3.56
Med. (Since '85)   2.05          0.10       3.10
Avg (Since '05)    2.31          0.40       3.73
Med. (Since '05)   2.15          0.20       3.65

Another way I evaluated performance was by looking at the differences in performance from year to year between the first two years (past success and down season) and the most recent two years (down season to comeback season). As expected, the first group saw a significant drop in performance while the second group typically saw a significant increase, often larger than the initial decrease. The following box-and-whisker plot shows the distribution of both sets of data, while the data table shows the mean and median values.
[Box-and-whisker plot: distributions of year-to-year WAR changes]

WAR Change            Diff.
Mean, 2YP to YP       -1.33
Mean, YP to Yof       +2.82
Median, 2YP to YP     -1.05
Median, YP to Yof     +2.60
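The two deltas for any single winner are just differences of the three WAR figures; a minimal sketch (the example numbers are hypothetical, chosen near the medians above):

```python
def war_swings(war_2yp, war_yp, war_yof):
    """Return (drop into the down season, rebound into the comeback season)."""
    return war_yp - war_2yp, war_yof - war_yp

# A hypothetical winner: 2.1 WAR, then a 0.4 WAR down year, then a 3.6 WAR comeback
drop, rebound = war_swings(2.1, 0.4, 3.6)  # ≈ (-1.7, +3.2)
```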

So our ideal candidate will have put up at least solid value during their past success season, lost a significant chunk of that value the next season, and then experienced a big bounceback the following season, posting solid to excellent value. According to Steamer’s projections, there are 23 hitters and 12 pitchers (two relievers, 10 starters) expected to follow this pattern with a bounceback 2018.
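A screen like that can be expressed as a simple filter. Here is a minimal sketch with made-up players; the exact cutoffs below are my assumptions, since the article doesn't state its precise criteria:

```python
# Each tuple: (name, WAR two years prior, WAR last year, projected WAR).
# All players and numbers here are hypothetical.
players = [
    ("Rebounder", 3.5, 0.8, 3.1),
    ("Never Broke Out", 1.0, 1.2, 1.5),
    ("Steady Star", 4.2, 3.9, 4.0),
]

def fits_comeback_pattern(war_2yp, war_yp, proj_war):
    solid_past = war_2yp >= 2.0          # established prior success
    big_drop = war_yp <= war_2yp - 1.0   # meaningful down season
    rebound = proj_war >= war_yp + 1.0   # projected bounceback
    return solid_past and big_drop and rebound

candidates = [name for name, *wars in players if fits_comeback_pattern(*wars)]
# → ["Rebounder"]
```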


The next key component of the award is the player’s injury status during the season immediately preceding his comeback. While comebacks from injury have become more prevalent over the life of the award, injury comebacks were hardly recognized early on. The two following graphs will show the number of injury comebacks vs non-injury comebacks over time along with the difference between the two categories and the percent of injured winners over time. (Disclaimer: a good portion of this injury data did come from Wikipedia because I couldn’t find much historical injury info elsewhere, so some of it may be a little inaccurate but should not be so much so that the trends change.)
[Graphs: injury vs. non-injury comeback winners over time, and percent of winners coming off injury]

As you can see, the percentage of total winners of the award coming off injury has increased significantly as time has passed, with now nearly half of the award winners coming off injury. The difference has shrunk from a peak of 32 in 1989 to only 12 following 2017’s winners. The trend is even more stark when looking at the data broken up into specific time frames:

Injury Breakdown Yes No
Total 47 61
Since 1985 41 25
Since 2005 19 7

Since MLB took over the award in 2005, the trend has flipped entirely, with injury comebacks making up 73% of winners in that span. While there could be other complicating factors at play here, such as increased DL placements since the early days of the award, it still seems clear that suffering an injury during the preceding year is strongly tied to winning the award.
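The shares in the table above are a one-line computation:

```python
# (injury comebacks, non-injury comebacks) among CPOTY winners, from the table above
injury_counts = {"total": (47, 61), "since_1985": (41, 25), "since_2005": (19, 7)}

injury_share = {era: yes / (yes + no) for era, (yes, no) in injury_counts.items()}
# since_2005 → 19 / 26 ≈ 0.73, the 73% cited above
```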


The next characteristic of CPOTY winners is position. For whatever reason, certain positions are disproportionately represented amongst award winners. Here is a breakdown of the winners by position, in table and pie chart form:

[Table and pie chart: CPOTY winners by position]

As you can see, the award is most frequently given to starting pitchers, followed by first basemen and designated hitters. Middle infielders and catchers have rarely won the award, while outfielders, third basemen and (especially recently) relievers have received their share. Besides the dominance of starting pitchers, the most striking stat is the prevalence of designated hitters winning the award. While they make up only 11.32% of total winners, it is important to keep in mind that DHs have only been eligible to win 45 potential awards (the number of awards given in the American League since the establishment of the DH rule), so they have won 26.67% of the awards for which they have been eligible, a shocking number for players that only add value on one side of the ball.
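The DH comparison works out as follows (45 is the number of AL awards handed out since the DH rule arrived in 1973):

```python
total_winners = 106        # player seasons recognized over the award's life
dh_share_overall = 0.1132  # DHs' share of all winners
dh_wins = round(dh_share_overall * total_winners)  # ≈ 12 DH winners

dh_eligible_awards = 45    # AL awards since the DH rule (DHs can't win in the NL)
dh_share_eligible = dh_wins / dh_eligible_awards   # ≈ 0.267, the 26.67% above
```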

Possible explanations for the dominance of certain positions may lie in other factors. Since the award has typically been given based on offensive production without as much regard for defensive value, it makes sense that players at bat-first positions would win the award more frequently than those at defensively oriented positions. Additionally, catchers typically accrue fewer plate appearances than players at other positions, and therefore have less opportunity than designated hitters to accumulate shiny counting stats. Another possible explanation lies in the fact that a history of prior success is typically a prerequisite to win the award, and older players are more likely to have an extensive track record of success. Because players tend to move down the defensive spectrum as they age, older players more often occupy less valuable defensive positions while younger players handle the tougher assignments, which would also tilt the award toward bat-first spots. There are certainly other possible explanations, but some combination of these factors may play a part in the trend of bat-first players winning the award.

It may be tougher to explain the dominance of starting pitchers. It’s possible that pitcher success is more subject to season-to-season volatility than hitter success (while I haven’t found any statistical studies proving this, it may be an interesting area of future research). Another explanation might lie in the fact that every team typically rosters five starting pitchers and only one starter at each offensive position, but the difference seems stark enough at positions like catcher and shortstop that this seems unlikely. Maybe more pitchers suffer major injuries, causing them to miss significant time? There seems to be some credence to this theory, as only 13.11% of hitters played between 0 and 10 games in their down season, while 20.51% of starters pitched five or fewer games. It’s also possible that the sample still isn’t big enough and that this positional skewing is largely due to random variation. Whatever the case, it seems fair to weigh this trend at least a little bit going forward, so in predicting possible 2018 winners we’ll give the edge to starting pitchers, first basemen, and designated hitters.

Team Success

A final factor that has seemingly been of some importance in winning the award is team success. While nothing about the award requires that the player play on a good team, CPOTY winners have disproportionately come from winning teams. The following table displays some important statistics in terms of team success for award winners, most notably the mean and median team winning percentage, along with the percent of award winners playing on teams clearing certain win benchmarks. Over 162 games, a .615 WP is roughly 100 wins, .585 is 95, .555 is 90, .525 is 85, and .500 is 81.
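Those benchmarks are just winning percentage times a 162-game schedule:

```python
def wp_to_wins(wp, games=162):
    """Convert a winning percentage to wins over a full schedule."""
    return round(wp * games)

[wp_to_wins(wp) for wp in (0.615, 0.585, 0.555, 0.525, 0.500)]
# → [100, 95, 90, 85, 81]
```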

Team Success
Mean WP 0.537594
Median WP 0.552
% over .615 6.60%
% over .585 16.98%
% over .555 50.00%
% over .525 68.87%
% over .500 78.30%

As you can see, both the mean and median winning percentages for teams featuring a comeback player significantly exceed .500 and exceed it by enough that this difference can’t simply be attributed to the contributions of the comeback player in most cases. Even more strikingly, nearly 80% of winners played for teams that finished over .500, and nearly 70% of winners played for borderline playoff contenders or better (85+ wins). The histogram below illustrates the distribution of team winning percentage for players winning the award since its inception:
[Histogram: team winning percentage for CPOTY winners]

The data is fairly skewed left, with very few award winners playing on truly terrible teams and a very large portion of CPOTY winners playing for teams in the 89 to 94 win range. While there aren’t a ton of winners on elite teams, that is probably because there have simply been fewer elite teams than merely good ones historically, not because players on elite teams are less likely to win.

There’s no way to definitively answer why the award voting swings so heavily towards players on winning teams, but the data shows that this is indeed the case. Maybe voters believe that playing on a good team is part of a good comeback. It’s possible that players having bounceback seasons on winning teams are just more visible than those playing on teams going nowhere and therefore unfairly benefit in the voting. Another possibility is that voters are still relying on team-dependent stats like runs scored, runs batted in, pitcher wins, and saves, and guys on worse teams have less opportunity to rack up these stats. Perhaps there’s another driving reason, but clearly the award has historically favored guys playing on winning teams.

After combing through the data, a few characteristics of CPOTY winners have stuck out. A pattern of solid value → drop in value → return to solid-to-excellent value stands out, as does the recent trend of awarding the CPOTY award to a player returning from injury. An ideal CPOTY candidate would also play on a projected contender and be a starting pitcher, first baseman, or designated hitter. While a player doesn’t necessarily need to meet all of these criteria to win the award and there are some good candidates who don’t (Greg Bird, Mark Trumbo, Dansby Swanson, Alex Reyes, Carlos Gonzalez, etc.), these characteristics have certainly been favored in the voting. Now it’s time to delve into the question of what players might have a good shot at taking home a comeback player of the year award next year.

After looking through the aforementioned group of 23 hitters and 12 pitchers, I decided to cut the sample down some by removing guys that aren’t really ticketed for regular duty next year, don’t project especially well, or never really broke out in the first place. This removed an additional six hitters, leaving 17 hitters and 12 pitchers. The following table further details each player’s candidacy in each of the criteria discussed earlier, sorted by position (Team W% is projected for 2018):
[Tables: 2018 CPOTY candidate hitters and pitchers, with each player’s credentials in the criteria above]

Just looking at the two lists, they seem like pretty good groups of names for CPOTY contenders. Davis, Cabrera, Machado, Ramos, Hernandez, and Price especially stick out in the AL, while Eaton, Syndergaard, Cueto, Bumgarner and Cespedes seem like good bets in the NL. Personally, I’d lean towards Syndergaard in the NL and Machado (or Cabrera if Machado is dealt to the NL) in the AL. It’s certainly possible that the award winners this year don’t come from these lists, but based on historical trends, these 29 players seem like solid favorites to take home the Comeback Player of the Year award in 2018.

FanGraphs leaderboards and player stats, Baseball-Reference player pages, and Wikipedia (for injury news) were heavily used in researching this post.

Charlie Blackmon Is Doing His Best Matt Holliday Impression

In 2004, a 24-year-old kid from Oklahoma named Matt Holliday debuted for the Colorado Rockies. Just a couple of years later, in 2006, Holliday received his first MVP votes, finishing 15th in the voting. A year later, in 2007, the Rockies went to the playoffs for just the second time in franchise history and Holliday finished second in the MVP voting. In 2011, a 24-year-old from Texas (then clean-shaven and baby-faced) named Charlie Blackmon debuted for the Colorado Rockies. A few years later, in 2016, Blackmon also received his first MVP votes, finishing 26th. The following year, much like Holliday, the Rockies claimed a playoff berth with Blackmon leading the way and finishing fifth in MVP voting. The similarities don’t end there; two players who don’t seem very much alike had very similar stretches in very similar circumstances while playing in the same outfield a decade apart.

Let’s start with the basics.

                      G    PA    HR   R    RBI
Holliday 2006-2007    313  1380  70   239  251
Blackmon 2016-2017    302  1366  66   248  186

You can see just how close these two were in everything except RBI. On that gap: Holliday typically batted third or fourth, while Blackmon always hit from the leadoff spot yet fell only 65 RBI short; according to this article by RotoGraphs’ Scott Spratt, hitting leadoff costs a batter approximately 10-13 RBI per 600 ABs. From the outside, Holliday looks like much more of a HR threat than Blackmon, and 2017’s MLB-wide HR surge definitely comes into play, but a HR is a HR is a HR, and in that regard they are neck and neck. These numbers alone are impressively close, but let’s go deeper.

                      BB%    K%      ISO    BABIP
Holliday 2006-2007    7.90%  17.10%  0.264  0.364
Blackmon 2016-2017    7.85%  17.25%  0.249  0.361

For all intents and purposes, these numbers are identical, except for ISO, where Holliday is a little higher.

                      AVG    OBP    SLG    wOBA   wRC+  WAR
Holliday 2006-2007    0.333  0.396  0.597  0.419  145   10.4
Blackmon 2016-2017    0.328  0.390  0.577  0.404  136   10.6

Now the final numbers. Again, everything except wRC+ is essentially the same: both players hit for near-identical averages, OBP, and so on. Blackmon has a little more in the WAR department, mostly because over the two-year timeframe he stole 10 more bases than Holliday and was better in the field. The only real difference between the two at this point is age and service time: Holliday was just 26 with two years of service time in 2006, while Blackmon was 29 and will be a free agent after 2018. So what does this all mean? Nothing, really, going forward, but the parallels between two players who, exactly a decade apart, left footprints in the same outfield for the same team with such similar results are surprising and interesting.

J.D. Martinez: Market Value and 2018 Projections

J.D. Martinez had another great year in 2017. With 3.9 sWAR[1] and a .430 wOBA, J.D. once again contributed well above average. Offensively (by wOBA), he has produced consistently year after year since 2014. J.D. carries some defensive shortcomings, yet he is an excellent asset in any lineup.

For the past three years he has been able to get on base at an above-average rate (.364 OBP), alongside an excellent .289 ISO and a .587 SLG. He does carry a lifetime 25% K-rate (approx.), but as long as he is able to produce and contribute the way he has, he should be able to make an impact in any organization.

In 2018[2], J.D. should see a slight decrease in wOBA (.395). Based on the 2018 projections, both OPS and ISO should decline marginally; nevertheless, J.D. should be able to perform as a top-caliber player.

Please find J.D.’s 2018 projections in the table below.

2018 Projections: J.D. Martinez
Season  Age  sWAR  wOBA   OBP    SLG    OPS    ISO    AVG    K%     BB%
2015    28   4.7   0.372  0.344  0.535  0.879  0.253  0.282  27.1%  8.1%
2016    29   2.0   0.384  0.373  0.535  0.908  0.228  0.307  24.8%  9.5%
2017    30   3.9   0.430  0.376  0.690  1.066  0.387  0.303  26.2%  10.8%
2018    31   3.6   0.395  0.365  0.591  0.955  0.293  0.298  26.0%  9.6%

Projections: “SEG Projection System” (Including sWAR for 2015-2018)

sWAR = “SEG Projection System” calculation of WAR  

J.D. Martinez’s estimated AAV is around $27M, based on a five-year/$135M contract. J.D. is projected for 14.6 sWAR for the next five years.

Market Value: J.D. Martinez
Season  Age  sWAR  Value ($M)  $/WAR ($M)
2018    31   3.6   30.6        8.4
2019    32   3.5   30.7        8.8
2020    33   3.0   27.5        9.2
2021    34   2.5   24.2        9.7
2022    35   2.0   20.3        10.2
TOTAL        14.6  133.4

sWAR = “SEG Projection System” calculation of WAR 

$WAR: Adjusted for Inflation (5% per year)
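The contract math can be sketched as a compounding $/WAR rate. This is a rough reconstruction using the starting rate and 5% inflation from the notes above; rounding in the table means the totals differ slightly:

```python
def market_value(proj_wars, dollars_per_war, inflation=0.05):
    """Sum each season's projected WAR times an inflating $/WAR rate (in $M)."""
    total = 0.0
    rate = dollars_per_war
    for war in proj_wars:
        total += war * rate
        rate *= 1 + inflation  # inflate the $/WAR rate each season
    return total

market_value([3.6, 3.5, 3.0, 2.5, 2.0], 8.4)  # ≈ $133.6M vs. the table's $133.4M
```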

[1] sWAR = “SEG Projection System” calculation of WAR

[2] 2018 Projections: JD Martinez (SEG Projection System)