On Tuesday, I began looking into how the differences between WPA and WAR may influence the seemingly odd (at least through the lens of fWAR) pattern of free agent spending on relief pitching. The discovery that one marginal WAR means nearly one marginal WPA for relievers, as opposed to just half of one marginal WPA for hitters and starters, partially explains why teams pay roughly three to four times more per marginal WAR for relievers. To accept this as a legitimate justification for that spending, however, one has to give relievers full (or nearly full) credit for the leverage of the situations they pitch in. That is how a pitcher like Tyler Clippard (+5.01 WPA) can finish second in the entire league, behind only Justin Verlander, despite pitching a fraction of the innings.
Relievers cannot directly choose the situations they pitch in; that is up to management. But is there some innate characteristic of relievers that tends to determine when they enter games?
I decided to take a look at four different measures to see which best predicts the next year's gmLI (the average leverage index at the moments a pitcher enters games) for relievers. The first three are measures of performance: the prior year's WAR, the prior year's WPA, and the prior year's "deserved leverage index," a concept introduced by Sky Kalkman back in 2009. Sky's version of "deLI" is defined as (lgFIP/FIP)^1.5; to fold in park adjustments, here I use (100/FIP-)^1.5, which is equivalent because FIP- is park-adjusted FIP indexed to a league average of 100 (the constant factor leaves the correlations unchanged). The fourth measure is the pitcher's gmLI from the past season, a check to see whether there is inertia in reliever roles.
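For concreteness, the deLI formula can be sketched in Python. The function names and the default exponent argument are my own; the 1.5 exponent itself comes from Sky's definition above.

```python
# Sketch of the "deserved leverage index" (deLI) described above.
# deli_from_fip uses Sky Kalkman's original form; deli_from_fip_minus
# uses the park-adjusted FIP- (indexed to 100). The two differ only by
# a constant factor, so they rank pitchers identically and produce the
# same correlations.

def deli_from_fip(fip: float, lg_fip: float, z: float = 1.5) -> float:
    """Sky's original deLI: (lgFIP / FIP) ** z."""
    return (lg_fip / fip) ** z

def deli_from_fip_minus(fip_minus: float, z: float = 1.5) -> float:
    """Park-adjusted variant: (100 / FIP-) ** z."""
    return (100.0 / fip_minus) ** z

# A league-average pitcher (FIP- = 100) "deserves" a leverage of 1.0;
# a pitcher 20% better than average (FIP- = 80) deserves roughly 1.4.
print(deli_from_fip_minus(100.0))  # 1.0
print(round(deli_from_fip_minus(80.0), 2))
```

In a league with a 4.00 FIP, a pitcher with a 3.20 FIP has a FIP- of 80, so `deli_from_fip(3.2, 4.0)` and `deli_from_fip_minus(80.0)` agree, as the equivalence requires.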
Click on the image to go to the interactive visualization (much embiggened, of course).
Perhaps unsurprisingly, given the piles of uncertainty surrounding relief roles from year to year, none of the four measures offers much predictive power. The highest year-to-year correlation (dating back to the 2008 season) is a mere 0.249, for gmLI itself, just ahead of 0.248 for prior-season WAR. Teams don't seem to respond much at all to either WPA or deLI, both of which check in under 0.15. Some inertia exists, then, but not an obscene amount: players who fail to produce in their designated role will clearly be shifted to a different one. By this analysis, prior-year performance matters roughly as much in determining roles as the prior year's role itself.
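The year-to-year check above can be sketched with pandas. The column names (`pitcher_id`, `season`, `war`, `wpa`, `deli`, `gmli`) are hypothetical stand-ins for however the underlying data is actually organized; this is a sketch of the method, not the original computation.

```python
# Correlate each prior-season measure with the NEXT season's gmLI,
# assuming one row per pitcher-season. For simplicity this pairs each
# pitcher's consecutive rows; a pitcher's final season (with no
# following year) is dropped from the pairing.
import pandas as pd

def lag_correlations(df: pd.DataFrame,
                     predictors=("war", "wpa", "deli", "gmli")) -> pd.Series:
    df = df.sort_values(["pitcher_id", "season"]).copy()
    # Next season's gmLI for the same pitcher; NaN in a pitcher's last row.
    df["gmli_next"] = df.groupby("pitcher_id")["gmli"].shift(-1)
    paired = df.dropna(subset=["gmli_next"])
    return pd.Series({p: paired[p].corr(paired["gmli_next"])
                      for p in predictors})
```

Calling `lag_correlations(df)` on the full reliever table would yield one correlation per measure, directly comparable to the 0.249/0.248/sub-0.15 figures quoted above.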
But there's one factor we haven't looked at, one we've seen as a deciding factor all too often as fans: money. As a Milwaukee Brewers fan, I could call this either the Eric Gagne Corollary or perhaps the Trevor Hoffman Corollary, but every team has its own name for it. Teams simply don't spend in the high millions for relievers to pitch in typical situations. Looking at the past two seasons of free agent spending on relievers, only two pitchers have earned $3 million or more and posted a gmLI below 1.0: John Grabow, a $3.8 million LOOGY for the Cubs in 2010, and Dan Wheeler, a $3.0 million mop-up righty for Boston in 2011. Overall, though, when management spends on relievers, it spends to put them in high-leverage spots:
Again, click for the full, embiggened visualization.
The correlation of yearly salary to gmLI is 0.36, nearly 1.5 times higher than any of the other measures tested. When grouped by season (to cancel the effects of salary inflation), the correlation rises to 0.42. Players can still pitch their way into or out of roles, but there is no question what kind of leverage Jonathan Papelbon or Heath Bell will see this season (regardless of what their performance says they should actually see). At the very least, this means managers and general managers are putting their high-priced relievers in the positions where they can actually impact games. Whether or not they deserve those opportunities is another question (for another post).
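The salary check can be sketched the same way. Again the column names (`season`, `salary`, `gmli`) are hypothetical, and averaging the within-season correlations is just one reasonable way to implement "grouped by season"; the original analysis may have pooled the data differently.

```python
# Pooled salary-to-gmLI correlation, plus the mean within-season
# correlation (grouping by season cancels salary inflation across
# years, since salaries are only compared to same-year salaries).
import pandas as pd

def salary_gmli_corr(df: pd.DataFrame) -> tuple:
    pooled = df["salary"].corr(df["gmli"])
    by_season = df.groupby("season")[["salary", "gmli"]].apply(
        lambda g: g["salary"].corr(g["gmli"]))
    return pooled, by_season.mean()
```

Run on the full free-agent reliever table, the first number corresponds to the 0.36 figure and the second to the inflation-adjusted 0.42.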
For those interested, the data used in this post can be downloaded from the visualization pages by clicking the button with an arrow on it.