Reliever Usage Redux: A Follow-Up

On Friday, I spent some time talking about the change in bullpen usage patterns over the last thirty years, and noted that the shift to more pitchers making shorter appearances hadn’t led to an improvement in performance for relief pitchers in the aggregate. There were a lot of good responses left in the comments, and there’s some useful commentary on the issue over at The Book Blog as well.

Many of the responses focused on a similar point that I didn’t address very well – that by focusing on aggregate data, we could miss value being added if performance in extremely important situations has been greatly improved by the new usage patterns. The results as a whole might be similar, but if the new allocation produces better performance in important situations and worse results when the game is already decided, then teams would still be drawing a benefit from using relievers in this manner. William Juliano made this case in a really good follow-up post at his own blog, looking into the relative performance of the top tier of relievers from both 1982 and 2011. As expected, he found the quantity-for-quality trade-off: modern day relief aces are pitching fewer innings but getting somewhat better results in those innings than their counterparts did thirty years ago. The two changes essentially offset, as he notes, and there’s only a small difference in WAR between the 25 best relievers of 1982 and 2011.

Juliano finishes with the following conclusion:


So, where does that leave us? It seems certain that as a group relievers are no better or worse today than they were 30 years ago. However, instead of advocating a return to the past with the goal of saving money and roster spots (after all, if not wasted on marginal relievers, they’d probably be squandered on below average position players), perhaps the focus should be on improving bullpen usage within the modern theory? The one thing we know for certain is today’s relievers do pitch in more games, but, unfortunately, managers too often defer to the save rule and waste many of these appearances in low leverage situations. If managers would instead commit to shooting their best bullets at the right targets (i.e., high leverage situation regardless of inning), the current philosophy of shorter outings might prove to be the most optimal. At the very least, this hybrid approach is worth trying, especially when you consider that a return to the past approach promises little more than the status quo.

He’s right – my advocacy for a return to the days of using relievers like Bob Stanley doesn’t appear to offer a substantial improvement in reliever value, as it’s simply going the other direction on the quantity-quality scale. And, in fact, holding up Stanley’s 1982 season as the example of optimal reliever usage is simply incorrect. While he had a 1.57 gmLI (the average leverage index when he was summoned from the bullpen) that year, his pLI (the average leverage index across all batters he faced during the season) was just 1.29. In other words, the Red Sox brought him in during pretty tight situations where he could help keep the game close, but then he racked up his massive innings total by staying in the game even after the outcome had become more obvious.
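To make the gmLI/pLI distinction concrete, here’s a minimal sketch in Python of how the two averages can diverge for a Stanley-type usage pattern. The leverage values are made up for illustration; this isn’t FanGraphs’ actual calculation, just the arithmetic behind the two stats.

```python
# Hypothetical appearances: each list holds the leverage index (LI) for every
# batter the reliever faced, in order. The first value is the LI at the moment
# he entered the game.
appearances = [
    [1.8, 1.6, 1.4, 0.3, 0.2, 0.1],  # entered a tight game, stayed on after it was decided
    [1.4, 1.2, 1.1],
    [1.5, 1.7, 1.9, 0.4, 0.2],
]

# gmLI: average LI at the moment of entry (one value per appearance)
gmLI = sum(app[0] for app in appearances) / len(appearances)

# pLI: average LI over every batter faced (dragged down by long, low-stakes stints)
all_batters = [li for app in appearances for li in app]
pLI = sum(all_batters) / len(all_batters)

print(f"gmLI = {gmLI:.2f}, pLI = {pLI:.2f}")  # gmLI = 1.57, pLI = 1.06 for this sample
```

A reliever who was pulled as soon as the game got out of hand would instead show a pLI at or above his gmLI, which is exactly the pattern we’ll see from Rivera below.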

A perfect example of this was the game against Baltimore on August 15th of that year. After Mike Torrez pitched four scoreless innings, Stanley entered in the fifth with the game tied at zero. He threw three more scoreless innings before the Red Sox put up an eight-spot in the bottom of the seventh, giving them an 8-0 lead and a 100% chance of winning according to WPA – after all, an eight run lead with six outs to go in a low scoring environment is nearly insurmountable.

Despite the fact that the game was essentially over, Stanley remained on the mound for the final two innings. Those innings were of no real value to the Red Sox, as anyone on the staff could have taken the mound and preserved the win. So, while Stanley came into games in critical situations and threw a lot of innings, he still ended up pitching in a good number of inconsequential situations, and that’s not really optimal usage of an ace reliever either.

The ideal usage pattern is not simply to increase the number of innings thrown by the best relievers by letting them stay on the mound after a game has been decided, but to use them for as many high leverage innings as possible throughout a season. Stanley should not be held up as the model – the 1996 version of Mariano Rivera is what teams should strive for.

At age 26, Rivera appeared in 61 games and faced 425 batters, 269 fewer than Stanley faced in 1982. Still, at 6.96 batters faced per appearance, he was staying on the mound about 60 percent longer than a traditional ninth inning reliever. For comparison, Rivera faced 4.56 batters per appearance in 1997, the year he replaced John Wetteland as the Yankees closer, despite being just one season removed from showing he could handle a heavy workload and sustain a brilliant performance doing it.
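The per-appearance figures are just simple division, but it’s worth seeing where the “about 60 percent longer” comparison comes from. A quick sketch using the totals cited above, plus an assumed ~4.3 batters per appearance for a typical one-inning closer – that baseline is an assumption for illustration, not an exact league figure.

```python
# Batters faced per appearance, using the 1996 and 1997 figures cited above.
rivera_1996 = 425 / 61           # ~6.97 BF per appearance
rivera_1997 = 4.56               # his first full year in the one-inning closer role

# Assumed baseline for a traditional ninth-inning reliever (~4.3 BF/G is an
# illustrative assumption, not an exact league average).
one_inning_closer = 4.3

print(f"1996 Rivera: {rivera_1996:.2f} BF per appearance")
print(f"vs. a one-inning closer: {rivera_1996 / one_inning_closer - 1:.0%} more batters per outing")
print(f"vs. his own 1997 usage:  {rivera_1996 / rivera_1997 - 1:.0%} more batters per outing")
# -> roughly 60% and 53% more, which is the comparison made in the text
```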

Rivera’s gmLI in 1996 was only 1.36, lower than Stanley’s. But because the Yankees let him regularly work the 7th and 8th innings of close games, his pLI was 1.56, meaning the situations actually got more important while he was on the mound. While Stanley came into close games, kept them close, and then racked up innings after the outcome was no longer in much doubt, Rivera was used almost exclusively in situations where the game was on the line. And, because of his ability to get everyone out, he racked up 107.2 innings, putting up a +4.4 win season that ranks as the third highest of any reliever in the last 30 years.

Now, I know it’s easy to dismiss everything Rivera does as a massive outlier and write off anything he’s done as impossible for other mortals to repeat. However, 1996 Rivera posted a FIP- of 40, a mark that 13 relievers have matched or bettered in a season of at least 50 innings pitched since 1982. Rob Dibble maintained a FIP- of 38 while facing 384 batters in 1990. Duane Ward faced 428 batters in 1991, and his FIP- was 43. Even more recently, Eric Gagne (2003), Francisco Rodriguez (2004), and Craig Kimbrel (2011) have faced 300+ batters in a season while performing as well as or better than 1996 Rivera did on a rate basis.
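For anyone unfamiliar with FIP-, it simply indexes fielding independent pitching to league average, so 100 is average and lower is better – a 40 means roughly 60 percent better than the league. Here’s a rough sketch of the calculation; the constant and the omission of the park adjustment are simplifications for illustration, not the exact published formula.

```python
def fip(hr, bb, hbp, k, ip, constant=3.10):
    # Fielding Independent Pitching; the constant (~3.1) is set each season so
    # that league FIP matches league ERA -- 3.10 here is an assumed placeholder.
    return (13 * hr + 3 * (bb + hbp) - 2 * k) / ip + constant

def fip_minus(player_fip, league_fip):
    # FIP- indexes FIP to league average (100 = average, lower = better),
    # ignoring the park adjustment the published figures include.
    return 100 * player_fip / league_fip

# Illustrative line only (roughly Rivera-like, not his exact 1996 totals):
ip = 107 + 2 / 3                  # "107.2" in box-score notation means 107 2/3 innings
player = fip(hr=1, bb=34, hbp=2, k=130, ip=ip)
print(round(fip_minus(player, league_fip=4.40)))   # lands in the neighborhood of 40
```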

While Rivera’s 1996 season might be the best example of how a non-closer relief ace can be deployed for maximum value, he’s not the sole example of a pitcher who carried a significant workload while performing at an extremely high level in critical situations. Asking a pitcher to be that dominant while facing 600 to 700 batters in a season appears unrealistic, but we have evidence that elite relievers can succeed while facing 300 to 400 batters in high leverage situations during a single season.

Last year, the 30 pitchers with 15 or more saves averaged 262 batters faced and 4.04 batters per appearance. These usage patterns aren’t limited to the closer’s role either; the top four relievers in baseball by ERA- last year – David Robertson, Eric O’Flaherty, Scott Downs, and Mike Adams – each averaged fewer than 3.89 batters per game, despite the fact that each showed he could get batters out from both sides of the plate and didn’t need to be used as a specialist. Still, the evolution of set bullpen roles has led to limits not only on how many batters the closer faces, but on the eighth inning setup man as well.

As Juliano notes, the goal shouldn’t be a return to 10-man pitching staffs simply for the sake of roster efficiency, but a strategy in which the bullpen produces the most value overall. We’ve shifted from an approach that focused too heavily on quantity to one that focuses too heavily on quality. The best deployment of a bullpen isn’t from 1982 or 2011, but from the year directly in between those two.




Dave is the Managing Editor of FanGraphs.


47 Responses to “Reliever Usage Redux: A Follow-Up”

  1. RMR says:

    To what degree does using poor relievers in low leverage situations result in the creation of more higher leverage innings? A 3-0 lead in the 8th protected by a relief ace is likely to remain a low leverage situation. Put a crappy reliever in that spot and you might find your lead down to 3-2.

    That is, has the changing usage pattern resulted in there being more or fewer high leverage innings to protect?

  2. Oliver says:

    Of course, the criticism was that Atlanta was wearing down Kimbrel and Venters, and it at least appeared that way. I think this approach makes a lot more sense, and I welcome the day when Sergio Romo is destroying 6.93 batters per appearance.

    • Another option to consider when talking about roster construction is run environment. The Braves were bottom half of the NL in scoring but very good in run prevention, which would make you think that they’re more likely to be in a lot of close games. Maybe the Braves benefit more from an extra pitcher or two instead of a pinch runner and 3rd catcher, compared to a team that averaged 5 runs per game and won by scoring 8 runs more frequently.

      • Richie says:

        The closer the game, the more value a pinchrunner has. Or a pinchhitter, or anyone who projects on getting one AB or hitter, and nothing more. Higher run scoring environments require more pitchers, in order to sop up all the extra pitches needed to get through them.

      • Baltar says:

        Oliver, your assumption that low-offense, high-defense teams are more likely to be in close games is dubious.
        Like so many other common-sense notions (nearly all of which turn out to be false), this needs to be proven.
        One counter-example, obviously not enough to prove anything, was the 2010 high-offense, low-defense Cincinnati Reds, who, I’ve been told by Reds fans, led the majors in one-run wins (whether they were in the most one-run games, I don’t know).
        Does anybody know of any studies that have been done on this?

      • Baltar says:

        Oops, my above comment was in response to Real Neal, not Oliver.

      • Richie says:

        1. Most common sense notions do turn out to be true, rather than almost all false. We just remember the false ones much more vividly, especially when the ego gets at all involved.

        2. It’s pretty much a mathematical given that lower-scoring run environments will produce lower run margins.

        3. I do think you’re essentially correct, tho’, in that the effect ain’t all that big. For instance, I do recall saves being positively correlated to run environment, but very loosely. It was a very small factor.

      • Baltar says:

        Richie, in response to your 3 points:

        1. You are wrong.

        2. I would love to see your mathematical proof, but I know there isn’t one; so perhaps a study would be in order.

        3. You are correct.

      • Cidron says:

        Perhaps he meant low scoring games, not close games. A lot of 2-1, 3-1, 3-2 games.

    • brendan says:

      romo is something of a ROOGY, not sure he is a good guy to go 2 innings

  3. JMag043 says:

    This is why I don’t understand why teams are still sticking with a designated 7th/8th/9th inning guy. Is it not more valuable to save your best relievers to preserve 1-run leads and ties, rather than bringing them in for saves in games where you are up by two or three runs?

    The Save statistic is really holding teams down from using their best relievers in situations where they could provide the most value.

    • Slartibartfast says:

      This.

      While it is fun to claim things are “the next big market inefficiency,” I (anecdotally) feel like this is ACTUALLY the next big inefficiency (though not market related) that teams will begin to exploit. Reliever usage feels so whack right now.

      • JMag043 says:

        To me it seems teams don’t value a tie game in the 7th inning and possibly even 8th inning as much as they should.

        Let’s say you have a 3-3 game in the 7th. I think there is a case to be made that you bring out your closer to pitch the 7th and 8th to preserve the lead, and hope your offense can give you some cushion in those two innings. Then if they do, you bring out your “set up” guy to finish the game.

      • JMag043 says:

        preserve the tie*

      • Barkey Walker says:

        The insane thing about this “high leverage situations in the 7th” thing is that if you do well enough that you don’t need a closer in the 9th, you didn’t need one in the 7th either. The reality is that at any score the later innings are always more clutch. Saving the closer means putting in the setup guy in the 7th and 8th and then, if either team blows the lid off, not using the closer in the 9th.

  4. TK says:

    So how about:
    Reliever A comes in the 8th for 2 ip if you’re up one or tied
    Reliever B comes in for same if up 2 or 3 or down 1

    Insert Kimbrel and Venters. They’d pitch about 50-55 games, about 95-100 innings.

    • Slartibartfast says:

      A. hard and fast rules are almost never a good idea
      B. if they can pitch more innings without seeing a decrease in effectiveness, you should at least pitch them up to that point, likely ~70 IP.

      But the notion that you are driving at, I agree with.

      • TK says:

        Ya, I meant as a general rule. If you tell guys they are generally going to be 2 IP guys they can prepare and if you have a framework so they know their role, it supposedly helps (RP say this, so I assume it’s true, though maybe it’s not). Also, Fredi is not going to sit there and calculate LI; this is a way he (and a lot of managers) might consider changing.

      • Cidron says:

        to quote any pirate.. They are not rules.. merely guidelines.

      • bstar says:

        TK,

        Further down in this article, you’ll find someone who said Bill James did a study and found that guys knowing their usage (pitcher A will be the 7th inning guy, pitcher B the eighth inning guy, and so on) actually DID help. If that’s true, what in the world did the Braves do wrong by pitching O’Flaherty/Venters/Kimbrel in the 7th/8th/9th of close games? You’re suggesting they should pitch with less frequency but throw more innings when they do. As you pointed out, this is going to lead to 100 IP for relievers. Wouldn’t that make the whole ‘Fredi overworked his bullpen’ effect even more pronounced?

        O’Flaherty didn’t allow a single run in Sept/Oct. Venters’ problem in September, upon closer inspection, was that he pitched very badly when given 3+ days of rest (a 1.86 WHIP in Sept./Oct.). His BB/9 skyrocketed. He’s a rubber-arm guy who seems to lose his touch when given too much rest. Plus, the Braves DID play a lot of close games, and they didn’t have a reliable fourth option (Linebrink was shaky at best and Proctor was so bad he got outright released).

        About the only thing I agree on is they could have saved Kimbrel a few innings by not pitching him when up by 2 or 3 runs.

  5. Sensei Kreese says:

    Dave, thanks for not providing a video link this time. Much appreciated!

    YEESH!

  6. Dave, how do the 2010 Giants fit into your thought process?

  7. Matthew Broderick says:

    Dave Cameron is a jedi!

  8. Richie says:

    Bill James did some research strongly suggesting that knowing ahead of time when you were going to pitch did help pitchers pitch more effectively. If that effect is real and strong enough, it would justify the strategy of ‘you’re the 8th (maybe the 7th) inning guy’, ‘you’re the 9th (maybe the 8th) inning guy’.

    • KDL says:

      Does “knowing when you were going to pitch” have to mean a particular inning? Couldn’t a good manager consistently deploy his ‘pen in a way that they all know when they are going to pitch?
      For example: We need you to take the 3-4-5 hitters the first time they bat after the 7th. You could do much more focused scouting…even pay extra close attention to the in-game ABs to help confuse/attack a given set of hitters.

      • Richie says:

        Can’t see any reason why that wouldn’t work. Except in one way: my impression has been that hitters can scout pitchers better than vice versa. Don’t know if that’s been empirically validated.

  9. Guy says:

    The easiest way to determine whether the change in reliever usage has been beneficial to teams is to look at WPA over time. Fangraphs publishes WPA data going back to 1974, and this is the approximate average reliever WPA for MLB per season:
    1974-86: +12
    1987-99: +24
    2000-11: +47
    If this data is correct, then it seems the move to shorter appearances has in fact been beneficial to teams (about 2 extra wins per team per season). Is there any reason to think the WPA numbers are wrong?

    • Eric R says:

      Would part of that be that relievers are getting a bigger piece of the pie now than they did before?

      Looking in those time periods and counting pitcher stints that have no more than 10% of their games being starts:

      1974-1986: 103493 IP / 468825 IP = 22.1%
      1987-1999: 133235 IP / 498799 IP = 26.7%
      2000-2010: 143494 IP / 476373 IP = 30.1%

      Not sure if there is enough gain in IP there to be the whole difference in reliever WPA, but almost certainly a contributor…

      • Guy says:

        The change in reliever IP over this time period is fairly small, as Dave C. documents in his first post. Also, as relievers throw more IP, that brings down league ERA and so serves to reduce their runs saved and WPA. The net effect on reliever WPA of giving more innings to relievers ends up being tiny, nothing that could explain the increase reported by Fangraphs.

    • Will says:

      On a per team basis, total WPA is not as drastic as in aggregate. Also, I’ve done a follow-up post showing there are more games in which starters throw 5-7 innings than before, which very well could be the cause of higher WPA.

      • Guy says:

        Why would starters going 5-7 IP produce more WPA for relievers?

        On a per team basis, the increase in WPA is very substantial. WPA has increased four-fold. The number of teams in MLB over the past 35 years has increased rather less than that.

        It may well be possible to improve current reliever usage. But it also seems clear that more relievers throwing fewer innings per appearance yields more wins.

    • Mike Green says:

      I checked by decade (2001-10, 1991-2000 and 1981-1990). Reliever leverage index is pretty much constant, but WPA/LI is way up. You would expect that WPA/LI would track ERA pretty closely, but it hasn’t. One reason for this may be an increased tendency to bring in relievers at the start of an inning (where they don’t get the benefit of the run attribution rules).

      My own informal observation is that more resources (money, roster spots) have been devoted to the bullpen, and that this has led to somewhat better pitchers (on the whole) finding their way to the high leverage roles in the pen. In 1981-90, it was very common for clubs to have an ace reliever but no real set-up man, and some clubs did not have an ace. Correspondingly benches have become shorter, with the result that offensive players (like a Benny Ayala or Jim Leyritz) are less likely to come off the bench to pinch-hit for a weak hitter. This makes life somewhat easier for relievers than it was formerly.

      The other observation is that the presence of Mariano Rivera (the greatest reliever of all time by a large margin) for the entire decade of 2001-10 probably makes the results seem much larger than they would otherwise be. It is only the Yankees’ WPA/LI which is grossly outside the norms of the previous decade (although the Twins and Angels of 2001-10 would have led the leaderboard for 1991-2000).

  10. BDF says:

    I’ve followed this on Fangraphs and The Book and am ready to nominate Dave for Most Intellectual Voice On the Internet. Thanks. Love it. This is one of the last frontiers in which a sabermetric mindset has an opportunity to have a gargantuan impact on the game on the field, and it’s a great discussion to be having at such a high level.

  11. CircleChange11 says:

    I haven’t really seen a good response to my comments regarding LI situations. Here it is …

    We only know the LI situations of a game AFTER it is over. Only then can we look back and see that the highest LI situation in the game was in the 7th inning. But, at the time, the 7th inning was only a high LI situation, not necessarily the highest.

    So, if the Braves are beating MIL 3-2 with Braun on 1st and Fielder coming up with 1 out, should ATL bring in Kimbrel?

    I ask because it’s obviously the highest LI situation up to that point.

    My contention is that they shouldn’t. ATL still has 2-3 innings to overcome any lead loss in this 7th inning situation.

    If they use Kimbrel in the 7th, then who do they use in the 9th when they’re still winning 3-2 and there’s a runner on with 1-out? THAT situation is even higher LI … AND … ATL has fewer innings to overcome a lead loss.

    It’s EASY to look at a game and see which inning they “should have” used their relief ace.

    It’s MUCH more difficult to look at the situation in the 7th inning and know that it’s going to be the highest LI of the game.

    So, what we might just see, if MLB adopted our influence, is that the relief ace simply comes into the game for the 1st really high LI situation of the game, and then the 2nd best reliever come in next, then the 3rd, etc. So, in HIGH leverage 9th inning situations, teams could be using their 3rd best reliever in the highest LI of the game, even by using “our” method.

    I don’t like much of the “relief ace” usage analysis because it uses hindsight as the primary determinant. Managers don’t get the value of hindsight, and it really doesn’t require much analysis.

    I also do not know of research that explores any “9th inning loss” effect, versus blowing leads earlier in games.

    In reality, teams are going to need to be like the traditional best teams, and have 3 really good relievers. They might not necessarily be the “7th inning guy”, setup man, and closer … but 3 guys that pitch in the 3 highest leverage situations.

    Really, I don’t think knowing when to use the relief ace during the game is as neat, tidy, and foolproof as many others need it to be.

    A team could arm itself knowing when (during the game) the highest leverage situations tend to occur and have a better general idea of when to use a relief ace.

    But in regards to any blown leads, you’re damned if you do, damned if you don’t. Just don’t blow any leads and no one will second guess.

    • Eric R says:

      It comes down to a manager deciding whether he believes the fire in the 7th inning is more important than a flood in later innings.

      If they bring in their third or worse reliever in the 7th in a close game with runners on base and the opponents better hitters coming up, and that pitcher blows it, then they just may end up not needing #1 or #2 at all since the lesser pitcher just put the game out of hand.

      There are the same issues either way, since as you say, we don’t know what’ll happen next, all we can do is look at what is probable to happen next and use our best judgement based off of that info.

    • Richie says:

      I believe Bill James looked for a ‘9th inning effect’ way back in the 80s, and concluded with full confidence that there ain’t no such critter.

    • Cidron says:

      you could reverse that application, bring in #3 for the first crisis point, bring in #2 for the second, and #1 for the late/9th inning crisis point.

      The reasoning behind this would be that,

      If #3 blows it (say in the sixth or seventh) we’ve got time to recover and retake the lead. So, it’s not the end of the world. He isn’t as good as 2 or 1 and it will not be a shock.
      Number two (current terminology “setup”) is a better pitcher, thus he is more sure to lock down the second crisis, and more likely to leave the team still intact with its lead.
      The number one (current terminology, “closer”) is almost a sure thing to lock down his crisis moment for the win.

      This way you also preserve the higher quality arms in the pen by having the 2 and 1 pitchers pitch less, as they would only show if there is a second and third crisis point.

      • CircleChange11 says:

        You also cannot use your relief ace every game during the highest LI situation, so even that runs into problems.

        The basis of the problem (and as a result, the analysis) is that the manager has to place his bet in regards to when the highest LI situation is going to occur, and use his relief ace accordingly.

        MY big issue with the analysis part is that we’re using hindsight as our gauge, and that’s not responsible since the manager doesn’t get the benefit of using it to make decisions.

        I do, however, think that some guidelines could be set up so that any LI greater than X.YZ goes to the relief ace regardless of whether it’s the 7th, 8th, 9th inning … but scenarios with an LI less than that could go to any of the top 3 relievers, even if ends up being the highest LI of the game.

        Personally, I think we make far too much of this issue, and do so in a way that is not at the level of integrity that I would prefer. In short, I think we use hindsight to make ourselves look smarter than the managers.

        Anyway, I think a much better approach as opposed to trying to guess the highest LI situation during the game and using your relief ace then is to … wait for it … have 3 really good relievers that you can use in a variety of situations. I know it sounds obvious. But, if we really believe that almost anyone can be an effective reliever (as we say we do), then we really shouldn’t be making such a big deal about it.

        We CAN’T say both [1] closers are easily replaceable within an organization and [2] teams are doing themselves a big disservice by mishandling the usage of the closer. They’re contradictory terms.

        We’re basically saying on one hand that relievers’ talent and expected performance are very close to each other, and then on the other hand saying that the relief ace’s talent is so much greater than everyone else’s that they should be used in the greatest LI situation even if it’s the 7th inning.

        IMO, in reality, it’s much more like batting order, where as long as the better relievers get the bulk of the usage, the rest is just details and may result as much due to luck and randomness as anything else.

        Reliever decisions occur at the end of the game, so they attract more attention than the performance likely deserves.

    • Cream says:

      It’s an interesting variation of the secretary/fiance problem.

      For information, see: http://en.wikipedia.org/wiki/Secretary_problem

  12. David says:

    A lot of these one inning pitchers are one-pitch only guys – pitchers with a limited set of skills. They come in and chuck as hard as they can for as long as they can. Their performance in a second inning may not be as effective as in the first. So in that sense, their current usage may be optimizing their performance.

    • Richie says:

      I recall Tom Tango saying there was an effect going from 2 to 3 innings, but he couldn’t vouch for there being one going from 1 to 2.

  13. Jim Lahey says:

    It helps the RP to know he’s going into the game so he can be properly prepared, vs. if you “might” pitch you won’t be in the right mindset. It’s “I wonder if skip is gonna call my name” vs. “going to just get loose then crank it up.”

  14. Joe C says:

    Bill James has an article in the revised ’03 Historical Baseball Abstract suggesting inning-run situations in which a relief ace would be optimally used. I don’t have it with me right now, or I’d look it up. Would appreciate if someone else would?
