On Friday, I spent some time talking about the change in bullpen usage patterns over the last thirty years, and noted that the shift to more pitchers making shorter appearances hadn’t led to an improvement in performance for relief pitchers in the aggregate. There were a lot of good responses left in the comments, and there’s some useful commentary on the issue over at The Book Blog as well.
Many of the responses focused on a similar point that I didn’t address very well – that by focusing on aggregate data, we could miss value being added if performance in extremely important situations improved dramatically under the new usage patterns. The results as a whole might be similar, but if the new allocation produces better performance in important situations and worse results when the game is already decided, then teams would be drawing a benefit from using relievers in this manner. William Juliano expressed this view in an excellent follow-up post at his own blog, looking into the relative performance of the top tier of relievers from both 1982 and 2011. As expected, he found the quantity-for-quality trade-off: modern relief aces are pitching fewer innings but getting somewhat better results in those innings than their counterparts did thirty years ago. The two changes essentially offset, as he notes, and there’s only a small difference in WAR between the 25 best relievers of 1982 and 2011.
Juliano finishes with the following conclusion:
So, where does that leave us? It seems certain that as a group relievers are no better or worse today than they were 30 years ago. However, instead of advocating a return to the past with the goal of saving money and roster spots (after all, if not wasted on marginal relievers, they’d probably be squandered on below average position players), perhaps the focus should be on improving bullpen usage within the modern theory? The one thing we know for certain is today’s relievers do pitch in more games, but, unfortunately, managers too often defer to the save rule and waste many of these appearances in low leverage situations. If managers would instead commit to shooting their best bullets at the right targets (i.e., high leverage situation regardless of inning), the current philosophy of shorter outings might prove to be the most optimal. At the very least, this hybrid approach is worth trying, especially when you consider that a return to the past approach promises little more than the status quo.
He’s right – my advocacy of a return to the days of using relievers like Bob Stanley doesn’t appear to offer a substantial improvement in reliever value, as it’s simply going the other direction on the quantity-quality scale. And, in fact, holding up Stanley’s 1982 season as the example of optimal reliever usage is simply incorrect. While he had a 1.57 gmLI (the average leverage index at the moment he was summoned from the bullpen) that year, his pLI (the average leverage index across all batters he faced during the season) was just 1.29. In other words, the Red Sox brought him in during pretty tight situations where he could help keep the game close, but then he racked up his massive innings total by staying in the game even after the outcome had become more obvious.
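To make the gmLI/pLI distinction concrete, here’s a minimal sketch of how the two averages differ, using made-up leverage index values rather than Stanley’s actual game logs: gmLI averages the leverage index only at the moment of entry, while pLI averages it across every batter faced.

```python
# Minimal sketch of the gmLI vs. pLI calculation, using hypothetical
# leverage index (LI) values -- NOT Stanley's actual game logs.
# gmLI: average LI at the moment the reliever enters the game.
# pLI: average LI across every batter he faces.
# A reliever who enters tight games but stays on after they're decided
# will show a gmLI noticeably higher than his pLI.

appearances = [
    # (LI at entry, LI for each batter faced in that outing)
    (2.0, [2.0, 1.9, 1.8, 0.4, 0.3, 0.2]),  # entered close, stayed into a blowout
    (1.6, [1.6, 1.5, 1.4, 1.3]),            # short, consistently tight outing
    (1.0, [1.0, 0.9, 0.1, 0.1]),            # entered tied, game broke open

]

gmLI = sum(entry_li for entry_li, _ in appearances) / len(appearances)

batter_lis = [li for _, batters in appearances for li in batters]
pLI = sum(batter_lis) / len(batter_lis)

print(f"gmLI = {gmLI:.2f}, pLI = {pLI:.2f}")  # entry leverage exceeds overall leverage
```

The toy numbers are invented, but the pattern is the one the text describes: a pitcher can post a high gmLI and a much lower pLI simply by staying in games after they stop being close.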
A perfect example of this was the game against Baltimore on August 15th of that year. After Mike Torrez pitched four scoreless innings, Stanley entered the fifth with the game tied at zero. He threw three more scoreless innings before the Red Sox put up an eight spot in the bottom of the seventh, giving them an 8-0 lead and a 100% chance of winning according to WPA – after all, an eight run lead with six outs to go in a low scoring environment is nearly insurmountable.
Despite the fact that the game was essentially over, Stanley remained on the mound for the final two innings. Those innings were of no real value to the Red Sox, as anyone on the staff could have taken the mound and preserved the win. So, while Stanley came into the game in critical situations and threw a lot of innings, he still managed to pitch in a good number of inconsequential situations, and that’s not really an optimal usage of an ace reliever either.
The ideal usage pattern is not simply to increase the number of innings thrown by the best relievers by letting them stay on the mound after a game has been decided, but to use them for as many high leverage innings as possible throughout a season. Stanley should not be held up as the model – the 1996 version of Mariano Rivera is what teams should strive for.
At age 26, Rivera appeared in 61 games and faced 425 batters, 269 fewer than Stanley faced in 1982. Still, at 6.96 batters faced per appearance, he was staying on the mound about 60 percent longer than a traditional ninth inning reliever. For comparison, Rivera faced 4.56 batters per appearance in 1997, the year he replaced John Wetteland as the Yankees closer, despite being just one season removed from showing he could handle a heavy workload and sustain a brilliant performance doing it.
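The per-appearance figures above are simple division over the season totals cited in the text; as a quick sanity check:

```python
# Sanity check on the workload arithmetic cited above
# (season totals taken from the text, nothing else assumed).
games_1996, batters_1996 = 61, 425
bf_per_app_1996 = batters_1996 / games_1996

bf_per_app_1997 = 4.56  # per-appearance figure cited for Rivera's first closer season

print(round(bf_per_app_1996, 2))                    # just under 7 batters per appearance
print(round(bf_per_app_1996 / bf_per_app_1997, 2))  # roughly 1.5x his 1997 per-outing workload
```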
Rivera’s gmLI in 1996 was only 1.36, lower than Stanley’s. But because the Yankees regularly let him work the 7th and 8th innings of close games, his pLI was 1.56, meaning the situations actually got more important while he was on the mound. While Stanley came into close games, kept them close, and then racked up innings after the outcome was no longer in much doubt, Rivera was used almost exclusively in situations where the game was on the line. And, because of his ability to get everyone out, he racked up 107.2 innings, putting up a +4.4 win season that ranks as the third highest of any reliever in the last 30 years.
Now, I know it’s easy to dismiss Rivera as a massive outlier and write off anything he’s done as impossible for mere mortals to repeat. However, 1996 Rivera posted a FIP- of 40, a mark that 13 relievers have matched or bettered in a season of at least 50 innings pitched since 1982. Rob Dibble maintained a FIP- of 38 while facing 384 batters in 1990. Duane Ward faced 428 batters in 1991, and his FIP- was 43. Even more recently, Eric Gagne (2003), Francisco Rodriguez (2004), and Craig Kimbrel (2011) have faced 300+ batters in a season while performing as well as or better than 1996 Rivera did on a rate basis.
While Rivera’s 1996 season might be the best example of how a non-closer relief ace can be deployed to maximum value, he’s not the sole example of a pitcher who was able to carry a significant workload while performing at an extremely high level in critical situations. While asking a pitcher to be that dominant while facing 600 to 700 batters in a season appears unrealistic, we have evidence that elite relievers can succeed while facing 300 to 400 batters in high leverage situations during a single season.
Last year, the 30 pitchers with 15 or more saves averaged 262 batters faced and 4.04 batters per appearance. These usage patterns aren’t limited to the closer’s role, either; the top four relievers in baseball by ERA- last year – David Robertson, Eric O’Flaherty, Scott Downs, and Mike Adams – each faced fewer than 3.89 batters per game, despite the fact that each showed he could retire batters from both sides of the plate and didn’t need to be used as a specialist. The evolution of set bullpen roles has placed limits not only on how many batters the closer faces, but on the eighth inning setup man as well.
As Juliano notes, the goal shouldn’t be a return to 10-man pitching staffs simply for the sake of roster efficiency, but a strategy that gets the most overall value out of the bullpen. We’ve shifted from an approach that focused too heavily on quantity to one that focuses too heavily on quality. The best deployment of a bullpen isn’t from 1982 or 2011, but from the year directly in between those two.