When Bullpenning Goes Wrong

Playoff baseball has changed in recent years. Where managers might once have employed tactics very similar to those used in the regular season, that has increasingly not been the case. In 2011, for example, Tony La Russa used his bullpen just as often as he did his starting pitchers en route to a championship for the St. Louis Cardinals. A few years later, the Royals shortened games with an elite collection of relief arms on their way to consecutive World Series appearances. Last year, Andrew Miller appeared for Cleveland whenever the situation demanded, frequently throwing multiple innings.

Now everybody is using their bullpen more often. Relievers are accounting for a greater share of innings and wins in the regular season. This postseason has featured at least one game in which the winning team's starter didn't even survive the first inning. In terms of pitching effectiveness, clubs seem to be approaching an optimal state. Some aspects, though, are getting worse, particularly the handling of inherited runners.

This season, around 30% of inherited runners (runners already on base when a new pitcher enters, whose runs, if they score, are charged to the previous pitcher) eventually came around to score. That's a little higher than in recent years, which were closer to 29%, but lower than the 32-35% we were seeing more than a decade ago, per Baseball-Reference. It makes sense that the league figure would have increased in the last few years, mirroring the rise in offensive production across the league. That the number is much lower than it was a decade ago suggests that relievers have gotten better over that span. Better relievers pitching more important innings in the playoffs is a good thing when it comes to preventing runs, but usage is still far from ideal.

Here’s how each of this year’s postseason teams performed in the regular season in terms of allowing inherited runners to score.

Playoff Teams With Inherited Runners
Club Inherited Runners IR Scored IRS% MLB Rank
Arizona 239 58 24.3% 1
Los Angeles NL 180 45 25.0% 2
Colorado 242 61 25.2% 4
Washington 197 51 25.9% 5
New York AL 223 61 27.4% 6
Minnesota 300 83 27.7% 9
Cleveland 215 60 27.9% 11
Boston 250 69 27.6% 13
Chicago NL 202 69 34.2% 24
Houston 240 90 37.5% 29
MLB AVG 236 72 30.3%
Playoff AVG 229 65 28.3%
LDS Team AVG 218 63 28.8%
SOURCE: Baseball-Reference
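
As a sanity check on the rate column, IRS% can be recomputed directly from the raw counts. Here's a minimal sketch in Python using the table's numbers; the pooled playoff rate (total scored over total inherited) comes out to the 28.3% shown in the Playoff AVG row:

```python
# Inherited runners (IR) and inherited runners scored (IRS) for each playoff
# team, copied from the table above.
teams = {
    "Arizona": (239, 58),
    "Los Angeles NL": (180, 45),
    "Colorado": (242, 61),
    "Washington": (197, 51),
    "New York AL": (223, 61),
    "Minnesota": (300, 83),
    "Cleveland": (215, 60),
    "Boston": (250, 69),
    "Chicago NL": (202, 69),
    "Houston": (240, 90),
}

# IRS% = inherited runners scored / inherited runners
for club, (ir, irs) in teams.items():
    print(f"{club:15s} IRS% = {100 * irs / ir:.1f}%")

# Pooled playoff-team rate: total scored over total inherited.
total_ir = sum(ir for ir, _ in teams.values())
total_irs = sum(irs for _, irs in teams.values())
print(f"Playoff teams pooled: {100 * total_irs / total_ir:.1f}%")  # prints 28.3%
```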

Generally speaking, the playoff teams were pretty good at preventing inherited runners from scoring. Only the Astros and Cubs recorded a below-average mark on a rate basis, and the Cubs allowed few enough opportunities that their total number of inherited runners scored (69) still came in below the league average (72). As some of the top teams indicate, it helps to have good starting pitching. During the regular season, there were 2.9 inherited runners per game, just under one of whom scored. The numbers from this year's playoffs reflect the urgency teams feel in high-leverage baseball. Postseason games have averaged 4.3 inherited runners per game. When pitchers have gotten in trouble, managers have intervened.

Although the urgency is admirable, inherited runners are scoring at a higher rate in the playoffs than in the regular season. In the playoffs, 1.6 inherited runners have scored per game, a 37% rate, well above the regular season's 30% league mark and even further above the 28% produced collectively by this year's playoff teams. As for why that number is higher, it could simply be randomness. We're dealing with 29 games here, compared with more than 80 times that many in the regular season, and the gap might even out over time. A roughly 50% jump in inherited runners per game will complicate any assumptions we try to make, but it also provides many more opportunities for scoring.
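
To get a rough sense of how much of that gap could be noise, one can treat every postseason inherited runner as an independent 30% chance of scoring and simulate many 29-game postseasons. This is a sketch under those simplifying assumptions (independence, a flat league-average rate), not a model of actual game states:

```python
import random

random.seed(0)  # fixed seed for reproducibility

P_SCORE = 0.30               # regular-season scoring rate for inherited runners
N_RUNNERS = round(29 * 4.3)  # ~125 inherited runners across 29 postseason games

def simulated_rate():
    """Fraction of inherited runners that score in one simulated postseason."""
    scored = sum(random.random() < P_SCORE for _ in range(N_RUNNERS))
    return scored / N_RUNNERS

rates = [simulated_rate() for _ in range(10_000)]
share_at_least_37 = sum(r >= 0.37 for r in rates) / len(rates)
print(f"Simulated postseasons at or above a 37% rate: {share_at_least_37:.1%}")
```

The printed share gives a sense of how often chance alone produces a scoring rate that high in a sample this small under these assumptions.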

One thing we do know is that pitchers — relief pitchers, in particular — tend not to pitch quite as well with runners on base. Here are the numbers for relief pitchers with runners on versus bases empty, both overall and in high-leverage situations, this past season.

Relief Pitchers with Runners on Base
Situation wOBA FIP
Bases Empty .306 4.03
Runners On .316 4.30
Bases Empty, High Leverage .295 3.74
Runners On, High Leverage .309 4.24
SOURCE: Baseball-Reference

As you can see, there’s a decent gap between each of these situations, and the gap grows with higher leverage. Relievers have a pretty big advantage over starters when it comes to getting outs. That advantage shrinks somewhat, however, when the reliever enters a game to face a batter with a runner on base. This past season, starters recorded the same FIP (4.48) with runners on base as with the bases empty. The difference in wOBA was just five points, .325 versus .330. That gap is the smallest it has been since at least 2002, the beginning of our splits leaderboards. The shrinking of that gap is due, in part, to starters pitching in fewer high-leverage situations. Managers are providing quicker hooks, perhaps realizing pitchers aren’t quite as good as the game goes on. From 2002 to 2016, starters pitched to an average of 7,400 high-leverage batters per season. That number was down to just over 6,100 this year.

That wider performance gap in high-leverage situations is a new phenomenon. Since 2002, on average, there has been essentially no difference between relief performance with runners on and with the bases empty in high-leverage situations. Interestingly, while relief usage has increased greatly over the years, the number of batters faced in high-leverage situations, whether with runners on or the bases empty, has remained roughly the same. Relievers’ share of high-leverage batters has increased, and the gap between their performance with runners on and with the bases empty has increased as well.

We know that bringing in a reliever with a runner on shrinks the advantage a reliever has over a starter, but we also know that the third-time-through-the-order penalty stretches that advantage back out. Keeping a starter in with runners on base isn’t the right move; bringing in the reliever is still more likely to yield a better result. There’s another option, though: rather than letting a starter get into trouble in the first place, teams would do well to remove him before the trouble even occurs. If a starter is going to be removed at the first sign of trouble the third time through the order, it’s better simply to bring in a reliever beforehand, so that the new pitcher can operate with a clean slate.

Deciding when to take out one pitcher and bring in another is probably the most difficult — or, at least, most scrutinized — aspect of a manager’s on-field job. If the manager is successful, he might get some praise, but it probably goes mostly unnoticed. If the manager is unsuccessful, he’s likely to receive criticism, regardless of his decision.

Going to the bullpen is all the rage right now, and it probably will be for some time, but just going to the bullpen isn’t enough. The timing needs to be right. Taking a pitcher out once he gets in trouble is the easy call. Harder, but perhaps more fruitful, is taking a pitcher out before the trouble ever starts. Branch Rickey said, “It is better to trade a player a year too early than a year too late.” That same logic might apply to pitchers in the playoffs. It’s a manager’s job to put his players in the best position to win. If at all possible — and it won’t always be possible — that means bringing in pitchers with a clean slate.

We hoped you liked reading When Bullpenning Goes Wrong by Craig Edwards!

Craig Edwards can be found on twitter @craigjedwards.

Chris

The fact that relief pitchers, as a whole, pitch worse with runners on base does not mean that the average relief pitcher does so. Relief pitchers who allow lots of baserunners themselves are going to pitch a disproportionate share of innings with runners on. (Unless they also give up a lot of HR to clear the bases, I suppose.) That may seriously skew the analysis.

To take the top and bottom of the reliever leaderboards (WAR) as an example:

Kenley Jansen (best fWAR)
bases empty: 42 IP, .222 wOBA
runners on: 26.1 IP, .182 wOBA

Chris Beck (worst fWAR)
bases empty: 32.2 IP, .368 wOBA
runners on: 32 IP, .405 wOBA

“average”
bases empty: 37.1 IP, .285 wOBA
runners on: 29 IP, .304 wOBA

Jansen’s wOBA is .040 better with runners on, Beck’s is .037 worse with runners on, and Jansen has pitched more innings. But when you just add the splits together, you conclude that, “on average,” they pitched .019 wOBA worse with runners on. Which makes no sense.

So if you really want to test the hypothesis that a “typical” relief pitcher pitches worse with runners on base than he does with the bases empty, you need to adjust your data set so each reliever is weighted equally in both the bases-empty and runners-on subsets. Weighting both sides by each pitcher’s total IP, rather than his IP within each split, will probably give you the most meaningful answer.
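
The skew the comment describes can be made concrete with a small Python sketch using the Jansen and Beck numbers above. Pooling the splits weights each pitcher by his innings within that split, and since Beck logged a much larger share of his innings with runners on than Jansen did, the runners-on pool tilts toward the worse pitcher:

```python
# (name, bases-empty IP, bases-empty wOBA, runners-on IP, runners-on wOBA),
# taken from the comment above.
def ip_to_decimal(ip):
    """Convert baseball IP notation (.1 = one out, .2 = two outs) to decimal."""
    whole, outs = divmod(round(ip * 10), 10)
    return whole + outs / 3

pitchers = [
    ("Jansen", ip_to_decimal(42.0), .222, ip_to_decimal(26.1), .182),
    ("Beck",   ip_to_decimal(32.2), .368, ip_to_decimal(32.0), .405),
]

# Pooled splits: each pitcher weighted by his IP *within* the split.
empty = (sum(ip * w for _, ip, w, _, _ in pitchers)
         / sum(ip for _, ip, _, _, _ in pitchers))
on = (sum(ip * w for _, _, _, ip, w in pitchers)
      / sum(ip for _, _, _, ip, _ in pitchers))
print(f"Pooled wOBA: {empty:.3f} empty, {on:.3f} runners on")  # gap of about .019

# Each pitcher's own split gap, weighted by his TOTAL IP instead.
total_ip = sum(e_ip + o_ip for _, e_ip, _, o_ip, _ in pitchers)
weighted_gap = sum((o_w - e_w) * (e_ip + o_ip)
                   for _, e_ip, e_w, o_ip, o_w in pitchers) / total_ip
print(f"Total-IP-weighted average gap: {weighted_gap:+.4f}")  # slightly negative
```

Run as-is, the pooled numbers reproduce the .285/.304 split from the comment, while the total-IP-weighted average of the individual gaps flips sign: this pair was actually a hair better with runners on, not .019 worse.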