# Same old, same old

Consistency. It’s an important word when it comes to baseball. Google “consistency + baseball” and you get 2,830,000 results. It’s also a loaded word. Years ago, for example, Red Sox fans argued that the Sox should sign Chan Ho Park because he was a consistent pitcher. Park had been third in the major leagues in quality starts that year—clear proof, the argument went, that he was a much better pitcher than his 15-11 record demonstrated. After all, teams win about 67% of all quality starts, so Park should have had at least 18-20 wins.

How many times has a baseball announcer lauded a baseball player for his consistency, or mentioned that a player needs to be more consistent? Albert Pujols’ torrid start has been so shocking, in part, because he *consistently* hits about 7-8 home runs a month. Matt Clement continues to get blasted in the Boston media for his inconsistent starts.

But how important is consistency? There are two parts to this question, actually. Firstly, is consistency a repeatable skill, or is it random (inconsistent)? And, secondly, do we care whether a player is consistent or not? Is a guy who goes out there and pitches seven innings giving up three runs every time out really more valuable than a guy who pitches six innings and allows five runs half the time, while holding the opposition to one run over eight innings the other half?

The second question seems ridiculous. It seems obvious that you’d rather have a (good) chance to win every time out, but it’s not that simple. More on that later. First, I’m interested in whether or not consistency is a repeatable skill. For the sake of this article, I’m going to focus only on pitching, namely Quality Starts.

The Quality Start is a statistic invented in 1985 by John Lowe, then a writer for the *Philadelphia Inquirer*. A pitcher is awarded a Quality Start any time he throws at least six innings and allows no more than three earned runs. Of course, a number of problems are immediately apparent with this statistic: First of all, a pitcher who goes the whole way giving up four runs has had a better game than one who goes six and allows three, but the latter gets a Quality Start while the former does not. The same applies to a pitcher who allows only one or two runs but exits after five innings: he pitched well, yet gets no Quality Start.
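The cutoff itself is trivial to encode. A minimal sketch, using the official earned-run version of the rule:

```python
def is_quality_start(innings_pitched: float, earned_runs: int) -> bool:
    """Official rule: at least six innings pitched, three earned runs or fewer."""
    return innings_pitched >= 6.0 and earned_runs <= 3

# The edge cases from the text:
print(is_quality_start(9.0, 4))  # complete game, four runs allowed: False
print(is_quality_start(6.0, 3))  # bare-minimum six innings, three runs: True
print(is_quality_start(5.0, 2))  # five strong innings: still False
```

The binary cutoff is exactly what creates the anomalies described above: quality of the start matters only on one side of the line.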

Quality Starts will overrate certain types of pitchers—those who almost never go more than 6-7 innings, for example (Jeff Weaver has often been cited as a pitcher who gets a disproportionate number of Quality Starts, compared to the actual quality of his starts)—but overall, the Quality Start is actually a pretty sound statistic. It does a decent job of combining effectiveness and longevity, and of accounting for consistency. It also helps that the winning percentage in Quality Starts is so high. But here’s the question: Do pitchers consistently record more or fewer Quality Starts than one would expect from their statistics?

It doesn’t seem so. While the percentage of a pitcher’s starts that qualify as Quality Starts correlates modestly from year to year (r = .187), once we control for ERA, that correlation disappears (r = -.085). In other words, *there seems to be no evidence that a pitcher can retain consistency above-and-beyond his pitching performance from one year to the next*. Jason Jennings, for example, had 30% more Quality Starts in 2004 than we would expect based on his ERA. In 2005, he had 12% *fewer*. Cliff Lee had 20% more Quality Starts than expected in 2004; in 2005 he had 16% fewer. In 2004, Kenny Rogers had 17% fewer Quality Starts than expected; in 2005, he had 16% *more*.
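For readers who want to replicate this kind of test, here is one way to set it up: correlate year-over-year QS%, then correlate the residuals after regressing each season’s QS% on that season’s ERA. The residual approach is my reading of what “controlling for ERA” means here, and the data below are synthetic stand-ins, not the real pitcher sample:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical pitcher-season pairs

# Synthetic data: ERA is somewhat sticky year to year, and QS% is
# driven by ERA plus noise -- i.e., no independent "consistency" skill.
era_y1 = rng.normal(4.5, 0.8, n)
era_y2 = era_y1 + rng.normal(0.0, 0.6, n)
qs_y1 = 0.9 - 0.08 * era_y1 + rng.normal(0.0, 0.05, n)
qs_y2 = 0.9 - 0.08 * era_y2 + rng.normal(0.0, 0.05, n)

def residuals(y, x):
    """The part of y that a straight-line fit on x does not explain."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Raw year-to-year correlation of QS% (nonzero, because ERA carries over):
r_raw = np.corrcoef(qs_y1, qs_y2)[0, 1]

# Correlation after controlling each season's QS% for its ERA (near zero):
r_ctrl = np.corrcoef(residuals(qs_y1, era_y1), residuals(qs_y2, era_y2))[0, 1]
```

Built this way—QS% as a function of ERA alone—the raw correlation comes out positive while the ERA-controlled one hovers near zero, which is the same pattern the article reports for real pitchers.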

Simply put, there’s no use in looking at a pitcher’s Quality Starts if you have his general pitching line. It might tell you something about that particular season, but it is a useless statistic in terms of making any projections.

Given that, what might knowing a pitcher’s consistency tell us about his season? Was Chan Ho Park really a better pitcher than he looked because of all his quality starts? Are announcers right to gush over a consistent player, and admonish players who are very up-and-down all season? Conventional wisdom would say yes, because consistent pitchers give their team a chance to win every time out.

But statistics say otherwise, for the most part. Let’s do a little simple math. Take two pitchers: Mr. Consistent and Mr. Inconsistent. Both play for a team that scores 5 runs per game. Both have the same number of runs allowed, innings pitched, and starts. But Mr. Consistent turns in the same performance every time out, while Mr. Inconsistent’s starts split in two: starts in which he allows half as many runs as he does on average, and starts in which he allows 50% more runs than average. In other words, if Mr. Inconsistent’s RA (Runs Allowed per 9 Innings) is 4.00, he would allow two runs in half his starts and six in the other half.

So who would win more games? Well, it depends on just how good the two pitchers are. First, let’s make two assumptions to simplify things: (1) each pitcher completes every game, and (2) each pitcher starts 30 games. Here’s the short answer: If Mr. Consistent and Mr. Inconsistent have RAs below 3.00, you’d rather have Mr. Consistent, who will, at those levels, have a slight edge (no more than half a win). With RAs over 3.00, which is where almost all major league pitchers fall, you’d in fact rather have Mr. Inconsistent, whose advantage can be pretty great.

If both are average, for example (RA = 5.00), Mr. Inconsistent will win a game-and-a-half more than Mr. Consistent. As their RAs go up, so does the relative value of Mr. Inconsistent. Here’s the answer in graphical form, with RA on the x-axis and the number of extra wins Mr. Inconsistent gets on the y-axis:
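One way to check the RA = 5.00 comparison is to give the offense a concrete scoring distribution. The sketch below models single-game runs scored as Poisson with a mean of five and counts a tie as half a win (a stand-in for extra innings). Both modeling choices are mine, not necessarily the article’s, so the exact gap will differ—but the direction of the result holds:

```python
from math import exp, factorial

def p_score(k, mean=5.0):
    """Poisson model of the offense: P(team scores exactly k runs)."""
    return exp(-mean) * mean**k / factorial(k)

def win_prob(runs_allowed, mean=5.0, cap=60):
    """Chance a team scoring Poisson(mean) runs beats a pitcher who
    allows exactly `runs_allowed`; a tie counts as half a win."""
    p_win = sum(p_score(k, mean) for k in range(cap) if k > runs_allowed)
    p_tie = p_score(int(runs_allowed), mean) if runs_allowed == int(runs_allowed) else 0.0
    return p_win + 0.5 * p_tie

STARTS = 30

# RA = 5.00 with complete games: Mr. Consistent allows 5 every time out;
# Mr. Inconsistent allows 2.5 runs (on average) in half his starts and
# 7.5 in the other half, per the half-as-many / 50%-more scheme above.
consistent = STARTS * win_prob(5)
inconsistent = STARTS * 0.5 * (win_prob(2.5) + win_prob(7.5))
```

Under these assumptions Mr. Inconsistent comes out ahead over the 30 starts; the exact size of the gap depends on the run distribution you choose.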

So how is this possible? How is it that an inconsistent pitcher will generally win more games than a consistent pitcher? It’s actually pretty logical. Imagine the two pitchers were even more extreme. Imagine that Mr. Consistent allowed 5 runs every game he pitched. He would go 15-15. Now imagine that Mr. Inconsistent allowed *no* runs in half his starts and 10 in the other half. He’d win all his 0-run starts, but he’d also win the occasional 10-run start (about 18% of those starts, actually). Overall, he’d win around 18 games, three better than Mr. Consistent. As our example becomes less extreme, so does the difference. Nevertheless, it’s there.
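The bookkeeping in that extreme case is quick to verify, taking the article’s 18% win rate in 10-run starts as given:

```python
starts = 30

# Mr. Inconsistent: shutouts in half his starts (automatic wins, since
# his team averages five runs a game), ten runs allowed in the rest,
# which the article pegs at roughly an 18% win rate.
wins = (starts / 2) * 1.0 + (starts / 2) * 0.18
# roughly 17.7 wins -- about three better than Mr. Consistent's 15-15
```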

In fact, this all goes back to the old axiom: a run saved is worth more than a run scored. Mr. Inconsistent will have more low-run starts and more high-run starts. However, as long as their RAs remain equal, Mr. Inconsistent’s low-run starts will add more value than his high-run starts will cost him.

The difference can actually be pretty great, so why does conventional wisdom say otherwise? I think the answer is actually pretty simple. Consistency *seems* like a good trait. A consistent player looks like he knows what he’s doing, like he’s put it all together, like he’s a “professional.” Inconsistency, meanwhile, is associated with younger players and those who have yet to figure the game out. So we naturally tend to assume that consistency is good. What’s more, even if the quality start had never been invented, we would still notice that some starters pitch well day-in, day-out, while others are more up-and-down. The first group looks consistently good, giving the impression that it’s better, though that might not be the case.

But, in reality, we want our starters to be inconsistent. So Dodgers fans were right to complain about Jeff Weaver’s Quality Starts (eight more than expected over the past two years), but for all the wrong reasons. It’s not that the statistic overrates him (though that too may be true)—it’s that it would be *better* for him to be a little more inconsistent. So all hail Al Leiter (six fewer Quality Starts than expected over the past two years), and remember, the next time Matt Clement is getting lit up, that inconsistency is a good thing. As long as you occasionally toss a beauty as well.
