The reason this whole thing exists in the first place is that winning percentage as a function of runs is not a linear relationship; it follows a Weibull distribution. If you regress standard deviation of runs scored on winning percentage, you get a negative relationship. If you regress standard deviation of runs allowed on winning percentage, you get a positive relationship, which confirms Matt’s article.
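Those two regression signs fall out of the shape of the win-probability curve, and a toy calculation can show it. This sketch is mine, not from the article or the comment: it uses a Poisson run distribution as a crude stand-in for the fitted Weibull, a 4.5 runs-per-game opponent, and counts ties as half a win. With those assumptions, two offenses with the same average but different spreads land on opposite sides of the steady one, and the effect flips for runs allowed:

```python
import math

def poisson_pmf(k, lam):
    # P(exactly k runs) under a Poisson model with mean lam
    return math.exp(-lam) * lam ** k / math.factorial(k)

LAM = 4.5  # assumed opponent runs per game (roughly league average)
opp = [poisson_pmf(k, LAM) for k in range(40)]

def p_win_scoring(r):
    # P(win) when your team scores exactly r runs; ties count as half a win
    return sum(opp[:r]) + 0.5 * opp[r]

def p_win_allowing(r):
    # P(win) when your team allows exactly r runs
    return 1.0 - p_win_scoring(r)

# Two offenses with the same 4.5 runs-per-game average, different spreads:
steady_offense    = 0.5 * p_win_scoring(4) + 0.5 * p_win_scoring(5)
boom_bust_offense = 0.5 * p_win_scoring(1) + 0.5 * p_win_scoring(8)

# The same two profiles applied to runs allowed:
steady_defense    = 0.5 * p_win_allowing(4) + 0.5 * p_win_allowing(5)
boom_bust_defense = 0.5 * p_win_allowing(1) + 0.5 * p_win_allowing(8)

# Win% is concave in runs scored, so the steady offense wins more often;
# it is convex in runs allowed, so boom-bust run prevention wins more often.
```

The same-mean comparison is the whole trick here: only the spread differs, so any gap in expected win% is pure Jensen's-inequality effect from the curvature of the win-probability function.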

]]>You mention using Game Score as a potential metric. Have you given any thought to using Ron Shandler’s PQS (Pure Quality Start) methodology? It is a saberized method for evaluating quality starts.

I’ve been using it to examine how teams do when their pitchers have a quality start vs. not. Obviously they win a lot, but so far I’ve been finding win rates of 70-80%, which puts a number on how much it affects winning.

The PQS methodology looks at the percentage of time the starting pitcher is dominant (a PQS of 4 or 5), which relates squarely to your study here: examining how consistent a pitcher is at delivering a quality start.
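The tally described above is easy to sketch. This is a minimal version with a made-up game log (the only real assumption carried over is that a "dominant" start means a PQS of 4 or 5, as in the comment):

```python
# Hypothetical game log: (starter_pqs, team_won) pairs; PQS is scored 0-5.
games = [
    (5, True), (4, True), (5, True), (4, False), (5, True),
    (3, True), (2, False), (1, False), (0, False), (3, False),
]

def win_rate(rows):
    # fraction of games the team won in this subset of the log
    return sum(won for _, won in rows) / len(rows)

dominant = [g for g in games if g[0] >= 4]   # PQS 4-5 = "dominant" start
other    = [g for g in games if g[0] < 4]

print(f"win rate in dominant starts: {win_rate(dominant):.0%}")  # 80%
print(f"win rate otherwise:          {win_rate(other):.0%}")     # 20%
```

With a real game log in place of the fabricated one, the same two-line split gives the 70-80% figure the commenter mentions.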

It is shocking to me that a more predictably good pitcher like Cain could be penalized for being good at what he does, while a more variably good pitcher would benefit. I look forward to your future studies into this phenomenon.

]]>I started out with a hypothetical two starts, went on to find the win values of all of the different IP/runs-allowed combinations, and then went on to calculate pitcher wins above average and WAR. If only I had seen this article earlier, instead of doing all the digging myself :)

You handled the information very well, though. Good article, and I’m glad I found it so that I didn’t end up submitting something nearly identical by accident.

]]>The second one is probably not the case, but the first one cannot be the case.

]]>The name that jumped to my mind was Edwin Jackson. For some reason teams allow him to absorb more than his share of bad outings where he’ll pitch 5-7 innings and allow 6-10 runs, which makes his overall season stat lines look a lot worse than they otherwise would.

I also notice that when broadcasters use the term consistency, they really mean consistently high performance. No one seems to praise a pitcher that allows 4 runs over 6 IP every time out. But a pitcher that mixes in 2 shutouts with 2 stinkers will be lambasted for inconsistency.

Obviously managers like consistency because they can plan for it.

]]>Also, if the sample size is appropriate, do you think this could be showing that pitchers with an early lead tend to coast, and are more prone to giving up a big lead in the later innings?

]]>Say you have a starter that always goes 9 innings and gives up 4 runs. On a team that averages 2 runs a game, that starter almost never helps them win games. A team that averages 6 runs a game would love to have that starter, though.

On the flip side, if you have a starter that alternates between 9 shutout innings and 9 innings with 8 runs allowed, the team that averages 2 runs a game would love to have that guy, while he’d be useless on a team that averages 6 r/g.
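The arithmetic in those two scenarios can be sketched directly. This toy calculation is mine and leans on the same simplifications as the comment: the team scores exactly its average every game, the starter's runs allowed are the team's runs allowed, and there are no ties in these particular matchups:

```python
def win_pct(team_runs_per_game, allowed_sequence):
    # fraction of starts in which the team outscores the starter's runs allowed
    wins = sum(team_runs_per_game > ra for ra in allowed_sequence)
    return wins / len(allowed_sequence)

steady    = [4, 4]  # always 9 IP, 4 runs allowed
boom_bust = [0, 8]  # alternates shutouts and blowups; same 4-run average

print(win_pct(2, steady))     # 0.0 -> the steady starter never helps a 2 r/g team
print(win_pct(2, boom_bust))  # 0.5 -> but the boom-bust starter wins half the time
print(win_pct(6, steady))     # 1.0 -> the steady starter wins every time here
print(win_pct(6, boom_bust))  # 0.5
```

Both starters allow 4 runs per game on average, yet which one you want depends entirely on which side of that average your offense sits, which is the commenter's point.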

]]>He only gave the chart with whole numbers because it would be too big a chart to post otherwise. Furthermore, I don’t think he rounded for the chart either; he simply omitted the data for the other starts. That was my impression, anyway.
