Evaluating Defensive Projections

I’ve started a project to evaluate the various defensive ratings based on how well they project future defensive efficiency. For more background, read part one here if you want, or feel free to skip ahead to the results.

Since part one, I’ve upgraded the projection program: it now uses position adjustments to handle players who play positions different from those they played in the past. This probably makes the biggest difference for Franklin Gutierrez. Prior to 2009 he played right field, with spectacular ratings. The Mariners put him in center, and he was the best defender in the league. The system now takes his right field ratings, applies a position adjustment, and produces a center field projection for him going into 2009.
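The position-adjustment step can be sketched like this. The article doesn’t give its exact adjustment values, so the numbers below use the commonly cited per-162-game positional adjustment scale as an assumption for illustration:

```python
# Per-162-game positional adjustments in runs. These are the commonly
# cited values, NOT the article's own (which are unspecified).
POS_ADJ = {"C": 12.5, "SS": 7.5, "2B": 2.5, "3B": 2.5, "CF": 2.5,
           "LF": -7.5, "RF": -7.5, "1B": -12.5}

def translate_rating(runs, from_pos, to_pos):
    """Translate a defensive rating (runs above position average)
    from one position to another: credit the old position's adjustment,
    then remove the new one."""
    return runs + POS_ADJ[from_pos] - POS_ADJ[to_pos]

# A spectacular (+20 run) right fielder projects as a good, but less
# extreme, center fielder, since center is the harder position.
print(translate_rating(20, "RF", "CF"))
```

The direction matters: moving from an easy position (RF) to a hard one (CF) shrinks the rating, because part of a corner outfielder’s edge over his positional peers disappears against center field competition.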

The sum of the player projections, prorated to innings, is then compared to DER runs, park adjusted and adjusted for the pitching staff’s mix of batted balls allowed. The formula for expected DER, based on batted-ball types, is: (GB*.265 + FB*.176 + LD*.726 + Pop*.023)/BIP. The park factors range from Colorado, which adds .021 to expected DER, down to San Diego (-.023), with Fenway at +.014.
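The expected DER formula can be written as a small function, using the batted-ball coefficients and the three park adjustments quoted above (the team keys and the sample batted-ball totals are my own illustration):

```python
# Coefficients from the article's expected DER formula.
RATE = {"GB": 0.265, "FB": 0.176, "LD": 0.726, "Pop": 0.023}

# Additive park adjustments to expected DER; only the three parks the
# article mentions are included here.
PARK_ADJ = {"COL": 0.021, "BOS": 0.014, "SD": -0.023}

def expected_der(gb, fb, ld, pop, park=None):
    """Apply the article's formula to a staff's batted-ball mix,
    optionally adding a park adjustment."""
    bip = gb + fb + ld + pop
    value = (gb * RATE["GB"] + fb * RATE["FB"]
             + ld * RATE["LD"] + pop * RATE["Pop"]) / bip
    if park is not None:
        value += PARK_ADJ[park]
    return value

# Made-up season totals for a staff pitching in San Diego.
print(round(expected_der(2000, 1400, 900, 350, park="SD"), 3))
```
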

I’ve added Dave Pinto’s PMR to the group. I had to calculate runs saved from the data, and I only have 2007 and 2008 numbers, so it may be at a bit of a disadvantage relative to the systems using three years of data, but it does pretty well anyway. I have not added the Fan Scouting Report. I’m very interested in doing so, but getting it into the same format as the rest of the data looks like more work than I have time for. With five contenders, here are the results, by correlation coefficient and root mean squared error:

System        Correlation  RMSE
UZR           0.11         46
Plus/Minus    0.31         43
TotalZone     0.22         44
Zone Rating   0.18         44
PMR           0.28         43
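The two evaluation metrics in the table can be sketched as below. The functions are standard definitions; the projected and actual run values are made-up illustrations, not the article’s data:

```python
import math

def correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rmse(xs, ys):
    """Root mean squared error between projections and outcomes."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs))

# Hypothetical team-level projected runs saved vs. adjusted DER runs.
projected = [25, -10, 5, 40, -30]
actual    = [30, -20, 15, 20, -25]
print(round(correlation(projected, actual), 2),
      round(rmse(projected, actual), 1))
```

Correlation rewards getting the ordering of teams right; RMSE rewards getting the magnitudes right, which is why the two columns in the table can rank the systems slightly differently.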

That is better than the initial results, which showed almost no correlation. This is only one year of data; it is quite possible that in another year UZR would do as well as or better than Plus/Minus. For now, though, John Dewan’s system can wear the crown as the best of the publicly available fielding systems. The RMSEs for all five systems are very close: if you were making decisions before the 2009 season using any one of them, you would have done about as well as with any other, and better than someone who ignores defense and assumes everyone is about average. We do not know whether you would have done better than a team that has no trust in the numbers and bases all of its defensive decisions on its scouts’ opinions. And of course, just as defensive metrics can differ, so can the evaluations of scouts.

Averaging all five systems does slightly worse than PMR or Plus/Minus alone. Averaging PMR and Plus/Minus gives a tiny improvement over Plus/Minus alone. The best correlation you could have gotten from a combination of these systems is .42, which predicts team runs using this equation:

-12.9 - 1.06*UZR + 1.39*PM + 0.686*TZ - 0.458*ZR + 0.86*PMR

A warning: that equation is simply a best fit for the 2009 data. There is no reason to think it will give decent results for next year, or for any year other than 2009. But it does suggest, again, that Plus/Minus ratings projected 2009 better than the rest.
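The best-fit combination can be written out as a function. The coefficients are the article’s; as the warning above says, they are overfit to 2009, and the input values in the usage line are invented for illustration:

```python
def combined_projection(uzr, pm, tz, zr, pmr):
    """Best-fit 2009 combination of the five systems' team totals
    (runs). Coefficients are fit to 2009 only and should not be
    expected to generalize to other seasons."""
    return (-12.9 - 1.06 * uzr + 1.39 * pm
            + 0.686 * tz - 0.458 * zr + 0.86 * pmr)

# Hypothetical team totals from each system, in runs saved.
print(round(combined_projection(10, 20, 15, 5, 12), 2))
```

Note the negative weights on UZR and Zone Rating: in a multiple regression on one season of data, a system can pick up a negative coefficient simply because the others already capture most of its signal, which is another reason not to read these weights as true system quality.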

The systems used are: UZR, available on Fangraphs; John Dewan’s Plus/Minus, from Baseball-reference.com and Fangraphs; TotalZone, from Baseballprojection.com; Chris Dial’s Defensive Runs Saved, based on STATS zone rating and available at Baseballthinkfactory.org; and PMR, from Baseballmusings.com.


Maybe I am not understanding, but why are the teams’ weighted UZR and PM scores being compared to DER runs for 2009 that are based on the pitchers’ batted-ball data? Wouldn’t this be comparing the team’s defense projection to a league-average defense projection (if they played behind the same pitchers)? Shouldn’t you compare it to what actually happened (in terms of runs), i.e. the run values of the actual singles, doubles, triples, ground outs, and fly outs given up or made by the defense?

Sean Smith

Not sure I understand the question.

Actual DER runs depend on ballpark + pitcher BIP + defensive performance.

I’m trying to adjust for the first two to get a better measure of the last one.


Sorry, read both articles. I get it now.