UZR Updates!

The first UZR updates of the 2010 season are in, and from here on out they’ll be updated every Sunday night.

There have been a few improvements made to UZR this year, which are also reflected in prior years’ UZR data. The changes do impact a few players, but for the most part, each player’s UZR is unchanged or within a couple of runs of his rating before the improvements. Mitchel Lichtman, the man behind UZR, outlines the changes below:

Park factors have been improved, especially for “quirky parks and portions of parks,” such as LF and CF at Fenway, LF in Houston, RF in the Metrodome, and the entire OF in Coors Field. Of course, park factors in general are updated every year, as we get more data in each park, and as new parks come into existence and old parks make material (to fielding) changes.

In the forthcoming UZR splits section, we will also be presenting UZR home and road splits, as a sanity check for those of you who are skeptical of park factors. Please keep in mind that regardless of the quality of the park adjustments, there can and will be substantial random fluctuations in the difference between home and away UZRs and it is best to evaluate a fielder based on as much data as possible (e.g., using home and road stats combined), as we do with most metrics and statistics.

Adjustments have been added to account for the power of the batter as a proxy for outfielder positioning, so that, for example, if an outfielder happened to have “faced” a disproportionate percentage of batters with less than or more than average power, the UZR calculations will make the appropriate adjustments (as best as it can). Obviously, these kinds of adjustments are more important for smaller samples of data than for larger samples, since, in larger samples, these kinds of anomalies (in terms of opponents faced) tend to “even out.”

For infielders, similar adjustments are made for the speed of the batter, as a proxy for infielder positioning and how quickly the infielders have to field and release the ball, as well as the speed of the throw.

When the BIS stringers indicate that a “shift” was on in the infield and that the play was affected by it, the UZR engine ignores the play. Likewise, if an air ball hits the outfield wall and, in the judgment of the BIS stringers, no outfielder could have caught it, the play is ignored.

Also keep in mind that UZR does not include first basemen “scoops” or the ability of the first baseman to influence hits and errors caused by errant throws from the other infielders. According to my (MGL) research, yearly “scoops” numbers are generally in the 1-4 run range, which means that the true talent range of most first basemen with respect to “scoops” is probably in the plus or minus 2 runs per year range – i.e., not much.




David Appelman is the creator of FanGraphs.


56 Responses to “UZR Updates!”

  1. John says:

    RF at the Metrodome, huh? ;)

    • Nick says:

      The quirky part of RF in the Metrodome is that the Twins don’t even bother to put anyone there…they’re all too busy out playing in the sun

      • Rally says:

I assume that for 2010 UZR, you use no park factor for Minnesota until we accumulate some data?

        Having improved PF for Metrodome at least helps make the historical data better. Have these been updated as well or is that just going forward?

      • I would assume that’s the case with the new park, but MGL would be able to better answer that question.

        And yes, all the historical UZR data has been updated with the improved park factors.

  2. delv says:

    “When a “shift” is on in the infield, according to the BIS stringers, if the play was affected by the shift, the UZR engine ignores the play.”

    And where does the distinction between the Ted Williams shift and constant IF adjustments according to batter (back, forward, and to either side) lie, exactly? What’s the threshold?

  3. wobatus says:

Um, Jay Bay’s 2009 UZR is now +1.9? Making him a 5 WAR player last year? This is a big change from his previously listed 2009 UZR (which seemed flukishly awful considering the other defensive ratings from Total Zone and Dewan, the flawed old stats of errors, putouts, and assists, and the fact that he isn’t that slow with his knee healed). Does this in any way change the whole set of articles on Bay vs. Willingham, Bay vs. Cameron, the Mets grossly overpaying him, etc.? I know it is only 1 year of UZR, but before he got hurt he wasn’t that bad in Pittsburgh. It’s just that so much was made of it (and we had the whole issue recently raised by Baseball Prospectus on UZR “bias” and the response here). But that park effect change had a big effect on Bay. I always argued too much precision was being placed on UZR and the WAR ratings derived therefrom. This kind of bears that out, to at least a small degree. But I am glad you guys are working to make it better.

    • Sky says:

      Definitely agree. I’ll add that we shouldn’t be using one year’s data anyway. So those -18 seasons previously still bear out that Bay’s a bad fielder. Maybe just to a lesser degree than the old version implied. (And, of course, we should look at +/-, too.)

      • wobatus says:

        Those 2 -18s were after a pretty bad knee injury. Yes, you need more than a year of data. He isn’t great, but he’s not Braun or Dunn.

      • wobatus says:

        And you are right about +/-. That still has him a little below average last year. I think some people did realize that his UZR last year might be a fluke, despite the awful prior seasons. Maybe he was healing, maybe he got used to Fenway, maybe the +1.9 is just the outlier/fluke. It is great to be honing the metric, and people still should look to all the systems and put them in context.

Yes, Jason Bay’s and Ellsbury’s UZRs, for that matter, had the biggest changes in the 2009 data. You should realize that the correlation between the old data and the new data is .95. With the exception of very few players (10 or so), not much has changed.

Second of all, relating this change to anything that has to do with the BP article seems silly. You realize that new data was added into the UZR equation, specifically information about whether or not the ball hit the wall.

      I’ve read more people’s opinions on UZR over the past year than I ever wanted to and there seems to be a whole lot of confirmation bias going on.

People will use UZR to support their arguments and then the same person will bash UZR when it doesn’t agree with what they have to say. All I need to do is search the SBNation comments sections and I could probably pull up dozens of examples.

      • wobatus says:

        Guilty as charged!

        But what the hell, last year I didn’t even know what UZR was. This just reminded me of the whole BP article issue, not that it directly relates to what that was about.

        Nevertheless, I always thought Bay’s 2009 UZR in particular seemed off, and that in my mind always colored my whole view of it. And Bay’s UZR was a major issue in several articles that were highly commented on here.

        That his changed in particular, even if most people changed very little, eliminates something that stuck out. And in general, as I concluded above, I am happy to see the system being refined and improved.

  4. Sam says:

    Jason Bay +1.9. 5 WAR! Apologize, Dave and Matt.

    • philosofool says:

      I don’t think they should apologize for using the best data available.

      • wobatus says:

        Except that, in Bay’s case, the best data is now far more in line with Manuel’s data and Total Zone. He went from -13 to +1.9. Big change. And a big deal was made out of the data. I think Sam is teasing, but the fact is in this particular instance the “best” data is now substantially different, and somewhat, at least slightly, weakens some arguments made about Bay.

      • vivaelpujols says:

They should be wary of placing so much confidence in unstable data.

      • Joseph says:

        They didn’t.

  5. fire jerry manuel says:

    Seriously, 5 years ago we were all on firejoemorgan laughing at teams that valued defense. Hahaha, fools!

    Now we speak definitively, with no hesitation, about the accuracies of the models we use.

    • Sky says:

      Who’s “we”? And do those who don’t speak definitively have to opt out of being part of “we”?

      • fire jerry manuel says:

the rank-and-file sabermetric crowd, led by a few choice people.

        yes, you’re more than allowed to opt out if you’re going to say, “well, hey now, wait a minute.”

  6. Joelq says:

When is the PITCHf/x data updated? Is that also weekly? What day of the week?

  7. Neil says:

So how do we determine which ‘quirks’ require adjusting? Do the walls in the gaps at Rogers Center count? Having watched many games since the video walls were installed, it’s clear (and has been admitted by Vernon Wells) that players will come up short on balls hit deep into the gap, in order to avoid crashing into the totally unyielding video screen. (I remember when they were first installed, the very first game, Johnny Damon dropped a ball at the wall when he slammed into it and cut his arm. It’s just brutal.)

  8. Jabberwocky says:

    Could you clarify the adjustments that were made for Coors Field please?

  9. Will says:

    Don’t look now, but Scott Hairston is on pace for a 40 WAR season!

  10. Matt C says:

Admittedly, I don’t know everything that goes into UZR, so I was wondering: if another fielder takes a ball that you could’ve gotten to, does that affect your UZR? For example, say a ball is hit to the gap, you’re the RF, and you can get to the ball, but the CF comes in, calls you off, and takes it. Does that affect you at all? Another example: say you’re a SS and a ball is hit to you that you could easily get, but the 3B cuts in front of you and takes it first. (As a Tigers fan, I’ve noticed Inge does this sometimes.) I know these situations don’t happen too much, so it might not be enough to notice either way, but I was just wondering.

  11. Parker says:

I like UZR generally, but in terms of how it measured Fenway outfield defense, it was well past time to fix it. I feel bad for Manny and Bay though. Neither is a great outfielder, but they weren’t the worst outfielders ever to play the game like the numbers said they were. Yet their defensive numbers were put into calculations as though they were exact measurements and cratered their values. I mean, all you heard about Bay during the offseason was whether any team wanted to risk signing a DH to play outfield for the next five seasons. It was just unfair.

  12. pft says:

On a team level in 2010, UZR has the Red Sox ranked as the 2nd best defensive team at +6.6 runs above average, while DRS has them among the worst at around -7.

I can tell you the Red Sox defense has been horrible this year. This does not mean the Red Sox have bad defensive players, but they have played poor defense by their own manager’s admission. The metrics should reflect that, and UZR does not. Something is wrong.

  13. MGL says:

    Some interesting comments. I’ll try and address a few. Feel free to use this thread to ask questions about UZR.

    Of course the park adjustments (such as RF at the Metrodome, which is short and has a high “wall”) apply to the 02-09 data as David A. stated. For 2010 and for now, UZR is using estimated park factors for Target Field based on the dimensions of the park. It is not using NO park adjustments.

    UZR does not “choose” which quirks to address in the park adjustments. It does park adjustments based on the out rates and batted ball values of the various sections of each park, as well as the dimensions and characteristics of the various sections of all the parks and the parks themselves. Obviously they will affect quirky parks and sections of parks more than non-quirky ones.

    Somehow people gratuitously accept non-park adjusted offensive and pitching stats OR park-adjusted ones, like OPS+ or ERA+, when those park adjustments are typically crude and inaccurate, yet they get all bent out of shape when park adjusted UZR stats are not to their liking. Trust me, if someone were to “tweak” the park adjustments that analysts and web sites typically do with respect to pitching and offensive stats, and actually make them better, you would see all kinds of changes in the value of various players. If you take ANY metric as the gospel, you are a fool.

    There is nothing wrong with writing articles and making statements about the value of players based on the existing metrics. It goes without saying that the existing metrics could be “wrong” for a variety of reasons. The authors of those articles and statements do not need to write a disclaimer every time they refer to a player’s value, based on these metrics or what have you. After all, that is the only way we can objectively measure talent and value in the first place.

For the person who appears annoyed that people made statements and wrote articles about Jason Bay’s (and other players’) value and now that value has “changed,” write down the following sentence and repeat it to yourself every time you read something about someone’s perceived value or talent, even if it is based on scouting data or observation:

    “The data, the methodologies, the observations, and the opinions are all subject to various kinds of errors, so, as always, I could be wrong. In fact, it is certain that I WILL be wrong in some percentage of my statements and conclusions.”

Someone asked about Coors Field. The way the previous park adjustments were done, the Rockies outfielders came out worse than they probably should have. For the most part, the new park adjustments will make them better.

    “I can tell you the Red Sox defense has been horrible this year.”

    Why would you even look at the data and the metrics? You can “tell.” You should market yourself to teams. They could scrap their statistical department and hire you and save boatloads of money! I wish I had your talent – I would have no use for these metrics!

    “I feel bad for Manny and Bay though.”

Parker, somehow I think they’ll handle it. If you want to draft a letter of apology, I’ll be happy to send it. Maybe they’ll send back a response written on hundred dollar bills. I’m pretty sure they have a few they don’t need.

Matt C., when one fielder takes a ball from another one, it affects both fielders, but not by much. For one thing, a fielder does not lose any credit if another fielder fields a ball (since we don’t really know to what extent the first fielder could have fielded it too). Of course, had he fielded the ball he would have received some credit, so he is losing SOME value, which he should, since, again, without watching the video of the play we have no idea whether he could have fielded the ball also. The other thing is that if it was an easy catch, a fielder does not get much credit for catching the ball anyway. So, for example, if a lazy fly ball is hit to RC, and UZR thinks that ball, given all the parameters, is caught 95% of the time, no one gets much credit for catching it, so it doesn’t really matter whether the CF or the RF does. If it drops, someone (or more than one fielder) is going to get dinged a lot, of course. How much each fielder (say, the CF and RF) gets dinged depends on how often each one catches that ball relative to everyone else, given that it is caught 95% of the time.
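The credit-sharing idea above can be sketched numerically. This is a toy illustration, not the actual UZR engine: the 0.80 run gap between a hit and an out, the 95% catch rate, and the 60% fielder share are all assumed numbers.

```python
# Toy sketch of UZR-style credit sharing -- NOT the real UZR engine.
# Assumption: converting a ball into an out is worth roughly `run_diff`
# runs versus letting it fall for a hit (0.80 is a placeholder value).

def catch_credit(p_caught, run_diff=0.80):
    """Runs credited for catching a ball that is caught p_caught of the time."""
    return (1 - p_caught) * run_diff

def miss_debit(p_caught, fielder_share, run_diff=0.80):
    """Runs debited to one fielder when the ball drops; split according to
    each fielder's share of league-wide catches on similar balls."""
    return -p_caught * run_diff * fielder_share

# A lazy fly ball caught 95% of the time: catching it earns almost nothing...
print(round(catch_credit(0.95), 3))
# ...but letting it drop costs a lot, split between (say) the CF and RF,
# here with the CF assumed to catch 60% of such balls league-wide.
print(round(miss_debit(0.95, 0.6), 3))
```

The asymmetry is the point: routine catches are worth little, while a dropped routine ball produces a large shared debit.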

    “And where does the distinction between the Ted Williams shift and constant IF adjustments according to batter (back, forward, and to either side) lie, exactly? What’s the threshold?”

Good question. You’d have to ask BIS about that. They merely indicate in the data whether a “shift” was on and if it “affected” the play (which is subjective as well). I don’t think it is much of a problem. It is pretty obvious when a “shift” is on. I don’t recall ever watching a game, hearing the announcers mention that “the shift” was on, and thinking, “Hmmm, is that really a shift or just normal positioning for an extreme pull hitter?” Or them NOT calling something a shift, and me thinking, “Wait, isn’t that a shift?”

    Remember also that when no shift is indicated, which is most of the time, UZR is using some of the data as a proxy for positioning, such as the handedness of the batter, the outs and base runners, and the speed of the batter (which is new in 2010).

    I want to reiterate that when we say that something is “new” in 2010, as in changes in the UZR engine, it applies to the old (02-09) numbers as well – everything was recalculated for those years.

    • wobatus says:

      I said in those very threads that the UZR data for Bay seemed off for 2009 at least and people were relying too heavily on it (although I agreed that Cameron was closer in value to Bay than many realize). The adjustments appear to bear me out.

      It is really kind of odd for you to tell me I should remind myself that the assertions made by writers here may be based on faulty methodology when I said at the time the arguments look like they may be overly reliant on what could be a faulty methodology, at least in this particular case, and now the methodology is adjusted to prove that I was right all along.

      I am happy for the adjustments, but I am not the one who needs to learn to put things into context. I am also already perfectly aware that you need to take pitching and hitting stats in context.

      Thanks for refining the metric and may you keep on striving to make it better and better.

  14. Aaron says:

    Clearly, Dave Cameron’s assertion that balls off the green monster were accounted for in UZR was completely incorrect.

UZR is responsible for a lot of confirmation bias, particularly when it came to Jacoby Ellsbury, Jason Bay, and Manny Ramirez (though he was clearly below average, just not THAT bad). UZR for Ellsbury in 2009 was blatantly wrong. UZR for Fenway’s outfielders has been wrong for as long as I’ve been following, and it really made it hard for me to give it much credit. As a stats-oriented person, it really bugs me when other stats-oriented people accept the stats as gospel when they don’t pass the smell test. Glad to see you guys are improving the data.

  15. MGL says:

Aaron, and everyone else, I can almost guarantee you that the park adjusted (or non-park adjusted) pitching and hitting stats for Rockies’ players are way off the mark, for example (I won’t get into the reasons). My point is that we accept hitting and pitching stats at face value because baseball gives us neat little buckets to represent hitting and pitching performance. If you want neat little buckets for fielding, use fielding percentage or even Zone Rating. If you want metrics that better reflect performance or talent that correlates well with run scoring and wins, you have to accept the fact that the methodologies that produce them are imperfect in many ways. For some players, they will be very imperfect.

    Anyway, one of the reasons why we went with presenting home and road UZR splits this year is to allow people to look at a player’s road stats in order to mitigate the effects of imperfect park adjustments.

Anyway, we appreciate the patience of the FanGraphs readers, who are among the best of any baseball web site, I think. It is better to recognize and correct mistakes and make improvements than to leave an inferior status quo just for the sake of consistency.

  16. MGL says:

    And, balls off the Green Monster were “accounted for” in the past. They are just better accounted for now. And in the future, they will be better accounted for still. It is more difficult than you might think to account for things when all you have is written data, but that are obvious if you actually watch them play out in real time or on video.

  17. MGL says:

I honestly feel like the current incarnation of UZR is a very good product. Please don’t get caught in the trap (it is an insidious one) of expecting a metric like UZR to conform to your preconceived notions of a player’s (or team’s) defensive talent or performance. That is a powerful urge. Bill James (or someone else) once said that a complex metric that doesn’t surprise you with respect to some percentage of players or teams, or surprises you on too many, is worthless. He is right. Keep that in mind. And if you don’t believe the first part of that, don’t waste your time with these metrics. It will only aggravate you. Of course, that is not to say that metrics like these cannot be enhanced by observation and scouting. They can. And some informed readers, like scouts, can provide that kind of “data.” So if you watch a particular team or player a lot, and you have a keen eye or sense for defense, or you read a lot about a certain player or team, feel free to use that information and your intuition to enhance the value of these kinds of metrics.

  18. Steven Ellingson says:

    Can someone explain how the Twins have 1 error on the season, and the Mets have 9, yet the Mets have more “error runs” than the Twins? How does that make any sense?

  19. MGL says:

    “Can someone explain how the Twins have 1 error on the season, and the Mets have 9, yet the Mets have more “error runs” than the Twins? How does that make any sense?”

Thank you for pointing that out. There might be a computing bug. We’ll check it out. That is why we love the FanGraphs readers!

  20. pft says:

    So MGL can’t always tell from the data but must watch some plays on video to tell, yet fans who watch every game their team plays can’t tell when the “data-metric” is wrong. Got it. Anyone believing the Red Sox were a plus team on defense through 13 games did not watch the games. Maybe the same computing bug.

  21. MGL says:

pft, if a team has a plus UZR after 13 games, there is some chance X that they actually played poor defense, even if the UZR methodology were perfect, given the shortcomings of the data. You can put a number on X. The “bug” is in your understanding of sample error and the like…

  22. Joseph says:

    I for one have watched every Red Sox game so far and they’ve been fine defensively. I don’t know about “plus”, and I’m certainly no scout, but I’m amazed that even intelligent baseball fans fall into the trap of only remembering the bad plays, or the plays that come in crucial situations. Mike Cameron dropped a soft fly ball hit DIRECTLY at him in the first inning Saturday, therefore he lost them the game and he’s been awful in CF this year (I’ll admit he’s been shaky on other plays, but he was also playing hurt).

    I try to make a note of it when a guy makes a nice running play in the gap (as JD Drew did last night in a situation that was relatively unimportant since I can’t even remember what it was). I’m not a computer so I don’t have all of these filed away, but that’s what the metrics are for.

    One guy I honestly think isn’t showing his usual plus range is Pedroia, I’m interested to see if the numbers bear this out (I know I know, confirmation bias, I’m just saying I’m curious).

That said, I have to disagree with MGL’s assertion that it’s taken for granted, whenever someone on FanGraphs writes a piece about a player’s value, that the validity of the metrics in question is subject to change. If someone had responded negatively (but constructively) to Dave’s post about Bay vs Cameron, I can almost guarantee that is not the response they would’ve gotten from the author. Maybe that says more about Dave than anything else, but still. The numbers on Bay still don’t look great overall, but Dave was painting him as Adam Dunn-lite, which is the only way anyone could think he’s a less valuable player than Mike Cameron (though I’ll admit the general perception was wrong in the opposite direction, that Cameron isn’t half the player Bay is).

  23. MGL says:

Joseph, I was trying to be cordial (somewhat) and understanding to the person who suggested that he “knew” that the Red Sox had played poor defense (and maybe they have) because he watches them every day. To be honest, I think the notion that someone can watch a team and tell us whether they play “good defense” from a theoretical run prevention perspective is nonsense, unless they were kind of a savant. The human mind has too many biases and limitations, as you point out. I suppose someone could probably watch a season’s worth of games for a player or team and have a pretty good idea whether they were a +20 or a -20, but after 12 games to be able to tell whether a UZR of +6 runs is “accurate” or not? Well….

    As far as “those articles” are concerned, here is the dilemma: Let’s say that you have a deadly accurate, but not perfect metric. And let’s say that you were using that metric to make an assertion about a player – we’ll call him Bason Jay. Well, since your metric is deadly accurate, you would certainly be justified in making that assertion and if someone questioned that assertion, you would also be justified in defending your position.

Now, even a deadly accurate, but imperfect, metric is going to be “wrong” some percentage of the time. It will even be very wrong some percentage of the time. But we don’t know for whom that will be the case. What we do know is that if we make 100 assertions about 100 players, only 1 or 2 of those assertions will be “wrong,” which is why we are clearly justified in defending our positions against all naysayers. Now, after the fact, we can easily “cherry pick” an assertion or two that was wrong, and it will make our vehement defense of that assertion look foolish. But, again, we were justified in doing so.

    That is not to say that UZR or similar metrics are “deadly accurate”. Only Pecota fits that bill. ;) But, I hope you get my point. Writers and analysts like DC write about many, many players and situations. Even if he is justified in strongly defending all of his positions, in the end, some of them will look foolish. That is just the nature of the beast. Perhaps he should just put a footnote in all of his articles, saying, “There is a small chance that I am dead wrong.”

  24. MGL says:

Look, the bottom line is that Bay was likely underrated by UZR because of the difficulty of evaluating fielding in LF, and to some extent RF, at Fenway Park. Period. If you want to crucify Dave Cameron or me for not recognizing that, go ahead. If you have a better fielding system, please let the world know. It will save me a lot of time and heartache.

    • wobatus says:

      I think you have a nice product that you have improved. The Bay (and Ellsbury, et al) data I thought was wrong and it has been fixed, thus fixing what I thought was a glaring problem. I am relatively new to UZR, so to see it so seemingly off the mark poisoned the well for me to an extent. The revision helps.

      And I don’t have a better fielding system. Nor do I have the time or inclination to come up with one. You have one, you have improved it. I can cross check with Dewan, Total Zone, my own lying eyes, etc.

      It is a great system. You can be proud. It is now even better. You can also take pride in that.

  25. JCA says:

    Some folks on a Nats message board picked up a big drop in Ryan Zimmerman’s UZR relative to Evan Longoria. People’s recollection is that he was +18 or so at the end of last year and comparable to Longoria. Now he is +13 and 5 runs worse. Is that right? What would drive that sort of change? Did Zimmerman field chances from a lot of slow hitters?

  26. MGL says:

    JCA, I have no idea. Honestly, there is virtually no difference between +18 and +13 for one year. If you are at that level, you are likely an elite fielder (with all the usual caveats, one of them being we are going to be “very wrong” about anyone a small percentage of the time).

Now, a lot of the “very wrongs” can be mitigated with scouting reports (or informed observation or whatever you want to call it). If a player is +12 in one year of UZR and the scouts say he is a bad, mediocre, or average fielder, then there is a much better chance that the UZR is “wrong” as compared to if the scouts agree with the UZR. (And keep in mind that there is no bright line between “wrong” and “right.” If some metric has a player at +12 true talent and God comes down and tells us that he is a +8, were we wrong or right? How about if we say +2 and God says -2?)

    That is basically how scouting and statistical analysis can complement one another. Additionally, the more data you have, the less weight you can put on scouting and vice versa, although there is virtually never a time to completely ignore one or the other.
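One common way to formalize "weight the stats more as the data grows" is an inverse-variance (precision) weighted average. This is a sketch of that general idea only; the numbers here (a +12 UZR with an assumed 6-run error bar, a scout's call of roughly average with an assumed 4-run error bar) are made up for illustration.

```python
import math

def blend(stat_est, stat_se, scout_est, scout_se):
    """Precision-weighted blend of a metric-based estimate and a
    scouting-based estimate: each input is weighted by 1/SE^2, so the
    tighter error bar gets the larger say."""
    w_stat, w_scout = 1 / stat_se**2, 1 / scout_se**2
    return (w_stat * stat_est + w_scout * scout_est) / (w_stat + w_scout)

# One year of UZR says +12 (wide error bars); scouts say about average (0).
print(round(blend(12, 6, 0, 4), 1))
# Three years of UZR shrink the stat error bar by sqrt(3),
# pulling the blend much closer to +12.
print(round(blend(12, 6 / math.sqrt(3), 0, 4), 1))
```

The design choice matches the comment above: with little data, the blend sits near the scouting opinion; with more data, it migrates toward the metric, and neither source is ever weighted to zero.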

    wobatus, thanks!

    • JCA says:

      thank you for taking the time to reply and to explain the (in)significance of the adjustment. What you are saying is that the number still shows both are elite defenders by this measure, and is a reminder not to attach too much significance to +/- 5 runs in UZR, especially for a single season. I’ve heard that before but it is interesting I forgot it when it was discussed elsewhere.

    • Trev says:

      When I go to heaven I’m going to ask God if I can see his Strat-O-Matic cards.

  27. MGL says:

As far as the error runs go, the “problem” (it is not really a problem) is that UZR does not track pitchers and catchers, so team UZRs do not include catcher and pitcher errors. And they make lots of errors. So you can’t compare UZR error runs with total team errors unless you remove pitcher and catcher errors from the team totals.
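In other words, a like-for-like comparison has to strip battery errors first. A minimal sketch, with made-up position keys and error counts:

```python
# Compare team error totals with UZR error runs on a like-for-like basis:
# drop pitcher ("P") and catcher ("C") errors, since UZR does not track
# those positions. Position keys and counts here are illustrative only.

def non_battery_errors(errors_by_position):
    """Total team errors excluding pitchers and catchers."""
    return sum(n for pos, n in errors_by_position.items()
               if pos not in ("P", "C"))

team = {"P": 3, "C": 2, "1B": 1, "SS": 2, "LF": 1}
print(non_battery_errors(team))  # only these 4 feed UZR's error runs
```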

  28. MGL says:

    “What you are saying is that the number still shows both are elite defenders by this measure, and is a reminder not to attach too much significance to +/- 5 runs in UZR, especially for a single season. I’ve heard that before but it is interesting I forgot it when it was discussed elsewhere.”

    Absolutely. That goes for any metric, really. Don’t be fooled by metrics which use data that are already classified, like most offensive metrics. Some people think that because a single is a single (there is no ambiguity), a double is a double, etc. that somehow a record of a batter’s offensive events is “pure.” It isn’t. All singles are not created equal, all HR’s are not created equal (some just make it over a short porch and others are bombs, for example), etc., and thus, an offensive metric is not really “pure” as far as a reflection of a player’s true talent or what he will do in the future.

    Of course, with all metrics, the more data they are based on, the more “pure” they tend to be. So, for example, while the difference between a +13 and a +18 (say per 150) fielder in one year is essentially meaningless, the difference between a +13 and +18 over 3 years is probably significant…
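The sample-size point can be made concrete with a square-root-of-n sketch. The 5-run single-season noise figure below is an assumption for illustration, not a published UZR error bar:

```python
import math

# If a single season of UZR carries roughly `se` runs of random noise
# (5 is an assumed, illustrative figure), then averaging n seasons
# shrinks that noise by sqrt(n) -- which is why a +13 vs +18 gap means
# little over one year but starts to matter over three.

def se_of_average(se, n_seasons):
    """Standard error of an n-season average, assuming independent seasons."""
    return se / math.sqrt(n_seasons)

print(round(se_of_average(5, 1), 2))  # one season: noise swamps a 5-run gap
print(round(se_of_average(5, 3), 2))  # three seasons: noise roughly halves
```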

  29. MGL says:

    Just to clarify what I mean, and this is an important concept:

    Let’s say that we have a record of a player’s offensive performance – number of singles, doubles, walks, K’s, etc. Well, we know that is exactly what he did. Of course, once we apply any kind of adjustment, like for park, that is no longer the case. But assuming no adjustments, there is no ambiguity.

    Not so with some defensive metrics – in fact, not so with the better ones. It is so with the simple ones and the bad ones. A fielder’s fielding average tells us exactly what the fielder did with respect to his errors and outs. Even simple ZR tells us something unambiguous – the number of balls fielded in a certain area as compared to the number of balls not fielded.
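
    Simple ZR really is just an unambiguous ratio. A minimal sketch (the shortstop numbers below are made up):

```python
def simple_zone_rating(balls_in_zone: int, balls_fielded: int) -> float:
    """Simple ZR: the fraction of balls hit into a fielder's zone
    that the fielder converted into outs -- an unambiguous count."""
    if balls_in_zone <= 0:
        raise ValueError("need at least one ball hit into the zone")
    return balls_fielded / balls_in_zone

# e.g. 300 balls into a shortstop's zone, 255 converted into outs
print(simple_zone_rating(300, 255))  # 0.85
```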

    Now, some people think that the metrics or records that tell us indeed exactly what happened are better than the ones that don’t. Well, if your intention is indeed to tell us exactly what happened, at least in the box score, then that is so. But, from the standpoint of estimating true talent, context-neutral value, future value, future performance, etc., which are the things we are normally interested in, the fact that a metric captures unambiguously what “happened” is irrelevant. What would you rather have if you are interested in a fielder’s likely future value – fielding percentage, simple ZR, or UZR? It should be obvious.

    Now, someone who has no idea what these metrics are all about might think exactly the opposite. After all, fielding percentage is pure. It tells us exactly what happened – there is no ambiguity. Same with simple ZR. On the other hand, a metric which is MUCH better, like UZR, is awful at telling us exactly what happened – at least the numbers it produces (UZR runs) are awful at that. But, in the end, it will produce a MUCH better result in terms of a fielder’s value to his team both in the future, and with respect to the actual fielding talent of the player.

    I hope that makes some sense…

  30. MGL says:

    I want to give one more example of what I mean. Let’s say that UZR records a ball that is in an easy “bucket” to catch, yet the fielder misses it and gets docked a lot for that play. Now if you watch that play, it could easily be that the ball was actually at the outer (more difficult) edge of that “easy” zone, and the fielder was positioned far away from that zone (because of the batter and pitcher), and UZR did not pick up on that positioning (remember that all UZR can do is try to estimate fielder positioning from the outs, runners, handedness, speed, and power of the batter). And in addition, the ball took a bad hop such that no fielder could have possibly fielded that ball. Well, people who watched that play are going to conclude that UZR is a terrible metric!
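
    The credit/debit arithmetic behind that example can be sketched as follows. This is a simplified illustration of the bucket idea, not the actual UZR implementation, and the out probability and run value below are assumed numbers:

```python
# Assumed average run value of turning a would-be hit into an out.
RUNS_PER_HIT_PREVENTED = 0.8

def play_run_value(league_out_prob: float, made_out: bool) -> float:
    """Run credit (+) or debit (-) for one ball in play, relative to
    a league-average fielder facing the same bucket."""
    if made_out:
        return (1.0 - league_out_prob) * RUNS_PER_HIT_PREVENTED
    return -league_out_prob * RUNS_PER_HIT_PREVENTED

# An "easy" bucket, where 95% of fielders make the play:
easy_catch = play_run_value(0.95, made_out=True)   # small credit, ~ +0.04 runs
easy_miss = play_run_value(0.95, made_out=False)   # large debit, ~ -0.76 runs
```

    That large debit on a miss is why a mis-bucketed ball (bad positioning, bad hop) can make a single-play UZR judgment look absurd to someone watching the game.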

    On the other hand, let’s say that a batter gets fooled on a pitch, just barely gets his bat on the ball, and squibs a ball down the third base line, and the third baseman had no chance to make the play. Any offensive metric (other than the batted ball ones maybe) is going to record that ball as a single (obviously) and everyone is happy. After all, that is exactly what the batter did, so the metric must be a good one. Of course, that is actually a terrible way to record that batted ball (as a single) if we want to use that data to evaluate a batter’s offensive talent or to predict his performance in the future. It is just as bad as UZR docking that fielder for a ball that it thinks is easy to field but is not in reality.

    Don’t be fooled by metrics that record exactly what happens in the box score, and don’t dismiss metrics that don’t. The only thing we care about is that a metric measures something that correlates well with performance in the future, and that that performance correlates well with winning games.

    • Luke in MN says:

      This thread is probably a little stale by now, but isn’t it true that while offensive metrics are flawed, we do have a much better idea of how good players are going to be offensively than we do defensively and that the reason for that is probably in large part because we have better ways of objectively measuring offensive performance than defensive performance? Some things submit fairly readily to objective measurement (the quality of hitters); other things do not (the quality of, say, poets). Defensive quality is more like poet quality than offensive quality is.

      In brief, my point is: The fact that objective offensive measurements are not perfect does not mean they’re no better than defensive measurements.

  31. I think the 15 run swing in the Jason Bay fielding data is overblown. You should never look at 1 year’s worth of sample data. Try looking at 3 to smooth out the randomness and errors. No one’s ever preached otherwise.

    I’m glad to see the UZR updates; good job guys

  32. curious says:

    are these adjustments based on data from the stringers Tom and MGL (or whoever it was) recruited over the last year? I remember Tom mentioned at his blog that someone was looking for game stringers to record defensive data. Did that data end up driving these changes?

    • Steven Ellingson says:

      Tom and MGL are not stringers. Tom was just relaying a message from BIS, which was looking for stringers. They need new stringers every year, so this had nothing to do with the change.

      • curious says:

        no – there was something else that was being worked on having to do with defensive metrics. MGL would know what I’m talking about. It wasn’t BIS hiring, it was some other group that was working on park factors surrounding UZR.
