Looking at the 2010 Fan Projections: Part 2

Yesterday I looked at the 2010 Fan Projections for position players, specifically at how much higher fans of a team projected players on that team compared to non-fans. The answer was about half a win. Commenters on that post wondered which group did a better job of projecting the players' actual performance.

Tango found that the Fan Projections were in the middle of the pack compared to other projection systems: a respectable 10th out of 21, up against heavy hitters like CHONE, CAIRO, and Bloomberg. But that ranking treated the Fans as a whole, not split out by the fans' favorite teams.

It turns out that the fans let their optimism get in the way and did a poorer job projecting players on their favorite team. The root mean square error (lower is more accurate) between projected WAR and actual WAR was 2.1 for projections by favorite-team fans, compared to 1.8 for all other fans. A more intuitive measure is the mean absolute error: the average amount by which each projection missed, across all players. For favorite-team fans this was 1.7 wins versus 1.5 wins for all other fans, a difference of about two runs. So, overall, it is not a huge difference.
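
To make the two error measures concrete, here is a minimal sketch of the calculation in Python; the arrays are made-up stand-ins, not the real projection data:

    import numpy as np

    # Made-up data: one row per player, two projections plus the actual result.
    proj_fans = np.array([3.5, 2.0, 4.1])  # favorite-team-fan projected WAR
    proj_rest = np.array([3.0, 1.8, 3.9])  # everyone-else projected WAR
    actual = np.array([2.5, 1.0, 4.0])     # actual WAR

    def rmse(projected, actual):
        """Root mean square error: penalizes big misses more heavily."""
        return np.sqrt(np.mean((projected - actual) ** 2))

    def mae(projected, actual):
        """Mean absolute error: the average size of a miss, in wins."""
        return np.mean(np.abs(projected - actual))

    print(rmse(proj_fans, actual), mae(proj_fans, actual))  # favorite-team fans
    print(rmse(proj_rest, actual), mae(proj_rest, actual))  # everyone else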

Both groups were overly optimistic: 73% of players underperformed their favorite-team-fan projection, and 64% underperformed their rest-of-fans projection. The favorite-team fans were, on average, one win high, while the non-fans were half a win high. Here is the actual WAR for the 206 position players plotted against the WAR projected by fans of each player's team and the WAR projected by everyone else. Again, the red line is where the two are equal. Dots above the line represent players who over-performed their projection; those below it, players who under-performed.
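
For anyone who wants to recreate this kind of plot, a minimal matplotlib sketch follows; the arrays here are random stand-ins for the real projections:

    import numpy as np
    import matplotlib.pyplot as plt

    # Random stand-ins for the 206 players' projections and results.
    rng = np.random.default_rng(0)
    proj = rng.uniform(0, 6, 206)                   # projected WAR
    actual = proj - 1.0 + rng.normal(0, 1.7, 206)   # actual WAR, centered a win low

    share_under = np.mean(actual < proj)            # fraction that underperformed

    fig, ax = plt.subplots()
    ax.scatter(proj, actual, s=10)
    lo = min(proj.min(), actual.min())
    hi = max(proj.max(), actual.max())
    ax.plot([lo, hi], [lo, hi], color='red')        # the projection = actual line
    ax.set_xlabel('Projected WAR')
    ax.set_ylabel('Actual WAR')
    ax.set_title('{:.0%} of players below the line'.format(share_under))
    plt.show()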

You can see that the graph on the left has a greater 'spread' around the line; that is the poorer accuracy of the favorite-team-fan projections. The amount by which the 'blob' of points sits below the line shows the under-performance (or over-projection). Both blobs are centered below the line, but the one on the left more so.

Quick note: it looks like there are not nearly as many projections for pitchers, so I am not able to run this same type of comparison for them. Among position players, 206 had at least 10 projections by fans of their team; among pitchers, just five had at least 10, and only 25 had at least five projections. I am not sure why there are so many fewer pitcher projections.
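
The cutoff itself is just a filter on ballot counts. A minimal sketch, assuming a hypothetical ballots table with one row per individual fan projection (the column names are mine, not the actual data):

    import pandas as pd

    # Hypothetical ballots table: one row per individual fan projection.
    ballots = pd.DataFrame({
        'player':      ['Ichiro', 'Ichiro', 'Wright'],
        'player_team': ['SEA', 'SEA', 'NYM'],
        'fan_team':    ['SEA', 'BOS', 'NYM'],  # the submitting fan's favorite team
        'war':         [4.5, 3.8, 5.0],
    })

    # Count favorite-team ballots per player; keep players with at least 10.
    home = ballots[ballots['fan_team'] == ballots['player_team']]
    counts = home.groupby('player').size()
    eligible = counts[counts >= 10].index
    print(list(eligible))  # (empty for this three-row toy table, of course)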







Dave Allen's other baseball work can be found at Baseball Analysts.


17 Responses to “Looking at the 2010 Fan Projections: Part 2”

  1. T says:

    One small thought – There are a lot of Mariner fans on this blog, and almost every player on that team had an unprecedented collapse. Any possibility that skewed the results at all?


  2. Samuel E. Giddins says:

    Any chance it’s a selection bias? Only the better players get rated more often, and people are less likely to predict a collapse than a breakout?


  3. AK707 says:

    I think it's eerie how close the Bill James projections look to the fan projections this year. Maybe the rate stat change helps bring people more in line?


  4. Larry_Koestler says:

    Where can one find Bloomberg’s projection system?


  5. tangotiger says:

    Right, it’s an ordered list from 1 to 600 or something.

    And I’m not sure what selection bias is being discussed. Dave is looking at each player and comparing his two forecasts: one from the hometown fans and one from everyone else. So, if Ichiro is forecast with a .360 wOBA by Mariner fans and .350 by everyone else, and he comes in at .340, then where is the issue? It’s not like Dave is going to weight Ichiro based on the number of submissions.

    I find it utterly fascinating that those who are closest are, in a sense, too close for objectivity. That is, whatever insight they may have is overwhelmed by their lack of objectivity.

    Fascinating to say the least.


    • Theo says:

      I’d put this down to that simple trait of most (all?) sports fans: optimism. No one wants to see their team’s stars regress or their prospects fail to pan out, so fans focus more on positives than on negatives.

      However, it is very interesting in the context of the argument that fans of a certain team rate that team’s players higher because they know more about their abilities, rather than simply being biased. With these numbers, it certainly seems that bias is a greater influence than any kind of special insight they might have.

      Typing that, I realize it is, I think, exactly the same thing you just said. Ah well.


    • Michael says:

      I’d be interested to know how well this holds up in the front office. Front offices are often overly optimistic about the players they have when considering trades (except, of course, when they’re overly optimistic about the guys they are signing or trading for…), and I’d love to get my hands on the open books of every team’s projections for a season and scour what’s there.


  6. J. Cross says:

    One possible source of bias: what if forecasters who project a lot of non-favorite-team players are simply more accurate forecasters than those who only project players on their favorite team?


  7. Craig Glaser says:

    I wonder if this might have something to do with playing-time projections. When I was looking at some of the FanGraphs playing-time projections, I noticed that the differences there were usually bigger than in the rate stats.

    Perhaps the fans of a team are more “sure” of who is going to start, award those players a lot of PAs, and therefore overestimate playing time on average; that would explain why they run high on WAR. Then again, this year the average Mets fan projects Santana for significantly fewer innings than the average non-fan, so there should be some cases where this helps their accuracy.

    I wonder how this would look using WAR/PA or something else which could account for playing time.


  8. Newcomer says:

    Can we see how fans compared to non-fans in predicting playing time? We might see the reverse relationship there, and if so, it could be used to improve the final composite predictions.


  9. AustinRHL says:

    What I would really like to know is each player’s actual WAR/150 versus the fan-projected WAR/150. It seems to me that implicit in most people’s projections of most players is an assumption of health, so when players miss significant time, the fan projections overstate their WAR as a counting stat, but perhaps not as much as a rate stat. I’m quite sure that fans would still be too optimistic, but just as sure that the difference would be smaller.

    I’ve dutifully gone back through my own projections and found that they do seem, on average, to be about 0.5 wins lower than those of the average fan. That means that they’re still too optimistic, but I bet that in aggregate, they will be pretty close in WAR/150.
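
    A made-up example of the counting-versus-rate distinction, sketched in Python (the numbers are illustrative):

        # Made-up example: a player projected for 4.5 WAR over 150 games
        # gets hurt and produces 2.0 WAR in 75 games.
        proj_war, proj_g = 4.5, 150
        actual_war, actual_g = 2.0, 75

        counting_miss = proj_war - actual_war  # 2.5 wins
        rate_miss = proj_war / proj_g * 150 - actual_war / actual_g * 150  # 0.5 WAR/150
        print(counting_miss, rate_miss)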


  10. Lou Struble says:

    Dave,

    Any chance you’re going to do a post on which team’s fans had the most unrealistic projections from last year? I know it might be difficult because not all of the players received enough votes for projections, but it’d be interesting and I’d bet it’d generate a lot of discussion.


  11. Al Swedgin says:

    It should be pointed out that the accuracy of the projections does not have all that much to do with success in the Forecasters Challenge. That is, I could create a set of projections in which every position player hits a minimum of 30 HRs and still win the Forecasters Challenge; what’s important is that the projections spit out good rankings. Put another way, to win the Forecasters Challenge (or to be valuable to anyone drafting the players), it’s less important that a person’s or system’s projections closely match each player’s actual output, and more important that each player’s projection is accurate relative to the other player projections within that set.


    • Al Swedgin says:

      Let me add: what the above means is that if the fans of a team “added value” to players in the same way across the board, then it should have no effect on the rankings of the players. Of course, they probably did not add value in the same way for each player, and certainly some teams/players were affected to a greater degree by “their fans” than others. This could upset the rankings.


  12. tangotiger says:

    I love Jared’s explanation!

    Makes good sense. Who takes the time to forecast players other than those on their own team? Maybe those who spend a lot of time forecasting in general. Who has time to forecast only players on their own team? Maybe those who are very biased.

    So, to respond to Jared, we need to break up the forecasts into three groups (a quick sketch of the split follows below):
    1. Hometown fans who ALSO forecast players on other teams
    2. Hometown fans who DON’T forecast players on other teams
    3. Non-hometown fans

    I. LOVE. IT.

    Great job Jared, applying selection bias as you did. You guys are awesome.
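
    For concreteness, a minimal sketch of that three-way split, assuming a hypothetical ballots table with fan_id, fan_team, and player_team columns (the names are illustrative):

        import pandas as pd

        # Hypothetical ballots: one row per individual fan projection.
        ballots = pd.DataFrame({
            'fan_id':      [1, 1, 2, 3],
            'player_team': ['SEA', 'NYM', 'SEA', 'NYM'],
            'fan_team':    ['SEA', 'SEA', 'SEA', 'NYM'],
        })

        ballots['is_home'] = ballots['fan_team'] == ballots['player_team']

        # Does this fan forecast anyone outside his favorite team?
        travels = ballots.groupby('fan_id')['is_home'].transform(lambda s: (~s).any())

        group = pd.Series('3. non-hometown fans', index=ballots.index)
        group[ballots['is_home'] & travels] = '1. hometown, also forecasts others'
        group[ballots['is_home'] & ~travels] = '2. hometown only'
        print(ballots.assign(group=group))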

