Comparing 2011 Hitter Forecasts

This article is an update to the article I wrote last year on Fangraphs.

This year, I’m going to look at the forecasting performance of 12 different baseball player forecasting systems, using two main bases of comparison: root mean squared error (RMSE), both with and without bias removed. Bias is important to consider because it is easily removed from a forecast, and it can mask an otherwise good forecasting approach. For example, the Fangraphs Fan projections are often quite biased, but they predict quite well once that bias is removed.
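The two measures can be sketched in a few lines. This is a minimal illustration (NumPy, with made-up numbers), not the exact evaluation code used for the tables below:

```python
import numpy as np

def rmse(actual, forecast):
    """Root mean squared error of a forecast."""
    err = np.asarray(actual, float) - np.asarray(forecast, float)
    return np.sqrt(np.mean(err ** 2))

def rmse_bias_removed(actual, forecast):
    """RMSE after subtracting the forecast's mean error (its bias)."""
    err = np.asarray(actual, float) - np.asarray(forecast, float)
    return np.sqrt(np.mean((err - err.mean()) ** 2))

# Hypothetical illustration: a forecast that is uniformly 5 runs too low
actual   = np.array([80.0, 95.0, 60.0, 110.0])
forecast = actual - 5.0

print(rmse(actual, forecast))               # 5.0 -- fully penalized for the bias
print(rmse_bias_removed(actual, forecast))  # 0.0 -- a perfect forecast once debiased
```

A system like this one would look poor on raw RMSE but perfect once its constant bias is stripped out, which is exactly why both measures are reported.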


Table 1. RMSE by category (rank in parentheses); lower is better. RANK is the average rank across categories.

System           RMSE (rank)   RMSE (rank)   RMSE (rank)   RMSE (rank)  RMSE (rank)   RANK
Marcel           22.958  (1)    8.108  (2)   24.186  (1)   0.039  (3)   7.054  (6)     2.6
Baseball Dope    24.788  (3)    8.506  (6)   26.788  (5)   0.036  (1)   6.720  (1)     3.2
Will Larson      24.474  (2)    8.066  (1)   25.113  (2)   0.042 (12)   6.729  (2)     3.8
CAIRO            25.609  (5)    8.488  (5)   25.755  (3)   0.037  (2)   6.929  (5)     4.0
AggPro-System    25.729  (6)    8.377  (4)   26.829  (6)   0.040  (4)   6.889  (4)     4.8
AggPro-Category  25.107  (4)    8.153  (3)   26.316  (4)   0.041 (11)   6.779  (3)     5.0
RotoChamp        29.162 (11)    9.032  (9)   27.048  (7)   0.040  (5)   7.746  (9)     8.2
ESPN             27.575  (8)    9.455 (12)   28.568 (10)   0.040  (6)   7.257  (7)     8.6
Bill James       28.092  (9)    8.929  (8)   28.334  (8)   0.041  (7)   7.973 (12)     8.8
Razzball         26.766  (7)    9.331 (10)   28.791 (11)   0.041  (8)   7.961 (11)     9.4
Fangraphs Fans   30.001 (12)    8.918  (7)   32.646 (12)   0.041  (9)   7.532  (8)     9.6
CBS Sportsline   28.261 (10)    9.387 (11)   28.345  (9)   0.041 (10)   7.794 (10)    10.0

For the second straight year, the Marcel projections have the lowest RMSE, as shown in Table 1. The simple weighted formula used to create the projections placed in the top three in every category except SBs. Baseball Dope and my (Will Larson) forecasts were 2nd and 3rd. Rounding out the bottom were Bill James, Razzball, the Fangraphs Fans, and CBS Sportsline.


Table 2. Bias-adjusted fit (r^2) by category (rank in parentheses); higher is better. RANK is the average rank across categories.

System           Fit (rank)   Fit (rank)   Fit (rank)   Fit (rank)   Fit (rank)   RANK
AggPro-System    0.295  (4)   0.421  (1)   0.326  (1)   0.195  (2)   0.696  (3)    2.2
AggPro-Category  0.318  (2)   0.419  (3)   0.326  (2)   0.192  (4)   0.698  (1)    2.4
Will Larson      0.323  (1)   0.420  (2)   0.325  (3)   0.187  (6)   0.692  (4)    3.2
Baseball Dope    0.287  (6)   0.400  (6)   0.297  (4)   0.266  (1)   0.675  (7)    4.8
ESPN             0.300  (3)   0.393 (10)   0.274  (8)   0.178  (8)   0.697  (2)    6.2
Fangraphs Fans   0.227 (11)   0.419  (4)   0.272 (11)   0.194  (3)   0.691  (5)    6.8
CAIRO            0.251 (10)   0.395  (8)   0.294  (5)   0.191  (5)   0.661  (8)    7.2
CBS Sportsline   0.291  (5)   0.396  (7)   0.278  (7)   0.165 (10)   0.660  (9)    7.6
Bill James       0.278  (7)   0.413  (5)   0.274  (9)   0.179  (7)   0.640 (11)    7.8
Marcel           0.266  (9)   0.393  (9)   0.279  (6)   0.158 (11)   0.642 (10)    9.0
RotoChamp        0.212 (12)   0.375 (11)   0.273 (10)   0.175  (9)   0.678  (6)    9.6
Razzball         0.274  (8)   0.364 (12)   0.262 (12)   0.124 (12)   0.624 (12)   11.2

This table reports the r^2 of the simple regression actual = b(1) + b(2)*forecast + e. The b(1) term captures ex-post bias, allowing b(2) to better capture the information content in the forecast. After correcting for ex-post bias, the AggPro systems finished 1st and 2nd, followed by my projections. What is interesting is that all three of these systems used weighted averages of other forecasts. It is also interesting that the Fangraphs Fans jumped from 11th in the RMSE comparison to 6th here, indicating that, as with last year, the fan projections are fairly biased. Marcel is near the bottom, showing that the Marcel projections are excellent at estimating average production and minimizing bias, but are not very good at predicting player-to-player differences. For most decision-making purposes, it is these player-to-player differences that matter most to measure accurately. This makes Marcel much less useful than something like the ESPN or Fan projections, which are much better at forecasting than their overall RMSE measures would indicate.
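The bias-adjusted fit measure can be sketched as follows. This is an illustrative reimplementation (NumPy, invented numbers), showing how a heavily biased forecast can still score a high r^2 because the intercept absorbs the bias:

```python
import numpy as np

def bias_adjusted_r2(actual, forecast):
    """r^2 of the regression actual = b1 + b2*forecast + e.

    The intercept b1 soaks up ex-post bias, so r^2 reflects how well the
    forecast captures player-to-player differences, not the league mean.
    """
    actual = np.asarray(actual, float)
    forecast = np.asarray(forecast, float)
    # Ordinary least squares with an intercept column
    X = np.column_stack([np.ones_like(forecast), forecast])
    b, *_ = np.linalg.lstsq(X, actual, rcond=None)
    fitted = X @ b
    ss_res = np.sum((actual - fitted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical runs forecasts: roughly 20 runs too low across the board,
# yet the ordering of players is almost exactly right
actual   = [70, 85, 90, 100, 120]
forecast = [50, 64, 71, 79, 101]

print(round(bias_adjusted_r2(actual, forecast), 3))  # close to 1 despite the bias
```

Raw RMSE would punish this forecast severely (every prediction is ~20 runs off), while the bias-adjusted r^2 reveals that its information content is excellent.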


Averaging works. If you want a forecast that’s easy to create, take as many forecasts as you can and average them. The extremes in each individual forecast cancel out, while the real player-to-player variation is preserved. If you have access to more sophisticated methods, weighting each forecast by its historical accuracy works even better.
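Both flavors of averaging fit in a few lines. The system names, projections, and last-year RMSEs below are made up purely for illustration; inverse-RMSE weighting is one simple way to weight by historical accuracy:

```python
import numpy as np

# Hypothetical HR projections for one player from four systems
forecasts = {"sys_a": 28.0, "sys_b": 33.0, "sys_c": 25.0, "sys_d": 30.0}

# Simple average: extremes cancel out
simple = np.mean(list(forecasts.values()))

# Weighted average: weight each system by its historical accuracy,
# here the inverse of its (made-up) RMSE from last season
rmse_last_year = {"sys_a": 7.0, "sys_b": 8.5, "sys_c": 9.0, "sys_d": 7.5}
weights = {s: 1.0 / r for s, r in rmse_last_year.items()}
total = sum(weights.values())
weighted = sum(forecasts[s] * weights[s] for s in forecasts) / total

print(simple)              # 29.0
print(round(weighted, 2))  # slightly tilted toward the historically accurate systems
```

The weighted version pulls the consensus toward whichever systems have earned trust, which is exactly the idea behind the aggregated projections above.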

Next week I’ll do the same analysis for pitchers.

Happy forecasting!

–Will Larson


Last year, I created forecasts based on weighted averages of other forecasting systems; those are the “Will Larson” forecasts above. Many commenters were skeptical that the approach would perform well. It seems to have worked, at least for last year.

All of the non-proprietary numbers in this analysis can be found at my little data repository website, at


31 Responses to “Comparing 2011 Hitter Forecasts”

  1. neil says:

    Hey, Will – when you do you plan on posting your forecasts for the 2012 season?


  2. Will Larson says:

    My plan is to get the 2011 pitcher forecast evaluations and the 2012 hitter projections done this coming weekend. I’ll post here when I do. Thanks for reading!


  3. attgig says:

    wow. can’t wait to see the 2012 numbers. thanks for the work.


  4. stratusjeans says:

    any particular reason ZIPS was left out of the analysis?


  5. brian fawcett says:

    Are the PECOTA’s just so far off the mark that you didn’t even consider them?


  6. Will Larson says:

    Both PECOTA and ZIPS projections would be great to consider, but…

    @stratusjeans The ZIPS folks wanted me to link to their files instead of host them myself. Since then, they’ve taken away the link. So, no ZIPS data for projection evaluation.

    @brian fawcett: I didn’t want to pay for PECOTA projections. The forecasts I considered were freely available with the exception of Bill James, which are pretty cheap.


  7. J. Cross says:

    Will, good work with the bias adjustment.

    The one thing I’d like to see is how much of this ordering is due to playing time projection. In other words, who had the most accurate PA projections and who had the most accurate R/PA, RBI/PA etc.


  8. dee says:

    what about steamer? if im not mistaken fangraphs did the same comparison last year and had steamer at the top. i believe it s free today


  9. Will Larson says:

    @dee: I hadn’t heard of the steamer projections. I’ll check it out when I run my numbers this weekend. Good find!


  10. dee says:

    *too .. not today ..

    and it wasn’t last year but couple weeks ago

    just realized that the creator is posting in the comments. i’m stumbling a lot


  11. dee says:

    but i definitely appreciate the category breakdown.. can’t wait to see the forecast for the 2012.


  12. J. Cross says:

    Heh. Thanks for the plug! I’d be happy to send the Steamer Projections along.


  13. Will Larson says:

    @J. Cross: Nice website! Send me an email at


  14. TheBrad says:

    This is like the enigma decoder from U571. Please hide this web page from view if possible. ;)

    (I look forward to your projections, Will! I just bookmarked this page.)


  15. Nilsilly says:

    Great work, very excited to see your projections go up!


  16. Ross Gore says:

    For those interested in learning more about the AggPro projections I recommend these two papers:
    Aggregating Projections Systems Works
    Aggregating Projection Systems at the Correct Granularity


  17. ML says:

    Isn’t the accuracy of the counting stats a function of the playing time projection? Is that accounted for here or does it matter?


  18. Will Larson says:

    @ML: Absolutely. And it is accounted for, but indirectly, since any counting stat is a function of playing time (PT) and rates. My goal with this exercise is to compute the best forecasts as cheaply (in terms of time) as possible. I haven’t tried to tie together PT and rate projections to make counting projections; it could be better. There’s a very good chance, however, that it could be worse, because then I’d be introducing forecasting error in TWO places, and then multiplying them together.

    It’s a good idea though, and one I’d encourage someone to check out.


  19. stratusjeans says:

    thanks a lot Will! I always enjoy interesting information!


  20. MDL says:

    Will, thanks for the analysis! The BPP website looks like a fantastic resource… definitely going to explore around some more this weekend!

    I just wanted to point out something about the footnote you linked to on the BPP website: the Prospectus Projections Project was co-authored by one “David Cameron”. Assuming that this is FG’s own, just contact him about any potential naming issues.


  21. damageincorp2001 says:

    I’m still kind of new here and this probably isn’t the place for the post but could someone tell me where I can find write ups and projections / bios for the players drafted in the 2011 First year player draft. I just got G. Cole for my dynasty team but want to read up for further rounds before I pick again.



  22. JDanger says:

    I really enjoyed this article and appreciate the work that was put into it, so I don’t mean to offend anyone, but I really feel like it’s incomplete without ZiPS. It’s really the only one I, and most people I know, use.

    It’s like ranking the coolest dinosaurs and not including the velociraptor.


  23. Will Larson says:

    If you can get ZiPS to put their projections for 2011 back up somewhere, then I’ll include them!


  24. wynams says:


    Are your 2012 projections available? I would love to upload them to LastPlayerPicked and create a draft cheatsheet for 3/18.

    Agree with simple averaging good projection systems is the way to go. I’m sort of getting that from the Zeile over at FantasyPros, and would like to see where that system (which averages up to 11 sources) differs from your own.

    Nice site btw & cheers


  25. tfish says:

    Anyone know if AggPro numbers were done for 2012?


  26. CHRIS says:

    Very interesting stuff, Will. Is there any chance you could add the At-bat totals for your projections?


  27. Maxim Ice says:

    Will – any reason why you left off Michael Morse and Mike (Giancarlo) Stanton off of your hitters projection for 2012? Thanks for the great work!


  28. mattdi02421 says:

    To an idiot who hasn’t taken econometrics in a while, what is ex post bias?


  29. Will Larson says:

    @Chris: Sorry, I didn’t do at bats this year. Maybe next year.

    @Maxim Ice: Good catch. When names are different in different projection systems it causes problems sometimes. The files should be fixed now!

    @mattdi: ex post bias is just bias that can’t be predicted, so all we can do is control for it after it happens. For this, it basically means that if a year has more or less runs than normal by chance it will introduce bias into every projection. Also, some projections are just simply biased. Adding a constant term into a regression takes care of both of these issues.

