If you’re using 2010 projections, you don’t care about the guy who was there in 2009.
The point is that you start from a brand new foundation, not supposing that anything that happened last year will happen again next year. You then project 2010 performance the best you can, based on all available data. What happened in 2009 only matters in the way that it affects your projection of what will happen in 2010.
I think I see what you mean. I was thinking that it would be perfectly acceptable to think of the 2010 team projected with Player X rather than Player Y. What you’re referring to is a comparison between the 2009 team with Player X and the 2010 team with Player Y.
I see what you are getting at, but I think you need to look somewhat at what is meant by ‘improvement’. Correct me if I am wrong, but one could validly say that a team has ‘improved’ while referring to improvement over the level the team was (projected to be) at entering the off-season, and if so, wouldn’t a lot of writers be using ‘improvement’ in that context? I agree with your point in the context of someone saying a team has improved over its performance the previous season, but I don’t know if most writers (particularly in the sabermetric community) would necessarily mean this.
To be fair, unexpected deviations are probably normally distributed and tend to cancel each other out across a 25-man roster. So while Dave makes a valid point, it’s a bit overstated. It’s kind of like chastising someone for using ERA to assess team pitching performance across an entire year, but to a lesser degree.
It amazes me to see people think like this. There are just too many variables in a season to simply take last year’s team and swap out players to project next year’s team (ex: Cliff Lee is only replacing the Washburn/Bedard tandem from last year, therefore it must be a wash.)
Bill Bavasi is a former GM who has been universally crapped on by fans, but I wonder how many people who do this kind of analysis realize that this is straight from the Bavasi playbook of roster construction?
I wouldn’t be comfortable thinking like this given all the information we have available these days. Much more reliable to start from scratch and build the team out based on true talent level, then play around with the variables from there.
Great piece, but can you offer more on the simplest/most efficient projection constructions? I’ve been considering doing my own, weighting 2009 at 2/3 and 2008 at 1/3. A few other systems go 5/3/2 down past years. What method of stacking historical data has been proven to work best?
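For what it’s worth, a weighted-history projection like the 5/3/2 scheme mentioned above is easy to sketch out. This is just a toy illustration – the player stats below are made up, and real systems (Marcel, etc.) also regress toward the league mean and adjust for age, which this skips:

```python
# Minimal sketch of a weighted-history rate projection (hypothetical numbers).
# Each season's rate is weighted by both its recency weight and playing time.

def weighted_projection(seasons, weights):
    """Project a rate stat from past seasons.

    seasons: list of (rate, playing_time) tuples, most recent first.
    weights: relative weight for each season, most recent first (e.g. 5/3/2).
    """
    num = sum(w * rate * pt for (rate, pt), w in zip(seasons, weights))
    den = sum(w * pt for (_, pt), w in zip(seasons, weights))
    return num / den

# Example: a hitter's (made-up) wOBA over 2009, 2008, 2007 with plate appearances.
past = [(0.360, 600), (0.340, 580), (0.330, 550)]
print(weighted_projection(past, [5, 3, 2]))  # lands between the three, nearest 2009
```

The 2/3–1/3 split in the comment above is just the two-season special case, `weights=[2, 1]`.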
Comment by White Rickey — December 28, 2009 @ 9:17 pm
Also, don’t fall into the trap of believing Player X came into spring training having lost 15 pounds, gained muscle, in the best shape of his career, found God, settled down, learned a new pitch… all of that is bull. Players are who they are, with slight variation in peripherals year to year. This applies to all sports, for that matter.
It might not be worded very well, but there is merit to the concept. The larger the sample size, the smaller the expected sample error. So while 1 player may experience large deviations from their expected performance due to sample error, the deviation of 25 players, on average, will be much smaller. It just may be (I don’t know if it is or isn’t, just raising the possibility) that 25 players over the course of a season, on average, produces a small enough error that projections using the system Dave is railing against won’t really be that far off in forecast accuracy….
So it’s not that it “cancels out”, but more that we [may] have a large enough number of observations that the error is pretty small and this kind of logic actually produces pretty good results.
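The sample-size point above is easy to check with a quick simulation. The numbers here are assumed for illustration (a hypothetical 2 WAR true talent with 1 WAR of season-to-season noise per player, players treated as independent), but the pattern is general: the average per-player error of a 25-man roster’s total is roughly 1/√25 the size of a single player’s error.

```python
# Toy simulation: independent noise around true talent, one player vs. a roster.
import random

random.seed(0)
TRUE_TALENT = 2.0   # hypothetical true-talent WAR per player
NOISE_SD = 1.0      # hypothetical single-season deviation (standard deviation)
PLAYERS = 25
TRIALS = 10_000

player_errs, team_errs = [], []
for _ in range(TRIALS):
    season = [random.gauss(TRUE_TALENT, NOISE_SD) for _ in range(PLAYERS)]
    # Absolute error for one player vs. per-player error of the team total.
    player_errs.append(abs(season[0] - TRUE_TALENT))
    team_errs.append(abs(sum(season) - PLAYERS * TRUE_TALENT) / PLAYERS)

print(sum(player_errs) / TRIALS)  # roughly 0.8 WAR for an individual
print(sum(team_errs) / TRIALS)    # roughly 0.16 WAR per player at the team level
```

That shrinking error is the whole argument: individual flukes don’t vanish, but averaged over a roster they get a lot smaller, provided the deviations really are independent.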
I am not sure I follow. There is nothing to suggest that deviations of individual player performance always cancel each other out in every case, but deviations as a proportion of the mean performance are lower at the team level than they are at the individual level. The point is that the whole progression/regression to the mean argument is not as relevant at the team level as it is at the individual level. I am not trying to say it is completely irrelevant.
By the way, I think it is a mischaracterization to imply that Lewis was making an ad hominem argument against scouts. What he was railing against was the use of the “make-up” philosophy, which was very common among scouts (and a first-class ticket to dogma-based bias).
Nice sounding theory, but it holds no weight. If anything, it is kind of the opposite. A guy or two having an abnormal year is more likely to help/hurt the production of those around them through situational play and such.
Look no further than the 05 White Sox for proof that abnormalities don’t even themselves out within a clubhouse, though.
Honestly, while I do agree with the theory behind this post, I’m not sure I agree with it at all. It wouldn’t surprise me to find that the exact type of thinking you’re disparaging actually produces fairly accurate forecasts, Dave. Obviously not nearly as accurate as the type of stuff the more advanced saber community is doing, but in terms of just doing a quick and dirty projection that doesn’t take much effort/thought, I wouldn’t be surprised if just taking additions/subtractions into account did very well. At the team level, I think (though I may be wrong, I do not have any evidence of my point) for most teams any abnormal effects will largely cancel out (injury effects less so, but those are pretty easy to take into account), leaving us with a pretty useful method of forecasting…
So what I’m saying is while theoretically it is dumb, practically I think there’s a good chance that thought process actually makes sense if you don’t want to go in depth into the subject.
That is certainly not true in basketball, where a player’s fitness and weight can vary wildly from year to year. A basketball player can also add new facets to his game that he didn’t have the year before, e.g. Kobe Bryant gradually becoming a dominant player with his back to the basket. Your point is well taken in baseball, but I’m not sure it applies to other sports as well as you think it does.
Or Dayton Moore’s ability to see no problems with Yuniesky Betancourt.
Comment by robbbbbb — December 29, 2009 @ 11:37 am
“I have no idea what you’re trying to say here, and I have a feeling you don’t, either…”
I can only assume this was intended for mydquin and not myself, correct? My post seems about as clear as day – abnormalities in clubhouses don’t just cancel themselves out; if anything, they can lead to more same-direction deviations in a clubhouse.
You seem to be suggesting that one person over or under-performing what they should have done will have an effect on his teammates, causing them to over or under-perform, respectively, as well? If that’s what you’re suggesting, I think you’d need to find evidence to support your claim, because I don’t think there’s any reason to think it holds true. For instance, it’s a similar concept to lineup protection, which is basically a myth…
I would call it even more true in basketball. Not that your point is wrong – of course a player can develop some skill over the offseason or physically change their body in some positive way – it’s more that during summer, when writers have nothing else to write about, they ALWAYS come up with articles about how Player X worked so and so hard (ignoring that everyone in the NBA works extremely hard in the offseason) and/or gained weight/gained additional speed/explosiveness and such. So it’s not that those things you’re talking about don’t help, it’s that the articles you read about it are mostly bogus fluff pieces to get readers when there’s nothing else to write about…
Since variance is a prevalent factor in baseball and baseball is way behind the learning curve in regards to modern sports training, I’d say this is a shortsighted comment. Olympians who train for years to get tenths of a second faster go on to win medals. Look at Dara Torres. At 40, to compete and medal in a sprint race is exceptional, since speed is the most neurologically complex athletic attribute and therefore hardest to maintain against the effects of age. Watch MMA. You routinely see fighters who improve their weaknesses go on to beat the same opponent they previously lost to in a rematch.
Comment by pounded clown — December 29, 2009 @ 5:44 pm
It isn’t a myth at all. Players hit better/worse all the time when protection is there/removed. Look at Wright last year – .333/.432/.483/.915 through May when everyone was still around, .293/.368/.428/.796 the rest of the way after everyone hit the DL. Look what happened to the Dodgers lineup once Manny was added at the 08 break – .256/.321/.376/.697 prior, .281/.355/.443/.798 after. Ethier (hitting directly in front of him) was sitting at .274/.338/.442/.779 at the time and went .368/.448/.649/1.097 the rest of the way. Protection is also quantifiable – the difference comes mainly in pitch selection and field positioning.
As far as teams seeing a team wide fluke, look at the 08 Rangers – only one player with 100+ AB saw an OPS+ under 90, while the team total went from a total 97 (07) to a Godly 115 (08) before going back to 95 (09). 08 Cubs see basically everyone on the club have a career (or near-career) year before the 09 Cubs see only Derrek Lee worth a darn. Similarly, pitchers will often peak as a group – no one wants to be the weak link! Like mentioned before, the 05 WSox are a prime example of this. 06 Tigers, 07 Padres, 08 Jays – happens all the time. And it’s completely unpredictable. We can see fairly mediocre teams become unstoppable juggernauts (08 Rockies) and Juggernaut teams unable to buy a win (08 Tigers).
Abnormalities just don’t even themselves out, and team-wide abnormalities are absolutely everywhere you look.
Anecdotes aren’t real evidence. You can find anecdotes to prove any point, no matter how wrong it is. Your theories don’t seem to have much basis in reality when people actually look at the evidence to see if it’s true or false. For instance:
Of course abnormalities don’t even themselves out. You get a big enough sample, though, and they become small and mostly inconsequential, which is the whole point. Simply put, you should treat each individual as independent of their teammates (as evidence they’re correlated basically doesn’t exist), and once you do that, from a statistical standpoint, the “abnormalities” you’re talking about become much less meaningful.
“Anecdotes aren’t real evidence. You can find anecdotes to prove any point, no matter how wrong it is”
Oh, so we shouldn’t look for evidence against this “works toward canceling itself out” theory – we should instead just say it probably works toward leveling itself out and call it a day? Right? Yeah, that sounds logical. Don’t give yearly examples of entire teams fluking at the same time; just assume (and in turn say) they don’t do that!
Then, did we really need a study to show guys like Brady Anderson, Milt Cuyler, Joe Orsulak, Gary Pettis, Luis Polonia, Luis Rivera, Dale Sveum, a washed-up Alvin Davis, or other crummy hitters didn’t hit very well when given protection? Not in the least! Why would they be expected to be able to do anything with a hitter’s pitch? They were generally overmatched at the plate when facing a pitching machine. So why would anyone think they would turn into studs when challenged? Seems an illogical study, to say the least.
But getting away from the random “some sucky guy didn’t hit well in front of a random guy with a high SLG%” side of the argument, let’s get to its base – does hitting in front of a stud result in a more favorable hitter’s situation? Yes, yes it does. (Need evidence? http://mlbresearch.blogspot.com ) Protection exists even if some hitters are not able to take advantage of it.
“Of course abnormalities don’t even themselves out.”
Not according to your argument – you say we should expect them to work towards that goal. Specifically, you gave us this: “the deviation of 25 players, on average, will be much smaller.” Meaning, if we take the 8% of the league that represents one team, then for the couple of those guys who see abnormalities, we will see an overall deviation that is “much smaller” than if we accounted for the individual flukes on the team on their own. Just take the previous team outputs and they “won’t really be that far off in forecast accuracy”… Well, according to your words.
But now you give us:
“Simply put, you should treat each individual as independent of their teammates, and once you do that, from a statistical standpoint, the “abnormalities” you’re talking about become much less meaningful.”
If we treat each and every abnormality as independent of the others, there would be no merit whatsoever to the side you are arguing for. They would not be working towards or away from a common expectation, and would instead result in the complete and utter randomness that we actually see in real life. At that point, statements like this (full version this time):
“25 players over the course of a season, on average, produces a small enough error that projections using the system Dave is railing against won’t really be that far off in forecast accuracy…”
would be completely invalid. In reality, forecasting accuracy will not be close at all unless you take each and every player individually and account for their specific deviations – like the article originally stated.
“Oh, so we shouldn’t look for evidence against this ‘works toward canceling itself out’ theory”
Nah, not what I’m saying at all. Evidence is good. What I’m saying is anecdotes aren’t real evidence. Thanks for the link – that’s the kind of evidence we should be looking for – real research. I hadn’t read that before, so I definitely found it interesting. It looks like there’s a lot more research that needs to be done on the subject, as we can now see that the inputs do change somewhat, so the next step is to figure out how that changes the outputs (since the outputs are what really matters). So maybe there is enough correlation between players not to treat them as independent, maybe there isn’t. I’m not sure, though if I had to guess, I would guess that “protection” isn’t significant enough and/or doesn’t exist often enough (as most hitters aren’t good/bad enough to really alter the pitcher’s actions in any meaningful way) to really make much of a difference. That’s just a guess, though.
As for all your other points, nothing I said contradicts itself. The whole point is as the sample size grows, the variance decreases. It’s simple regression to the mean – it’s not that anyone is working towards or away from a common expectation, it’s just that the sample, on average, will get closer to the mean as the sample size grows as you move from the individual level to the team level.
“forecasting accuracy will not be close at all unless taking each and every player individually and accounting for their specific deviations – like the article originally stated.”
Well, as I said, I don’t know that my statement you’re responding to is true; I did make the point that it’s my opinion and it may be true (implying it may not be, as well). As for this quote – yes, I agree with the theory of the article that doing each player individually is the better/more accurate way of doing things. It’s just time consuming and takes a lot of effort/thought. The point is that just because the “let’s look at last year’s output and adjust it for changes” attitude isn’t theoretically well thought out doesn’t mean it can’t be a fairly accurate method of forecasting. Lots of times in statistical forecasting you see things that have no basis in theory actually work pretty well – like just taking a random walk, for instance. This method essentially is just a random-walk forecasting method, adjusted for expected changes.
You really don’t have any more evidence it performs poorly than I do it performs well (basically, we’re both at 0 evidence) – just because it theoretically doesn’t make a whole lot of sense doesn’t mean it actually performs poorly. So I’m just raising the possibility that this thinking that Dave is arguing against performs well enough that it’s worthwhile. It won’t be as good as the approach Dave is advocating, of course, but it may be good enough that given the lack of effort that needs to be put into doing it, it’s actually useful and/or the next best option for people who aren’t going to go through all the effort of projecting everyone individually.
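To make the random-walk analogy concrete: the naive forecast just carries last year’s number forward unchanged, and any fancier projection system has to beat that baseline to justify its effort. This is a toy illustration – the win totals below are made up, not real team data:

```python
# Toy illustration of a random-walk (naive) forecast as a baseline.

def naive_forecast(last_year_total):
    """Random-walk forecast: next year's value = last year's value."""
    return last_year_total

def mean_abs_error(forecasts, actuals):
    """Average absolute miss across all teams."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

# Hypothetical win totals for five teams across two seasons.
wins_2008 = [97, 89, 84, 75, 68]
wins_2009 = [95, 84, 87, 70, 65]

naive = [naive_forecast(w) for w in wins_2008]
print(mean_abs_error(naive, wins_2009))  # the bar a player-by-player projection must clear
```

The roster-swap method being debated in this thread is essentially this baseline plus manual adjustments for additions and subtractions, which is why it can be closer to the mark than its lack of theory would suggest.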
“Lastly, moving beyond this discussion on protection, I want to be clear about my broader argument. The sabermetric community will benefit as it moves away from its relatively strict reliance on outcomes and outputs. Events on the field of any sport involve a great deal of processes. While outcome data (e.g., much of what you find online at great sites such as retrosheet and baseball-reference) have generally been more widely available, a full picture of economic analysis in the future will rely much more heavily on whole processes and their inputs.”
From the link you gave us. I’ve been waiting to hear something like that for a while now. Definitely a great point. Glad you linked to that article.