here is some food for thought:

1) Playing time projections are super important to get right. The community playing time projections, which were incorporated into LPP very late in the game last year, produced the best valuations, I think. The earlier you can get these incorporated, the better.

2) A system that averages out projections also seemed to work best. Going with something like CHONE by itself can produce wacky results for players that CHONE loves but nobody else does. A blended projection using ZiPS/Marcel/THT/whatever is preferable, especially for the superstars. These are the guys you can't go wrong with.
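As a tiny illustration of the blending idea, a simple unweighted average damps any single system's outlier (the numbers below are invented, not real projections):

```python
# Hypothetical HR projections for one player from several systems.
# The specific values here are made up for illustration only.
projections = {"CHONE": 38, "ZiPS": 30, "Marcel": 29, "THT": 31}

# A plain average pulls CHONE's optimistic 38 back toward the consensus.
blended = sum(projections.values()) / len(projections)
print(blended)  # 32.0
```

A weighted average (trusting some systems more than others) is a natural next step, but even the plain mean removes most of the single-system wackiness.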

3) How will this be different from LPP? By the iterations and standard scores?

I described all that stuff in my Excel sheet before, but I never mentioned I'm in H2H leagues, not roto. I use projected season totals instead of trying to simulate H2H, because given enough iterations the simulation should converge on the totals anyway. As I understand it, this is how Pythagorean winning percentage works: the more you outscore your opponent on average, the higher your expected winning percentage.

My thinking is that if you know the mean and standard deviation for each category, and your totals for the year are 1 standard deviation above the mean in each category, you'd expect to win most categories most weeks. If the team category totals have roughly a normal distribution, being 1 standard deviation above the mean puts you ahead of about 84% of outcomes, so you'd expect to win roughly 5 of every 6 weeks.
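The "5 of every 6 weeks" figure follows from the normal CDF evaluated at z = 1, which can be checked in a couple of lines (this just verifies the arithmetic, assuming the normality described above):

```python
from math import erf, sqrt

# Standard normal CDF, written in terms of the error function.
def normal_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

# If weekly margins are roughly normal and you sit one standard deviation
# above the mean, your expected weekly win rate is Phi(1).
p_win = normal_cdf(1.0)
print(round(p_win, 3))  # 0.841 -- close to 5/6 (0.833)
```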

The distribution probably isn't normal, projections are just projections, and things like MLB matchups, injuries, and your H2H schedule add a lot of volatility to the weekly data. That doesn't make this type of analysis invalid; it just means that a few weeks' results over the season won't necessarily represent the whole.

If all you have is a static valuation for each player, that only helps you so much. The valuation should change as the market changes: as players are picked in the draft, or later, as free agents are picked up and dropped, players go on the DL, etc. So I think tango's units-above-replacement method only works for ranking players before the draft. Even if the market didn't change during the draft, you'd still have to account for the change in marginal value of a HR, RBI, etc. as you accumulate higher or lower (projected) totals of them during the draft.

The right way to calculate these is to find the standard deviation of the weekly totals, and from that the standard deviation of the difference between my weekly total and my opponent's weekly total. (The whole point is that you want this difference to be positive, since that means a win for you.)

I.e., if my weekly HR totals are 5, 8, and 3, and my opponent's are 6, 4, and 4, the differences are -1, 4, and -1, which have a standard deviation of about 3.

That SD becomes your divisor for the category. So a 15-HR hitter is worth 5 wins, and then you just subtract off replacement value to find his overall value.
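The worked example above can be reproduced directly; note that the exact sample SD of -1, 4, -1 is about 2.89, so the 15-HR hitter comes out closer to 5.2 wins than an even 5:

```python
from statistics import stdev

# Weekly HR totals from the example above: mine vs. my opponent's.
mine = [5, 8, 3]
theirs = [6, 4, 4]
diffs = [m - t for m, t in zip(mine, theirs)]  # [-1, 4, -1]

# Sample standard deviation of the weekly differences -- the category divisor.
sd = stdev(diffs)
hr_wins = 15 / sd  # a 15-HR hitter expressed in "wins" for this category
print(round(sd, 2), round(hr_wins, 2))  # 2.89 5.2
```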

So that's the theoretically correct way to do it (I believe), but of course you don't have all the data up front. I can do it for my league based on past data on average weekly differences for each category; I don't know how any of that can be made more universal, though.

For the player pool, I usually use any player with a projection in all the projection resources I use (all the ones on your site); I haven't seen much benefit to limiting the player pool. I also use a weighting system of 33% prior year and 66% projections (I'm not entirely sure why).
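A quick sketch of the 33%/66% blend described above, with hypothetical numbers (the weights are normalized, since as stated they sum to 0.99 rather than 1):

```python
# Hypothetical HR numbers: last season's actual vs. a blended projection.
prior, projected = 28, 34
w_prior, w_proj = 0.33, 0.66

# Weighted average, normalized so the weights effectively sum to 1.
blend = (w_prior * prior + w_proj * projected) / (w_prior + w_proj)
print(round(blend, 1))  # 32.0
```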

I also use a position scarcity factor (based on the average of the starting positional lineup spots) instead of using 0, since we have bench/utility players, and just because a player is the xth best at a position doesn't mean he has the same value as a different player who is the xth best at their position.
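A minimal sketch of that kind of positional baseline, with invented scores for a hypothetical 12-team league with one starting SS apiece:

```python
# Value over the *average* starter at a position, rather than over zero.
def positional_value(player_score, starter_scores):
    baseline = sum(starter_scores) / len(starter_scores)
    return player_score - baseline

# Twelve hypothetical SS scores, one starter per team (invented numbers).
ss_starters = [9.1, 7.4, 6.8, 6.0, 5.5, 5.1, 4.8, 4.2, 3.9, 3.3, 2.7, 2.2]

# The best SS is valued relative to the average starting SS, not zero.
print(round(positional_value(9.1, ss_starters), 2))  # 4.02
```

Using the average starter as the baseline means the same raw score translates to different value at a scarce position than at a deep one, which is the point being made above.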

Readjusting the rankings based on players taken is something I have tried in the past, but it seems to break down later in the draft (think about steals, and a player projected to steal 20 but with no other value when few, if any, steals players are left). I end up following a "best player available" theme, but I also pay attention to my lineup construction, and I keep the highs in each category listed above my projected roster totals so I don't overdraft a category. Basically, it's beyond my capabilities to have a truly automated system (which is why you need to create this monster).

I don't use ADP in the actual rankings, but I keep it next to the ranking so I can gauge whether I can wait. If I did use it, I'd treat it as an additional category that inflates or deflates a player's score based on his pick position.

I use an indexed score but will probably experiment with the z-score.

Lastly, I use a weighting system per category to account for the large year-to-year fluctuations in some categories (so pitcher K's are weighted more heavily than W's and L's) and to take into account the historical biases people in my leagues have. I will sometimes tweak this if I'm going heavy in a category early on.
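A sketch of that per-category weighting, with hypothetical weights and z-scores (the real weights would come from your own league's history and biases):

```python
# Hypothetical per-category weights: stickier stats like K's count for more
# than volatile ones like W's. All numbers invented for illustration.
weights = {"K": 1.2, "ERA": 1.0, "WHIP": 1.0, "W": 0.7, "SV": 0.9}
z_scores = {"K": 1.5, "ERA": 0.4, "WHIP": 0.6, "W": 1.0, "SV": -0.2}

# A player's overall score is the weighted sum of his category z-scores.
total = sum(weights[c] * z_scores[c] for c in weights)
print(round(total, 2))  # 3.32
```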

This would be really cool if you guys build this David.

Combining the z-scores with a z-value-over-replacement has worked relatively well in general. However, calculating replacement level often depends heavily on the depth of the rosters in your league. This is especially true for daily lineups without playing-time limits (valuable bench players can be useful). I think the real key is deciding on this level.
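A minimal sketch of z-value-over-replacement under one simple assumption: replacement level is the best player left outside the rosterable pool (all numbers hypothetical). Choosing how deep that pool runs is exactly the "deciding on this level" problem above.

```python
# Value each rostered player relative to the best freely available one.
def value_over_replacement(z_totals, rostered_slots):
    ranked = sorted(z_totals, reverse=True)
    # Assumption: replacement = first player outside the rosterable pool.
    replacement = ranked[rostered_slots]
    return [round(z - replacement, 2) for z in ranked[:rostered_slots]]

# Hypothetical total z-scores for a tiny 7-player pool, 5 roster slots.
z_totals = [6.2, 5.1, 4.0, 3.3, 2.5, 1.9, 1.2]
print(value_over_replacement(z_totals, 5))  # [4.3, 3.2, 2.1, 1.4, 0.6]
```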

I’ve also toyed with making the value increases non-linear (to get to philosofool’s question above).

Pitchers are a little tricky, especially when considering rate stats, playing-time weighting for those stats, and the ability to add and drop relievers at will for ERA/WHIP help.
