## Win Values Explained: Part Four

Okay, so, in the first three parts, we’ve covered Batting, Fielding, and Position Adjustments, and hopefully you’ve been able to see how we’re arriving at the values used for each component. By combining those three parts, you get runs above or below average. However, as I mentioned at the end of the last post, we don’t really know how much average costs, but we do know how much replacement level costs, so we prefer to value players above replacement, as that gives us a fixed baseline of $400,000 in salary – the league minimum.

For a great read on replacement level, check out this article by Sean Smith. In it, he uses his CHONE projections to figure out the offensive production that a team could expect from players not projected to be good enough to make a major league roster next year. These guys have fallen into that Four-A category, where they show more ability than your average Triple-A veteran but not enough to hold down a major league job. They’re usually available every winter as minor league free agents, via the Rule 5 draft, or as cheap trade acquisitions where a team can acquire one of these players without giving up any real talent in return.

As Sean showed in his article, and has been shown elsewhere, the expected value of a replacement level player is about negative 20 runs per 600 PA. Or, to phrase it a bit differently, if you lost a league average player and replaced him with a freely available guy, you’d lose about two wins. That’s why the replacement level calculation in our Win Value formula is 20/600*PA. If you get exactly 600 PA during a season, your replacement level adjustment will be +20 runs. If you get 700 PA, your replacement level adjustment will be +23.3 runs. The more you play, the higher the replacement level adjustment, because you’re filling a larger quantity of playing time and that chunk won’t need to be filled by anyone else.
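
In code terms, the adjustment is a one-liner (a quick Python sketch; the function name is my own, not anything official):

```python
def replacement_adjustment(pa, runs_per_600=20.0):
    """Replacement-level runs credited for filling a given number of PA.

    Based on the rate described above: a replacement player is about
    -20 runs per 600 PA, so every PA a player fills himself is worth
    20/600 runs of credit.
    """
    return runs_per_600 / 600.0 * pa

print(round(replacement_adjustment(600), 1))  # 20.0
print(round(replacement_adjustment(700), 1))  # 23.3
```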

The replacement level calculation serves two purposes – it adjusts the scale so that the baseline value is $400,000 at zero wins, and it rewards players who stay on the field. For instance, Chipper Jones was outstanding in ’08, posting a .446 wOBA and a +4.9 UZR. However, he only racked up 534 PA, so the Braves had to give approximately 66 PA to people who weren’t Chipper Jones. Therefore, Chipper’s replacement level adjustment is just 17.8 runs – we presume that the folks who filled in for him were replacement level (about 2.2 runs below average) in those 66 PA, and that comes out of Chipper’s adjustment. Players who stay healthy and can take the field every day have value above and beyond their rate statistics, and scaling the replacement level adjustment to plate appearances rewards them for that extra value.

If you’re having a tough time visualizing what a replacement level player looks like, there’s probably not a better example in baseball than Willie Bloomquist. Over the last three years, he’s racked up 644 PA – just barely more than one season’s worth – and accumulated the following totals:

-16 batting, -3.8 fielding, +0.9 position adjustment = -18.9 runs, or about 19 runs below average. He’s not a very good hitter, but he can play a bunch of positions, run the bases okay, and doesn’t cost much. He is, essentially, the poster boy for replacement players. By adding in the replacement level adjustment, we’re simply adjusting from saying that Chase Utley is +58 runs above average to +78 runs above Willie Bloomquist. And, since we know that players of Bloomquist’s quality are available for $400,000, we can then value Utley’s performance based off that baseline.

So, that’s wRAA+UZR+Position+Replacement. It comes out as Value Runs, and tells you how many runs above a replacement level player each position player was. Tomorrow, we’ll talk about the runs to wins conversion and the wins to dollars conversion.
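
Putting the four components together, the whole calculation fits in a few lines (a sketch; the names are mine, not FanGraphs’):

```python
def value_runs(wraa, uzr, position, pa, rep_per_600=20.0):
    """Runs above replacement for a position player:
    batting + fielding + position adjustment + replacement adjustment."""
    return wraa + uzr + position + rep_per_600 / 600.0 * pa

# Willie Bloomquist's three-year totals from above:
print(round(value_runs(-16, -3.8, 0.9, 644), 1))  # 2.6
```

Bloomquist comes out at roughly +2.6 runs above replacement across three seasons – almost exactly the baseline, which is the point.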


Using a comparative baseline is crucial, but is “replacement level” the best one? I’m intrigued by WSAB’s (Win Shares Above Bench) use of a bench player as the comparative baseline. I know Tom Tango has said some positive things about using an average bench player as the baseline. I don’t really have an argument that one baseline is better than the other. Do you have any thoughts on the issue?

It doesn’t really make a difference, honestly. You can calculate wins above bench, but then you have to calculate bench above replacement in order to figure out what a bench player is worth, and add the two together to figure out how much a player should be paid.

I don’t know, since 2002 Griffey seems like the quintessential replacement level player :)

Thanks for the comment about WSAB. I’m someone who doesn’t believe that “replacement level” has to mean “freely available talent” (which is how I interpret what Fangraphs is doing). There are two levels of discussion:

- Salary. Freely available talent is available at the minimum, but so is a lot of bench talent. It’s true that lots of free agents are signed for $1 million or more to be a bench player, but it’s also true that there are superstars who are paid the minimum because they’re first or second year players. I think that the “end of the roster” construct is too severe for salary comparisons.

- Playing time. The real value of using a replacement level is being able to compare players with different levels of playing time. Why use the replacement level of someone who isn’t even on a roster for a playing time baseline? I’d rather use someone on the bench who doesn’t get to play a lot. Just makes more intuitive sense to me.

Having said all that, the exact replacement level doesn’t matter a whole lot. In fact, it may not matter at all, as long as you’re within a reasonable range (and I think WAR is). What’s much more important is that your replacement level makes sense across all positions, particularly pitchers vs. non-pitchers.

The way I look at it is, replacement level (or FAT, if you like) is the point at which a player stops providing value to a team, due to the opportunity cost of giving him playing time over some other player that it doesn’t cost the organization any more to obtain. Think of it as the Mendoza Line.

If a player has negative value according to a baseline of “average” or “bench”, he still has value to some teams at some points in time, because a typical ballplayer (or even a typical bench player) isn’t easy to find.

That’s why FAT is the appropriate baseline to use for salary estimation – because 0 WAR is worth the league minimum salary, nothing more or less. (Hypothetically, at least – there is room to argue whether or not a particular WAR baseline is set correctly, or whether a player’s true-talent level is 0 WAR based on a sample of performance.) And this is because the rules do not allow you to pay a player below the minimum or to have fewer than 25 players on the roster for a game.

But the corollary isn’t true. The league minimum salary is worth more than 0 WAR. If you had a team of players who made the minimum salary — let’s call them the Marlins — they would presumably include some fine first- and second-year players, and they would win at least 60, maybe 70 games. The economics of baseball is more complicated than a simple FAT baseline would lead you to believe.

I don’t see any reason why replacement level has to equal “absolute zero” from a major league roster perspective. It’s really just a matter of preference.

Replacement/Bench level is about marginal gains, so it makes sense to compare what would happen if that player were not signed and no player were signed instead. Most likely, either a bench player would be promoted to starter and a AAAA player called up to get more at-bats on the bench, or a AAAA player would be signed. It’s clearly a judgment call.

The best way to approximate it would be to figure out which level of replacement/bench gives the best fit of free agent salaries to wins above that baseline. When Baseball Prospectus started doing MORP, they used WARP, which assumes an impossibly pathetic replacement level player that was nowhere near good enough. As a result, they came to the wrong conclusion that an 8-win player costs more than twice as much as a 4-win player. In reality, they probably should have just called the “8-win player” a “6-win player” and the “4-win player” a “2-win player”. That would have made the wins/dollars line a better fit instead of the convex curve they drew through the data. It’s relatively tough to believe that wins/dollars wouldn’t be a straight line if replacement level were clearly defined and you could remove the noise from performance; otherwise, there would be arbitrage opportunities on the free agent market, which shouldn’t happen. Ultimately, I think a simple OLS regression on wins above average and plate appearances would help pin down the proper point of comparison for replacement/bench level.
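
The fitting idea is easy to sketch. Here’s a minimal pure-Python version on invented numbers (the data are made up solely to show the setup; with a correctly chosen baseline, the fit should be nearly linear with an intercept near the league minimum):

```python
def ols(xs, ys):
    """Simple least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical free agents: wins above a candidate baseline vs. salary ($M).
wins = [1.0, 2.0, 3.0, 4.0, 5.0]
salaries = [4.9, 9.4, 13.9, 18.4, 22.9]
slope, intercept = ols(wins, salaries)
print(round(slope, 1), round(intercept, 1))  # 4.5 0.4
```

With these invented numbers the market pays about $4.5M per win and the intercept lands at $0.4M – the league minimum – which is the signature of a well-chosen baseline.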

I see a real problem with these positional adjustments. The numbers make sense for an NL team, as they all sum to zero. But for an AL team, they sum to -17.5 runs because of the DH. Are you then saying that the pitchers on an AL team are worth +17.5 runs compared to their NL counterparts because they don’t hit and have to pitch to a DH? Seems to me that these numbers just do not work in the AL or across the two leagues.

However, these replacement numbers for a team are simply the total of all their players, thus making the DH positional adjustment argument moot.

My point is that every AL team totals -17.5 runs while every NL team totals 0 runs in positional adjustments. These runs, divided by 10, have been used here at this site to predict wins. I am just looking for how this -1.75 wins (-17.5 runs) that every AL team starts out with is accounted for in the win predictions for AL teams. It seems to me that it would be easy to make this adjustment, but it also seems to me that this adjustment has not been made in the win predictions I have seen here to date.

Right – when summing up a team’s individual win values, there need to be some adjustments made. I’d encourage you to point that out in those respective posts.

Shouldn’t this disadvantage be cancelled out by the fact that a DH (non-Vidro division) is worth more than 17.5 runs more than a pitcher with a bat in his hands?

I’m a bit confused when looking at how many wins a team is projected to have when you add up all the players. You use a baseline of 50 wins and then add on each player’s projected Win Value? So each player starts out worth 20 runs, or 2 wins. In Eric’s article on the Giants, it starts with 50 wins and then adds each player’s ‘value’ to this 50. Does this double count the replacement level value? Or is the replacement level value not included in his calculations for each player’s projection?

A team made up of all replacement players would be expected to win 50 games. That is why you start at 50 and add on from there.

It’s actually more like 47 or 48. But, yea, it’s close to 50.

The problem with most of the valuations using replacement level to judge off-season moves is that replacement level is too low a standard to set at this time of year. The “replacement player” is the guy who is still available in June because he wasn’t good enough to make an MLB roster. But there are scores of players out there right now, available for league minimum, who will still be good enough to find a spot by the time the season starts.

Good GMs aren’t paying full rate for every win above “replacement” level. A more realistic baseline for off-season moves would likely be at something like 85%-90% of average runs. What is being used here for the most part is about 80% (maybe a bit less).

A team that paid all 25 men on its roster the minimum would have a payroll of about $10M. The average team payroll was about $80M above that. If you were really paying for every win above 50, the average 81 win team would then be paying for 31 wins. So 80/31 = $2.6M per win.

Now, set the FA baseline instead at 90%, and you get a more realistic result. At 90% a team wins about 66 games. So 15 wins below average. Thus, 80/15=$5.3M per win.
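
The arithmetic in the last two paragraphs generalizes neatly (a sketch; all figures are the round numbers used above):

```python
def dollars_per_win(payroll_above_min_m, avg_wins, baseline_wins):
    """Average payroll above the minimum ($M) divided by wins above a baseline."""
    return payroll_above_min_m / (avg_wins - baseline_wins)

print(round(dollars_per_win(80, 81, 50), 1))  # 2.6 ($M/win with a 50-win baseline)
print(round(dollars_per_win(80, 81, 66), 1))  # 5.3 ($M/win with a 66-win baseline)
```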

“But there are scores of players out there right now, available for league minimum, who will still be good enough to find a spot by the time the season starts.”

Sorry, but no, this isn’t true. Trying to set replacement level at 10-15% below average is just as wrong as WARP setting it at the 1899 Cleveland Spiders.

The 1899 Cleveland Spiders, based on pythagorean W-L, would have been expected to win 28 games in a 162 game season.

How about your team of replacement players who you say are -20 runs per 600 PA? Actual run scoring in MLB last season was 72.2 runs per 600 PA. So if you are at minus 20, you are at 52.2/72.2, or a replacement level of 72.3%.

Assuming your pitching/defense is equally marginal, you will also allow 72.2/.723= 99.9 runs per 600 PA. Plug that into pythag and you have a 38.4 win team. You have improved on “WARP”, but not by a heck of a lot.

What happens if you go into the FA market now and try and compete spending $4M per win above that replacement level? Well, assuming an average payroll of $80M above MLB minimum, you can buy yourself an additional 20 wins. Now you’ve got a 58 win team. Congratulations, you are better than the 1899 Cleveland Spiders. The problem is, you still aren’t better than the 2008 Washington Nationals.

But, this is the result you should expect if you are valuing the Tim Reddings of the world at over $6M a year (which I see some of these systems doing).

The 85% level I am suggesting would set the baseline for 600 PA at about -10.8 runs. I don’t think that’s far off for moves right now. And, keep in mind, I am suggesting you might use a lower replacement level during the season. A contending team at the trade deadline, for example, may well find it worth paying for even small improvements above replacement level, given the supply of talent available at that time. But in December and January, you really need to aim higher than to be shelling out $5M+ per year contracts to below average players.

You’re making the same flawed assessment that doomed Clay – a guy who is both replacement level at hitting and replacement level at fielding isn’t a replacement level player – he’s a Double-A scrub.

A replacement level player is, in general, replacement level with the bat and league average at defense. Obviously, there are replacement level players who are better hitters and worse fielders, but the mean of the group is something like -20 offense, +0 defense.

Our replacement level is set at right around a .300 win% for a team, or ~48 wins per season. It’s the generally accepted replacement level among every serious analyst.

If you want to argue that replacement level should be a .400 win%, you need to establish that there are enough freely available/league minimum players to support that assertion. I’m really confident that there aren’t, but if you want to try to re-do all the work that’s already been done, knock yourself out.

That’s accounted for.

I realize runs allowed comes down to both pitching and defense; but the pitching numbers being used in these analyses already generally assume league average defense.

You’ve got 25 roster spots. If you fill 14 spots with position players who average -20 runs per 600 PA (assuming average defense), and then fill the pitching spots with pitchers who also average that level of performance (assuming average defense), that’s still a 38 win team, not a 48 win team.

My 85% level still only gets the winning percentage to .358. A 90% level gets to a win % of .406.

So we agree the level should be about a .400 win percentage; I just don’t think that’s what you will get if you allow both the hitting and pitching to each perform at a -20 run per 600 PA level, assuming average defense.

Again, no.

-20 runs per 600 PA * 6250 PA = -208 runs. The average AL team scored 775 runs last year. 775 – 208 = 567. That’s your offensive replacement level for a team.

Average FIP for an AL team was 4.35, and the average AL team threw 1450 innings. Based on that, an average AL pitching staff was responsible for 700 runs. A replacement level FIP set at 5.40 (combined starters and relievers; obviously there are different baselines for different roles) over 1450 innings works out to 5.40 / 9 * 1450 = 870 runs allowed.

567^2 RS / (567^2 RS + 870^2 RA) = .298.

.298 * 162 = 48 wins for a replacement level team.
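
The chain of numbers above can be checked in a few lines (a sketch using the classic Pythagorean exponent of 2):

```python
def pythag_wins(rs, ra, games=162, exponent=2.0):
    """Expected wins from runs scored and allowed (Pythagorean estimate)."""
    pct = rs ** exponent / (rs ** exponent + ra ** exponent)
    return pct * games

# Replacement-level AL team from above: 567 runs scored, 870 allowed.
print(round(pythag_wins(567, 870)))  # 48
```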

Replacement level is about .300 for a team. That does not mean that replacement level pitchers are .300 and that replacement level hitters are .300. There are different replacement levels for position players and pitchers, but a team full of replacement players would be around .300.

OK, my last paragraph above is incorrect. I shouldn’t have said we agree it should be .400. You say .300; I say .400. That’s mostly a philosophical difference. You are right that the burden is on me to show that there are enough “freely available/league minimum players” to support that, but I also think it’s common sense that there would be more available in December/January than in June. And I don’t think it’s shrewd GMing to tie up a limited number of roster spots at this time of year with below average players on high salaries.

On the math, two points:

1. I think you are dropping defensive runs allowed from the calculation above. That is, a 5.40 FIP over 1450 innings gives 870 runs allowed, but you still have to add the defensive runs back in, since FIP was designed to tie in with ERA, not RA. The average AL RA last season was 4.72. So 870 / (4.35/4.72) = 944 RA.

2. I think you need to be careful about applying AL specific measures to NL players like Chase Utley and Chipper Jones. Your calculation above using NL numbers:

The average NL team scored 734 runs last year. 734 – 208 = 526. That’s your offensive replacement level for a team.

Average FIP for an NL team was 4.28, and the average NL team threw 1446 innings. Based on that, an average NL pitching staff was responsible for 688 runs.

A 5.40 FIP team would have allowed 868 runs. Since FIP/RA for the NL was .9187, I would use 868/.9187=945 runs allowed.

[526^2 RS / (526^2 RS + 945^2 RA)] * 162 games = 38.3 wins.

That said, a 1.8 exponent is probably better, and that will get you to about 42 wins. I don’t think the real disagreement here is much over the math (but I do appreciate you taking the time to explain how the numbers were derived).
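
The exponent sensitivity mentioned here is easy to verify (a sketch; 2.0 is the classic exponent, 1.8 the commonly suggested refinement):

```python
def pythag_wins(rs, ra, games=162, exponent=2.0):
    """Expected wins from runs scored and allowed (Pythagorean estimate)."""
    pct = rs ** exponent / (rs ** exponent + ra ** exponent)
    return pct * games

# NL replacement team from the comment: 526 runs scored, 945 allowed.
print(round(pythag_wins(526, 945), 1))             # 38.3 with exponent 2
print(round(pythag_wins(526, 945, exponent=1.8)))  # 42 with exponent 1.8
```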

Acerimusdux,

If I understood you correctly, you say that spending $80MM above league minimum buys only 20 wins, and you therefore seem to conclude that replacement level should be closer to 61 wins. This misses the point – every team has a significant number of players who are above replacement level but have less than six years of service time. Players who have not yet reached arbitration are especially valuable: you pay less than their free market value and still get their production. Cole Hamels was probably worth over $20MM last year and provided his services for $0.5MM.

I suspect that for players with less than six years service time, the average teams gets about 20 wins above replacement level more than their salaries would dictate. Add that to the 48 wins that a replacement level team would get and figure that about $52MM of that payroll is spent on free agents, and you conclude that about 13 wins are bought via free agency. Adding those together, 48+20+13=81, and on average you have 81 wins.

The standard deviation in team performance is about 11 wins. Part of that is randomness from a binomial distribution over 162 games – if all teams were identical, the standard deviation would be about 7 wins by chance alone. Therefore, a little statistics tells us that the standard deviation in team ability is approximately 8-9 wins. I would guess that most of this 8-9 win spread is due to variance in the savings on players with less than six years of service time more than to variance in the amount spent on free agents.
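
That decomposition is just variances adding in quadrature; here it is spelled out (a sketch of the commenter’s back-of-envelope numbers):

```python
import math

games = 162
observed_sd = 11.0  # observed sd of team wins

# If every team were a true .500 coin, wins would be Binomial(162, 0.5):
luck_sd = math.sqrt(games * 0.5 * 0.5)

# The remaining spread is attributed to differences in true team ability:
talent_sd = math.sqrt(observed_sd ** 2 - luck_sd ** 2)
print(round(luck_sd, 1), round(talent_sd, 1))  # 6.4 9.0
```

The luck component is ~6.4 wins (the comment rounds it to "about 7"), and talent comes out right at the top of the quoted 8-9 win range.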

This is where being “strong up the middle” can be extremely valuable because the upside of outperforming the average offensive value is so large. Take the 2008 Phillies for example. Jimmy Rollins played SS and provided $24.1MM of value for $8MM, Chase Utley played 2B and provided $36.8MM of value for $7.8MM, Shane Victorino played CF and provided $17.9MM of value for $0.5MM, and Carlos Ruiz played C and provided $2.8MM of value for $0.4MM. That’s $81.6MM of value for $16.7MM. I imagine most playoff teams have huge discrepancies like that. Only Rollins has six years of service time among those four.

Yes, I understand you have all that below market rate talent. But most of those guys aren’t star players like Cole Hamels. Most of them are even below average players. And, every one that you carry also does take up a roster spot.

So you are trying to add those 13 wins with a limited number of FA roster moves. And, even then, you aren’t really trying to be “average”, you are trying to win it all. So really you are still trying to find a way to add more like 20 wins when you start allocating those dollars (and they don’t all go to FA, some go to getting players like Utley and Rollins under contract).

For me, the availability of a significant pool of non-FA talent that will perform more cheaply makes it even more important that you generally allocate those FA dollars and remaining roster spots toward players who are at least average or better. I would find it hard to justify spending much more than the MLB minimum on a player who was expected to be 1 win below average. A roster of 25 of those guys would only win 56 games. That’s something you can probably accomplish for near the minimum.

Your definition of replacement level is really leading you astray here.

If you are a borderline playoff team and you replace a replacement level player with a player who is 1 win better than replacement level but 1 win below average, you increase your chances of making the playoffs by about 5%. That and the extra win end up adding to your revenue by about $4-5MM.

The idea is not to get 25 guys who are below average. The idea is to fill in the 10 spots or so that you have no pre-FA above replacement level players with free agents. Econ 101: spend if marginal revenue is at least equal to marginal cost. Here, marginal revenue comes from the marginal revenue of an additional win (in expectation) times the number of wins extra that you will receive by signing a guy. Marginal cost is dollars that could have been allocated elsewhere.

Don’t think of signings as “taking up a roster spot”. Think of them as replacing a player who is replacement level with a player who is not replacement level.

“Assuming your pitching/defense is equally marginal,”

Not a good assumption.

I use roughly:

2.25 wins per 700 PA (non-pitchers) above replacement for the average nonpitcher, of which there are 8.65 per team

+.11 wins per 9 starter innings, for the average starter (65% of innings)

+.05 wins per 9 relief innings, for the average reliever (35% of innings)

Add it up: 2.25 * 8.65 + .11*162*.65 + .05*162*.35 = 34 WAR

Since the average team has 81 wins, then the replacement level would be 34 below that, or 47 wins, for a win% of .290.

(Because wins are not totally linear, an actual such replacement team will win closer to .300.)
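
The tally works out as stated (a direct transcription of the figures above):

```python
# Average-team WAR from the components listed above:
nonpitcher_war = 2.25 * 8.65      # 8.65 non-pitcher "slots" per team
starter_war = 0.11 * 162 * 0.65   # starters: 65% of innings
reliever_war = 0.05 * 162 * 0.35  # relievers: 35% of innings
total_war = nonpitcher_war + starter_war + reliever_war
print(round(total_war))       # 34
print(81 - round(total_war))  # 47-win replacement baseline
```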

Willie Bloomquist is my main man as the perfect explanation of a replacement level player. The only reason he’s been allowed to accumulate as many PA as he’s had is because of his “local flavor”. This is good for us as sabermetricians, because he gives us the sample size we need to prove our point. Most true replacement players will be over and done in under 500 PA.

Why do we use 600 PA, and not 700? It seems to me that the average team would need about 700 PA from each lineup slot or position.


Let me start off saying I really appreciate all the work done on this site, visit it every day and enjoy it a ton. Keep it up guys

One question: have there been studies done to try to look at the position-specific assumptions for a replacement-level player?

It would seem to me that the skill set required to play replacement level corner OF or 1B is more plentiful than that for, say, SS or CF – mostly from a defensive perspective.

Now obviously the league average offensive numbers are higher at the former two positions (so the replacement level “unskilled” player would have to hit more than the skilled player), but again just subjectively, it seems like there are guys bouncing around the minors or on benches – guys with isolated power and/or isolated walk ability, maybe some platoon struggles – who would not murder your team defensively in LF, but not exactly tons of guys with average defense at more skilled defensive positions like SS or CF.

Moreover, if you could find a roughly average stick in the minors to play one of these “unskilled” spots, it doesn’t seem (to me again) likely that their defensive deficiencies (given the demands of positions like LF or 1B) would be as large as the offensive deficiencies of a league-average-defense CF or SS replacement. That’s just on the basis of skill scarcity; after all, there are fewer SS and CFers on the planet than 1B/LF/DHs.

Again, maybe these are erroneous preconceived notions, but in general it seems reasonable IMO to expect an injury at a more scarce, skill position to have more of a negative effect on a team than an injury at a LF/1B position. This would then obviously have an impact on the replacement level value calculation with respect to positions.

I am assuming you all have given this some thought; any studies/articles that really crystallize the decision to use a universal replacement value across positions?