I’d like to see this compared to the bottom half of the first-round/supplemental picks, such as those teams get as compensation for Type A free agents. You always hear fans of a team say that they’ll just “take the draft picks” rather than take a couple of good prospects who have had success at higher levels. What are the odds that these picks become good players? How many of them become 12+ WAR players, as compared to those that become busts or average players? It’d be interesting to know, on average, whether it’s better to take good prospects (as opposed to the team’s elite prospects, since we know it’s a no-brainer to take two of a team’s best prospects) or the draft picks. Of course, you’d have to define what a “good” prospect is, what level (AA, AAA, etc.), and other factors, so it’s probably a little complicated, but it seems to me that would be a very valuable tool for GMs.
We don’t have any Jason Kendall types in Pittsburgh anymore. He inherited the working man’s torch from Van Slyke. Garrett Jones is the closest we’ve got at this point, but it’s just not the same. Cutch and Pedro could make that bottom bullet point, but they come from the school of stardom, not the workhorse school of Van Slyke and Kendall.
If you can find a copy of BP’s Baseball Between The Numbers, there’s a whole article on that subject. The short version (from memory) is that there’s a relatively smooth curve of rapidly declining value as you go from the 1st overall pick to the 30th pick. #1 overall draft picks on average do produce more value than any other player in their draft.
I’d be very interested in whether this has been changing over time. Has drafting and scouting improved from 10 years ago, indicated by an increasing correlation, or are we still just as (in)accurate in guessing how each person will do?
Erik, thanks for reminding us quantitatively what prospects are worth, even really good ones. I did something kind of similar a few years back using BA’s top 100 prospects lists, though I didn’t use WAR at the time: http://goo.gl/kzzk
The top 3 were the Jays, Mariners and Red Sox while the 3 worst were the Giants, Dodgers and… Royals (surprise?).
The A’s were among the top 5 thanks to Chavez, Mulder and Zito, but based on their first-round picks alone it wouldn’t be prudent to draw any generalizations about their ability to draft and develop players.
I’m glad that more recent studies using better available data still validate my overall conclusions.
That is something most people miss: the odds of a first-rounder working out are pretty bad.
Another conclusion I covered was that the odds of selecting a good player were far better with the top 5 picks than with the last 10 picks of the first round, basically the picks that playoff or near-playoff teams get in the draft.
The difference (roughly 4X) is significant enough to suggest that winning teams are at a real disadvantage relative to losing teams when rebuilding via the draft, which is why playoff teams have regularly signed players who had top-of-the-first-round talent but fell due to signability issues. I would like to see this data sliced up that way.
This is also why comparing how teams did in the first round is a mostly useless exercise, given the low probabilities involved for the later picks. At roughly a 12% probability (the sum of the top two categories above) of finding a good-to-great player (the main reason people are interested in the draft), after 5 picks there is still over a 50% chance of coming up with no good player. And even after 10 picks, there is still nearly a one-third chance (roughly 28%) that a team ends up not selecting a good player, simply from random luck.
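The back-of-the-envelope arithmetic above can be checked in a few lines of Python, assuming (as the comment does) a flat ~12% chance of a good player per pick and independence between picks:

```python
# Chance that none of n independent first-round picks yields a good player,
# using the comment's assumed ~12% success rate per pick.
p_good = 0.12

def prob_no_good(n_picks: int, p: float = p_good) -> float:
    """Probability that every one of n_picks independent picks misses."""
    return (1 - p) ** n_picks

print(f"5 picks:  {prob_no_good(5):.1%} chance of zero good players")
print(f"10 picks: {prob_no_good(10):.1%} chance of zero good players")
```

This reproduces the over-50% figure for 5 picks (about 52.8%) and lands near 28% for 10 picks.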
Winning teams find little success drafting at the back of the first round and thus had to adjust by signing players considered unsignable in later rounds. The Giants appear to have started that trend by signing Travis Ishikawa in a much later round for roughly first-round money, and other teams have followed; now teams regularly select prospects who fell due to signability issues in the late first round.
In addition, the teams considered draft successes tend to be those who had those early picks, like the A’s, because they were losing at the time. Compare that with how the A’s did during the Moneyball draft and subsequent drafts, when they were winners, to see the difference. Same guy, same organization, much different results.
Lastly, I would add that my categorization methodology is probably a better way to analyze the draft than comparing by total WAR. It is not as though teams can control how much WAR their best pick produces; it is a crapshoot just to find a good player, let alone the best one. So saying that one team has the most WAR does not mean much if they simply lucked into selecting the player who happened to generate the most WAR.
Basically, the categories here are like the ones I chose. Above 18 WAR are the stars, 12.1–18 WAR are the good players, 6.1–12 are the useful players, 1.6–6.0 are the marginal players, and 1.5 and under are the busts (I was kinder and called them “The Rest,” but it is what it is).
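As a sketch, the buckets described above could be coded up like this (the thresholds come from the comment; the label names are my own shorthand):

```python
# Hypothetical bucketing of a career WAR total into the comment's categories.
def war_category(war: float) -> str:
    if war > 18:
        return "star"        # above 18 WAR
    if war > 12:
        return "good"        # 12.1-18 WAR
    if war > 6:
        return "useful"      # 6.1-12 WAR
    if war > 1.5:
        return "marginal"    # 1.6-6.0 WAR
    return "bust"            # 1.5 WAR and under ("The Rest")
```

For example, `war_category(14.0)` lands in the good bucket, while `war_category(0.5)` is a bust.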
They seem kind of pre-selected, though, so I think it would be better to go through the list and create ranges that hold together better as a methodology, whether you use standard deviations (say, what was the mean WAR anyhow?), look at how the data group together, or whatever (I defined statistical levels plus playing/usage-time standards).
Though I must say that the splits here are roughly like what I found for the first round, so perhaps I’m being too picky about this.
Another good study of the draft, using BP’s VORP, was by someone on Son Of Sam Horn’s blog-site a few years back.