Overcoming Imperfect Information
When a team trades a veteran for a package of prospects, only minor-league data and the keen eyes of scouts can be used to assess those players' likely future major-league contributions. Teams have successfully relied on the trained eyes of scouts for generations, but of course the analytics community wants its foot in the game too. Developments such as Chris Mitchell's KATOH system make some strides, as comparing historical information is helpful. Does a prospect's rank on MLB.com's or Baseball America's top-prospect list really indicate how productive he will be in the major leagues? Of course, baseball players are human, and production will always vary as the result of numerous factors that could change the course of a career. Perhaps a player meets a coach who dramatically turns his game around, or a pitcher discovers a newfound talent for an impressive curveball that jumps him from fringe prospect to MLB-ready. The dilemma of imperfect information will always be present, so teams must use the best resources available to tackle the problem.
To start my analysis of imperfect information, I look at the top 100 position prospects from 2009 using data from Baseball-Reference.com. I break the prospects into three groups based on their prospect ranking: position players ranked 1–10, 11–20 and 21–100. I then look at the value those prospects contributed in their first six seasons in the major leagues, as well as their to-date total contributions, using fWAR. I choose the first six seasons of a player's career because that is how long a player is under team control before reaching free agency. This study does not take into account any contract extensions that may have been signed before a player reached free-agent eligibility. For players who have not yet been in the MLB for six full seasons, I look at their total contributions so far. The general idea for this study was inspired by a 2008 article by Victor Wang that looked at imperfect prospect information.
I convert the prospects' production into monetary value based on the relative WAR values commanded in the free-agent market that year, using fWAR as the best available measure of total value. When teams trade for prospects, they understand that they are trading wins today for wins in the future. Since baseball is a business and teams care about their on-field performance each year, my analysis needs to account for that fact. To do so, I assume that, all else equal, a win today is more valuable than a win in the future. I apply an 8% discount rate to each prospect's WAR value to create a discounted WAR value (dWAR). The exact discount rate can be debated, but 8% seems appropriate for the time frame examined.
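As a rough sketch of that discounting step, the calculation looks like the following; the 8% rate is the one assumed above, while the function name and the sample WAR line are mine, not the study's:

```python
# Discounted WAR (dWAR) sketch: each future season's WAR is discounted
# back to the present at the assumed 8% annual rate. The six-season
# WAR line below is hypothetical, not any real player's.
DISCOUNT_RATE = 0.08

def discounted_war(war_by_season):
    """Present value of a season-by-season WAR sequence (year 0 undiscounted)."""
    return sum(war / (1 + DISCOUNT_RATE) ** t
               for t, war in enumerate(war_by_season))

# A hypothetical prospect worth 2 WAR in each of his six controlled seasons:
print(round(discounted_war([2, 2, 2, 2, 2, 2]), 2))  # about 9.99 dWAR, not 12
```

The gap between 12 raw WAR and roughly 10 dWAR is exactly the "wins today are worth more than wins tomorrow" premium described above.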
From here, I break the prospects into a few subgroups based on their average WAR contributed over their first six seasons in the major leagues, following the guidelines laid out in other studies with some slight modifications. Players with zero or negative WAR per year are labeled busts, players averaging between 0 and 2 WAR per year are contributors, players with 2–4 WAR are starters, and players with 4+ WAR are stars. As described previously, I estimate each player's monetary savings to his team by taking his monetary value based on WAR performance and comparing it to what similar production would have commanded in the free-agent market that year. There is some debate over the value of one WAR in the free-agent market, but my calculations show that about $7 million bought one WAR leading up to the 2009 season. Victor Wang suggests that the price of one WAR inflated by about 10% from year to year. I find the present value of each player's WAR, then divide it by the $7 million per WAR that would have been commanded in the free-agent market to find a player's effective savings to his team based on production.
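A minimal sketch of the bucketing and the free-agent-savings conversion just described; the function names are mine, and this simplified per-year version ignores the season-by-season discounting applied in the actual study:

```python
# Bucket thresholds and the $7M/WAR 2009 free-agent price come from the
# text; the 2.5 WAR/year example player is hypothetical.
DOLLARS_PER_WAR_2009 = 7.0  # millions per WAR on the 2009 free-agent market

def classify(avg_war_per_year):
    """Bucket a player by average WAR per year over his first six seasons."""
    if avg_war_per_year <= 0:
        return "bust"
    if avg_war_per_year < 2:
        return "contributor"
    if avg_war_per_year < 4:
        return "starter"
    return "star"

def annual_savings(avg_war_per_year):
    """Millions saved per year vs. buying the same production in free agency."""
    return avg_war_per_year * DOLLARS_PER_WAR_2009

print(classify(2.5), annual_savings(2.5))  # starter 17.5
```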
Position Prospects Ranked 1–10
        Bust     Contributor  Starters  Star     AVG WAR/Y
Count   1        2            5         2        2.83
Share   10.00%   20.00%       50.00%    20.00%
                            Bust     Contributor  Starters  Star
WAR/Y                       -0.43    1.53         2.73      5.17
Probability                 10.00%   20.00%       50.00%    20.00%
PV Savings/Y (in millions)  -1.88    8.46         10.91     27.98
Interestingly enough, this prospect class panned out quite well compared to some other recent draft classes. The only bust in terms of discounted WAR turned out to be Travis Snider of Toronto, who was ranked the sixth-best prospect in 2009 but managed only a cumulative WAR slightly above zero in his first six seasons. Though the top 10 position-player prospects from this class feature names such as Jason Heyward and Mike Moustakas, the player who contributed the greatest WAR over his first six seasons from the top 10 ranking was Buster Posey of San Francisco, who posted nearly 6 WAR a year. It is important to understand that the savings a player provides his team based on his production does not indicate any "deserved" salary for that player; it merely indicates the amount of money the team would have had to spend in the free-agent market to acquire that exact same production. The top 10 position-player prospects from this class turned out to be very productive for their respective teams, with a 70% chance of being either a starter or a star.
Position Prospects Ranked 11–20
        Bust     Contributor  Starters  Star     AVG WAR/Y
Count   5        2            1         2        2.16
Share   50.00%   20.00%       10.00%    20.00%
                            Bust     Contributor  Starters  Star
WAR/Y                       -0.67    1.60         3.56      5.71
Probability                 50.00%   20.00%       10.00%    20.00%
PV Savings/Y (in millions)  -3.21    8.36         19.10     30.90
The next group is the position players ranked 11–20. As perhaps expected, there are more busts in this group of ranked prospects, and the variation of the small sample is spread through the rest of the categories. Giancarlo Stanton, the 16th-ranked prospect, and Andrew McCutchen, the 33rd-ranked prospect, turned out to be the two stars from the list. As the chart shows, the probability of getting a bust at this ranking level is much higher than in the 1–10 group. The variance does show, however, that player outcomes can still be promising at this level: the chance of a player becoming a star was identical to the first group's, and there was a 50% chance of a player being at least a contributor. In total, four of the top 20 prospects from 2009 have turned out to be stars to this point in their careers, though not all have reached six full service years in the majors.
Position Prospects Ranked 21–100

The next group of charts shows the rest of the top 100 ranked position players. The chart shows there is much more potential for busts in this range; however, we must keep in mind that the variance will differ in this group simply because its sample is larger than the first two groups'. Nearly 40% of position players ranked 21–100 turned out to be busts; in fact, bust is the most common category among these players. Only Freddie Freeman of Atlanta managed to clear the 4+ dWAR/year threshold to qualify as a star. When drafting a player, a team never knows for certain the production the pick will provide in the major leagues, no matter the pick number. In addition, prospect rankings based on minor-league performance are still not a completely accurate indicator of future MLB productivity. Higher-ranked prospects in 2009 did have a higher probability of contributing more to their major-league clubs, though rankings are understandably volatile. A variety of factors play into the volatile nature of prospect outcomes and the prospect risk premium. Part of the reason I chose to look only at position players is that they are traditionally safer from injury than pitchers, and therefore carry slightly less of a risk premium.
Looking at the variance of dWAR for the prospect group, the distribution is skewed right, which is to be expected because prospects are not all equally strong and most will not become stars. It also makes sense because in any given year only a few top prospects become very strong players, while most hover around average. We also see that the interquartile range runs from about 0.5 dWAR per year to slightly above 2.5 dWAR per year. A team could therefore expect production in that range from a given prospect ranked 1–100, varying slightly with rank group. A useful extension would be a distribution chart for each rank group, but in the interest of brevity I do not do that here.
New ways of evaluating both minor-league and amateur players to relieve some of the prospect-risk premium are useful, although risk will always be present. In the next part of this study, I try to discover statistically significant correlations between college and major-league performance in order to reduce the noise of the prospect-risk premium. One of the great things about baseball's player-development structure is that it allows players with the right work ethic and dedication, as well as those overlooked in the high rounds of the draft, to prove themselves in the minor leagues; that can seldom be said of other professional sports. The famous example is Mike Piazza, one of the last overall picks in his draft class, who worked his way to a Hall of Fame career. With perfect information, production would fall off monotonically with rank, each ranked prospect achieving a higher dWAR than the next-ranked prospect. Some may attribute the imperfect-information dilemma to drafting or the evaluation of minor-league performance, and some may attribute it to differences in player-development systems. Some may also rationally say that both the players and the scouts are human and will not be perfect. Prospect rankings for a given year are based on several factors, including a player's proximity to contributing at the major-league level. The most talented minor-league players could sit lower in a given year's ranking because of their age or development level, which introduces some unwanted variance into the data. Looking at just the top 100 prospects helps mitigate this problem but does not make it disappear completely. It is difficult to know when teams plan to call up prospects anyway; it depends on the needs of the team. Some make the jump at 20, while others make the jump at 25 or even later.
This type of analysis could be useful for estimating the opportunity cost of a trade involving prospects, in terms of both financial trade-offs and present-versus-future on-field production. Many factors play into the success of a prospect. When evaluating any player, qualities such as makeup and work ethic are just as important as measurable statistics. Evaluating college and high-school players for the annual Rule 4 draft can be especially difficult because of the limited statistical information that is accessible. Team scouts work very hard to accurately evaluate the top amateur players in the United States and around the world in order to put their clubs in good position for the draft. Despite the immense baseball knowledge scouts bring to player evaluation, statistical analysis of college players is still being explored and used to complement traditional scouting reports. The prospect-risk premium will always be something teams must deal with, but efficiently allocating players into a major-league pipeline is essential for every front office.
There have been a few other articles on sites such as FanGraphs and The Hardball Times on statistical analysis of college players. Cubs president Theo Epstein told writer Tom Verducci that the Cubs' analytics team has developed a specific algorithm for evaluating college players; the process involved sending interns to photocopy old stat sheets on college players from before the data was recorded electronically.
Though I do not doubt the Cubs have a very accurate and useful algorithm, it is not publicly available for review, and understandably so. For the several articles that tackle this question on other baseball-statistics websites, however, I think there is room for improvement. First, the various complex statistical techniques used to compare college and MLB statistics all yield about the same disappointing results, meaning some of the models are probably unnecessarily complicated. Second, though the authors may imply it by default, statistical models in no way account for the character and makeup of a college player. Even in the age of advanced analytics, the human and leadership elements of the game still hold great value, so statistical rankings should not be taken as a precise recommended draft order. In addition, they do not take into account a player's injury history and risk. Teams can increase their odds of adding a future starter or star over a player's first six seasons by drafting position players, who have historically been shown to be safer bets than pitchers due to lower injury risk.
The model in this post attempts to find statistically significant correlations between players' college stats and their stats over their first six seasons in the MLB. As discussed above, six seasons is how long a team controls a drafted player before he reaches free agency and gains the power to negotiate with any team. However, the relationship between college batting statistics and MLB fWAR can only go so far because of the lack of fielding and other data for college players.
The first thing I did was merge databases of Division I college players for the years 2002–2007 with their statistics for their first six years in the MLB. There is some noise in the model, since some players drafted in the later years of my sample have not yet spent six years in the MLB, which is accounted for. I look only at the first 100 players drafted each year. I then calculate each player's college-career wOBA per the methods recommended by Victor Wang in his 2009 article on a similar topic. However, since wOBA weights are not recorded for college players, the statistic is an approximate wOBA that uses the weights from the 2013 MLB season. Since wOBA weights do not vary heavily from year to year, it will do the trick for this analysis. For MLB players, wOBA and wRC+ have about a 97% correlation (varying slightly with the size of the sample), so I did not feel it necessary to calculate wRC in addition to wOBA. In fact, with ordinary least squares and multiple least squares regression techniques, calculating both statistics would have introduced pairwise collinearity problems, so it would have been pointless. Along with an ordinary least squares regression, I also use multiple least squares and change the functional form to double-logarithmic. (A future study I hope to tackle soon is to use logistic regression to calculate the odds of a college player ending up in each of the four WAR groups over his first six seasons in the majors.)
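The wOBA calculation itself can be sketched as follows. The weights are FanGraphs' published 2013 MLB linear weights (quoted from memory here, so verify before reuse); the field names and the sample stat line are hypothetical, and intentional walks are omitted from the denominator since college data rarely records them:

```python
# 2013 MLB linear weights (FanGraphs), applied to a college stat line as
# the text describes. This is an approximate wOBA, not a true college wOBA.
W_2013 = {"BB": 0.690, "HBP": 0.722, "1B": 0.888,
          "2B": 1.271, "3B": 1.616, "HR": 2.101}

def college_woba(s):
    """Approximate wOBA from a raw counting-stat dict (AB, BB, HBP, SF, 1B, 2B, 3B, HR)."""
    num = sum(W_2013[k] * s[k] for k in W_2013)
    den = s["AB"] + s["BB"] + s["HBP"] + s["SF"]  # IBB omitted (unavailable)
    return num / den

# Hypothetical college career line:
line = {"AB": 600, "BB": 80, "HBP": 10, "SF": 5,
        "1B": 120, "2B": 40, "3B": 5, "HR": 20}
print(round(college_woba(line), 3))  # about .388
```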
Due to limitations in the data, as well as how few top-100 picks actually make it to the MLB, the analysis is somewhat limited, yet it still produces some valuable results. Interestingly, though perhaps unsurprisingly, my calculated wOBA for each player's college career shows a strong and statistically significant relationship with wOBA produced in the MLB. To a lesser extent, college wOBA also shows a statistically significant relationship with MLB-produced WAR, even though this study does not take into account defense, baserunning, etc. Looking at a correlation matrix, I find that college wOBA and MLB wOBA have a pairwise correlation of about 25%. The matrix shows a similar pairwise correlation of about 25% between college wOBA and MLB WAR, though at a lower level of confidence. Using an ordinary least squares regression, I try different functional forms to further evaluate the strength of the relationship between college and MLB statistics.
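For readers unfamiliar with the mechanics, a pairwise correlation check of this kind can be done in a couple of lines. The two series below are made-up stand-ins for college and MLB wOBA, purely to show the mechanics; the ~25% figure in the text comes from the study's own data:

```python
import numpy as np

# Made-up college and MLB wOBA series for six hypothetical players.
college_woba = np.array([0.360, 0.395, 0.410, 0.430, 0.455, 0.470])
mlb_woba     = np.array([0.305, 0.330, 0.310, 0.345, 0.325, 0.350])

# corr is a 2x2 matrix; the off-diagonal entry is the pairwise correlation.
corr = np.corrcoef(college_woba, mlb_woba)
print(round(corr[0, 1], 2))
```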
The first model confirms a fairly strong relationship, statistically significant at the 1% level, between college and MLB wOBA, with a correlation coefficient of about .25. College strikeout-to-walk ratio is also statistically significant at the 1% level, albeit without a strong correlation coefficient. Even so, the matrix indicates that players who are less prone to the strikeout in college see, on average, better success in the MLB. Interestingly enough, college wOBA and strikeout-to-walk ratio are about the only statistically significant statistics I can find after running several models with different functional forms. Per the model, we can also say that college hitters with extra-base-hit ability likely have better prospects in the majors. The R-squared for model one is about .20, which is not terrible but certainly not enough to provide a set-in-stone model. The constant in the regressions seems to capture noise that is difficult to replicate, lending insight into the extreme variance and unpredictability of the draft.
For model 2, I use a double-logarithmic functional form with a multiple least squares regression to see how MLB wOBA varies with college wOBA and strikeout-to-walk ratio. The results of this regression are slightly stronger and lend more support to the conclusion that the calculated college wOBA is a strong predictor of MLB wOBA.
According to the results of the double-log model, a one percent increase in MLB wOBA corresponds to about a 36% increase in college wOBA, all else equal. (Since the model is in double-log form, the coefficients are interpreted in percentage terms, as elasticities.) More simply: on average and all else equal, a player will have a one percent higher wOBA in the MLB for every 36% increase in his college wOBA relative to other players. The coefficient is significant at the one percent level. In addition, a one percent increase in MLB wOBA corresponds to about a six percent decrease in college strikeout-to-walk ratio. Again, the R-squared is about 0.20.
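The mechanics of a double-log fit can be sketched with synthetic data. Note that "one percent of MLB wOBA per ~36% of college wOBA" implies an elasticity near 1/36 ≈ 0.03, and that is the value baked into this toy example; the study's actual coefficient comes from its own data, and the noise-free relationship here is purely for clarity:

```python
import numpy as np

# Synthetic college wOBA values and a noise-free MLB wOBA generated with a
# known elasticity of 0.03, to show how the log-log slope recovers it.
rng = np.random.default_rng(42)
college = rng.uniform(0.330, 0.480, 200)   # synthetic college wOBA
mlb = 0.320 * (college / 0.400) ** 0.03    # elasticity 0.03 baked in

# Fit ln(mlb) = a + b * ln(college); the slope b is the elasticity.
b, a = np.polyfit(np.log(college), np.log(mlb), 1)
print(round(b, 3))  # recovers the baked-in elasticity, 0.03
```

With real data the fit would of course include the noise term and the strikeout-to-walk regressor, but the interpretation of the slope is the same.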
Perhaps the most interesting thing these regressions show is that college batting average has almost no correlation with MLB success. This may be a little misleading, because hitters who get drafted in high rounds and do well in the MLB will likely have had high college batting averages, but the regressions show there are other things teams should look for in their draft picks besides a good batting average. Traits such as a low strikeout total, especially relative to walks, help indicate a player's pure ability to get on base. When evaluating college players, factors such as character, work ethic and leadership ability are just as good indicators of success for strong college ballplayers. Perhaps the linear-weights measurements used in wOBA calculations are on to something. Accurate weights obviously cannot be applied to college statistics without the proper data, but comparisons using MLB weights for college players can still be useful. In addition, it is well known that position players are traditionally safer high-round picks than pitchers due to injury risk. I would argue that strong college hitters are often the most productive top prospects, while younger pitchers who can develop in a team's player-development system can be beneficial to a strong farm system and pipeline to the major leagues. Many high-upside arms can be found coming out of high school, rather than among power college pitchers, and arms from smaller schools are often overlooked due to the competitive environment they play in. Nevertheless, hidden and undervalued talent exists that could yield high-upside rewards for teams, both financially and productively.