We’ve spent the past few weeks taking a look at combined rankings for each organization, going division-by-division. I wasn’t really sure what I was going to find, but my goal was to take a look at the two main overall aspects of a prospect – his talent/reasonable ceiling and his risk of getting there – and see how farm systems graded out. The traditional 1, 2, 3 ranking system is fine because we’re ultimately looking at an educated, subjective process, but a simple list doesn’t show the audience where the real gaps lie and where there’s negligible difference. My hope was to find a way to surface these differences, and while there is certainly room for improvement, I believe it has led to some interesting results.
Averaged Publication Lists
| Team | Pub 1 | Pub 2 | Pub 3 | Pub 4 | Pub 5 | Pub 6 | AVG | RANK |
|---|---|---|---|---|---|---|---|---|
| New York (NL) | 16 | 10 | 12 | 14 | 14 | 7 | 12.17 | 12 |
| New York (AL) | 11 | 14 | 14 | 10 | 16 | 12 | 12.83 | 13 |
| Los Angeles (NL) | 19 | 21 | 19 | 18 | 18 | 22 | 19.50 | 18 |
| Los Angeles (AL) | 30 | 30 | 29 | 30 | 30 | 29 | 29.67 | 30 |
The above chart is just an averaged ranking from the publications I used for each prospect list. The “AVG” column is their average placement across those lists, and the “RANK” column re-ranks the teams by that average.
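The averaging step described above is simple enough to sketch. This is a minimal illustration using only the four teams shown in the excerpt of the table; the rank values come straight from those rows, and the generic team-to-ranks mapping is my own structure, not the author’s actual workflow.

```python
# Sketch of the averaging step: each team's ranks from the six publication
# lists are averaged, then teams are re-ranked by that average placement.
# Rank values are taken from the table above; only four teams are shown.
pub_ranks = {
    "New York (NL)":    [16, 10, 12, 14, 14, 7],
    "New York (AL)":    [11, 14, 14, 10, 16, 12],
    "Los Angeles (NL)": [19, 21, 19, 18, 18, 22],
    "Los Angeles (AL)": [30, 30, 29, 30, 30, 29],
}

# Average placement across the six lists (the "AVG" column).
avg = {team: sum(ranks) / len(ranks) for team, ranks in pub_ranks.items()}

# Re-rank by average placement: lowest average is best (the "RANK" column).
ranked = sorted(avg, key=avg.get)
for i, team in enumerate(ranked, start=1):
    print(f"{i}. {team}: {avg[team]:.2f}")
```

Run over all 30 teams, this reproduces both the AVG and RANK columns.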
There don’t seem to be any real surprises as you move down the list. St. Louis was ranked first by every publication but Bullpen Banter, and it has a pretty clear advantage over Texas and Seattle, whose systems grade out fairly evenly with each other. Moving down the list, the next three are close enough that they can probably be considered interchangeable in the 4, 5, and 6 spots. There is a drop-off from those systems to the next three, and another drop-off through the next several after that. Baltimore seems to be a cut-off point for the upper half of farm systems, though it’s not the exact halfway point.
The bottom half starts with five teams that are pretty close together, and the next three after that follow suit. Milwaukee, Chicago (AL), and Detroit are a bit worse than the others, and the Angels come in dead last by a substantial margin. The Angels grabbed the bottom ranking on four of the six lists, with Chicago and Detroit getting the others.
Putting Those Against the Averaged Grades and Risks
| Team | RANK | Sys Gd | Sys Rs | 50+ | 50+ Gd | 50+ Rs | 60+ | 60+ Gd | 60+ Rs |
|---|---|---|---|---|---|---|---|---|---|
| New York (NL) | 12 | 50.530 | 2.788 | 25 | 52.300 | 3.000 | 3 | 64.167 | 2.167 |
| New York (AL) | 13 | 52.000 | 2.767 | 23 | 54.130 | 3.000 | 4 | 61.250 | 3.125 |
| Los Angeles (NL) | 18 | 50.000 | 2.467 | 19 | 52.895 | 2.842 | 1 | 62.500 | 3.500 |
| Los Angeles (AL) | 30 | 47.333 | 2.817 | 16 | 51.094 | 3.188 | 1 | 62.500 | 3.000 |
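The columns in this table can be derived from a per-prospect list of grades and risks. Here is a minimal sketch of that aggregation; the prospect grades and risks below are made-up illustration values, not any team’s actual prospects.

```python
# Hypothetical per-prospect data: averaged grade and risk for each prospect
# (in the article these come from Baseball America and Baseball Prospectus).
# The numbers here are invented purely to demonstrate the aggregation.
prospects = [
    {"grade": 65, "risk": 3.0},
    {"grade": 55, "risk": 2.5},
    {"grade": 50, "risk": 3.5},
    {"grade": 45, "risk": 2.0},
]

def summarize(pool):
    """Average grade and average risk over a pool of prospects."""
    n = len(pool)
    return (sum(p["grade"] for p in pool) / n,
            sum(p["risk"] for p in pool) / n)

sys_gd, sys_rs = summarize(prospects)               # "Sys Gd", "Sys Rs"
fifty = [p for p in prospects if p["grade"] >= 50]  # 50+ grade prospects
sixty = [p for p in prospects if p["grade"] >= 60]  # 60+ grade prospects
print(len(fifty), summarize(fifty))                 # "50+", "50+ Gd", "50+ Rs"
print(len(sixty), summarize(sixty))                 # "60+", "60+ Gd", "60+ Rs"
```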
This is the part about which I was most curious. The publications rank the systems, but how do they look when broken down? Again, the grades and risks are based on only two lists – Baseball America and Baseball Prospectus – but they give us a decent overall look. Just for giggles, let’s look at how each column correlates with the overall ranking from the section above.
| Sys Gd | Sys Rs | 50+ | 50+ Gd | 50+ Rs | 60+ | 60+ Gd | 60+ Rs |
|---|---|---|---|---|---|---|---|
When looking at the “grade” and number-of-prospects columns, we expect to see negative correlations: as the ranking number goes up (i.e., the team is ranked worse), we expect lower grades and fewer prospects. When looking at the “risk” columns, we expect to see positive correlations, because risk should theoretically grow as the ranking number grows.
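The sign convention above can be checked directly. This sketch computes a standard Pearson correlation by hand, using only the four teams from the table excerpt, so the magnitudes are illustrative; over all 30 teams the values would differ, but the expected signs hold here.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Values taken from the four table rows shown above.
rank   = [12, 13, 18, 30]                  # overall RANK (higher = worse)
sys_gd = [50.530, 52.000, 50.000, 47.333]  # Sys Gd: average system grade
sys_rs = [2.788, 2.767, 2.467, 2.817]      # Sys Rs: average system risk

print(pearson(rank, sys_gd))  # negative: worse rank, lower grades
print(pearson(rank, sys_rs))  # positive: worse rank, higher risk
```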
The highest correlation belongs to the 50+ grade – the averages of all 50+ grade prospects in a system. This grade looks at the quality of the prospects who are more likely to be average major-league players, and these are theoretically the most likely to make the majors in some capacity.
The next highest was a bit of a surprise to me – the number of 60+ grade prospects. What evaluators seem to be valuing is simply having these higher-ceiling players in the system, and as you can see, the risk of these players doesn’t seem to matter that much, as it nets the lowest correlation. Higher-ceiling prospects are so valuable because they are the players around which a team can build a franchise. Some will certainly bust, but the ones that make it through the attrition can become core contributors, while the lower-ceiling prospects likely won’t have the same impact on the organization. Having more is obviously a good thing, as those AT&T commercials tell me.
It is important to remember what the correlations are comparing. They measure what the publications seem to favor when ranking the overall farm systems, not the predictive value of each column. Other research will need to be done into whether they favor the correct things, and that future research is one of the reasons I started this project. It’s a starting point. What really is more valuable between ceiling and risk? What types of systems flourish? Should a team sell out for ceiling, or is there a certain degree of risk that’s not worth taking? Do teams have particular strategies, and how effective are they?
I don’t have the answers to these questions at the moment, but I hope that I have begun the process in a valuable way. Hopefully, more lists will begin to note the overall grade and risk of each prospect because more grades will help even out the outliers and give us a better measurement. If anyone has any comments or suggestions on how to improve this project for the future, please leave a comment.