On Monday, we’ll roll out the beginning of our annual Organizational Rankings series – the plan is to do three teams a day for 10 days, concluding with the top three teams on April 4th. Before we begin discussing each franchise, however, I thought it’d be helpful to explain some of the changes we’ve made to the system this year.
Last year, we made one key change that we’re carrying over and that will likely remain in place going forward: our authors were asked to grade individual inputs rather than overall organizational strength. Grading by components adds an important layer of transparency, since you can see exactly why a team ended up in the specific spot it did.
However, we’re upgrading the implementation of that grading scale this year. Last year, I asked all of our authors to assign letter grades to each organization in three categories: Present Talent, Financial Resources, and Baseball Operations, and then had the three guys on staff who specialized in prospect analysis grade them out in terms of Future Talent as well. We then converted the letter grades for those four variables into scores, assigned weights to the individual categories, and used the weighted averages to create a total score.
There were a few problems that arose from doing things that way, however, and we’re doing our best to fix those issues this year. Here’s how:
1. The grading scale has been changed from letter grades (A-F) to the 20-80 scale that is commonly used by baseball scouts. The 20-80 scale is actually a pretty good system, with 50 as a clearly defined average and each 10 point increment representing one standard deviation from the mean. We asked each of our writers to use this scale for their grading, to grade in five point increments (50, 55, 60, etc…), and to ensure that their average overall score for each category came out very close to 50. With the letter grading scale, the overall average was simply too high; the 20-80 scale makes it easy to adjust the grades if an author’s overall average comes in too high. This scale allows more consistency between authors, and gives us a better representation of what we’re trying to communicate about a team in a certain area.
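As an illustration of how that re-centering might work, here’s a minimal sketch. The function name, the sample grades, and the specific adjustment method (a uniform shift, then snapping to five-point increments) are all hypothetical, not necessarily the exact process we used:

```python
# Hypothetical sketch: if an author's average grade in a category drifts
# from 50, shift all of that author's grades so the mean re-centers,
# then snap each grade to the nearest five-point increment on 20-80.

def recenter_grades(grades, target_mean=50, step=5, lo=20, hi=80):
    """Shift grades so their mean is ~target_mean, snapped to `step`."""
    shift = target_mean - sum(grades) / len(grades)
    adjusted = []
    for g in grades:
        snapped = round((g + shift) / step) * step
        adjusted.append(max(lo, min(hi, snapped)))  # clamp to 20-80
    return adjusted

# Example: an author whose grades average too high (mean = 60)
print(recenter_grades([70, 60, 50, 60]))  # [60, 50, 40, 50]
```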
2. Present Talent and Future Talent have been more clearly defined, and will now be referred to as “2012 Outlook” and “2013+ Outlook”. The 2013+ Outlook category covers not just current prospects or recently graduated prospects, but also the expected future value of every player under team control beyond this season. We want to give teams credit not just for their prospects or young Major Leaguers, but also for the quality pieces they already have in place for the next several years. We also realized that having only our prospect writers grade out this category didn’t make sense and led to several unintended consequences, so everyone participated in the 2013+ Outlook category this year.
3. The weights placed on each category have been modified slightly, with the biggest change coming in the level of importance placed on a team’s baseball operations department. Last year, the weightings were 30/30/25/15, with Present Talent and Financial Resources carrying the most weight, then Baseball Operations, and Future Talent coming in as the least important variable. This year, the weightings are 35/35/15/15, so we essentially reduced the baseball ops importance by 40% and divided those points equally between 2012 Outlook and Financial Resources. This was done for a couple of reasons:
a. Baseball Operations is the most fluid part of any organization, and can be changed relatively easily. While a team cannot quickly overhaul its talent base or instantly begin generating new revenues, a front office can be turned over in short order and new processes put into place fairly quickly. The Astros are the perfect example of this, as a year ago they graded out as having the worst baseball operations staff in the league, but have since made significant strides to change the ways they make decisions. Since this category is so fluid, it doesn’t make sense to credit or penalize a team so strongly for something that isn’t static.
b. This is also the area we know the least about. We can measure player talent with some degree of confidence, and we can measure financial resources very easily, but without sitting in on meetings that we’re not privy to, it’s impossible to know exactly what kind of inputs are going into a team’s decision making. We can make some educated guesses based on the kinds of transactions that a team completes, but we don’t see any evidence of decisions that result in no transaction, so what we end up drawing conclusions from is a very small part of what a baseball operations staff actually does. At this point, every team in baseball has analytical capabilities, and the differences in how they operate are mostly decided by how much emphasis is placed on those tools at the executive level. That’s just something we can’t really know from the outside, so our uncertainty should serve to make us less confident in our evaluations of this category, and as such, it should carry less weight.
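To make the weighting math concrete, here’s a small sketch of the weighted-average calculation. The 35/35/15/15 weights are the ones described above; the team’s category grades in the example are hypothetical:

```python
# This year's category weights (35/35/15/15), applied to 20-80 grades.
WEIGHTS = {
    "2012 Outlook": 0.35,
    "Financial Resources": 0.35,
    "2013+ Outlook": 0.15,
    "Baseball Operations": 0.15,
}

def total_score(grades):
    """Weighted average of a team's category grades on the 20-80 scale."""
    return sum(WEIGHTS[cat] * grade for cat, grade in grades.items())

# Hypothetical team: strong now, average money, decent farm, weak front office
example = {
    "2012 Outlook": 60,
    "Financial Resources": 50,
    "2013+ Outlook": 55,
    "Baseball Operations": 45,
}
# 0.35*60 + 0.35*50 + 0.15*55 + 0.15*45 = 53.5
print(round(total_score(example), 2))
```

Because every category is graded on the same 20-80 scale with a 50 average, the weighted total also lives on that scale, so an overall 53.5 reads as a bit above league average.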
Making these changes should help deal with a few of the issues that have arisen in the past. That doesn’t mean these rankings will be perfect, or that you’ll agree with all of them, but we hope that you’ll be able to see why the consensus of the group led to a team being positioned in a specific spot within each category.
Finally, we’re auctioning off the #6 spot on the list*. After the Mariners in 2010 and the Twins last year, we now see that placing sixth on this list is a near guarantee of epic failure, so if there’s a team that you’d like to see crash and burn in 2012, simply send us a large sum of money along with the franchise that you’d like to sabotage. The largest donor of the day will get to ruin the organization of his choice.
*Okay, we’re not really doing this. But we thought about it. Maybe next year.