Dollar Sign on the Scout

I’m stating nothing new when I say that the popularity of Michael Lewis’s Moneyball did much to introduce its readers to the splendors of quantitative analysis in baseball. Nor is it inaccurate to say that Lewis — whose capacity for narrative is more or less unrivaled — characterized the sport’s older guard of talent evaluators (read: scouts) less as invaluable members of baseball’s front offices and more as mouth-breathing luddites.

For a number of reasons — most of them having to do with common sense — this image of scouts has disappeared almost entirely. Scouts are very clearly essential to the health of a baseball organization, and, generally speaking, it’s those teams that seek to use the best possible information — both visual and quantitative analysis — that experience the most success.

Still, even as the sabermetric community has acknowledged the importance of scouts and the act of scouting, there’s been no attempt (so far as I know) to measure the actual worth of individual scouts to their respective organizations.

This represents an attempt to do just that — to put a dollar sign on the scout, as it were.

Thanks to excellent work by Victor Wang, we have a sense of how productive a prospect might be (in terms of wins) given his place on Baseball America’s annual top-100 prospect list. Thanks to further work by colintj of Beyond the Boxscore, we know what the values of said prospects are in terms of WAR. These two sources represent the foundation of the present study.

To isolate the contributions of today’s scouts, I started by taking BA’s top-100 prospect lists from each of the past five years (2006-10). For each of the prospects on those lists, I recorded (using a combination of Baseball America, the Baseball Cube, Cot’s Contracts, and reader input) both the bonuses paid out to said prospects and the area scouts credited with their respective signings.

Using Wang’s research (converted to WAR from WAB), I projected the likely yearly WAR totals for these prospects over their first six (i.e. cost-controlled) years. In cases where a prospect appeared on multiple lists (2006 and 2007, for example), I took the highest of the extant rankings. Multiplying the WAR totals by $5MM (i.e. the current market value of a win), I was able to find a prospect’s “value” to his club.
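For concreteness, here’s a minimal sketch of that value step in code. The $5MM-per-win figure comes from the paragraph above; the six-year WAR path in the example is a made-up placeholder (roughly the 13.5 WAR that a top-10 hitter averages, per the discussion below), not a figure pulled from Wang’s tables.

    # Value step: projected cost-controlled WAR times the market price of a win.
    DOLLARS_PER_WIN = 5_000_000

    def prospect_value(projected_war_by_year):
        """Dollar value of a prospect over his six cost-controlled years."""
        return sum(projected_war_by_year) * DOLLARS_PER_WIN

    example_war = [0.5, 1.5, 2.5, 3.0, 3.0, 3.0]   # hypothetical 13.5-WAR path
    print(prospect_value(example_war))              # 67500000.0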

After finding the values, I set out to find the cost of each prospect, too. While there are likely a number of costs associated with the development of a future major leaguer, the three main ones on which I settled were: (a) a player’s signing bonus, (b) three years of league-minimum salary (or $1.2MM total), and (c) likely salaries during arbitration years (using the 40/60/80 method, under which a player earns roughly 40%, 60%, and 80% of his projected market value in his three arbitration years). Adding up these three components, I arrived at the “cost” of each prospect to his respective organization.
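Continuing the sketch from above, here’s the cost side, with the surplus subtraction of the next paragraph included for completeness. The signing bonus in the example is a placeholder; the $1.2MM league-minimum figure and the 40/60/80 arbitration fractions are as described.

    # Cost = signing bonus + three league-minimum years ($1.2MM total)
    #        + arbitration years paid at 40/60/80 percent of market value.
    LEAGUE_MIN_TOTAL = 1_200_000
    ARB_FRACTIONS = (0.40, 0.60, 0.80)            # arb years 4 through 6

    def prospect_cost(signing_bonus, projected_war_by_year):
        arb_war = projected_war_by_year[3:6]      # years 4-6 of the projection
        arb_salaries = sum(f * war * DOLLARS_PER_WIN
                           for f, war in zip(ARB_FRACTIONS, arb_war))
        return signing_bonus + LEAGUE_MIN_TOTAL + arb_salaries

    def surplus(signing_bonus, projected_war_by_year):
        return (prospect_value(projected_war_by_year)
                - prospect_cost(signing_bonus, projected_war_by_year))

    # e.g. a $500K bonus on the hypothetical WAR path above:
    print(surplus(500_000, example_war))          # 38800000.0 (about $38.8MM)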

To find the overall surplus value of a prospect to his organization, I subtracted the cost from the projected value. Using that method, here are the top-10 most valuable prospects from the last five years:

Roughly speaking, this is a list of top-10 batting prospects in inverse order of signing bonus, starting with Carlos Santana ($75K bonus), Desmond Jennings ($150K), and Mike Stanton ($475K). Nor is this shocking. For, while top-10 pitchers generally average around 6.00 WAR over the first six years of their MLB careers, top-10 batters average 13.50 WAR over the same span. This, of course, isn’t to say that hitters are always twice as valuable as pitchers, but rather that, owing to injury, etc., pitchers are much more subject to attrition.

To find the value of the scouts for this study, I connected the names and surplus values of all the prospects from the past five years to the scouts credited with their signings. Adding up the values connected to each scout, we get the following as a top-10 list — a sort of scouting all-star list:

So far as I know — and I’m more than willing to be corrected — here are those names with their respective organizations: Bill Buck, Detroit; Dave Jennings, Baltimore; Todd Blyleven, Colorado (although he left the organization to start a baseball academy in 2007 or ’08, I think); Sean O’Connor, San Francisco; Fred Costello, Arizona; Tim McDonnell, Florida; Jose Serra, Chicago NL; Ryan Fox, Florida then Washington; Fred Repke, Tampa Bay; and Randy Taylor, Texas.

Now that you see this list, there are certainly some caveats to make, as follows:

• Signing a player is not the business merely of an area scout. All the players you see on this list were also likely subject to the scrutiny of national crosscheckers, scouting directors, and general managers. In other words, signing a prospect is a team effort. In some cases, such as that of Justin Verlander, who went second overall, the effort will be that much more communal. In any case, the point remains: for a scout’s name to appear on this list, he requires the support and trust of his organization.

• On that note, it’s also the case that sometimes multiple scouts are credited with the signing of a single player. How one deals with this is a question I’ve not answered perfectly. For example, four scouts (Ramon Pena, Ismael Cruz, Sandy Rosario, and Juan Mercado) were credited with the signing of Met farmhand Jenrry Mejia. Do we attribute the surplus value of Mejia’s signing (ca. $13MM) to each scout in full? Do we give each scout a quarter of the value? For the present work, I’ve attributed the full signing to the first name on the list (a brief sketch of both rules appears just after this list of caveats). So far as I can tell, none of the “supporting” scouts have been left off the top-10 list above as a result of this choice.

• By definition, we’re dealing with small samples here. Using five of BA’s top-100 lists means that there can be no more than 500 total prospects and no more than fifty top-10 hitters (i.e. the most valuable sort of prospect). In fact, there were about 340 total prospects, and the number of top-10 hitters is probably more like 25. Furthermore, no scout has more than four names appear on the combined lists, so each is being credited with value from just a few data points.

• This version of the study does not consider inflation — not in dollars per win, not in bonus values. So there’s no adjustment being made for a player like Verlander, for example, who made his debut in 2006, when a win was worth somewhere between $3.5M and $4.0M. Nor is there an adjustment made for players like Minnesota’s Aaron Hicks or San Diego’s Casey Kelly — i.e. players who’re unlikely to make their debuts in 2011. The idea has merely been to express all figures from the past five years in “present day” value.

• This study is only as good as the data that informs it. Baseball America (from whom I took most of the data regarding area scouts) is the gold standard so far as the business of prospect-mavening goes, but they’re subject to errors like anyone. Also, because I’m a careless, slovenly man, it’s possible that I, myself, have made manual errors.

• Please believe that I’m incredibly flexible so far as the methodology of this study goes. My main intent is to call attention to the work done by scouts and attempt to estimate what their worth might be to their respective organizations. I’m very clearly not the most talented of today’s baseballing analysts, but rather merely a curious fan/writer attempting to build on much more important work. As such, I regard this not as a definitive work, but as an attempt to start a conversation.
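As promised above, here’s a small sketch of the two attribution rules considered for multi-scout signings. The scouts named are the four credited with Mejia; the $13MM figure is the approximate surplus cited in the caveat and is used purely for illustration.

    # Two ways to apportion the surplus from a multi-scout signing.
    def credit_first(scouts, surplus_value):
        """Full credit to the first scout listed (the rule used in this study)."""
        return {scouts[0]: surplus_value}

    def credit_split(scouts, surplus_value):
        """Even split among all credited scouts."""
        return {scout: surplus_value / len(scouts) for scout in scouts}

    mejia_scouts = ["Ramon Pena", "Ismael Cruz", "Sandy Rosario", "Juan Mercado"]
    print(credit_first(mejia_scouts, 13_000_000))   # all $13MM to Pena
    print(credit_split(mejia_scouts, 13_000_000))   # $3.25MM apiece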

I’ll very likely treat the information here more fully in a later post. In the meantime, however, let’s acknowledge and recognize the names on this list.



Carson Cistulli has just published a book of aphorisms called Spirited Ejaculations of a New Enthusiast.


eastsider (Guest):

Very interesting. I think this is a great idea.

I am going to ask some dumb questions because I don’t really understand the scouting and signing process, but ignorance has never stopped me before.

How much credit should go to the scout for signing the top 100 prospects? It seems like it is pretty much agreed that these are the best, presumably even I could have seen that. If I get Tulo to sign, isn’t that more of a sign that my organization is willing to put up the signing bonus than my ability to evaluate talent? Would it be more interesting to find scouts who found diamonds in the rough in terms of late round draftees or foreign players?

Albert Lyu (Member):

Would it be too much if we increased the sample size by charting as many draft picks in later rounds as possible, including WHERE they were drafted out of (college/high school), then stipulating how well a certain team scouts in a certain region, giving credit to area scouts for a specific team in a specific region?

I guess that’s a long-winded way of saying: since we don’t have perfect information on which scout was most responsible for which signing, why not try the next best thing by seeing how well certain organizations draft in certain geographic areas compared to other organizations, and giving that team’s area scouts credit for that?

That was also long-winded, but there are plenty of interesting extensions from this idea of crediting individual area scouts for their work.

TheUnrepentantGunner (Guest):

Great work, Carson. I think even where you have limited data for some players (i.e. Carlos Santana), you could plug in their actual results instead of projected results as a proxy…

I also think you are right on the sample size. If I’m not mistaken, BP has letter grades for their non-top-100 prospects, and those have worth too. It would increase the sample size, but also your workload, and it’s probably an exhaustive undertaking. You should also look into negative values for washouts, realizing some might just be unlucky/injury-related. So take every player signed for well over $1 million who is no longer in the game, take his career WAR, multiply by $5 million, and know that THAT is his total positive side of the ledger.

I.e., that San Diego top pitcher who washed out (whose name eludes me on 4 hours of sleep) should get a big fat minus sign attributed to that scout.

Andy (Guest):

Wow Carson

This is really cool

Ben Nicholson-Smith (Member):

I think if we’re going to judge amateur scouts, it makes sense to rate them on the rankings they submit before a draft. The scouting director or GM will have the final say, but that scout’s ranking card will tell you how they evaluated all players, not just the ones their bosses drafted.

Oscar (Guest):

Wow, Carson. I am pleasantly surprised.

Have you considered adding the concept of replacement or average value? Meaning, 73.6M of surplus value seems impressive, but there’s really no frame of reference. And without the concept of a replacement level scout, we don’t learn anything substantive, because it’s probable that many scouts could find and sign, say, Matt Wieters. So crediting said scout with said surplus value is probably not completely accurate.

Tasintango (Guest):

Seems that a replacement level scout would be a replaced scout. But that’s just my opinion.

rlwhite (Member):

I recall someone (at BA?) posting a historical analysis of average career value per draft slot. I would think that that would make a useful baseline for a drafted player’s value in this scheme. Rather than asking how many wins a player signed by a scout contributed, we would be asking how many wins that player contributed compared to those typically available in that draft slot.

I think the current scheme works well for the international market where money is the primary limited resource in acquiring players.

Dwight Schrute (Guest):

Great work, Carson. In regards to Bill Buck, though, I don’t think he deserves too much credit, because two of those guys (Verlander and Porcello) were consensus no-brainer picks. Verlander was considered by many to be the top prospect in that draft but slipped to number two due to signability concerns; considering the Tigers were desperate for pitching, I could’ve been the scout and they would’ve picked him. Same thing with Porcello: he was considered by many to be the best prep arm in the draft and arguably the second-best pitcher overall after David Price, and he too slipped due to signability concerns. Again, I think I could’ve been the scout and the Tigers would’ve drafted him. As far as Maybin goes, maybe the Tigers would’ve been better off listening to somebody else, because the guy drafted right after him was another HS center fielder by the name of Andrew McCutchen.

joe (Guest):

Eh, I think you have to count the successes. There are plenty of consensus busts; should you throw those out because “everybody” got it wrong?

mmoritz22 (Member):

This is an incredibly amazing idea. This could actually be the start of a brand new phase of baseball observation and analysis. I always knew that scouts were incredibly important, and I have kind of thought about who the best scouts are, but never in full detail. This is a fantastic idea!

Mike (Guest):

Aren’t you mostly measuring how similar a scout is to BA’s evaluators, rather than how good a scout is at predicting future baseball value? I understand that this is more or less the most useful information we have for rating players that haven’t played at least a few years in the majors, but should scouts really be getting credit for top players just because BA “missed” on Alex Gordon and Delmon Young too?

Clark (Guest):

No, the analysis doesn’t compare how similar a scout is to a BA scout, because the scout judges the talent prior to any professional experience, while BA analyzes the results after years of pro-ball experience.

Brad Johnson (Guest):

Excellent!

I kicked around this very idea using the Prospect Handbook a couple years back. I never got so far as assigning dollar values, nor did I think to leverage Victor Wang’s research; good job there.

The biggest difficulty, which you acknowledged in full, is the ‘team’ nature of these signings. From our outsider perspective, it’s very difficult to ascertain just how much to credit each scout with a signing.

philly (Guest):

The only player signed by the top 10 scouts you have listed who wasn’t a high-round pick is Jonathan Sanchez. It’s probably fair to say that every other player credited to his signing scout was predominantly a “team” evaluation. Sean O’Connor is the winner!

As others have pointed out that’s the key issue in this kind of research and it would still be true if you corrected for draft slot. It’s the nature of the draft that a very few exceptions are the successes that drive the valuations.

An area scout will turn in reports on hundreds of players in a given year, and his team will draft 0-5 of them at most. I think it’s great to give these guys credit for the successes that come their way, but an area scout could be right about 90 players his team didn’t draft and never receive any credit by these methods. Or perhaps worse, be wrong about a first-round pick and be considered to have had a bad year.

No matter what, external observers are only seeing a small tip of the iceberg.

WilsonC (Guest):

Are pre-draft lists from a public source like BA available? That might add some useful information when comparing where a guy was taken compared to the consensus. One way of looking at it would be to compare players taken with the “consensus” best available picks, to see which scouts helped or hurt their teams by bucking the trend.

I think one of the challenges in using a system like this to measure scouts is that we’re really looking at predictive public scouting data to measure value of prospects, whereas a scout’s worth is built around his ability to give better information than the public sources. Take Brandon Wood as an example. He was a highly regarded power hitting prospect, but some of the scouting reports wondered if he’d hit enough to actualize his power. Does the scout deserve credit for Wood’s impressive prospect ranking, or do the Wood skeptics deserve credit for his failure to actualize his tools?

As well, consider a team like the Blue Jays. A number of their players – Romero, Marcum, Hill, Cecil – were never listed higher than the 60’s or 70’s on BA, and went on to become solid regulars. Romero in particular was a widely criticized pick and seen as a reach, but his development as a major league player is more in line with his draft slot than his prospect rankings. I’d give a scout more credit if he was rightly high on a player and remained confident in that player against the public opinion, as opposed to one who was right about a player that everyone knew would be good.

Clark (Guest):

Another value of scouts, which may be impossible to gauge, is not only the ability to identify talent but also the ability to identify flaws, thereby encouraging a team to take one player instead of another. In other words, under this system a scout could achieve the highest rating just by recommending EVERY player he sees. There is value in avoiding crappy players, and ergo negative value in saying crappy players are good.

obsessivegiantscompulsive (Guest):

First, I must say that this is a great article and idea, well executed and thought out. I had wondered about this too; you did a great job.

However, and you knew it was coming, it is affected by a major flaw: the draft is heavily skewed.

My study of the draft found that, even within the first round, the odds of finding a good player are very low by the back of the round, roughly 10% per my data. The odds up top aren’t that great either, roughly 40-45%, but that is still four times the likelihood of success for a scout signing someone in the top 5 of the draft versus a scout signing someone in the last 10 picks of the first round.

And, obviously, it gets even worse beyond the first round.

Evening the odds would mean removing all first-round draft picks from your dataset, which would pretty much gut it. Even if you took out, say, only the first 15 picks of the first round, that would still leave your dataset pretty sparse, I believe. At minimum, you would need to remove the first 5 picks overall, as that is where the greatest difference in odds between picks lies, and hope that you still have enough data to say something interesting.

Still, nice job. I see this as version 0.1; not that you didn’t do a good job, but this is such a new area to look into that the methodology will need further refinement. I look forward to your 0.2 version.

The Ancient Mariner (Guest):

No, this doesn’t require removing first-round picks. What you would need to do, I think, is add one additional layer: compare the “net” column to the expected net value for the player’s draft slot, giving you “Net+” (a la OPS+, etc.). For international signings, you’d probably want to figure an expected rate of return per $ of signing bonus, and compare to that.
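To make that concrete, here is a minimal sketch of the Net+ idea, assuming the expected-net figures come from a separate historical study of draft-slot returns; the numbers below are placeholders.

    # Net+ = 100 * (a signing's net surplus) / (expected net surplus for the
    # draft slot), indexed like OPS+. The inputs here are placeholders.
    def net_plus(player_net, expected_net_for_slot):
        return 100 * player_net / expected_net_for_slot

    print(net_plus(player_net=30_000_000, expected_net_for_slot=12_000_000))  # 250.0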

Sam (Guest):

Unfortunately, there is so much that goes into the scouting and selecting of a player that is completely out of the area scout’s hands. As a few readers have posted, should one really get credit for Tulowitzki or Longoria? As Clark mentioned, if scouts are going to get credit, should they also be penalized for all the guys they recommend who don’t pan out? Factoring in the low rate of drafted players who actually make a career in the majors, most scouts would come out looking bad if you saw their recommendations after the 2nd round. Unfortunately, the only way to really know how well a scout evaluates is to look at his “pref list” 3 or 4 years down the road.

The Ancient Mariner (Guest):

Yes, one should. Had the Seattle area scout done a better job, Tulowitzki would have ended up a Mariner. Had Colorado’s SoCal scout done a poorer job, the Rockies could have taken Wade Townsend. The area scout’s ability to evaluate and sell the players matters immensely.

AJS (Guest):

This is impressive and interesting work, but any list that includes Brandon Wood as one of the 10 most valuable MLB prospects seems pretty suspect.

Wouldn’t it be better to look at older BA top prospect lists, so we can find which players produced the most value during their first six years under team control, and reverse-engineer from there?

Meaning, instead of judging based on hypothetical/projected value, we judge based on actual recorded value?

Steve (Guest):

I hate to be negative here, because it’s a very cool idea, but I think Ben is correct; there’s simply no way to do any kind of meaningful study without having the pref lists of the area scouts.

For instance, Posey was an outstanding pick by the Giants, so we give the area scout $x in value for that pick. But what if he had been selected before the Giants picked? The scout could have known EXACTLY how good Posey was going to become, yet not gotten any credit for it.

On the flip side, the best value pick in the 2009 draft appears to have been Mike Trout with LAA. Where did LAA have him on their pref list? If they had him high on their list, that would tell us that they did an outstanding job scouting the player and they deserve immense credit. But what if Florida, Houston, and LAA all had Chad James, Jio Mier, and Trout on their list, in that order? All three teams had it wrong; how much credit do the Angels get for having the last selection of those teams and stumbling into Trout?

Crediting a scout with signing a player is really an outdated concept; when scouts used to travel the country, discover a player, and instantly sign him, it meant everything. Now, as you astutely point out, they are really organizational decisions.

If you were a scout trying to appear good in this system, you would want to inflate all of your grades to make players look good, as the only way to get credit is to draft players. Sometimes the best thing a scout can do is tell you NOT to draft a player. Let’s say the Angels were torn between Trout and Eric Arnett, who was the next pick. The scout who correctly had a less impressive grade on Arnett is part of the reason that they were able to make the correct pick.

To make any kind of fair evaluation of scouts, you need to have their entire pref list, something that is obviously not possible. If a scout has 75% of the players in his area wrong, but has the right guy in the right place when the pick comes up, he’ll look like a genius. Scouts evaluate hundreds of players per season; to judge them just based on the 1-3 that their boss decided to select will lead to far too much uncertainty to draw even a remote conclusion. It would be as if we compared MARCEL and Oliver and PECOTA by picking four players from each system out of the 1,000 they project, and seeing which system did the best.

Again, I think it’s an awesome idea, and it’s really cool that you’re trying to tackle a new frontier. I just don’t see any way that in today’s game you can produce a statement with any confidence given the limited data we have access to.

The Ancient Mariner (Guest):

Thing is, though, the scout is the one who sells the org on the player; the area scouts are the primary inputs into those organizational decisions. That’s like saying that letters of recommendation don’t matter when you’re looking for a new job because the folks writing the letters aren’t the ones who’ll be hiring you. True, they’re not, but they influence the decision. Now imagine that the folks recommending you *were* part of the hiring committee . . .

Josh (Guest):

If I’m not mistaken, there has been work on the value of a given draft pick. Perhaps the average value for the slot where the player was taken could be subtracted out to get closer to the surplus value created by the scouting?

Carson’s analysis factors in the signing bonus, which should even things out for players who got drafted lower due to signability concerns.

Chris McCoy (YourSports.com):

This is ground-breaking analysis.

Keep going with it, but take it to all players over the last 10 years.

It will change how fans not only appreciate scouts but also understand the value of homegrown talent vs. the FA system.

You can extend this to position coaches too, at all levels of the process.

Keep going with this data. It’s really important for the scouting community.

Chris

Ratwar (Member):

Sorry to be negative, but this article is worthless. Just look at the teams the scouts are from. From 2003 to 2010 (and I didn’t check 2002), those ten teams (using a combination of Florida and Washington for Fox) have never combined to win 50% of their games. Their average winning percentage is .479. The teams have 12 seasons with 90+ wins, 16 seasons with <70 wins. Either good scouts don't help their teams win ball games, or good scouts aren't identified by this analysis.

All you’ve proved is that early first-round draft picks are a far more valuable source of top-quality prospects than anything else, and that teams in general can identify top talent. I applaud your effort in the area, but right now I don’t think you’ve produced any worthwhile information.

The Ancient Mariner (Guest):

And of course, Felix Hernandez should never have won the Cy Young, because he only won 13 games! Why, C. C. Sabathia won 20!

Ratwar (Member):

I’m tempted to write a long winded rebuttal based around sample size, (and that the goal of sabermetrics is in fact to win baseball games), but instead I’ve decided to take a different tact.

Felix Hernandez was over .500. These teams with ‘great’ scouts aren’t.

The Ancient Mariner (Guest):

You’re comparing apples to banana peels. Saying an individual scout can’t be great because the team which employs him loses more than it wins is exactly the same logic as saying that Felix el Rey can’t be great because his team lost 100 games.

Even if it worked logically, your argument is further undermined by the fact that these franchises have accounted for five WS appearances and three other postseason appearances in the last five years, and seven of the players named here have played in the postseason for the team that drafted them. So, the conclusion you’re attempting to assert is far stronger than the data actually warrant. Yes, there’s a lot more work to be done to make this all valid — but that’s far from justifying your unsupported declaration that it’s “worthless.”

But then, I wouldn’t expect any better from someone who doesn’t know the difference between “tact” and “tack” . . .

Dan Rosenheck (Guest):

Yeah, the other big thing that leaps to mind here (as one other poster said) is that you can’t only look at a scout’s successes. You’d have to look at all the guys he signed and penalize him for the bad ones. The proper method, it seems to me, would be:

1. Calculate the MLB surplus value of a player
2. Subtract the player’s signing bonus
3. Subtract the average surplus value-minus-signing-bonus of guys in the same draft neighborhood (this is 0 for international signees)

This would give you Surplus Value Above Average. Not sure if you have to factor in a discount rate at any stage.
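A minimal sketch of that three-step recipe follows; the slot baseline is a placeholder lookup, where a real version would use historical surplus-minus-bonus averages by draft neighborhood, with international signees pegged at zero as suggested.

    # Surplus Value Above Average, per the three steps above.
    SLOT_BASELINE = {"top 5": 20_000_000, "rest of 1st": 8_000_000,
                     "rounds 2-5": 2_000_000, "later rounds": 500_000,
                     "international": 0}         # placeholder figures

    def surplus_value_above_average(mlb_surplus, signing_bonus, slot_bucket):
        net = mlb_surplus - signing_bonus        # steps 1 and 2
        return net - SLOT_BASELINE[slot_bucket]  # step 3

    # e.g. a late first-rounder with $25MM of surplus and a $1.5MM bonus:
    print(surplus_value_above_average(25_000_000, 1_500_000, "rest of 1st"))  # 15500000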

joe (Guest):

Is it a problem if there’s a significant cost to the scout who advocates busts? Does it skew things?

colin (Guest):

Doesn’t this give the scouts with higher picks an advantage?

philly (Guest):

I did the same kind of study several years ago measuring actual production from the 1987-1996 drafts. That helpfully gets away from the “Brandon Wood is super valuable” issue. It’s not a measure of potential value, but of actual MLB production. I never published it, but the way I did it also takes care of some of the other criticisms that have come up. First, I calculated a slot value for each draft slot. Back then I used BP’s WARP, but I’ve since changed over to rWAR. This is still in WARP. Then I determined a pre-FA slot value. Then I found the pre-FA production for every player from those drafts. So for each player there was a surplus production above expected slot production.

Here are the 11 most productive area scouts from the 1987-1996 drafts.

Data is name, pre-FA slot WARP, actual pre-FA WARP, Actual – Slot WARP, some key players.

1. Tim Kelly; 6.0, 90.7, 84.7 WARP; Tim Salmon, Troy Percival, Chad Curtis
2. John Ramey; 10.9, 86.7, 75.8; Marcus Giles, Joe Mays, Jim Mecir
3. Joe Delucca; 22.1, 95.5, 73.4; Manny Ramirez, Charles Nagy, Pete Harnisch
4. Guy Hansen; 11.0, 83.6, 72.6; Kevin Appier, Jeff Conine, Mike Magnante
5. Luke Wrenn; 26.8, 95.9, 69.1; Tino Martinez, Mike Hampton, Nomar Garciaparra
6. Roy Clark; 17.7, 86.5, 68.8; Kevin Millwood, Greg McMichael, Jerry Dipoto
7. Scott Trcka; 18.8, 81.4, 62.6; Scott Rolen, Jeff Jackson
8. Matt Sczezny; 20.3, 82.0, 61.7; John Valentin, Mo Vaughn
9. Marty Esposito; 32.3, 92.7, 60.4; Chuck Knoblauch, Mark Guthrie, Todd Walker
10. Ken Madeja; 4.5, 64.1, 59.6; Derek Lowe, Matt Mantei
11. Erwin Bryant; 2.1, 60.8, 58.7; Jeff Bagwell

I went to 11 so you can see signing one great player from a low slot (4th rd) is going to shoot a scout way up a list like this. In actuality all of these guys signed 2-3 good players tops.

And that’s over a 10 year period when some of these guys turned in a thousand reports or something like that.

My conclusion is that this is a great way to draw some attention to area scouts, but for all the reasons mentioned it’s just not a good way to rank scouts.

You need those preference lists for that.

Ben (Guest):

Is there a way to account for actual MLB performances other than just projected WAR off of the prospect’s ranking? I think that doing this analysis with the prospect lists from the 2000-2005 BA book would make more sense, and give you real data to weigh towards the scout’s value. The ultimate goal of a prospect/draft pick/signee is to eventually contribute to the Major League club, and that is what the scouts are looking for. A guy like Hermida was ranked high on these lists, and looks good for the FLA scout, but by your formula has only compiled 3.4 WAR in his six MLB seasons so far. That he’s only been worth $13.9 million has to count for something, right?

Eugene (Guest):

Hi, I didn’t read all the comments so I’m not sure if this has already been brought up but:

Most of the scouts on your list belonged to teams with consistently high draft picks over the years in question (such as Baltimore, Tampa, and Washington), with an exception or two. The scouts on those teams have an inherent advantage in being employed by worse teams that hold higher draft picks, and without taking that into account this study has, in my opinion, a fairly large hole in it.

Other than that, interesting and coooool man.

Greg Andrew (Guest):

This methodology might work well for international scouts. But trying to evaluate scouts based on who the team drafts and signs is like trying to rate players by shoe size, especially for high-round picks. The information you need to rate scouts is simply not in the public domain.

John Franco (Guest):

It’s a pretty awesome idea, even if you’re just rating scouting departments and not individual scouts.

Sure, it has holes (sign one Bagwell and you’re high-ranked for life), but that’s the way it was in the 50s and 60s… scouts could make a living for years after finding the one big score. So this actually does jibe with how scouts used to be regarded (and maybe still are).

For example – Tom Greenwade

Major Leaguers Signed [2]:
Mickey Mantle [1951-1968]
Bobby Murcer [1965-1983]

And really, if you have 40 scouts in your organization and one guy gives you one Carlos Santana (and nothing else) and another guy gives you one Mike Stanton (and nothing else), you’re still doing OK.
