Despite a general acknowledgment that they probably provide little in the way of predictive power, the batting lines of prospects in the Arizona Fall League are frequently cited by baseball writers in discussions of those same prospects. Nor is this entirely surprising: one wants to make some sort of comment about Kris Bryant, for example, who’s just finished his own AFL season with six home runs and a .727 slugging percentage. Even after noting that he recorded those figures in just 92 plate appearances, one is compelled to suggest that Bryant’s performance was impressive. And it was, certainly, within the context of the 2013 season of the Arizona Fall League.
The present author, attempting to behave somewhat responsibly, has produced statistical reports for the AFL this fall which utilize an offensive metric (called SCOUT+) that combines regressed home-run, walk, and strikeout rates in a FIP-like equation to produce a result not unlike wRC+. By isolating and regressing those metrics (i.e. not BABIP) which become reliable in smaller samples, one reasons, it’s possible to reduce the noise otherwise present in slash lines — and perhaps to better identify how performances from the AFL might inform future major-league production.
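The regression step described above can be sketched in Python. The shrinkage form used here (padding the observed sample with phantom plate appearances performed at exactly the league rate) is a standard approach to regressing rates toward the mean; the function name, the 2.5% league home-run rate, and the 170-PA stabilization constant are illustrative assumptions, not the actual SCOUT+ parameters.

```python
# A minimal sketch of rate regression toward the league mean. The constants
# below are assumptions for illustration, not the real SCOUT+ values.

def regress_rate(successes, opportunities, league_rate, stabilization_pa):
    """Shrink an observed rate toward the league rate.

    The observed sample is padded with `stabilization_pa` phantom plate
    appearances performed at exactly the league rate, so small samples
    are pulled strongly toward league average and large samples barely move.
    """
    return (successes + league_rate * stabilization_pa) / (
        opportunities + stabilization_pa
    )

# Example: 6 HR in 92 PA (Bryant's AFL line), regressed against an assumed
# 2.5% league HR rate with an assumed 170-PA stabilization point.
hr_rate = regress_rate(6, 92, 0.025, 170)
```

The small sample gets pulled hard: the raw 6.5% home-run rate shrinks to roughly 3.9% under these assumed constants.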
“How successful is this (theoretically) more responsible and (definitely) more nerdy attempt to measure AFL production, to the extent that it might hold within it some manner of predictive power?” one might wonder, or perhaps has already wondered. “Not very,” appears to be the answer.
To test how well recent AFL performances might correlate with major-league performance, I began by isolating player-seasons from the top and bottom 25% of the SCOUT leaderboards from each of the last three AFL seasons (2010-12, that is). With slightly more than 100 batters in each season, that gives us about 26 “good” hitters and 26 “bad” hitters from each of the past three years — or 79, precisely, of each.
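The selection step above can be sketched as follows; the list of (player, score) tuples stands in for one season's SCOUT leaderboard, and the names here are purely illustrative.

```python
# A sketch of the top/bottom-25% split described above, assuming a list of
# (player, score) tuples stands in for one AFL season's SCOUT leaderboard.

def split_quartiles(leaderboard):
    """Return (best, worst): the top and bottom quarters by score."""
    ranked = sorted(leaderboard, key=lambda pair: pair[1], reverse=True)
    quarter = len(ranked) // 4
    return ranked[:quarter], ranked[-quarter:]

# With slightly more than 100 batters (104 here), each group gets 26 hitters,
# matching the article's per-season arithmetic.
season = [(f"player_{i}", score) for i, score in enumerate(range(104))]
best, worst = split_quartiles(season)
```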
To give the reader a sense of the difference in performance between the best and worst hitters by this methodology, here’s a table featuring the respective performances of each:
The 79 hitters from the Best group homered about 2.5 times as frequently as their counterparts in the Worst group, walked twice as often, and struck out at about half the rate. The strengths and weaknesses of each group are exhibited in the average slash lines, as well. The Best group posted a higher batting average (by about 50 points), a higher on-base percentage (by about 100 points), and a higher slugging percentage (by nearly 130 points). Conveniently, one finds that the BABIPs for the two groups are nearly identical (.340 and .335, respectively), so one needn’t make any other sort of allowances for that.
That the hitters from the Best group have been about half a year older than those in the Worst isn’t entirely surprising. All things being equal, batters who are both (a) young but also (b) closer to their peak age (i.e. about 27) are likely to perform more ably than those who are further from it.
To get a sense of how these AFL performances might be predictive of major-league performance, what I did was to identify all the players from both groups who’d graduated to the majors. Of the 79 hitters from the Best group, 41* have recorded at least one major-league plate appearance. Of the 79 hitters from the Worst group, only 31 (i.e. 10 fewer) have recorded at least one major-league plate appearance.
*Or, 42 if you count Derek Norris, who qualified as one of the best hitters in two separate AFL seasons.
That the former group has graduated more of its constituents to the majors probably merits some investigation. Among the possible explanations, it would seem the most likely ones are either (a) the players from the Best group are actually better, (b) the players from the Best group, being older on average, were closer to the majors anyway, or (c) the groups are equally talented, and chance is the cause for the uneven distribution.
Whatever the explanation, it’s worth noting that a higher percentage of the Best group has recorded fewer than 100 major-league plate appearances. Using that figure (i.e. 100 PA) as the threshold for consideration, one is left with 28 hitters from the Best group and 23 hitters from the Worst group, a difference of only five, which is less significant.
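The filtering step just described can be sketched as follows; the dict shape, field names, and sample figures here are purely illustrative, not the actual data used.

```python
# A sketch of the 100-PA graduation filter. The dicts below are toy stand-ins
# for each player's major-league totals, not real data.

def qualified(players, min_pa=100):
    """Keep only hitters with at least `min_pa` major-league plate appearances."""
    return [p for p in players if p["pa"] >= min_pa]

graduates = [
    {"name": "hitter_a", "pa": 291},
    {"name": "hitter_b", "pa": 40},
    {"name": "hitter_c", "pa": 342},
]
kept = qualified(graduates)
```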
The 100-plate-appearance threshold is also convenient, insofar as it represents a sample size at which metrics like walk and strikeout rate are beginning, at least, to reflect true talent. Below are the major-league figures thus far for both of the groups in question.
First, from the Best group:
| Player | Team | PA | HR% | BB% | K% | BABIP | AVG | OBP | SLG | wRC+ |
|:---|:---|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| Ryan Lavarnway | Red Sox | 291 | 1.7% | 5.8% | 23.4% | .256 | .208 | .258 | .327 | 55 |
And next, from the Worst group:
| Player | Team | PA | HR% | BB% | K% | BABIP | AVG | OBP | SLG | wRC+ |
|:---|:---|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| Anthony Gose | Blue Jays | 342 | 0.9% | 6.4% | 28.1% | .336 | .240 | .294 | .361 | 77 |
What one notices (besides the fact that Mike Trout was somehow the worst at something one time*) is that the groups are more or less similar in terms of major-league performance.
*His line in the 2011 edition of the AFL: 111 PA, 1 HR, 4.5% BB, 29.7% K, .245/.279/.321 (.347 BABIP).
For reference, here’s each group’s average major-league line, one next to the other:
Whereas, previously, one noted rather large differences between the two groups so far as their AFL home-run, walk, and strikeout rates were concerned, here, at the major-league level, those differences have disappeared almost entirely. The Best group has homered slightly more often and (for reasons that are probably interesting, but which I’ll ignore presently) has recorded a notably lower BABIP. Otherwise, the offensive output has been rather similar, and, in the case of park-adjusted hitting relative to league average, precisely the same (as denoted by the identical 81 wRC+).
Is the Arizona Fall League good for something? Almost certainly. Are the stats produced by the players there — even those stats which become reliable in smaller samples — predictive of future major-league performance? It’s possible, yes, but if that is the case, it’s difficult to detect by this methodology.