Different Aging Curves For Different Strikeout Profiles

What follows looks at aging curves as they relate to players with specific strikeout profiles. Specifically, we will look at how wOBA ages for players who strike out more than the league-average rate and for players who strike out less than it.

The research presented in this post will demonstrate two points:

  1. Players of different strikeout profiles age—their wOBAs change—at different rates.
  2. The aging curve for players of different strikeout profiles has changed over time.

Before I present the methodology, the research that was conducted, and their conclusions, I want to give a big thank you to Jeff Zimmerman, who has not only done a lot of research around aging curves, but has also helped me throughout this process and pushed me in the right direction several times when I was stuck. Thank you.


To give a player’s wOBA a meaningful amount of playing time to stabilize, without setting the plate-appearance threshold so high that we artificially shrink the population even more than it naturally shrinks at the ends of the age spectrum, I looked at all player seasons from 1950 to 2014 in which a player had a minimum of 600 plate appearances for the first aging curve in this post. The second aging curve looks at all player seasons from 1990 to 2014 with a minimum of 600 plate appearances.

Now that we have our population, we need to split it into two groups: players who strike out more than league average and players who strike out less than league average.

Because the league-average strikeout rate of today is very different from what it was 65 years ago, we can’t compare a player’s strikeout rate from 1950 to the league-average strikeout rate of today.

In order to divide the population into two groups, I created a stat that weighs a player’s strikeout rate against the league-average strikeout rate for the years in which he played. For example, if a player played from 1970 to 1975, his adjusted strikeout rate reflects how his strikeout rate compares to the league-average strikeout rate from 1970 to 1975.

Players were then placed into two buckets based on their adjusted strikeout rate: players who struck out more than league average and players who struck out less than league average.
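As a sketch, the bucketing described above might look like the following. The data layout, column names, and PA-weighted league average are all assumptions; the post doesn’t specify the exact weighting used.

```python
# Sketch of the era-adjusted strikeout rate described above.
# The data layout (list of per-season dicts) is hypothetical.
def adjusted_k_rate(player_seasons, league_k_by_year):
    """Ratio of a player's career K% to the PA-weighted league-average K%
    over the specific years he played. > 1.0 means the high-K bucket."""
    player_so = sum(s["so"] for s in player_seasons)
    player_pa = sum(s["pa"] for s in player_seasons)
    # Weight each season's league K% by the player's PA in that season
    lg_k = sum(league_k_by_year[s["year"]] * s["pa"]
               for s in player_seasons) / player_pa
    return (player_so / player_pa) / lg_k

seasons = [{"year": 1970, "pa": 650, "so": 110},
           {"year": 1971, "pa": 620, "so": 100}]
league = {1970: 0.152, 1971: 0.148}  # made-up league K% values
ratio = adjusted_k_rate(seasons, league)
print(ratio > 1.0)  # True: this hypothetical player lands in the high-K bucket
```

A simple per-year ratio averaged across seasons would behave similarly; the key point is that each player is compared only to the leagues he actually played in.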


There has been a lot of discussion over the years about the correct methodology to use for aging curves. This conversation has been well-intentioned, in the sense that its aim has been to minimize the survivorship bias inherent in the process, and, thanks to the progress made over the years, this study uses what the author believes to be the best technique to date. This article by Mitchell Lichtman summarizes many of the opinions.

While there is a survivorship bias inherent in any aging curve, the purpose of the different techniques used to create aging curves is to minimize the survivorship bias wherever possible.

What We Don’t Want In an Aging Curve 

An aging curve is not the average of all performances by players of specific ages. For example, say you have a group of 30-year-old players with an average .320 wOBA and a group of 29-year-old players with an average .300 wOBA.

The point of an aging curve is to see how a player aged, not how he played. The group of 30-year-old players has a high wOBA because they are a talented group; they lasted long enough to still be playing at 30. As they aged from their age-29 season to their age-30 season, the player pool lost its bottom portion of players: those who couldn’t hang on any longer, whether because of a decline in defense, offense, or a combination of both. This bottom portion of players lowers the wOBA of the 29-year-old population through its presence and raises the wOBA of the 30-year-old population through its absence.

At the same time, the current 30-year-olds aged from their age-29 season to their age-30 season. Sure, there may be players who had a better age-30 season than age-29 season, but the current group of 30-year-olds, as a whole, still played worse at 30 than they did at 29.

When you look at the average of a particular age group (in this case, 30-year-olds), you only see the players who survived; the players who no longer play are hidden from your sample. The method that follows resolves this issue to an extent.

What We Do Want In an Aging Curve

This study uses the delta method, which looks at the differences between player seasons (i.e., a player’s age-29 wOBA minus his age-28 wOBA) and weighs those differences by the harmonic mean of the plate appearances for each pair of seasons in question.

I would explain this further, but Jeff Zimmerman does an excellent job of it in a post on hitter aging curves from several years ago. While Jeff looked at RAA, which is a counting stat, the methodology is basically the same for our purposes with wOBA, which is a rate stat:

In a nutshell, to do accurate work on this, I needed to go through all the hitters who ever played two consecutive seasons. If a player played back-to-back seasons, the RAA values were compared. The RAA values were adjusted to the harmonic mean of that player’s plate appearances.

Consider this fictional player:

Year1: RAA = 40 in 600 PA age 25
Year2: RAA = 30 in 300 PA age 26

Adjusting to harmonic mean: 2/((1/PA_y1)+(1/PA_y2)) = PA_hm
2/((1/600)+(1/300)) = 400

Adjust RAA to PA_hm: (PA_hm/PA_y1)*RAA_y1 = RAA_y1_hm
(400/600)*40 = 26.7 RAA for Year1
(400/300)*30 = 40 RAA for Year2

This player would have gained 13.3 RAA (40 RAA – 26.7 RAA) in 400 PA from ages 25 to 26. From there, I would then add all the changes in RAA and PA together and adjust the values to 600 PA to see how much a player improved as he aged.
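The worked example above translates directly to code. This is a minimal sketch of the delta computation for one pair of seasons, using the quoted example’s numbers:

```python
# The harmonic-mean adjustment from the quoted example, as code.
def harmonic_mean(pa1, pa2):
    return 2 / ((1 / pa1) + (1 / pa2))

def raa_delta(raa1, pa1, raa2, pa2):
    """Year-over-year change in RAA, with both seasons scaled to the
    harmonic mean of their plate appearances."""
    pa_hm = harmonic_mean(pa1, pa2)
    return (pa_hm / pa2) * raa2 - (pa_hm / pa1) * raa1, pa_hm

change, pa_hm = raa_delta(raa1=40, pa1=600, raa2=30, pa2=300)
print(round(pa_hm))      # 400
print(round(change, 1))  # 13.3
```

For a rate stat like wOBA, the analogous step is to weight each pair’s wOBA difference by the harmonic-mean PA when averaging the deltas across players.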


Below is an aging curve by strikeout profile for all player seasons with at least 600 plate appearances in a season from 1950 to 2014.

[Figure: wOBA aging curves by strikeout profile, 1950–2014]

We can see several findings immediately:

  1. Players do age differently based on their strikeout profile.
  2. Players who strike out more than league average peak at 23.
  3. Players who strike out less than league average take longer to hit their peak—their age-26 season.
  4. Players who strike out more than league average age better than players who strike out less than league average.

From a historical perspective, this graph is fun to look at, but the way the game was played over half a century ago has little bearing on today’s players, who benefit from decades of societal evolution.

To give us a more realistic idea of how today’s players age relative to their strikeout rate, I made another graph that looks at player seasons from 1990 to 2014.

[Figure: wOBA aging curves by strikeout profile, 1990–2014]

What we find in this graph, which better reflects today’s style of play, is that players still age differently depending on their strikeout profile, but not in the same way they did in the previous sample.

Players who strike out more than league average still peak earlier than players who strike out less than league average, but in this more current population, players who strike out more than league average peak very early: their age-21 season. This echoes the sentiment conveyed by recent work suggesting that the aging curve has changed to the point that players peak almost as soon as they enter the league.

The peak age for players who strike out at below-league-average rates is still 26, but whereas this group aged more poorly than the strikeout-heavy group in our previous population, players who strike out at below-league-average rates now age better than their counterparts.


This information can make a material difference in our overall expectations of and outlooks on players.

Previous knowledge would suggest that players like George Springer and Kris Bryant, players with exorbitant strikeout rates, are still on the climb as far as their talent goes, but this information shows that these players may already be at or close to their peaks, or on the decline, as far as their wOBA is concerned.

This information also shows that we should be patient with prospects who have a penchant for putting balls in play; they take longer to develop than players with more swing-and-miss in their game, and when they do start to decline, there isn’t much need to worry, because their descent from their peaks will be gradual.

Like many other studies that have looked at new aging curves, this study confirms that players peak earlier now than at any other point in history, but it also shows that a player’s trajectory upward and downward depends on characteristics specific to his approach at the plate.

Devon Jordan is obsessed with statistical analysis, non-fiction literature, and electronic music. If you enjoyed reading him, follow him on Twitter @devonjjordan.



It would be interesting to run this data as a 2-way repeated ANOVA, that way we can see if there is a significant interaction between K% and time (i.e., age) on wOBA.

Bill but not Ted

This obviously took a not insubstantial amount of work to put together, but this is brilliant research and we need more…

I love the community pages!!


Great article. I really liked the succinct explanation of survivorship bias in the section “What We Don’t Want In An Aging Curve”.

Some random thoughts:

– I would guess that the reason for the low K players underperforming the high K players historically would be the prevalence of “small ball”, where high-contact hitters would be expected to make “productive outs” such as grounding out to the right side with a runner on 2nd and no outs, which would harm wOBA.

– I would also guess that in earlier years, high K players who produced a low batting average were more likely to find themselves unemployed than low K players with a middle-of-the-road batting average. In other words, a high K hitter with a career .270 BA/.370 wOBA might have been on his way out when he fell to a .220 BA/.320 wOBA, while a low K guy with a career .300 BA/.320 wOBA might still keep his job even after slipping to a .250 BA/.270 wOBA. This might be part of the reason for the faster drop-off for the low K hitters historically.

– I wonder if the reason for the hump in the historical aging curve simply came from less-feared hitters making the adjustment to MLB pitching successfully for a good stretch before the league’s pitchers updated their “book” on them. It’s very interesting that the high-K hitters in the historical sample had a flat early-career aging curve, similar to the full current aging curve – this may be that teams would put more scouting effort into containing the Harmon Killebrews entering the league than the Nellie Foxes.

– For the 1990-2014 curve, it looks like the only differences between the curves can be chalked up to actual biological aging:

o For the low K hitters, the moderate increase in wOBA from the early to mid 20’s can probably be chalked up to getting stronger in early adulthood

o For the high K hitters, the drop-off in wOBA beginning in the middle 30’s could simply be the result of decreasing bat speed (which would both decrease power and increase K%)


You also might want to consider separating 1990 – 2005 vs. 2006 – 2014. Steroid vs. less-steroid era.


Surely you have sample size issues at the ends of the curve. For example, how many players are in the 21-22 year old bucket? It can’t be many.

I agree with the previous comment about biological aging. Hitting curves are mostly about physical aging. Players with power (and high K) will be more physiologically advanced and thus will peak earlier although I am by no means convinced that a large class of players peaks at 21 or 23.

The most interesting part of the 1990-2014 curve is the late career drop off, assuming that is not a small sample issue which it could be.
As that same commenter said, there also could be different survivorship biases in the two groups. The author didn’t explain that using the delta method does not get rid of survivor bias. Players who play in 2 consecutive years will always tend to have gotten lucky in year 1. If players who K a lot fluctuate more than players who don’t (and I don’t know whether that’s true to any significant degree), then they may have a larger survivor bias, which would bias their curve downward at all ages.

Good work.

Mean Reversion

I think there might be an inverted survivor bias of sorts in this data. The best players are the ones playing in the majors at age 21, so it follows logically that those samples have higher median outputs than the samples of older age-seasons. In fact, I would guess that each sequential age-season through the 20’s has more samples, as more players are breaking into the majors in their 20’s than falling out of them. If you track the median production across time, you can probably isolate the actual aging curve. But saying Kris Bryant will peak at age 23 is not the same as saying players who strike out as often as Kris Bryant does in his age-23 season see their performance fall further and further away from his age-23 level as they age.


“The best players are the ones playing in the majors at age 21, so it follows logically that those samples have higher median outputs than the samples of older age-seasons.”

You are not understanding the delta method. When a player plays his first year in the majors his performance is an unbiased sample of his true talent. Given a large enough sample size, the mean of that performance is exactly equal to the true talent. In small samples the mean is an unbiased estimate of that true talent.

Now, if we let all those 21 year olds, no matter how good or bad they are, play in their age 22 seasons, the aggregate difference will be a perfect unbiased estimate of the aging curve between age 21 and 22. There is no survivorship or other bias at work here.

However, there are other problems. I will try and explain some of them:

One, not all of those 21 year olds play again as 22 year olds. Some of them get sent down to the minors never to return or return at a later age than 22.

As a group the players who get sent down and don’t play as 22 yo got unlucky. As a group. That is a fact. They are also the worst players of the 21 year olds.

That means that of the players who return at 22, as a group they got lucky. They are also the best of the 21 year olds.

So, let’s say that the entire group of 21 year olds were true talent .300 wOBA hitters. And let’s say that as 22 year olds, we expect them to be .310 hitters. Let’s say that were true. In other words, they get better from age 21 to age 22. It is still part of the upward part of the aging curve, as we used to think (and perhaps still do).

So, we know that the ones who do not return are both worse and got unlucky. Let’s say that they are true .290 hitters, as a group, and they hit .270. They don’t come back as 22 year olds, so they are out of our data sample.

The ones who do come back, say, are true .310 hitters (say 50% drop out and 50% come back next year). They also got lucky and hit .330.

Now, what happens to them at age 22? They were true .310 hitters at age 21 who hit .330 because they got a little lucky. They will hit .320 at age 22! That will make it appear as if they got 10 points worse from age 21 to age 22! That is the problem of survivorship bias which is not accounted for by using this author’s method – just a delta method. I explain this in the article that he references.
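That selection effect can be sketched with a tiny simulation (all numbers hypothetical): give every player identical true talent in both seasons, let only the lucky half return, and the delta method still shows an apparent decline.

```python
import random

random.seed(1)

TRUE_WOBA = 0.300   # same true talent in both seasons: zero real aging
NOISE = 0.030       # season-to-season luck (standard deviation)
N = 100_000

deltas = []
for _ in range(N):
    year1 = random.gauss(TRUE_WOBA, NOISE)
    if year1 < TRUE_WOBA:
        continue  # the unlucky half never returns for year 2
    year2 = random.gauss(TRUE_WOBA, NOISE)  # luck does not carry over
    deltas.append(year2 - year1)

avg_delta = sum(deltas) / len(deltas)
print(avg_delta < 0)  # True: an apparent decline despite zero real aging
```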

The only way to account/adjust for this bias other than making all players come back at age 22 (haha) is to regress yearly stats toward some mean to estimate true talent every year, or to include the players who drop out in the sample of “deltas” and assign them a phantom performance in year 2 (which involves a little circular reasoning – again, I explain that in my article).

You get a similar issue when you use cutoffs/minimums in each year, which I think this author did. Let’s say we have the same 21 year old players. And let’s say that you only use those who amass 300 PA in each of 2 consecutive seasons, so the players in your aging curve pool had to have 300 PA at age 21 and 300 at age 22 to be included.

Now what happens?

Your 21 year olds play and put in 100 or 150 PA or so the first half of the season. Who ends up amassing 300 PA for the season? Who becomes a regular or semi-regular, and who becomes a bench player or gets sent down that season and does not qualify for your data set? Again, the good AND lucky players. So their year 1 performance will again be lucky. Now, this problem is not as bad as the previous one. The “lucky in year 1” bias created by the minimum playing time requirement is partly offset by the requirement in year 2. These “lucky (and good) in year 1” players start year 2, but in order to qualify in year 2, they have to play well in that year as well. So, they must have gotten a little lucky (and are good) in year 2 also. My guess is that in order to continue playing in year 1 at age 21, you have to be really lucky, though. If you make it to your age 22 and second season, teams will probably tolerate some bad performance more than in year 1. So the result, again, is that it looks like your talent went down from age 21 to age 22, even though it was just (very) lucky performance in year 1 and then normal or slightly lucky performance in year 2.

The third problem is this. We purposely string together different players at different ages when we use the delta method and then we pass off the result as an “aging curve” as if it applies to one player. It does not. Again, I’ll explain.

Players who were brought up at age 21 and then were successful enough to play again at age 22 (and perhaps qualify in both years, if you are using a minimum number of PA) are obviously very good players. They are likely more physically mature than the average player. They may have a physiological age of 25 or 26. So when we look at these guys from age 21 to 22, we may really be looking at an “effective” 25-to-26 age increase. But now we string these guys together with players at age 22-23 (they might be effectively 24-25), 23-24, etc., to form our aging curve, and we call this an aging curve from 21 to 40 or whatever.

It’s not! If my assumptions are true, that the really young players are physiologically 25 or 26 and not 21 or 22, then we in effect have an aging curve of 24 or 25 to 40!

And we have a similar problem at the far end. Players who are 35 and 40 and still playing and qualifying are/were probably late maturers and are in effect only 30 or 35 rather than 35 or 40. So we end up with a curve that we claim represents an “average player” from “21 to 40” but is really a conglomeration of players who are in effect 24 to 34 or something like that.

Brian L Cartwright

Also, although there is no strikeout term in the wOBA formula, wOBA is still a function of a player’s K%, as a batter cannot draw a walk or get a hit or home run in a plate appearance that ends in a strikeout. Increasing a player’s K-rate by definition decreases all the wOBA terms as a percentage of plate appearances.
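As a quick numeric illustration of the point above (the linear weights below are approximate and assumed, not taken from the article), trading singles for strikeouts at constant PA lowers wOBA even though K% never appears in the formula:

```python
# Approximate wOBA linear weights (assumed for illustration only)
W = {"bb": 0.69, "hbp": 0.72, "1b": 0.89, "2b": 1.27, "3b": 1.62, "hr": 2.10}

def woba(bb, hbp, singles, doubles, triples, hr, pa):
    """wOBA with the denominator simplified to PA for illustration."""
    num = (W["bb"] * bb + W["hbp"] * hbp + W["1b"] * singles +
           W["2b"] * doubles + W["3b"] * triples + W["hr"] * hr)
    return num / pa

base = woba(bb=60, hbp=5, singles=100, doubles=30, triples=3, hr=20, pa=600)
# Trade 20 singles for 20 strikeouts: PA unchanged, numerator shrinks
more_k = woba(bb=60, hbp=5, singles=80, doubles=30, triples=3, hr=20, pa=600)
print(round(base, 3), round(more_k, 3))  # 0.365 0.335
```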


Analyses like this make me wonder about a deeper application of a PECOTA-like system (as Nate explained). That is, not all hi-K and lo-K hitters are the same. Some batters have hi-K because they suck at hitting. Other batters have hi-K because they are trading swing accuracy for swing velocity. At least I think that could be an issue.

To me this is missed in many other applications – defensive analysis being one. The particular skills (speed, rxn, size) that make a player good or bad at a task must be considered *if* we are to make assessments about “the next player”.

I guess maybe what I haven’t seen (and it may be out there) is aging curve by size (hts and wts). And of course, after size, the subsets of size.