Author Archive

Free Scott Van Slyke!

Some team really should take a chance and give Scott Van Slyke a starting OF job next season.  Frankly, I’d find it almost sinful if no team goes for it.

(Granted, the Dodgers may still use the off-season to relieve their outfield logjam, so maybe Van Slyke works his way into the Dodgers’ own starting lineup.  But I’ll suppose for now that that does not happen.)

First, a summary of his career performance:
.261/.348/.476
.361 wOBA
134 wRC+
(455 PA)

The 134 wRC+ certainly is impressive.  And while he obviously did it over a limited sample, had he been a full-time player, that mark would have ranked 24th in 2014, just behind Hanley Ramirez, David Ortiz, and Jose Altuve.  Alternatively, among all players with 450+ PA from 2012-2014, Van Slyke’s wRC+ also ranks 24th.

So he certainly has been good in-sample.  But what should you expect going forward?

There seem to be three key questions:
(1) Can he hit righties well enough?
(2) What is his true talent BABIP?
(3) What is his true talent ISO?

On the first point, Van Slyke’s career-to-date statline has certainly benefited from heavy use against left-handers.  In his career, he’s had slightly over half of his plate appearances against lefties — with a punishing 151 wRC+ — versus a more pedestrian 116 wRC+ against righties.  Taking those numbers at face value for now, even if you re-weighted his plate appearances to be 70% against righties and 30% against lefties, that still comes out to 126.5, aka plenty good.  At least in-sample, that’s not that different from Josh Donaldson, who mashes lefties and is comparatively average against righties.  And I’m sure most teams would be elated to have Josh Donaldson.
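That re-weighting is just a weighted average; a quick sketch (the 116 and 151 are the career splits quoted above, and the 70/30 diet is the same assumption):

```python
def reweighted_wrc(vs_rhp: float, vs_lhp: float, rhp_share: float = 0.70) -> float:
    """Blend platoon-split wRC+ values for an assumed share of PA vs. RHP."""
    return rhp_share * vs_rhp + (1 - rhp_share) * vs_lhp

# Van Slyke's career splits, re-weighted to a 70/30 RHP/LHP diet:
reweighted_wrc(116, 151)  # ~126.5
```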

The next question, then, is whether his career-to-date .323 BABIP is his true-talent BABIP.  There are some plausible reasons to think “no.”  Steamer projects him for a .295 BABIP next season, and at least this 2012 version of an xBABIP calculator puts him more in .270 territory.

I’m somewhat more optimistic on his BABIP, though.  His minor league BABIPs were good, after all: .404 over a full season in AA, and .354 and .437 across two half-seasons in AAA.  ZiPS projected him for a .310 BABIP in 2014, and after his actual .394 showing, that projection will most likely be higher next season.

For simplicity’s sake, suppose you take everything else about Van Slyke’s career-to-date batting as given (BB and K rates, ISO, etc.), and just do the BABIP adjustment.  (This is not entirely realistic, but again, simplicity.)  What do his stats look like for different BABIP values?  You get:

BABIP OPS
0.280 0.772
0.290 0.784
0.300 0.796
0.310 0.808
0.320 0.820
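A sketch of how a table like this can be built.  The component totals below (PA, walks, strikeouts, homers) are my approximations of Van Slyke’s career line, not exact totals, and HBP/SF are ignored, so the levels come out slightly lower than the table above; the sensitivity of roughly .012 OPS per .010 of BABIP matches, though:

```python
def ops_at_babip(babip, pa=455, bb=48, so=118, hr=19, iso=0.215):
    """OPS implied by a given BABIP, holding BB, K, HR, and ISO fixed.

    Simplified accounting: AB = PA - BB (HBP and sacrifices ignored)."""
    ab = pa - bb
    bip = ab - so - hr           # balls in play
    hits = babip * bip + hr      # hits on balls in play, plus homers
    avg = hits / ab
    obp = (hits + bb) / pa
    slg = avg + iso              # ISO held fixed by assumption
    return obp + slg

for b in (0.280, 0.290, 0.300, 0.310, 0.320):
    print(f"{b:.3f}  {ops_at_babip(b):.3f}")
```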

Even on the low end, that’s still a useful player.  And after lowering everything by .050 for the platoon adjustment,* the worst-case scenario is still about a league-average LF, a position that posted a .720 OPS this season.  The more optimistic scenarios put him above average.

* – Remember that 126.5 wRC+ computed earlier?  This would be about a .341 wOBA, which is .020 lower than his unadjusted wOBA.  .020 wOBA is approximately equal to .050 OPS.

Then the last question is: has he also overachieved on ISO in-sample?  Here, I’m a little more convinced that he may have.  His minor league ISOs were not much higher than his career-to-date Major League mark (.215), and Steamer has him projected for just a .165 ISO next year.  It’s possible Steamer is stingy; ZiPS had him projected for a .170 ISO in 2014, and that will only increase after his actual 2014 performance.  But even supposing it rises to something like .182, it still suggests Van Slyke’s true-talent ISO is lower than what he’s shown so far.

Suppose we somewhat conservatively assume Van Slyke’s true talent BABIP is .300, and again take BB and K rates as given, but this time do an ISO adjustment.  What would his career-to-date stats look like?  You get:

(assuming .300 true-talent BABIP; no platoon adjustment)

ISO OPS
0.170 0.751
0.180 0.761
0.190 0.771
0.200 0.781
0.210 0.791

Or, if you want a full table that allows BABIP and ISO to vary simultaneously, you get:

(OPS value in cells; no platoon adjustment)

BABIP .170 ISO .180 ISO .190 ISO .200 ISO .210 ISO
.280 BABIP 0.727 0.737 0.747 0.757 0.767
.290 BABIP 0.739 0.749 0.759 0.769 0.779
.300 BABIP 0.751 0.761 0.771 0.781 0.791
.310 BABIP 0.763 0.773 0.783 0.793 0.803
.320 BABIP 0.775 0.785 0.795 0.805 0.815

Especially after factoring in some platoon adjustment, you see that there definitely are scenarios where Van Slyke could be below a league-average corner OF, despite his promising performance to date.  But these require that he has overachieved in BABIP or ISO (or both), and neither is a given.  Even using the seemingly conservative Steamer projection for Van Slyke’s 2015 performance, he projects for something like 2 WAR over a full season, which is good enough to start.  And meanwhile there are many scenarios where he could be better than that.  (In-sample he’s been worth 4.5 WAR per 600 plate appearances!)

Of course the Dodgers know this as well.  Even so, I can’t imagine the price to acquire Van Slyke would be that high, and with the upside, it sounds totally reasonable for teams like Cincinnati, Seattle, or the White Sox, who didn’t get nearly enough production from their outfield last year.

Reader thoughts?


xHitting (Part 4): 2014 Fantasy Edition!

Welcome to the fourth installment of xHitting!  As always, reader comments and feedback are super encouraged and appreciated.  (Links to parts one, two, and three)

Briefly recapping the method, the gist is to estimate the expected rate of each individual hit type based on a player’s underlying peripherals, and in turn recover all the needed components to compute expected versions of wOBA, OPS, etc.  The only real change to the model since last time is that I now utilize a “hybrid” predicted home run rate, which averages actual and (raw) predicted home run rates, with the weight given to the actual rate increasing in the number of plate appearances.  (This is explained in part three, for those curious.)
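The hybrid rate can be sketched as a simple PA-weighted blend.  The stabilization constant below is a placeholder of my own, not the weighting actually derived in part three:

```python
def hybrid_hr_rate(actual, predicted, pa, stabilization=500):
    """Blend actual and raw-predicted HR rate.

    The weight on the actual rate grows with plate appearances;
    'stabilization' is an illustrative constant, not the fitted value."""
    w = pa / (pa + stabilization)
    return w * actual + (1 - w) * predicted
```

With few PA the hybrid sits near the raw prediction; with many PA it drifts toward the player’s actual rate.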

Perhaps the more exciting change, though, is that this time I actually have results for an ongoing season, which potentially can help for fantasy purposes.  (Not that most readers need my help necessarily.)  Related to fantasy usage, there were a few requests to see a full spreadsheet of past results (2010-2013 seasons), which I have posted here.  Again feel free to take it or leave it at your leisure.

Note: I collected most of these data at the All-Star Break, so the numbers may be a few weeks behind, but they’re still mostly current.  Also, for time considerations I only fetched 2014 stats for qualified leaders.  This leaves out a few big names, but I couldn’t justify the time to fetch every player.

So far, I’ve typically posted the biggest “over-” and “under”-achievers for a given season, and I suppose I’ll continue that tradition today.  But while these lists are useful for highlighting which players seem most likely to regress, they overlook another main use of the model: assessing how real a player’s apparent “breakout” or “decline” is, at least in-sample.  (In some cases, the model may think a player’s breakout is entirely justified given his peripherals, while viewing others more skeptically.)  Thus, today I’ll also post a second list: players who seem to have taken a pronounced step forward or back this season, and what the model thinks of their season-to-date performance.

Okay, time for results!  I’ll start with the list of “over-” and “underachievers.”

2014 Underachievers (1st half) 2014 Overachievers (1st half)
Name wOBA xWOBA Diff Name wOBA xWOBA Diff
Jean Segura 0.256 0.305 -0.049 Casey McGehee 0.345 0.277 0.068
Chris Davis 0.306 0.353 -0.047 Yasiel Puig 0.398 0.340 0.058
Mark Teixeira 0.352 0.397 -0.045 Matt Adams 0.376 0.324 0.052
Gerardo Parra 0.289 0.327 -0.038 Mike Trout 0.428 0.381 0.047
Brian McCann 0.298 0.330 -0.032 Marcell Ozuna 0.343 0.300 0.043
Torii Hunter 0.323 0.355 -0.032 Lonnie Chisenhall 0.396 0.359 0.037
Joe Mauer 0.308 0.340 -0.032 Scooter Gennett 0.355 0.320 0.035
Jimmy Rollins 0.320 0.352 -0.032 Marlon Byrd 0.344 0.309 0.035
Brian Roberts 0.304 0.334 -0.030 Giancarlo Stanton 0.397 0.363 0.034
Buster Posey 0.326 0.352 -0.026 Hunter Pence 0.359 0.325 0.034

A general pattern I notice, having worked with this model for a while now, is that some players seem to give the model trouble and have a disproportionate tendency to appear on this list from year to year.  A few of those players appear here… more on that later.

Partly for that reason, I wouldn’t necessarily say to “buy low” on the guys on the left, or “sell high” on the guys on the right; although you can if you want.  I won’t address every player, but I have some scattered comments:

  • For readers who prefer OPS, .020 wOBA translates to about .050 OPS, on the margin.
  • .397 predicted for Teixeira?  Not sure where that came from…
  • Poor Segura.  All things considered, I think nobody deserves a big second half more than he does.
  • Whatever happened to Casey McGehee’s power?  The guy once hit 23 home runs in a season, but now has an ISO of .073, with surprisingly low fly ball distance.
  • Although Chisenhall’s breakout is less impressive once you take out what the model thinks is luck, it still represents a substantial improvement.
  • Chris Davis is sort of the reverse of Chisenhall.  Adding back in what the model thinks has been bad luck, he’s still way down from what he did last year, but not nearly as disappointing as he probably has been to many owners thus far.

As mentioned, certain players do seem to over/underperform the model somewhat consistently; the same way we think some pitchers are consistently better or worse than their FIP.  With 4.5 years of data now to work with, however, I think I can make educated guesses about which players systematically deviate from the model’s predictions.  I’ll term this deviation the “player fixed effect.”

(Requiring at least 1000 PA from 2010 through 2014 first half)

Model loves too much Model loves too little
Name Player FE estimate (wOBA) Name Player FE estimate (wOBA)
Brian Roberts -0.033 Wilson Betemit 0.032
Todd Helton -0.026 Brandon Moss 0.032
Jean Segura -0.026 Ryan Sweeney 0.028
Jose Lopez -0.025 Mike Trout 0.027
Mark Teixeira -0.025 Peter Bourjos 0.026
Russell Martin -0.024 Matt Carpenter 0.025
Darwin Barney -0.023 Brandon Belt 0.025
Chris Getz -0.023 Melky Cabrera 0.025
Jimmy Rollins -0.021 Carlos Ruiz 0.024
Jason Bay -0.020 Chris Johnson 0.024
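For concreteness, the fixed effect here is essentially each player’s PA-weighted average residual, wOBA minus xWOBA.  A sketch with made-up rows (the row format and weighting are my own simplification, not the exact estimation used for the table):

```python
from collections import defaultdict

def player_fixed_effects(rows, min_pa=1000):
    """rows: (player, pa, woba, xwoba) tuples, one per player-season.

    Returns the PA-weighted mean of (wOBA - xWOBA) for each player
    meeting the plate-appearance minimum."""
    acc = defaultdict(lambda: [0.0, 0])
    for player, pa, woba, xwoba in rows:
        acc[player][0] += (woba - xwoba) * pa
        acc[player][1] += pa
    return {p: s / n for p, (s, n) in acc.items() if n >= min_pa}
```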

Comments:

  • Again, .020 wOBA is equivalent to about .050 OPS, on the margin.
  • Taking out their apparent fixed effect, Teixeira is only underperforming his xWOBA by about .020, and Brian Roberts is actually doing about par.
  • On the reverse side, Mike Trout’s “adjusted” xWOBA jumps up to .408, and it probably shouldn’t surprise us that he’s outperforming even that, since he’s Mike Trout.  And although Giancarlo Stanton misses the Top 10 cutoff above, his apparent fixed effect of +.022 would rank 11th; so his “adjusted” xWOBA is more like .385.
  • Yasiel Puig (.058) would also be on the list of “positive fixed effects” if we relaxed the PA requirement (he has 826 PA during this time).  And Matt Adams (~.040) might be well on his way to that list, although he has even fewer plate appearances than Puig.
  • I don’t really have good explanations/know any common themes for players with negative fixed effects.  Maybe readers can help?
  • For Trout, home runs are pretty clearly the area where the model underestimates him.  In any given season (2010-2014), he hits about twice as many HR as the model thinks he should in the “raw” prediction.
  • And Trout’s not the only “HR rate defier,” either; just the most salient.  In general, the model has never done as well with home runs as it does with singles, doubles, and triples.  It seems there are other important determinants of home run hitting that really should be in the model, but currently are not.  Intuitively, I sort of would like velocity and angle of the ball off the bat, but so far have not found a good data source to actually include these.  (Maybe that will change in the coming years as MLBAM releases “Hit F/X” style data?)  Until then, reader suggestions are also super welcome here.

And now, finally, for the other usage: here’s a partial list of players who have taken either a pronounced step forward or back this season, relative to established norms.

2014 “Decliners” 2014 “Improvers”
Name Career wOBA 2014 wOBA 2014 xWOBA Name Career wOBA 2014 wOBA 2014 xWOBA
Nick Swisher 0.352 0.285 0.305 Michael Brantley 0.324 0.394 0.404
Joe Mauer 0.373 0.308 0.340 Lonnie Chisenhall 0.328 0.396 0.359
Allen Craig 0.350 0.289 0.309 Seth Smith* 0.334 0.389 0.356
Billy Butler 0.352 0.300 0.309 Victor Martinez 0.362 0.416 0.422
Evan Longoria 0.365 0.315 0.323 Jonathan Lucroy 0.342 0.383 0.354
Domonic Brown 0.315 0.267 0.267 Anthony Rizzo 0.342 0.382 0.382
Chris Davis 0.351 0.306 0.353 Nelson Cruz 0.356 0.393 0.380
Matt Holliday* 0.385 0.342 0.318 Jose Altuve 0.319 0.356 0.325
Jean Segura 0.299 0.256 0.305 Brian Dozier 0.311 0.344 0.362
David Wright 0.377 0.335 0.305 Kyle Seager 0.334 0.367 0.344
Buster Posey 0.366 0.326 0.352 Dee Gordon 0.297 0.329 0.318
Shin-Soo Choo 0.369 0.333 0.346 Alcides Escobar 0.284 0.312 0.300
Dustin Pedroia 0.356 0.325 0.337 Casey McGehee 0.321 0.345 0.277
Jed Lowrie 0.327 0.297 0.305
Jay Bruce 0.343 0.315 0.326

* – To avoid inflation from Coors Field, for these players I’ve taken the total from 2011-13 seasons only

Comments:

  • At least in-sample, Brantley‘s breakout seems to be pretty much entirely justified.  Of course this doesn’t mean that he won’t regress somewhat, but if I were to guess, I’m a little more optimistic than ZiPS and Steamer (which currently project .341 and .333 RoS, respectively).  Similar deal for some others.
  • “Yikes” for Billy Butler and Domonic Brown, whose declines this season seem (at least in-sample) to be entirely justified.
  • I’m not sure why the model dislikes Casey McGehee so much.  Obviously his fly ball distance (mentioned earlier) isn’t doing him any favors, and his .369 first-half BABIP is probably unsustainable.  Still, .277 xWOBA?  Seems harsh.

As with any fantasy advice, don’t take any of this too literally…  Take it or leave it as you see fit.

Lastly, although I hyped this piece from a fantasy perspective, the overall goal remains that I would love to see more work done to de-luck hitter stats, the way people so often do for pitchers.  (FIP for pitchers, and xWOBA or xWRC+ for hitters: that’s the dream.)

Reader thoughts on how to improve the model, or requests for players not already mentioned?


Sabathia’s Decline = Lincecum’s Decline? Specific Patterns for Velocity Loss?

CC Sabathia‘s recent decline is looking more and more like Tim Lincecum‘s also-much-scrutinized decline.  To make the point, here are some key year-by-year stats for each.

Lincecum
ERA FIP FBv K/9 BB/9 BABIP LD% LOB% HR/FB%
2009 2.48 2.34 92.4 10.42 2.72 0.282 19.2 75.9 5.5
2010 3.43 3.15 91.3 9.79 3.22 0.310 19.5 76.5 9.9
2011 2.74 3.17 92.3 9.12 3.57 0.281 19.1 78.5 8.0
2012 5.18 4.18 90.4 9.19 4.35 0.309 23.8 67.8 14.6
2013 4.37 3.74 90.2 8.79 3.46 0.300 23.1 69.4 12.1
2014* 9.90 6.24 89.9 10.80 0.90 0.393 37.5 48.1 40.0
Sabathia
ERA FIP FBv K/9 BB/9 BABIP LD% LOB% HR/FB%
2009 3.37 3.39 94.2 7.71 2.62 0.277 19.8 71.4 7.4
2010 3.18 3.54 93.5 7.46 2.80 0.281 15.1 75.6 8.6
2011 3.00 2.88 93.8 8.72 2.31 0.318 23.1 77.0 8.4
2012 3.38 3.33 92.3 8.87 1.98 0.288 21.1 71.6 12.5
2013 4.78 4.10 91.1 7.46 2.77 0.308 22.3 67.4 13.0
2014* 6.63 4.82 89.1 9.95 1.42 0.308 21.1 58.8 38.5
* – as of 4/14/14

The velocity loss is perhaps the most publicized common aspect.  Yet, while acknowledging that year 2 of Sabathia’s decline is only about 10% (19 innings) of the way in, it’s shaping up as though there may be many other commonalities:

  • ERA above FIP when it wasn’t the case before
  • Sudden (and permanent?) spikes in HR/FB%
  • An apparent loss in ability to strand runners
  • (BABIP might also be trending up for each, but this is harder to tell, due to the regular noisiness of year-to-year BABIP.  Lincecum also saw his LD% spike, which might not be true for Sabathia.)

Having also been thinking about Nathan Eovaldi lately — who has both elite fastball velocity and an apparent ability to suppress HR/FB (7.0% in 279.2 IP) — I couldn’t help but wonder if these things are systematically related.

I remember there was some attention paid to these things when SIERA was being introduced.  But it turns out most of the attention there was on strikeouts, rather than velocity.  Obviously velocity and strikeouts are positively related.  But (1) Lincecum and Sabathia are actually still pretty good/decent at strikeouts, and this hasn’t prevented their recent struggles; (2) Eovaldi has only elite velocity, and pretty pedestrian strikeouts.  So the real question is: Does velocity itself matter, in addition to strikeouts?

(In the subsequent analysis, I’ll be looking primarily at effects on HR/FB%, LOB%, and ERA-FIP, since those seem to be the problems plaguing both of the high-profile cases that prompted this line of thinking.  But there’s otherwise no reason to think those are the only intermediate outcomes where velocity may matter directly.)

(Also, it turns out that great velocity isn’t required for HR/FB suppression: a look at the leaderboard in recent years includes some notable non-flamethrowers like Stults, Weaver, and Fister.  Obviously the ballpark matters a lot, too.  But there are also hard throwers near the top, and overall I remained intrigued enough to keep digging.)

Realistically, if there is something there, Sabathia and Lincecum are probably on the more extreme end of the spectrum.  Probably there have been other guys who lost similar velocity but that we didn’t hear as much about because they were better able to adapt or otherwise did not see their overall results decline so dramatically.

What do the results indicate?  By and large, it does appear that velocity matters directly, in addition to strikeouts.  (Regression results below)

       HR/FB%                      LOB%                        ERA-FIP
       OLS      FE       FD       OLS       FE       FD        OLS      FE       FD
K/9    -.122**  .533***  .189     1.118***  .445**   .509*     .037***  .132***  .151
FBv    -.124*** -.841*** -.656*** .140*     .953***  1.155***  -.022**  -.155*** -.155***
N      1677     1677     1085     1677      1677     1085      1677     1677     1085
R2     0.015    0.511    0.009    0.125     0.575    0.0265    0.008    0.53     0.029

* = significant at 10%; ** = significant at 5%; *** = significant at 1%

I use 3 different estimation techniques for each outcome:

  • Plain-old OLS
  • Fixed effects (“FE”): estimates results within player, essentially comparing each pitcher’s own years of higher velocity/strikeouts against his years of lower velocity/strikeouts
  • First difference (“FD”): the outcome is now the one-year change in HR/FB% (etc.) for Pitcher A, while the explanatory variables are the one-year change in K/9 and FBv for Pitcher A

Of these, methods 2 and 3 are probably more convincing, since they compare within the same player, so anything else that’s distinct to the player (but invariant over time) gets washed out.  OLS doesn’t do this, and instead mostly compares across players, who may differ in many ways besides strikeouts and velocity.  As an exaggerated illustration: if our full sample consisted only of Tim Hudson and Felix Doubront, the fact that Hudson is altogether a better pitcher, but sort of a “pitch-to-contact soft tosser,” could make it look like strikeouts/velocity are bad under OLS, even if more strikeouts/more velocity is actually good for either player.
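For intuition, here is what one-regressor versions of the FE (within) and FD estimators look like.  With noiseless made-up data where y = 2x plus a player-specific intercept, both recover the slope of 2 exactly:

```python
def fe_slope(panel):
    """Within (fixed-effects) slope of y on x, single regressor.

    panel: {player: [(x, y), ...]} with one entry per season.
    Demeans x and y within each player, then runs pooled OLS."""
    num = den = 0.0
    for obs in panel.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            num += (x - mx) * (y - my)
            den += (x - mx) ** 2
    return num / den

def fd_slope(panel):
    """First-difference slope: regress year-over-year change in y
    on year-over-year change in x, within each player."""
    num = den = 0.0
    for obs in panel.values():
        for (x0, y0), (x1, y1) in zip(obs, obs[1:]):
            dx, dy = x1 - x0, y1 - y0
            num += dx * dy
            den += dx * dx
    return num / den
```

Player-level intercepts drop out of both estimators, which is exactly the “washing out” described above.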

Some technical notes:

  • Sample includes player-seasons between 2010 and 2013 with at least 30 innings pitched
  • Standard errors (not displayed) are clustered by player
  • Don’t read too much into the fact that “FE” always gives the highest R2.  Most of this comes from all the “specific player indicators” that are now present, rather than from the “within-player” aspect, which is the actual point of using FE
  • Starters and relievers are both included.  Part of me prefers to look at just starters, but this allows for many more observations and more statistical power.  I’m also not controlling for starter/reliever status, so you’d need to believe that role only matters through its effects on strikeouts and velocity.

You can maybe argue that there are other explanatory variables that should have been included, or perhaps that one needs to be more judicious about the sample.  (I must admit that I threw this together fairly quickly.)  But even if the current analysis is somewhat imperfect, it appears at least plausible that velocity matters directly (for various outcomes), in addition to the rate of strikeouts.

It’s a little too bad, because coming into this season I’d thought there was a decent chance of a Sabathia bounceback, given his partial velocity rebound as 2013 went along.  But that seems to have been only temporary.  While he still may wind up bouncing back when all is said and done, I’m definitely less optimistic than I was a week ago.  Will CC be this year’s version of 2013 Lincecum, who might even tease by FIP/xFIP but continue to underwhelm?


Does a Velocity *Increase* Also Predict Injury? (A Primer)

Leading into the currently-young 2014 season, one of the biggest stories in baseball was the rash of pitcher injuries — with UCL injuries and Tommy John surgery seeming unusually frequent this year.

In Patrick Corbin’s case, in particular, my immediate thought was “Hm, I recall he increased his velocity last year”… which of course led me to wonder whether the velocity increase actually caused his injury in some way.

I don’t know how common this line of thinking is.  So far as I can tell, the discussion of velocity and injury more frequently goes the other way, that a velocity decrease may be the first sign that something is wrong.  Or maybe this is actually a more common suspicion than I realize.  If nothing else, it seems to merit a closer look/increased discussion.

The logic here is simple: for most players, velocity only seems to decrease from year to year (although it may increase within a season).  So when a player bucks the usual pattern and increases velocity between years, you have to wonder what exactly he did.  At least some of the time, guys may be cheating a little (doing something not entirely sound, mechanically) to get that extra “oomph.”  This, of course, is where the injury part enters.  If indeed some guys are cheating, maybe it’s only a matter of time before they blow out an elbow (or shoulder).

So can a velocity increase be a sign that a guy’s cheating, and thus a future injury risk?  Answering this thoroughly takes more time and effort than I can probably spare this week, but I thought I’d at the very least get some reader thoughts.  Eventually I hope to look at guys from many different seasons, comparing the injury rate of guys who did vs. did not see a notable velocity increase the preceding season.  (I’ll be using this list of TJ patients, which seems fairly complete.  Probably it would be better to add shoulder injuries, too, if someone has a list.)

For those curious, here are the 2012 and 2013 velocities for the five big names of this year’s “Tommy John cohort.”  Unfortunately there’s hardly anything that can be taken away from such a small list.  Harvey and Corbin had velocity increases (consistent with the conjecture), while the others did not.  But Beachy was coming off a previous Tommy John surgery performed in 2012, while Medlen’s 2012 was partially in the bullpen, so it’s not exactly clear what to make of their 2012 vs. 2013 velocities.

Name 2012 velo 2013 velo Change
Matt Harvey 94.7 95.8 1.1
Patrick Corbin 90.9 92.1 1.2
Brandon Beachy 91.0 90.2 -0.8
Kris Medlen 90.0 89.4 -0.6
Jarrod Parker 92.4 91.5 -0.9

(Overall FB velocities in this table.  Maybe it would have been better to just compare 4-seam vs. 4-seam, but I didn’t want to have to worry about composition for now.)
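When I do get to the full analysis, the core comparison could be sketched like this (the threshold and the sample rows are purely hypothetical placeholders):

```python
def injury_rates_by_velo_change(pitchers, threshold=1.0):
    """pitchers: (name, velo_change_mph, injured_next_year) tuples.

    Splits the sample at an (illustrative) velocity-gain threshold and
    returns the injury rate for the gainers vs. everyone else."""
    gainers = [inj for _, dv, inj in pitchers if dv >= threshold]
    others = [inj for _, dv, inj in pitchers if dv < threshold]
    rate = lambda grp: sum(grp) / len(grp) if grp else float("nan")
    return rate(gainers), rate(others)
```

If the conjecture holds, the first rate should come out meaningfully higher than the second over a large enough sample.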

It might be a few weeks before I myself have time for a closer look.  BUT, if anyone else wants to spearhead the effort sooner, please feel free to do so, and I’m of course happy to help.  As always, reader thoughts and feedback are welcome!


xHitting (Part 2): Improved Model, Now with 2013 Leaders/Laggards

Happy holidays, all.  It took me a while, but I finally have the second installment of xHitting ready.  First off, thank you to all those who read/commented on the first piece.  For those who didn’t get a chance to read it, the goal here is to devise luck-neutralized versions of popular hitter stats, like OPS or wOBA.  A main extension over existing xBABIP calculators is that this approach offers an empirical basis to recover slugging and ISO, by estimating each individual hit type.

I’ve returned today with an improved version of the model.  Highlights:

  • One more year of data (now 2010-2013)
  • Now includes batted-ball direction (all player-seasons with at least 100 PA)
  • FB distance now recorded for all player-seasons with at least 100 PA

(There’s no theoretical reason for the 100 PA cutoff, only that I was grabbing some of the new data by hand and couldn’t justify the time to fetch literally every single player.)

I have also relaxed the uniformity of peripherals used for each outcome.  At least one reader asked for this, and after thinking about it a while, I decided I agree more than I disagree.  The main advantage of imposing uniformity was that it ensures the predicted rates (when an outs model is also included) sum to 100%.  But it is true that there are certain interactions or non-linearities that are important for some outcomes, but not others.  Including these where they don’t fully belong comes at a cost to standard errors/precision and to intuitive interpretation.  To ensure the rates still sum to 100%, there’s no longer an explicit ‘outs’ model; outs are simply assumed to be the remainder.
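Treating outs as the remainder is trivial to implement; a minimal sketch (the hit-type keys are just illustrative labels):

```python
def with_outs(pred):
    """pred: dict of predicted rates for each hit type.

    Outs are assumed to be the leftover probability mass,
    so the returned rates always sum to 1."""
    rates = dict(pred)
    rates["OUT"] = 1.0 - sum(pred.values())
    return rates
```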

For those curious, below I display regression results for each outcome and its respective peripherals.  You can otherwise skip below if these are not of direct interest.

(The sample includes all player-years with at least 100 plate appearances between the 2010 and 2013 MLB seasons.  Park factors denote outcome-specific park factors available on FanGraphs.  Robust standard errors, clustered by player, are in parentheses; *** p<0.01, ** p<0.05, * p<0.1)

The new variables seem to help, as each outcome is now modeled more accurately than before (by either R2 or RMSE).  For comparison, here are the R2’s of the original specification:

  • 0.367 for singles rate
  • 0.236 for doubles rate
  • 0.511 for triples rate
  • 0.631 for HR rate

Something else I noticed: for balls that stay “inside the fence,” both pull/opp and actual side of the field matter.  Consider singles: the ball needs to be thrown to 1st base (right side of infield) specifically.  Thus an otherwise-equivalent ball hit to the left side is not the same as one hit to the right side, since the defensive play is harder to make from the left side.  Similarly, hitting the ball to left field is less conducive for triples than hitting the ball to right field.

But hitting the ball to the left side as a lefty is not the same as hitting it there as a righty, since one group is “pulling” while the other group is “slapping.”  The direction x handedness interactions help account for this.
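The exact regressors aren’t reproduced here, so purely as an illustration, one plausible one-hot encoding of those direction x handedness interactions:

```python
from itertools import product

def interaction_dummies(field_side, bats):
    """One-hot dummies for batted-ball field side x batter handedness.

    field_side in {"left", "center", "right"}; bats in {"L", "R"}.
    Exactly one dummy is 1 for any valid input."""
    return {f"{side}_x_{hand}": int(field_side == side and bats == hand)
            for side, hand in product(("left", "center", "right"), ("L", "R"))}
```

This lets the model give, say, a lefty’s ball to left field (“slapped”) a different coefficient than a righty’s ball to left field (“pulled”).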

How well do the predicted rates do in forecasting?  For singles, doubles, and triples, the predicted rates do unambiguously better than realized rates in forecasting next season’s rates.  Things are a little less clear for home runs, which I will expand on below.

Although predicted HR rate shows a slight edge in Table 1, the pattern often reverses (for HR only) if you use a different sample restriction — say requiring 300 PA in the preceding season.  (For other outcomes, the qualitative pattern from Table 1 still holds even under alternative sample restrictions.)

So home runs appear to be a potential problem area.  What should we do when we need HR to compute xAVG/xSLG/xOPS/xWOBA, etc.?  Should we:

  1. Use predicted HR anyway?
  2. Use actual HR instead?
  3. Use some combo of actual and predicted HR?

Empirically there is a clear answer for which choice is best.  But before getting to that, let’s take a look at whether predicted home-run rate tells us anything at all in terms of regression.  That is, if you’ve been hitting HR’s above/below your “expected” rate, do you tend to regress toward the prediction?

The answer to this seems to be “yes,” evidenced by the negative coefficient on ‘lagged rate residual’ below.

So, although realized HR rate is sometimes a better standalone forecaster of future home runs, predicted HR rate is still highly useful in predicting regression.  Making use of both, it seems intuitively best to use some combo of actual and predicted HR rate for forecasting.

This does, in fact, seem to be the best option empirically.  And this is true whether your end outcome of interest is AVG, OBP, SLG, ISO, OPS, or wOBA.

Observations:

  • (Option 1 = predicted HR only; Option 2 = actual HR only; Option 3 = combo)
  • Whether you use option 1, 2, or 3, xAVG and xOBP make better forecasters than actual past AVG or OBP
  • Option 1 does not do well for SLG, ISO, OPS, or wOBA
  • (This was not the case in the previous article, but the results there used a somewhat funky sample, with fly ball distance recorded for only a partial list of players)
  • Option 2 “saves” things for xOPS and xWOBA, but still isn’t best for SLG or ISO
  • Option 3 makes the predicted version better for any of AVG, OBP, SLG, ISO, OPS, or wOBA

End takeaways:

  • The original premise that you can use “expected hitting,” estimated from peripherals, to remove luck effects and better predict future performance seems to be true; but you might need to make a slight HR adjustment.
  • The main reason I estimate each hit type individually is for the flexibility it offers in subsequent computations.  Whether you want xAVG, xOPS, xWOBA, etc., you have the component pieces you need.  This would not be true if I estimated just a single xWOBA, since some users prefer xOPS or xISO.
  • A major extension over existing xBABIP methods is that this offers an empirical basis to recover xSLG.  The previous piece actually provides more commentary on this.
  • Natural next steps are to test partial-season performance, and also whether projection systems like ZiPS can make use of the estimated luck residuals to become more accurate.

Finally, I promised to list the leading over- and underachievers for the 2013 season.  By xWOBA, they are as follows:

Overachievers (250+ PA) Underachievers (250+ PA)
Name 2013 wOBA 2013 xWOBA Difference Name 2013 wOBA 2013 xWOBA Difference
Jose Iglesias 0.327 0.259 0.068 Kevin Frandsen 0.286 0.335 -0.049
Yasiel Puig 0.398 0.338 0.060 Alcides Escobar 0.247 0.296 -0.049
Colby Rasmus 0.365 0.315 0.050 Todd Helton 0.322 0.369 -0.047
Ryan Braun 0.370 0.321 0.049 Ryan Hanigan 0.252 0.296 -0.044
Ryan Raburn 0.389 0.344 0.045 Darwin Barney 0.252 0.296 -0.044
Mike Trout 0.423 0.379 0.044 Edwin Encarnacion 0.388 0.429 -0.041
Junior Lake 0.335 0.292 0.043 Josh Rutledge 0.281 0.319 -0.038
Matt Adams 0.365 0.323 0.042 Wilson Ramos 0.337 0.374 -0.037
Justin Maxwell 0.336 0.295 0.041 Yuniesky Betancourt 0.257 0.294 -0.037
Chris Johnson 0.354 0.314 0.040 Brian Roberts 0.309 0.345 -0.036

Comments/suggestions?


xHitting: Going beyond xBABIP (part I)

For a few years, it’s struck me as unusual that pitching and hitting metrics are asymmetric.  If the metrics we use to evaluate one group (say FIP, or wRC+) are so good, why don’t we apply the same ideas to the other?

One issue is that we’re not used to evaluating pitchers on an OPS-type basis, and similarly we’re not used to evaluating hitters on an ERA basis.  Fine.  But there’s a bigger issue: Why do pitching metrics put so much more emphasis on the removal of luck?

While most sabermetricians are aware of BABIP, and recognize the pervasive impacts it can have on a batting line, attempts to (precisely) adjust hitter stats for BABIP are surprisingly uncommon.  While there do exist a few xBABIP calculators, these haven’t yet caught on en masse like FIP.  And xBABIP doesn’t appear on player pages in either FanGraphs or Baseball Prospectus.

xBABIP itself isn’t even the end goal.  What you probably really want is xAVG/xOBP/xSLG, etc.  Obtaining these is a bit cumbersome when you need to do the conversions yourself.

Moreover, it strikes me that xBABIP cannot be converted to xSLG without some ad hoc assumptions.  Let’s say you conclude a player would have gained or lost 4 hits under neutral BABIP luck.  What type of hits are those?  All singles?  2 singles and 2 doubles?  1 single, 2 doubles, 1 triple?  The exact composition of hits gained/lost affects SLG.  Or maybe you assume ISO is unaffected by BABIP, but this too is ad hoc.
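To make the ambiguity concrete, here is a toy calculation (the batting line is hypothetical: 120 hits and 180 total bases in 500 AB) showing that the same 4 extra hits move SLG by different amounts depending on their composition, while AVG is unchanged:

```python
ab = 500
hits, total_bases = 120, 180          # hypothetical baseline batting line

# Scenario A: the 4 "lucky" hits are all singles
slg_a = (total_bases + 4 * 1) / ab

# Scenario B: the 4 hits are 2 singles and 2 doubles
slg_b = (total_bases + 2 * 1 + 2 * 2) / ab

avg_either_way = (hits + 4) / ab      # AVG is identical in both scenarios
print(round(slg_a, 3), round(slg_b, 3), round(avg_either_way, 3))
# → 0.368 0.372 0.248
```

Same BABIP adjustment, same AVG, different SLG: without an empirical model of which hit types were gained or lost, any xSLG figure rests on an arbitrary choice.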

At least to me, whenever a hitter performs better/worse than expected, we really care to know two things:

  1. Is it driven by BABIP?
  2. If so, what is the luck-neutral level of performance?

As I’ve attempted to illustrate, answering #2 is not so easy under existing methods.  (Nor do people always even attempt to answer it, really.)  Even answering #1 correctly takes a little bit of effort.  (“True talent” BABIP changes with hitting style, so it isn’t always enough just to compare current vs. career BABIP.  And then there are players with insufficient track record for career BABIP to be taken at face value.)

Compare this to pitchers.  When a pitcher posts a surprisingly good/bad ERA, we readily consult FIP/xFIP/SIERA: specific values, conveniently listed on sites like FanGraphs.  So why not for hitters?

Here I attempt to help fill this gap.  The approach is to map a hitter’s peripheral performance to an entire distribution of hit outcomes.  These “expected” counts of singles, doubles, triples, home runs, and outs can then be used to compute “expected” versions of AVG, OBP, SLG, OPS, wOBA, etc.

Recovering xAVG and xOBP isn’t that different from current xBABIP-based approaches.  The main extension is that, unlike xBABIP, this provides an empirical basis to recover xSLG, and also xWOBA.

Steps:

  1. Calculate players’ rates of singles, doubles, triples, home runs, and outs among balls in play.  (Unlike some other BABIP settings, I count home runs as “balls in play” to estimate an expected number.)
  2. Regress each rate separately on a common set of peripherals.  You’ll now have predicted rates of each for each player.   (Keeping the explanatory variables common throughout ensures the rates sum to 100%.)
  3. Multiply by the number of balls in play (again counting home runs) to get expected counts of singles, doubles, triples, home runs, and outs.
  4. Use these to compute expected versions of your preferred statistics.
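Steps 1–3 can be sketched as follows.  This is a minimal illustration on simulated data (the peripherals and rates below are randomly generated, not real player data); it also verifies the note in step 2 that a common design matrix with an intercept forces the five predicted rates to sum to 100%:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # simulated player-years

# Toy peripherals: intercept, LD%, GB%, Speed score (made-up ranges)
X = np.column_stack([
    np.ones(n),
    rng.uniform(5, 30, n),    # LD%
    rng.uniform(30, 55, n),   # GB%
    rng.uniform(1, 8, n),     # Spd
])

# Step 1: observed rates of 1B/2B/3B/HR/out among balls in play.
# Dirichlet draws guarantee each row sums to exactly 1.
rates = rng.dirichlet([15, 5, 1, 4, 60], size=n)

# Step 2: one least-squares regression per outcome, common regressors.
betas = np.linalg.lstsq(X, rates, rcond=None)[0]
pred_rates = X @ betas

# Because the outcomes sum to 1 and the intercept is in the column
# space of X, the five fitted rates sum to 1 for every player.
print(pred_rates.sum(axis=1)[:3])

# Step 3: scale by each player's balls in play (HR included) to get
# expected counts of singles, doubles, triples, home runs, and outs.
bip = rng.integers(300, 500, n)
expected_counts = pred_rates * bip[:, None]
```

The summing-to-one property is exact (up to floating-point error), which is why keeping the explanatory variables identical across the five regressions matters.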

What explanatory peripherals are appropriate?  Initially I’ve used:

  • Line drive rate, ground ball rate, flyball rate, popup rate
  • Speed score
  • Flyball distance (from BaseballHeatMaps.com), to approximate power
  • Speed * ground ball rate
  • Flyball distance * flyball rate

These explanatory variables differ somewhat from those in the xBABIP formula linked earlier.  The main distinctions are adding flyball distance (think Miguel Cabrera vs. Ben Revere) and using Speed score instead of IFH%.  (IFH% already embeds whether the ball went for a hit.  Certainly in-sample this will improve model fit, but it might not be good for out-of-sample use.)

Regression results:

Variable                Singles    Doubles    Triples    HR         Outs
Spd                     -0.0177     0.0076     0.0040     0.0018     0.0043
FB Dist/1000             0.0608     0.6044     0.0193     0.9392    -1.6238
FB Dist missing          0.0111     0.1457     0.0057     0.2764    -0.4389
(Spd*GB%)/1000           0.4882    -0.1059    -0.0279    -0.0295    -0.3249
(FB Dist*FB%)/10000      0.0090    -0.0152    -0.0019     0.0283    -0.0202
LD%                     -0.0019    -0.0058    -0.0077     0.0081     0.0073
GB%                     -0.0063    -0.0066    -0.0077     0.0080     0.0125
FB%                     -0.0066    -0.0061    -0.0077     0.0085     0.0118
IFFB%/100               -0.0417    -0.0070    -0.0010    -0.0127     0.0624
Pitcher dummy           -0.6833    -0.6700    -0.7695     0.8020     1.3205
Constant                 0.7296     0.5235     0.7634    -1.0790     0.0625

Technical notes:

  • These are rates among balls in play (including home runs)
  • Each observation is a player-year (e.g. 2012 Mike Trout)
  • I’ve used 2010-2012 data for these regressions
  • Currently I’ve only grabbed flyball distance for players on the leaderboard at BaseballHeatMaps.  This is usually about 300 players per year, or most of the “everyday regulars.”  (Fear not, Ben Revere/Juan Pierre/etc. are included.)  The remaining cases get an indicator for ‘FB Dist missing.’
  • LD%, GB%, FB%, and IFFB% are coded so that 50% = 50, not 0.50.
  • Pitcher dummy = 1 if LD% + GB% + FB% = 0.  Initially I haven’t thrown out cases of pitcher hitting, nor other instances of limited PA.
  • Notice the interaction terms.  The full impact of GB% depends both on GB% and Speed; the full impact of FB% depends on both FB% and FB distance; etc.  So don’t just look at Speed, GB%, FB%, or FB Distance in isolation.
  • Don’t worry that the coefficients on the pitcher dummy “look” a bit funny for HR rate and Outs rate.  (Remember that these cases also have LD%=0, GB%=0, and FB%=0.)  On average, pitchers’ predicted HR rate is 0.01% and their predicted outs rate is 94%.
  • Strictly speaking, these are backwards-looking estimators (as are FIP and its variants), but they might well prove useful in forecasting.
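As an illustration of the unit coding above, here is the singles-rate equation applied to one player.  The coefficients come from the first column of the table; the player is entirely hypothetical (Spd 5.0, 20% LD, 45% GB, 35% FB, 10% IFFB, 280 ft average flyball distance, non-pitcher with distance data available):

```python
spd, fb_dist = 5.0, 280.0
ld, gb, fb, iffb = 20.0, 45.0, 35.0, 10.0   # percentages coded as 50, not 0.50

singles_rate = (
    -0.0177 * spd
    + 0.0608 * (fb_dist / 1000)
    + 0.0111 * 0                      # FB Dist not missing
    + 0.4882 * (spd * gb / 1000)      # interaction: speed matters more for GB hitters
    + 0.0090 * (fb_dist * fb / 10000) # interaction: distance matters more for FB hitters
    - 0.0019 * ld - 0.0063 * gb - 0.0066 * fb
    - 0.0417 * (iffb / 100)
    - 0.6833 * 0                      # not a pitcher
    + 0.7296                          # constant
)
print(round(singles_rate, 3))  # → 0.220, i.e. singles on ~22% of balls in play
```

Note how the interaction terms work in practice: the full effect of this player’s GB% runs through both the -0.0063 main term and the +0.4882 Spd*GB% term.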

I next calculate xAVG, xOBP, xSLG, xOPS, and xWOBA.  For now, I’ve simply taken BB and K rates as given.  (xBABIP-based approaches often seem to do the same.)
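For concreteness, a sketch of that last step.  The expected hit counts here are made up, the linear weights are approximate (roughly the published FanGraphs values of this era), and the denominators are simplified (HBP and SF ignored):

```python
# Hypothetical expected counts from step 3
x1b, x2b, x3b, xhr = 100.0, 28.0, 3.0, 18.0
bb, pa = 50, 600                     # BB taken as given; K is folded into the outs
ab = pa - bb                          # simplified: ignoring HBP and SF
xhits = x1b + x2b + x3b + xhr

xavg = xhits / ab
xobp = (xhits + bb) / pa
xslg = (x1b + 2 * x2b + 3 * x3b + 4 * xhr) / ab

# Approximate linear weights for uBB/1B/2B/3B/HR
xwoba = (0.69 * bb + 0.89 * x1b + 1.27 * x2b + 1.62 * x3b + 2.10 * xhr) / pa

print(round(xavg, 3), round(xobp, 3), round(xslg, 3), round(xwoba, 3))
# → 0.271 0.332 0.431 0.336
```

Having the five components makes every one of these a one-line computation, which is the practical payoff of estimating each hit type separately rather than a single aggregate.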

Early results are promising, as “expected” versions of AVG, OBP, SLG, OPS, and wOBA all outperform their unadjusted versions in predicting next-year performance.  (At least for the years currently covered.)

Which players deviated most from their xWOBA?  Here are the leaders/laggards for 2012, along with their 2013 performance:

Leaders

Name                2012 wOBA   2012 xWOBA   Difference   2013 wOBA
Brandon Moss          0.402       0.311        0.091        0.369
Giancarlo Stanton     0.405       0.332        0.073        0.368
Will Middlebrooks     0.357       0.285        0.072        0.300
Chris Carter          0.369       0.298        0.071        0.337
John Mayberry         0.303       0.238        0.065        0.298
Torii Hunter          0.356       0.293        0.063        0.346
Jamey Carroll         0.299       0.244        0.055        0.237
Cody Ross             0.345       0.291        0.054        0.326
Melky Cabrera         0.387       0.333        0.054        0.303
Kendrys Morales       0.339       0.286        0.053        0.342

Laggards

Name                2012 wOBA   2012 xWOBA   Difference   2013 wOBA
Josh Harrison         0.274       0.355       -0.081        0.307
Ryan Raburn           0.216       0.290       -0.074        0.389
Nick Hundley          0.205       0.265       -0.060        0.295
Jason Bay             0.240       0.299       -0.059        0.306
Eric Hosmer           0.291       0.349       -0.058        0.350
Gerardo Parra         0.317       0.369       -0.052        0.326
Daniel Descalso       0.278       0.328       -0.050        0.284
Jason Kipnis          0.315       0.365       -0.050        0.357
Rod Barajas           0.272       0.322       -0.050          —
Cameron Maybin        0.290       0.339       -0.049        0.209

Is performance perfect?  Obviously not.  The model does quite well for some players, moderately well for others, and not so well for the rest.  Clearly this is not the end-all solution for xHitting.

Some future work that I have in mind:

  • A still more complete set of hitting peripherals.  I’m thinking of park factors, batted ball direction, and possibly others.
  • Testing partial-season performance
  • Comparing results against projection systems like ZiPS and Steamer

Otherwise, my main hope from this piece is to stimulate greater discussion of evaluating hitters on a luck-neutral basis.  Simply identifying certain players’ stats as being driven by BABIP is not enough; we really should give precise estimates of the underlying level of performance based on peripherals.  We do this for pitchers, after all, with good success.

Above I’ve contributed my two cents for a concrete method to do this.  A major extension over xBABIP-based approaches is that this offers an empirical basis to recover xSLG and xWOBA.  While the model is far from perfect, even in its current form it generates “expected” versions of AVG, OBP, SLG, OPS, and wOBA that outperform their unadjusted versions in predicting subsequent-year performance.  (And not only for the extreme leaders/laggards listed above.)

Comments and suggestions are obviously welcome!