FanGraphs Baseball

Comments


  1. If I had to guess, I’d say the large variation in LD% owes to a lower base rate. That magnifies the apparent fluctuations.

    Unless I’m making a math logic error.

    Comment by Dan Rozenson — January 16, 2013 @ 3:30 pm
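
One way to formalize the base-rate guess above, assuming balls in play behave like independent trials with a fixed true rate (an idealization, not something established in the thread): for a true rate $p$ over $n$ balls in play, the observed rate has

$$\sigma_{\hat{p}} = \sqrt{\frac{p(1-p)}{n}}, \qquad \frac{\sigma_{\hat{p}}}{p} = \sqrt{\frac{1-p}{np}},$$

so the spread *relative to the rate itself* grows as $p$ shrinks: a rarer event fluctuates more, proportionally, over the same number of balls in play.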

  2. I’d guess that some of it is due to scoring issues as well. There’s been some research showing that hits are more likely to be scored as LDs, and outs are more likely to be classified as something else.

    A “line drive” handled by the shortstop may be hit on the same trajectory as a “ground ball” hit up the middle and handled by the CF.

    Comment by Synovia — January 16, 2013 @ 3:54 pm

  3. The base rate doesn’t have anything to do with correlations.

    Comment by nayr — January 16, 2013 @ 3:58 pm

  4. LD% has a low correlation because it’s the most likely to be subject to measurement error.

    Comment by walt526 — January 16, 2013 @ 4:06 pm

  5. There have also been studies showing that the height of the press box has an effect on the classification of LD/FB.

    Comment by Jacob Smith — January 16, 2013 @ 4:32 pm

  6. A player does not have a .000 batting average from an 0 for 0 performance. I think you meant 0 for 4.

    Comment by KrunchyGoodness — January 16, 2013 @ 4:32 pm

  7. 0 for 5 if you are a Mariners fan.

    Comment by Xeifrank — January 16, 2013 @ 5:00 pm

  8. 0 for 5? In what world does anyone in that offense get 5 AB’s in a game (barring extra innings)?

    You must have meant 0 for 3 ;)

    Comment by Hank — January 16, 2013 @ 5:17 pm

  9. Here’s what I would like to see in one of these studies: grouping players into a handful of “offensive player types” so you get different means for the particular group. Shifting the mean is going to drop all the r^2s (probably), but I think it could offer additional insight. For example, good questions might be “How much do power hitters control HR/FB rates?”, or “How much does BB% fluctuate among ‘speed’ players?”

    Because when you use the whole population of hitters, I think the correlations hold a lot of information that is already obvious. Power metric correlations for a 30 HR Adam Dunn season and a 50 HR Adam Dunn season are going to be high – mashers gon’ mash and slappers gon’ slap, ya know? What I’m trying to say is that certain types of players can only survive by keeping certain metrics not just better than the league average, but better than the average among their (swing-happy, power hitter, leadoff-type, etc etc) peers.

    Comment by kwk9 — January 16, 2013 @ 5:26 pm

  10. Dave answered part of the Michael Bourn conundrum yesterday, but to me this is the other part. If you’re going to pay a guy big money, you want to be able to project with some certainty. It may not make a ton of sense in the abstract, but if you’re the one writing the checks, at least you can be confident in what you’re going to get from Jason Kubel.

    Comment by Paul — January 16, 2013 @ 5:35 pm

  11. LD% … most doubles are liners, aren’t they? And/or likely to be counted that way? Not much correlation in 2B, and that’s not due to either a low base rate or measurement error. Looks like something’s going on here.

    Are 2B rates more consistent in, say, Fenway, where certain balls are highly likely to be two-baggers without regard to fielding?

    Comment by Mr Punch — January 16, 2013 @ 5:51 pm

  12. I think it does when we’re looking at year-to-year correlations, because small fluctuations in either the outcomes or scoring of more common events (balls in play) can cause large percentage changes in less common events (balls in play that are deemed ‘line drives’).

    In other words, in this case, smaller base rate means smaller sample size.

    Comment by Jon L. — January 16, 2013 @ 9:59 pm
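
Putting rough numbers on that point, under the same fixed-true-rate assumption as above (the 500-BIP season and the two rates are illustrative, not figures from the article), here is a minimal Python sketch:

```python
import math

# Illustrative only: compare the noise in observed LD% vs. GB% for a player
# whose true rates never change, over an assumed 500 balls in play.
n = 500
for label, p in [("LD%", 0.21), ("GB%", 0.44)]:
    sd = math.sqrt(p * (1 - p) / n)   # sd of the observed rate
    print(f"{label}: true rate {p:.2f}, sd {sd:.3f}, relative sd {sd / p:.1%}")
```

With these assumed rates, the noise in LD% works out to roughly 9% of the rate itself, versus about 5% for GB%, so even a player with perfectly stable true talent shows proportionally bigger year-to-year swings in LD%.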

  13. I’m guessing another reason why LD%’s correlation is so low relative to GB% and FB%, besides measurement errors, is the sheer quantity of line drives: there just aren’t as many of them.

    Comment by Frag — January 17, 2013 @ 12:25 am

  14. I agree — sample size issues are part of it, and Jon L. makes a great point earlier along those lines. There are also the measurement errors and biases mentioned above.

    But mainly, I think there’s more randomness involved, which to me makes sense when you think about the physics of hitting a line drive (i.e., how precisely you have to contact the ball with the bat to get one).

    I think an analogy might be weighted dice — contact percentage has a very heavy weight leading to a certain roll for each player, but the LD% die can only have a pretty light weight.

    Nice work, by the way, Matt Klaassen. Very thought-provoking.

    Comment by Steve Staude. — January 17, 2013 @ 2:04 am

  15. 3B/(2B+3B) has a low rate, generally under 0.10, and small samples (at most 50 chances per year, often 30-35), yet its correlation is .606. LDs are just too subjective, even when Matt limited the sample to players on the same team in consecutive years.

    Comment by Brian Cartwright — January 17, 2013 @ 8:00 am

  16. Aah, yes, I think you are right… I’m guessing the effects of scoring biases and the pure randomness in how hard it is to hit a line drive still have a much bigger effect.

    Comment by Nayr — January 17, 2013 @ 10:09 am

  17. Actually, thinking about it more, I’ve changed my mind again. The sample size issue doesn’t really make sense: the year-to-year correlation of non-line drives (1-LD%) is the same as the correlation for LD%, and that obviously has a much higher base rate.

    The real issue is that the scorer bias and underlying randomness in the stat are higher. Could be wrong (or change my mind again), but this makes the most sense to me right now.

    Comment by Nayr — January 17, 2013 @ 10:52 am
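
The identity behind the (1-LD%) observation is a standard property of correlation, not anything specific to batted balls: for any constant $a$,

$$\operatorname{corr}(a - X,\; a - Y) = \operatorname{corr}(X,\, Y),$$

because subtracting both variables from a constant negates every deviation from the mean, and the negations cancel in the product.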

  18. So can we conclude that the higher the correlation, the more skill is involved? For instance, is hitting fly balls more of a skill than hitting line drives? Or are fly balls maybe just easier to repeat?

    Comment by Chris — January 17, 2013 @ 3:13 pm

  19. Isn’t that true of (1-anything%)? LD% is still the variable.

    I think it’s a small part of what’s going on here, but the law of large numbers is in play, as always: http://en.wikipedia.org/wiki/Law_of_large_numbers

    A rarer event is likelier to occur further from a certain expected rate — e.g.: flipping a coin 75% heads one year and 25% the next is not a shocker if you only flip it 4 times per year… it would be nearly impossible to do that if you flipped it 1,000 times per year (I mean, assuming it’s a normal coin…).

    Comment by Steve Staude. — January 17, 2013 @ 5:32 pm
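
A minimal simulation of the coin example (the flip counts and the 10,000 simulated "years" are arbitrary choices, not anything from the thread):

```python
import random

random.seed(0)

def heads_rate(n_flips):
    """Observed heads rate for one 'year' of fair-coin flips."""
    return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips

# How often does a 'year' land at 25%-or-less or 75%-or-more heads?
for n in (4, 1000):
    years = [heads_rate(n) for _ in range(10_000)]
    extreme = sum(r <= 0.25 or r >= 0.75 for r in years) / len(years)
    print(f"{n} flips/year: {extreme:.1%} of years at <=25% or >=75% heads")
```

At 4 flips a year, landing at 25% or 75% heads is actually the norm (it happens whenever you get anything other than exactly two heads); at 1,000 flips it essentially never happens.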

  20. Yes that is true of any stat, but what I am thinking is that every one of the stats presented in this article is binomial (either a LD or not a LD…either a GB or not a GB). The sample size is still all balls in play or whatever the denominator is for that stat. The number of trials is not changing.

    It doesn’t seem to me that there is any reason that having a different probability of success on a binomial trial would lead to the year to year correlation dropping (unless something else is different).

    To use your coin example, it is not about flipping the coin 4 times vs 1000, but flipping a bunch of differently weighted coins 1000 times each and comparing the correlation of each coin’s results year after year.

    Comment by nayr — January 17, 2013 @ 7:00 pm
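
That thought experiment is easy to run. Here is a sketch (every parameter, including the Gaussian spread of true rates and the coin counts, is an assumption made for illustration; statistics.correlation needs Python 3.10+):

```python
import random
import statistics

random.seed(1)
N_COINS, FLIPS = 300, 1000

def season(true_rates):
    # Observed success rate for each coin over one "year" of flips.
    return [sum(random.random() < p for _ in range(FLIPS)) / FLIPS
            for p in true_rates]

# Three hypothetical populations: vary the base rate, then the talent spread.
for mean_p, spread in [(0.45, 0.03), (0.20, 0.03), (0.20, 0.01)]:
    true_rates = [random.gauss(mean_p, spread) for _ in range(N_COINS)]
    r = statistics.correlation(season(true_rates), season(true_rates))
    print(f"base rate {mean_p:.2f}, talent spread {spread:.2f}: "
          f"year-to-year r = {r:.2f}")
```

In runs like this, dropping the base rate from .45 to .20 while holding the spread of true rates fixed barely moves the correlation, but shrinking the spread of true rates tanks it. That supports the point above: with the number of trials fixed, the base rate by itself isn't what drives the year-to-year correlation; the ratio of real talent spread to binomial noise is.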

  21. I mean, I think you’re mainly right (I commented earlier and used the analogy of weighted dice, which is the main thing happening here, IMO), and I may just be confusing myself here, but here’s a little exercise I just did on a spreadsheet:

    I simulated 300 roulette wheel spins each year over a period of years, on a 0-36 wheel (no 00s), and watched how often 0 came up each year. In the run that just popped up, I got 1% in one year followed by 6% in the next. Something like that would completely mess up a correlation, and it can happen with rarer occurrences. Now contrast that with a coin flip — there’s no way you’ll get a heads% six times higher than the previous year’s (e.g. 15% to 90%) over that many flips with the same coin.

    Comment by Steve Staude. — January 17, 2013 @ 8:46 pm
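
For anyone who wants to replicate the spreadsheet exercise, here is a rough Python equivalent (the seed and year count are arbitrary; the exact 1%-then-6% pair won't necessarily reappear, but big relative swings will):

```python
import random

random.seed(2)
P_ZERO, SPINS = 1 / 37, 300   # single-zero wheel, 300 spins per "year"

# Observed frequency of 0 in each simulated year.
years = [sum(random.random() < P_ZERO for _ in range(SPINS)) / SPINS
         for _ in range(1000)]
swings = [max(a, b) / min(a, b)
          for a, b in zip(years, years[1:]) if min(a, b) > 0]
print(f"zero-rate ranged {min(years):.1%} to {max(years):.1%}; "
      f"{sum(s >= 3 for s in swings)} of {len(swings)} consecutive-year "
      f"pairs differed by 3x or more")
```

A fair coin flipped 300 times a "year" in the same way stays pinned near 50%, which is the contrast being drawn.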

  22. Regarding higher year-to-year correlations equating to more skill being involved: probably for most of them, but not necessarily. As they say, correlation does not imply causation, so skill isn’t necessarily causing the high correlations.

    Example: there’s a 0.75 year-to-year correlation for playing in rainy games (OK, I made that up). Player skill has nothing to do with it — that’s more about where you play.

    Comment by Steve Staude. — January 17, 2013 @ 9:03 pm

  23. Remember that the 1% vs 6% is the same as saying 99% vs 94%. I don’t think that you are thinking about correlation properly. The deviation from the overall mean is what is important, not the magnitude.

    Comment by Nayr — January 18, 2013 @ 9:18 am

    You sum the products of each factor’s deviations from its mean, as you say, but then you divide that by the product of the standard deviations of the two factors (and n-1). So the magnitude matters by way of the standard deviation. If you uniformly multiplied a factor by ten, its deviations from the mean would grow tenfold, but this would be offset completely by the tenfold rise in its standard deviation.

    That being said… yeah, you’re right. I’d only be right if we were talking about variables that aren’t essentially different sides of the same die. I was forgetting how much it changes things that LD%, FB%, and GB% are so dependent on each other — I mean, LD% = LD/(LD+GB+FB), which makes the standard deviation in LD% the same as the standard deviation of (GB+FB)%. I guess that’s what you were getting at earlier. Whoops, sorry.

    Comment by Steve Staude. — January 18, 2013 @ 5:23 pm
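
For reference, the computation being described is the standard Pearson correlation:

$$r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{(n-1)\, s_x\, s_y}.$$

Multiplying every $x_i$ by a constant $c$ scales both the numerator and $s_x$ by $c$, so $r$ is unchanged, which is the offsetting effect described in the comment above.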
