Doom and Gloom From a Dismal Scientist


The month of April has reached an end, and a significant number of important Tigers have underwhelmed at the plate.  Is this a sign of things to come?  It should be no secret to readers of this blog that statistical performance is dominated by variance and by regression to the mean: in general, things are rarely as bad as they look, or as good.  Slumps end, as do streaks.  Magglio will bounce back, and so will Victor Martinez, et al.  Jim Leyland clearly possesses that patience and that perspective.

It doesn’t feel like that as a fan, though, does it?  That’s why we in the blogosphere write alarmist posts and demand action!  Watching game after game, it sometimes feels like Austin Jackson will never make contact.  Perspectives shorten: we want recovery now, and we worry that bad hitting in April will mean no pennant race and no playoff games in the fall.  Should we worry, or should we try to have Leyland’s patience?

To answer this question, I have fallen back on the techniques of my day job: statistical analysis.  I gathered data on batting performance over the whole 2009 season and over the 2010 season split into two parts – April numbers, and those for the rest of the season.  The sample includes all 294 hitters who had at least 100 plate appearances in both 2009 and 2010 as well as at least 25 in April of last year.  This is primarily to weed out pitchers, but it has the effect of biasing the sample toward veterans without serious injuries.  I then used a simple linear regression to explain batting average over the last five months of 2010 using only that player’s batting average in 2009 and in April of 2010.  The same technique was applied to batting average on balls in play (BABIP), walk percentage, strikeout percentage and isolated power.  So what do we see… does a poor April predict poor performance over the rest of the year?
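For readers curious about the mechanics, here is a minimal sketch of that kind of regression using numpy.  The data below are made-up toy numbers for illustration only – the real 294-hitter sample isn’t reproduced here:

```python
import numpy as np

# Hypothetical toy data: each row is one hitter's
# (2009 BA, April 2010 BA, rest-of-2010 BA).
data = np.array([
    [0.300, 0.280, 0.285],
    [0.250, 0.310, 0.270],
    [0.220, 0.190, 0.235],
    [0.280, 0.240, 0.265],
    [0.310, 0.350, 0.295],
])

# Design matrix: a constant column plus the two predictors.
X = np.column_stack([np.ones(len(data)), data[:, 0], data[:, 1]])
y = data[:, 2]

# Ordinary least squares fit.
coefs, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
constant, b_2009, b_april = coefs

# R-squared: the share of the variance in y that the fit explains.
residuals = y - X @ coefs
r_squared = 1 - residuals.var() / y.var()
```

The same fit could be done with a stats package like statsmodels, which also reports the significance tests mentioned below.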

To find out, follow me through the jump:

First, a little basic statistical background and terminology.  ‘R-squared’ is the percentage of the variation in something that we’ve managed to predict using something else; a high R-squared implies less randomness, and vice versa.  The ‘constant term’ in these linear regressions is, in effect, a measure of mean reversion, since the average value of the 2009 stats is non-zero and probably pretty close to the average for 2010.  The linear ‘coefficients’ on 2009 stats and April stats tell us what percentage of each we should expect to carry over to the rest of 2010.  If (ignoring April for the moment) we get a constant term of 0.13 and a coefficient of 0.5 for 2009 batting average, then we should expect a player’s 2010 batting average to be half of his 2009 batting average plus 0.13: a player who hit .200 in 2009 would be expected to hit .230 in 2010, while a player who hit .300 in 2009 would be expected to hit .280.  If we get a positive coefficient for April, that means players who hit well in April should be expected to hit better over the rest of the year than their 2009 performance alone would suggest.
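That toy example boils down to two lines of arithmetic.  The 0.13 constant and 0.5 coefficient are the hypothetical numbers from the paragraph above, not fitted values:

```python
# Hypothetical toy model from the text:
# predicted 2010 BA = 0.13 + 0.5 * (2009 BA), ignoring April.
def predict_2010_ba(ba_2009):
    return 0.13 + 0.5 * ba_2009

print(round(predict_2010_ba(0.200), 3))  # a .200 hitter is pulled up to 0.23
print(round(predict_2010_ba(0.300), 3))  # a .300 hitter is pulled down to 0.28
```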

And on with the results!

For batting average:  we get an R-squared of only 15.1%, as we should have expected, since batting averages are highly variable from year to year.  The coefficient for 2009 batting average is 0.46 and that for April 2010 batting average is 0.084 – both are ‘statistically significant’ – and the constant term is 0.11.  If a player hit .300 in both 2009 and April 2010, we would still expect him to muster only a .273 average over the remainder of 2010 – that’s a lot of mean reversion.  If a player hit .200 in 2009 and in April 2010, we would expect him to hit .219 over the remainder of 2010.  However, if that .300 hitter from 2009 hit only .200 in April 2010, we would revise our forecast for the remainder of 2010 down to .265.  A weak April does mean something.
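Plugging the fitted numbers from this paragraph (constant 0.11, coefficients 0.46 and 0.084) into a one-liner reproduces all three forecasts:

```python
# Fitted batting-average model from the text:
# rest-of-2010 BA = 0.11 + 0.46 * (2009 BA) + 0.084 * (April 2010 BA)
def predict_rest_of_2010_ba(ba_2009, ba_april):
    return 0.11 + 0.46 * ba_2009 + 0.084 * ba_april

print(round(predict_rest_of_2010_ba(0.300, 0.300), 3))  # .273
print(round(predict_rest_of_2010_ba(0.200, 0.200), 3))  # .219
print(round(predict_rest_of_2010_ba(0.300, 0.200), 3))  # .265
```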

We get similar results for BABIP, as most of you would have expected.  R-squared is low, only 9.5%, because BABIP is highly variable from year to year.  Mean reversion is even more pronounced, with a constant term of .179.  The coefficient on 2009 BABIP is lower (0.272), implying less predictive power – but the coefficient on April BABIP is higher (0.116).

For isolated power:  we get a much higher R-squared of 45% – there’s less randomness here.  There’s less mean reversion too, with a constant term of only 0.023.  The coefficient on 2009 ISOP is 0.63 and the coefficient on April 2010 ISOP is 0.126 – again, both are ‘statistically significant’.  We would expect a player with a .200 ISOP in both 2009 and April 2010 (say, a .270 BA and a .470 SLG) to post a .174 ISOP over the rest of 2010.  A player with a .200 ISOP in 2009 but a .053 ISOP in April 2010 (that was Hanley Ramirez’s ISOP this year, last time I checked) would be expected to put up only a .156 ISOP over the rest of 2010.
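The same arithmetic works for the ISOP model, using this paragraph’s fitted numbers (constant 0.023, coefficients 0.63 and 0.126):

```python
# Fitted isolated-power model from the text:
# rest-of-2010 ISOP = 0.023 + 0.63 * (2009 ISOP) + 0.126 * (April 2010 ISOP)
def predict_rest_of_2010_isop(isop_2009, isop_april):
    return 0.023 + 0.63 * isop_2009 + 0.126 * isop_april

print(round(predict_rest_of_2010_isop(0.200, 0.200), 3))  # .174
print(round(predict_rest_of_2010_isop(0.200, 0.053), 3))  # the Hanley case: .156
```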

The same basic story holds for strikeouts and walks:  R-squareds are higher, since these are definite skills subject to less year-to-year variability than batting average or even power.  Constant terms are smaller, both as a percentage of the average value and in absolute terms, because there is less mean reversion.  2009 numbers have strong predictive power, but there is still a statistically observable effect from April stats.  All else equal, a guy who strikes out more in April than he did the previous season is likely to continue striking out more over the course of the season.

A quick note on the mean regression and R-squared numbers: above all else, we should temper our expectations for hitters with a limited track record who have displayed no tool other than BABIP (cough*Jackson*cough).  A significant part of what makes batting average consistent from year to year, especially among batters with longer track records like an Ordonez, is putting more balls in play by striking out less.  If we strip that out of the equation, we get very little predictive power and very little consistency from year to year.  These results could be taken to imply that hot hitters stay hot and cold hitters stay cold, but that’s not really what I am trying to say.  It is more likely that a strong April suggests last year’s performance understated the player’s true talent, and a weak April suggests the opposite.  This could be because the player was hurt and is now healthy, or vice versa – the player was healthy but now has an injury likely to linger.  A weak April could also suggest that an older player is in decline, or that last year’s success was a fluke.  Note that this effect is over and above simple mean regression, which is also a ‘statistical fact’: a player with a low BABIP in April should be expected to show unusually strong mean reversion over the rest of the season.

So, bad April numbers do paint a somewhat grim picture here – we shouldn’t expect our guys to snap back to their career norms.  If we based our expectations solely on how well they hit last year, we need to revise them down a bit (if we haven’t already) – in part because of inevitable mean regression from guys like AJax, but also specifically because of the poor start to the year.  However… we should have enough patience to resist pulling the plug on productive veteran hitters, and we should not expect guys like Magglio Ordonez to keep hitting as poorly as they have thus far.  Although April numbers do matter, they matter far less than a player’s longer-term track record.  Maggs may not match his 2010 numbers from here on out, but he probably won’t fall all that far behind.  We also shouldn’t put too much stock in April’s hot bats, though a hot April does suggest a player will outperform preseason expectations.  Still, track records matter more.

Brennan Boesch hit .256 last year and (as of Friday afternoon) had the second-best average in the AL at .350 so far this year.  From here on out, we would expect him to hit only a fairly tepid .258.  Ordonez, on the other hand, hit .303 last year and a meager .172 this spring (as of Friday) – from here on out we might expect him to hit .265.