The Problem with Projections
April 15th, 2010 by Chris Liss in General Guidance, Theoretical
I posted this as a comment in the last thread, but figured it merited its own post because it's a new topic.
I think the reason it's hard to make accurate projections, even setting aside injuries and variance, i.e., luck, is that healthy players' skills don't remain the same. And anticipating skill growth and regression, apart from luck, is the key to the game. One can plot general growth curves for players (hitters peak at 27 and slowly decline; pitchers, I think, at 31), but the general curve is just an average and does not apply to individual players, whose growth and regression can be gradual or abrupt, or proceed in fits and starts. So whatever projections you use will be wrong unless they can anticipate growth or regression optimally for each player – apart from the variance that accompanies their growing or regressing skill set.
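To make that concrete, here's a toy sketch – the numbers are made up for illustration, not real player data – showing how averaging two very different career shapes produces a smooth "general curve" that describes neither player:

```python
# Hypothetical home-run totals by season for two players with
# opposite career shapes (made-up numbers for illustration only).
abrupt_breakout = [15, 15, 16, 40, 41, 40]   # sudden jump in year 4
gradual_decline = [35, 33, 31, 28, 26, 24]   # steady fade

# The "general growth curve" is just the average of the individuals.
average_curve = [(a + b) / 2 for a, b in zip(abrupt_breakout, gradual_decline)]

print(average_curve)  # [25.0, 24.0, 23.5, 34.0, 33.5, 32.0]
```

The averaged curve drifts smoothly through the low 30s, a trajectory neither player ever actually follows – which is the sense in which the general curve "does not apply" to any individual.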
The problem with projections, whether you use a range or a specific number, is that by their nature they assume an average amount of growth or regression given the player's age or experience level. If I think Justin Upton will hit 45 homers this year – that that's really his new baseline given his park, his rapidly approaching peak, his growing experience, etc. – I cannot possibly put that number into the model, because it's too far away from the normal growth curve of a player his age and with his history. But outlier careers exist! Not just as a matter of variance – outlier baselines exist. If you were doing career projections, you could not have given Pujols or ARod their actual baselines; it was too unlikely. But some player is going to be Pujols or ARod. So your model either has no outliers or random outliers. That's the trouble with projections – they're impossible to do right, not only because of variance, but because they're either too timid or too speculative.
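As a sketch of the "too timid" side – this is a toy shrinkage formula invented for the example, not any real projection system – a model that regresses every observed total toward the league mean structurally cannot output an outlier baseline at full strength:

```python
def shrinkage_projection(observed_hr, league_mean_hr=20.0, weight=0.6):
    """Toy projection: a weighted average of the player's observed
    home-run total and the league mean (hypothetical parameters)."""
    return weight * observed_hr + (1 - weight) * league_mean_hr

# A player whose true new baseline really is 45 HR, observed exactly,
# still gets projected well below it: 0.6*45 + 0.4*20 = 35.
print(shrinkage_projection(45))

# An average player is projected right at his observed level.
print(shrinkage_projection(20))
```

The only way such a model projects 45 is if the player already hit well above 45 – i.e., after the breakout has happened. Loosening the shrinkage makes the model "too speculative" instead, chasing every one-year fluke.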
So I've found it's better just to practice identifying the breakout players, and go the extra buck for them at auction (within reason). Get really good at identifying them by being honest and rigorous at the end of the year about which ones you got right, which you got wrong, and why. Condition your brain to synthesize the data well enough to get a sense of what collection of variables gives the best chance to find the breakout guy. If you're good at that, then you'll win your leagues more often than you should. If you could build a model that synthesized those variables better than my brain, I'd be concerned. I don't know if that's possible, but if there were a projection-creating model that was more than, as Peter says, weighted averages, that would be something.
As I've said many times before, I'm skeptical that a pricing model (as opposed to a breakout-finder model) would confer a significant advantage, and I've probably argued it more vigorously than I've needed to. The bottom line: in my experience playing against really strong players, some of whom use a model of that kind and some who don't, it's not a difference maker. It's possible that in the 30 or so high-level expert leagues I've played in, and the 50-odd more regular leagues, I've missed something and there's more to it.
But apart from my experience, the fact that most valuation models depend on projections – and that projections, it seems to me, are necessarily timid or random – makes it hard for me to believe any system that merely seeks to translate projections accurately is going to be a winner, unless the other players aren't any good at finding the outliers, i.e., are as likely to leave the breakout players behind as they are to choose them. Then the vig might be enough to make the difference.