How Well Can We Predict Saves?
August 12th, 2011 by Derek Carty in Player Discussion, Prediction, Standings Analysis, Team Analysis, Theoretical
This season, I’ve been running a series of articles here at the website for the CardRunners Experts League looking at closers and how we can best predict the number of games a closer will save in a given year. Thus far, I’ve looked at a closer’s preseason hold on the job, his skills, and his closing experience, but aside from picking a closer with a firm hold on the job before the season starts, there is little difference between the top-tier closers and the bottom-tier ones in terms of pure saves. Today, I wanted to combine all of our factors to see just how well we can predict saves, then look at which closers have over/underperformed expectations in 2011 and which CardRunners teams have gained/lost the most.
The Secret Sauce
To combine everything we’ve looked at thus far, I’ll be using a multivariate linear regression, which is a lot simpler than it sounds. For variables, I’ll be using a closer’s Marcel-projected preseason ERA, the number of years of closing experience he’s had in the past six years, and each of our six preseason job security variables (Sole Closer, Injured Closer, Injury Replacement, Injury Replacement Committee, Closer Committee Favorite, and Closer Committee Member). In order of importance:
1. Sole Closer
2. Closing Experience
3. Projected ERA
4. Injury Replacement
5. Injury Replacement Committee
6. Closer Committee Favorite
7. Closer Committee Member
8. Injured Closer
When we combine all of these variables and have them predict the number of saves the closer will accrue, we get an adjusted r-squared of 0.26. That’s nothing to sneeze at. What this means is that, using these eight variables, we can explain 26% of the variance in closer saves. The other 74% comes from other variables, sheer luck among them (and probably a significant chunk of it).
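For readers curious about the mechanics, here is a minimal sketch of the same technique: an ordinary least squares regression of saves on a handful of predictors, with the adjusted r-squared computed by hand. Everything here is synthetic for illustration (the variable names, coefficients, and data are made up, and only three of the eight predictors are shown); it is not my actual dataset or formula.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60  # hypothetical sample of closer-seasons

# Illustrative predictors (all invented):
proj_era = rng.normal(3.5, 0.5, n)       # Marcel-projected preseason ERA
experience = rng.integers(0, 6, n)       # years closing in the past six
sole_closer = rng.integers(0, 2, n)      # 0/1 dummy: sole closer on draft day

# Design matrix with an intercept column
X = np.column_stack([np.ones(n), proj_era, experience, sole_closer])

# A fake "true" relationship plus noise, so the regression has something to fit
saves = 20 - 2 * proj_era + 1.5 * experience + 10 * sole_closer \
        + rng.normal(0, 8, n)

# Fit by ordinary least squares
beta, *_ = np.linalg.lstsq(X, saves, rcond=None)
pred = X @ beta

# R-squared and adjusted r-squared (k = predictors, excluding the intercept)
ss_res = np.sum((saves - pred) ** 2)
ss_tot = np.sum((saves - saves.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
k = X.shape[1] - 1
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(round(r2, 3), round(adj_r2, 3))
```

The adjustment penalizes r-squared for each predictor added, which matters with eight variables and a modest sample; quoting the adjusted figure, as above, is the honest choice.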
The Best Bets for Saves in 2011
Using the formula derived from our regression, we can determine who the best bets for saves were at the time of the CardRunners auction. Keep in mind that we drafted in early March, so while a guy like Kevin Gregg started the year as the sole closer, at the time there was a competition of sorts between him and Koji Uehara.
It’s no surprise to see Mariano Rivera at the top of the list, but Joe Nathan in second is a bit of a surprise, especially since he ended up losing his job very quickly. The model doesn’t take complicated situations like Nathan’s into account, and injury history is completely ignored. The rest of the 25+ guys were all expected to be full-time closers to start the year, and the only surprise might have been Fernando Rodney being expected to save 26 games.
CardRunners members seemed to judge the AL’s three murkiest situations (Seattle, Baltimore, and Tampa Bay) very well, as the two relievers involved for each team were purchased within $3 of each other. As my studies have shown, unless you’re healthy and have the job completely to yourself, it’s almost an even proposition whether you or your primary competition will get more saves.
You’ll notice that I highlighted four players on the list. These are the four players who would have been the biggest bargains based on their expected number of saves and their price tags. What’s incredibly interesting, however, is that these are also the only four full-time closers to lose their jobs this season (setting aside Soria’s brief demotion and Bailey’s injury). What does this mean? Is it mere chance and bad luck that this happened, or did CardRunners owners know something that the model doesn’t?
My gut is that it’s a combination of both, but more bad luck than anything else. As noted earlier, Nathan’s situation was different: he was coming back from Tommy John surgery, had missed all of 2010, and his velocity was down. Rodney has a reputation as a terrible pitcher who has no business being a closer, and his skills are terrible compared to the other 25+ xSV closers. I think this affected his price greatly—and unduly—since my previous studies have shown that closers with terrible skills still average around 25 saves. That leaves Thornton and Francisco who, while expected to be the primary closers, hadn’t had the job 100% locked down at the time of our draft (they were probably 85-90% bets). I’d be very interested to hear what other members of the league think about this, though.
The Luckiest and Unluckiest Closers in 2011
Now that we’ve seen which closers were the best bets for saves, let’s see which CardRunners participants drafted each and how they wound up doing. I’ve color-coded the teams, so hopefully that helps more than it hurts.
*This field lists the closer’s year-to-date saves prorated over a full season so that they can be compared to his expected saves. That is, if Brandon League continues at his current pace, he’ll save 38 games by the end of the year.
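The proration in that footnote is simple arithmetic: scale saves to date by the fraction of the season the team has played. A quick sketch, with a hypothetical 25-saves-through-108-games line standing in for League’s actual totals (which aren’t reproduced here):

```python
def prorated_saves(saves_to_date, team_games_played, season_games=162):
    """Scale year-to-date saves to a full 162-game season."""
    return saves_to_date * season_games / team_games_played

# A closer with 25 saves through 108 team games is on a 37.5-save pace,
# which would round to 38 over a full season.
print(prorated_saves(25, 108))
```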
“Rotoman” Peter Kreutzer made out quite well in terms of saves, nabbing two of the top five luckiest closers by our xSV formula. ESPN’s Jason Grey and partner Paul Jones were the big winners with Brandon League, but their gains there have been mitigated by Andrew Bailey’s injury (though he’s a decent bet to get to 28 saves now that he’s healthy). Clark Olson and Larry Schechter received a little better than even value for their single closer purchases, which is something we all hope for and probably played into their sitting atop the standings for most of the year. The team of Chris Hill and Nick Cassavetes received even value on both of their high-priced closers, while the big loser appears to be me. I managed to select two closers who lost their jobs in April, both of whom are in the bottom five here.
CardRunners’ Luckiest and Unluckiest Teams
Let’s take a look at how it all breaks down by team:
The second-place team in terms of xSV profit this season didn’t even appear on the previous list, as Wiggy/Hastings didn’t purchase a real closer but took a reserve-round flier on Sergio Santos. Shawn Childs appears third on the list despite purchasing just one closer on the previous list, Jake McGee, but his $1 flier on Jordan Walden paid off big time. The other three clear beneficiaries were Peter Kreutzer (who nabbed Farnsworth and Gregg plus a flier on Jon Rauch), Grey/Jones (who nabbed League), and Hill/Cassavetes (who made small profits on Papelbon/Feliz and hit on their Matt Capps lottery ticket). On the flip side, I clearly took the biggest beating with Thornton and Francisco, receiving 35 fewer saves than xSV would have projected. The only other team close to me in that regard is Brauning/Baird, who drafted Soria and Rodney and currently sit in a firm last place. I find it kind of incredible that I’ve managed to do so well despite such great losses here, currently sitting in second place overall and with 9 points in saves.
Given my storied unsuccess drafting closers, Eric suggested that I run some studies on closers and examine just what kind of return on investment one can expect from a closer. Over the next few weeks, I'll be digging into the data and answering a number of questions about closers that should prove extremely useful for both myself in figuring out where I keep going wrong and for the population at large in their own closer decisions.
This appeared in the first article I penned on the subject of closers here at CardRunners. So where did I go wrong? Or did I at all? I like to think I was merely unlucky, given everything we’ve seen over the past four articles, but maybe people disagree. If you do, I’d be very interested to hear.
I’d also be interested to hear from Jason Grey/Paul Jones and Peter Kreutzer, who combined to grab three of the top four biggest xSV overachievers. I’d also be interested to hear from Andrew Wiggins/Brian Hastings and Shawn Childs about their respective selections of Sergio Santos and Jordan Walden.
Did you guys see anything in any of these closers or their situations that led you to target them specifically? Is there anything you saw that I didn’t study here and didn’t go into my formula? I imagine managers and injury history are two other important factors that I didn’t look at, but these are difficult things to test (especially given data constraints).
That wraps up my series on closers. I’d like to think we learned some very interesting things about closers, turned some preconceived notions on their head, and found ways to better optimize the money we choose to spend on closers on draft day. Any comments, questions, disagreements, or suggestions for improvement are more than welcome.