How Well Can We Predict Saves?

August 12th, 2011 by Derek Carty in Player Discussion, Prediction, Standings Analysis, Team Analysis, Theoretical

This season, I’ve been running a series of articles here at the website for the CardRunners Experts League looking at closers and how we can best predict the number of games a closer will save in a given year.  Thus far, I’ve looked at a closer’s preseason hold on the job, his skills, and his closing experience, but aside from picking a closer with a firm hold on the job before the season starts, there is little difference between the top-tier closers and the bottom-tier ones in terms of pure saves.  Today, I wanted to combine all of our factors to see just how well we can predict saves and then look at which closers have over/underperformed expectations in 2011 and which CardRunners teams have gained/lost the most.
 

The Secret Sauce

To combine everything we’ve looked at thus far, I’ll be using a multivariate linear regression, which is a lot simpler than it sounds.  For variables, I’ll be using a closer’s Marcel-projected preseason ERA, the number of years of closing experience he’s had in the past six years, and each of our six preseason job security variables (Sole Closer, Injured Closer, Injury Replacement, Injury Replacement Committee, Closer Committee Favorite, and Closer Committee Member).  In order of importance (a sketch of the regression setup follows the list):

1. Sole Closer
2. Closing Experience
3. Projected ERA
4. Injury Replacement
5. Injury Replacement Committee
6. Closer Committee Favorite
7. Closer Committee Member
8. Injured Closer
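
For readers who want to see the mechanics, here is a minimal sketch of that regression in Python with statsmodels. The file and column names are hypothetical stand-ins for the 2001-2010 closer-season data, not the actual dataset:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical dataset: one row per closer-season, 2001-2010.
df = pd.read_csv("closer_seasons.csv")

predictors = [
    "marcel_era",                    # Marcel-projected preseason ERA
    "years_closing_exp",             # years closing over the past six seasons
    "sole_closer",                   # the six preseason job-security dummies (0/1)
    "injured_closer",
    "injury_replacement",
    "injury_replacement_committee",
    "committee_favorite",
    "committee_member",
]

X = sm.add_constant(df[predictors])   # add an intercept term
model = sm.OLS(df["saves"], X).fit()  # ordinary least squares

print(model.rsquared_adj)  # adjusted r-squared (0.26 in the study)
print(model.params)        # the coefficients behind the xSV formula
print(model.pvalues)       # significance of each predictor
```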

When we combine all of these variables and have them predict the number of saves the closer will accrue, we get an adjusted r-squared of 0.26.  That’s nothing to sneeze at.  What this means is that, using these eight variables, we can explain 26% of the variance in closer saves.  The other 74% comes down to other variables, sheer luck among them (and probably a significant chunk of it).
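
For reference, the “adjusted” part simply penalizes the r-squared for each predictor used, so adding junk variables can’t inflate the fit. With n closer-seasons and p = 8 predictors, the standard formula is:

$$\bar{R}^2 = 1 - \left(1 - R^2\right)\frac{n - 1}{n - p - 1}$$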
 

The Best Bets for Saves in 2011

Using the formula derived from our regression, we can determine who the best bets for saves were at the time of the CardRunners auction.  Keep in mind that we drafted in early March, so while a guy like Kevin Gregg started the year as the sole closer, at the time there was a competition of sorts between him and Koji Uehara.

It’s no surprise to see Mariano Rivera at the top of the list, but Joe Nathan in second is a bit of a surprise, especially since he ended up losing his job very quickly.  The model doesn’t take complicated situations like Nathan’s into account, and injury history is completely ignored.  The rest of the 25+ xSV guys were all expected to be full-time closers to start the year, and the only surprise might have been Fernando Rodney being expected to save 26 games.

CardRunners members seemed to judge the AL’s three murkiest situations (Seattle, Baltimore, and Tampa Bay) very well, as the two relievers involved for each team were purchased within $3 of each other.  As my studies have shown, if you aren’t healthy and don’t have the job completely to yourself, it’s almost an even proposition whether you or your primary competition will get more saves.

You’ll notice that I highlighted four players on the list.  These are the four players who would have been the biggest bargains based on their expected number of saves and their price tags.  What’s incredibly interesting, however, is that these are also the only four full-time closers to lose their jobs this season (if you excuse Soria’s brief demotion and Bailey’s injury).  What does this mean?  Is it mere chance and bad luck that this happened, or did CardRunners owners know something that the model doesn’t?

My gut is that it’s a combination of both, but more bad luck than anything else.  As noted earlier, Nathan’s situation was different: he was coming back from Tommy John surgery, had missed all of 2010, and his velocity was down.  Rodney has a reputation as a terrible pitcher who has no business being a closer, and his skills are terrible compared to the other 25+ xSV closers.  I think this affected his price greatly—and unduly—since my previous studies have shown that closers with terrible skills still average around 25 saves.  That leaves Thornton and Francisco who, while expected to be the primary closers, hadn’t had the job 100% locked down at the time of our draft (they were probably 85-90% bets).  I’d be very interested to hear what other members of the league think about this, though.
 

The Luckiest and Unluckiest Closers in 2011

Now that we’ve seen which closers were the best bets for saves, let’s see which CardRunners participants drafted each and how they wound up doing.  I’ve color-coded the teams, so hopefully that helps more than it hurts.

*This field lists the closer’s year-to-date saves prorated through the rest of the season so that it can be compared to his expected saves.  That is, if Brandon League continues with his current pace, he’ll save 38 games by the end of the year.
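
The proration itself is simple arithmetic; here is a sketch, assuming we scale year-to-date saves by the share of the team’s 162-game schedule already played (the numbers below are illustrative, not League’s actual line):

```python
def prorated_saves(ytd_saves: int, team_games_played: int, season_games: int = 162) -> float:
    """Scale year-to-date saves to a full-season pace."""
    return ytd_saves * season_games / team_games_played

# e.g. 27 saves through 115 team games is roughly a 38-save pace
print(round(prorated_saves(27, 115)))  # 38
```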

“Rotoman” Peter Kreutzer made out quite well in terms of saves, nabbing two of the top five luckiest closers in terms of our xSV formula.  ESPN’s Jason Grey and partner Paul Jones were the big winners with Brandon League, but their gains there have been mitigated by Andrew Bailey’s injury (though he’s a decent bet to get to 28 saves now that he’s healthy).  Clark Olson and Larry Schechter received a little better than even value for their single closer purchases, which is something we all hope for and probably played into their sitting atop the standings for most of the year.  The team of Chris Hill and Nick Cassavetes received even value on both of their high-priced closers, while the big loser appears to be me.  I managed to select two closers who lost their jobs in April, both of whom are in the bottom five here.
 

CardRunners’ Luckiest and Unluckiest Teams

Let’s take a look at how it all breaks down by team:

The second-place team in terms of xSV profit this season didn’t even appear on the previous list, as Wiggy/Hastings didn’t purchase a real closer but took a reserve-round flier on Sergio Santos.  Shawn Childs appears third on the list despite purchasing just one closer on the previous list, Jake McGee, but his $1 flier on Jordan Walden paid off big time.  The other three clear beneficiaries were Peter Kreutzer (who nabbed Farnsworth and Gregg plus a flier on Jon Rauch), Grey/Jones (who nabbed League), and Hill/Cassavetes (who made small profits on Papelbon/Feliz and hit on their Matt Capps lottery ticket).  On the flip side, I clearly took the biggest beating with Thornton and Francisco, receiving 35 fewer saves than xSV would have projected.  The only other team close to me in that regard is Brauning/Baird, who drafted Soria and Rodney and currently sit in a firm last place.  I find it kind of incredible that I’ve managed to do so well despite such great losses here, currently sitting in second place overall and with 9 points in saves.
 

Circling Back

Given my storied unsuccess drafting closers, Eric suggested that I run some studies on closers and examine just what kind of return on investment one can expect from a closer. Over the next few weeks, I'll be digging into the data and answering a number of questions about closers that should prove extremely useful for both myself in figuring out where I keep going wrong and for the population at large in their own closer decisions.

This appeared in the first article I penned on the subject of closers here at CardRunners.  So where did I go wrong?  Or did I at all?  I like to think I was merely unlucky, given everything we’ve seen over the past four articles, but maybe people disagree.  If you do, I’d be very interested to hear.

I’d also be interested to hear from Jason Grey/Paul Jones and Peter Kreutzer, who combined to grab three of the top four biggest xSV overachievers, and from Andrew Wiggins/Brian Hastings and Shawn Childs about their respective selections of Sergio Santos and Jordan Walden.

Did you guys see anything in any of these closers or their situations that led you to target them specifically?  Is there anything you saw that I didn’t study here and didn’t go into my formula?  I imagine managers and injury history are two other important factors that I didn’t look at, but these are difficult things to test (especially given data constraints).

Concluding Thoughts

That wraps up my series on closers.  I’d like to think we learned some very interesting things about closers, turned some preconceived notions on their heads, and found ways to better optimize the money we choose to spend on closers on draft day.  Any comments, questions, disagreements, or suggestions for improvement are more than welcome.

Responses to “How Well Can We Predict Saves?”

  1. Derek Carty says:

    One other interesting thing to note, which I forgot to mention in the article, is that the pitcher's projection and years of experience seem to operate more or less independently of each other.  I thought there was a chance that there would be a lot of overlap between the two and only one would be significant in the model.  I thought skill might be a decent proxy for closing experience.  After all, if a pitcher is good, he's more likely to be a closer.  But the p-value for the projection was 0.028 while the p-value for experience was 0.005, so both are comfortably significant.
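
    In code terms, checking both things is short (a sketch; the names continue the hypothetical regression example in the article):

    ```python
    # A direct overlap check: correlation between the two predictors.
    print(df[["marcel_era", "years_closing_exp"]].corr())

    # And their individual p-values from the fitted statsmodels model.
    print(model.pvalues[["marcel_era", "years_closing_exp"]])
    ```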

  2. Peter Kreutzer says:

    Lots of interesting stuff in the series, Derek, though I think your goals are hamstrung somewhat by looking at just one year. That represents just 32 situations and constitutes a very small sample size.
    My approach in ambiguous situations, unless I have a firm read on a player, is to take the cheaper guy. So I took the cheaper Farnsworth, Gregg and Aardsma over the more expensive McGee, Uehara, and League. I also took Rauch in the reserve rounds over Francisco.
    The point isn't that the cheaper guy is a better bet than the more expensive guy; rather, you get a few more bullets with the cheaper guys, and that can be a significant advantage if it works out, as it did for me. For the price of one good closer I got two and a half okay closers and am holding onto first place in saves months after trading Farnsworth, Rauch and Gregg away. (It's nice to have one bright spot on the season.)
    I think your first chart is more interesting if you sort by the purchase price, which filters out whatever distortions there are in xSV. The four yellow-highlighted closers are then grouped below the successful closers, alongside similar (if less dramatic) failures in McGee, Uehara and Aardsma.
    Thanks for taking this on, Derek. Your work this year will help us build a more extensive multi year study, which will perhaps help sort out the winners and losers in the preseason. 

    • Derek Carty says:

      Thanks, Peter.  All of the tests I ran in the first three articles and the first half of this one look at 10 years' worth of data (2001-2010).  This article focuses primarily on this year, though, since I was applying the results of those tests to this year's CardRunners league.  Those xSV numbers are based on the formula derived from studying 10 years' worth of closers.

  3. Eric Kesselman says:

    I think one thing to remember is that while we're investing capital in search of good returns, we are often (especially in leagues without CR's payout structure) looking for speculative profits rather than preservation of capital in these spots.

    We aren't trying to preserve our $260 auction dollars, or turn it into $270. We are looking for big wins, and should be prepared to take some risks to get there. The reason for this of course is because we're trying to beat out 11 other people, and if you don't score some big wins, someone else invariably will and you won't win the league.

    There may be places in your roster where you do want to play it safe, but it seems to me that closers is not that place. The reason is that there are so many spots to place high-reward, low-risk bets. Because of that, I think the flier who turns into a closer or nothing is a great way to invest a little capital.

    I am leery, however, of purchases like the $8 Gregg. It seems if you win you wind up with a mediocre closer (reality turned out nearly as well as could be hoped for, and you made $8 profit or so?), and if you lose you've turned $8 into $0. This strikes me as neither low risk nor high reward: it's really medium risk and medium reward. Seems like you pay $8 for a guy who is 50%-ish to turn into a $16 closer. Also, if you take a few of these bets, the likely outcome is probably mud. Any single win is not big enough to dominate, and the losses are expensive. Put another way, I don't think these bets put your team in a position to get lucky and win the league. If you had three Kevin Greggs, you'd have to hit on all three just to win $25 or so. Unlikely, and not a big enough win when the parlay hits.

    I think the better play is to try to get the next Sergio Santos or Jordan Walden for $1 or a reserve pick, even if the chance of him becoming the closer is 5% to Gregg's 50%.

    I liked both your Aardsma and your Farnsworth purchases more. Aardsma because it seemed safer for your $8: he was only supposed to be out a month or so of the season, and League actually went for $6. Also, he's a much better pitcher than Gregg. Farnsworth for $6 because his competition wasn't an established good pitcher and because it was 25% cheaper.

    Perhaps it's a bit extreme, but I think I love Gregg at $5 and hate him at $8.
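
    To put rough numbers on that trade-off (the 50% and $16 figures are guesses, not measurements):

    ```python
    price, win_value, p_win = 8, 16, 0.5

    ev = p_win * win_value - price           # 0.5 * 16 - 8 = 0: a breakeven bet
    parlay_odds = p_win ** 3                 # all three Greggs hitting: 12.5%
    parlay_profit = 3 * (win_value - price)  # and only ~$24 profit when they do

    print(ev, parlay_odds, parlay_profit)
    ```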

     

    • Peter Kreutzer says:

      I opened the bidding on Gregg at $8 and got crickets. That was perhaps a mistake, because maybe I would have gotten him for less. On the other hand, in most leagues he went for $8 or more, so maybe I saved a buck or two by bidding as I did. I was happy with Gregg at $8 because I didn't think there was any chance he was a $0. Maybe he'd prove to be a $4, but I felt pretty sure given the setup in Baltimore that he was a $12+. This proved to be true.
      I liked Farnsworth at $3, but I'm not sure he was a better pick. Gregg had a history of success. Farnsworth had a history of failure. I got lucky there. Farnsworth turned out to have Guile.
      The point you're making that I agree with is that you need big winners to win. I didn't buy Gregg to be a big winner, I bought him to be in the game. I bought Farnsworth and Rauch and Mike Gonzalez looking for a big score, and that worked out.
       


  4. Peter Kreutzer says:

     
    Obviously I should have reread the series before commenting about the sample size. I forgot what facts you were deriving your formula from.
    I'm still bothered by that sort in the first chart. It makes sense to sort by xSV if you're testing your formula, but then treating xSV as established fact when analyzing the real-world results ignores that you haven't yet proved the value of xSV.
    In other words, in order to draw conclusions on a study based on xFIP instead of FIP, you have to prove that xFIP is superior to FIP. Similarly, you need to prove that xSV means something more than other ways of evaluating the closers on the margin. I'm not sure you have. Certainly, Thornton and Francisco and Rodney argue against it.
    It seems like your analysis in its details supports Shandler's ancient observation about closers and potential closers needing Talent, Opportunity and Guile. The problems emerge because, while we can objectively evaluate talent and we can subjectively determine (before the fact) opportunity, we get stuck on Guile. The only thing we have to go on are results.
    Jon Rauch seems like he should be able to close as well as Kevin Gregg, talentwise, but when it comes time to do the job because he gets the opportunity, he seems to always flounder. While Gregg, like Mr. Magoo, muddles along. I almost said guilelessly.
    Since in our preseason forecasts we're simply allocating a team's 35-50 saves in something approaching a zero-sum equation, Francisco's lack of Guile increases Rauch's. Farnsworth's Guile means Peralta, McGee and others don't get to show theirs. Thornton's lack of Guile, plus Sale's failure, means Santos gets a chance to prove his.
    I guess I wonder whether Guile is really a quality that persists, or whether it's something you acquire in your first success in the role of closer and lose if at first you fail?
    Your numbers indicate that a player with the role (pretty much regardless of apparent talent) who holds onto the role will be nearly as valuable as the player with a track record of success. This is interesting stuff, but it tells me that most of the rest of the details are irrelevant. You can (in fact must) pay for guys with track records, but when it comes to guys without a history of saves success, no matter how much talent and opportunity they appear to have, they come with more risk the more you pay for them.
    Incorporating that knowledge into your projected bid prices seems to me the key bit of business here.
    P.S. I've reread the June 17 piece (Do only good closers keep their jobs?) a few times and something bothers me about the ERA relationship with the number of saves for Sole Closers to start the season. If the proof there is that designated sole closers, regardless of skills, are fairly reliable, then I have no quarrel (though the difference between 25 and 35 saves is significant). But isn't the real issue trying to identify situations where, regardless of what the manager says, there is less certainty? And then pick the winners there?
     
     

    • Derek Carty says:

      You're right, Peter, that it would be good to test the formula up against something.  I'd be happy to do that.  Do you have a suggestion?  I guess I sort of assumed it would be last season's saves or any other such simplistic method, since that's pretty much accounted for and more.  Whatever you suggest, if it's testable, I'll check.
       

      It seems like your analysis in its details supports Shandler's ancient observation about closers and potential closers needing Talent, Opportunity and Guile.
       

      I'm not quite sure that it does.  Yes, there's a difference between 25 and 35 saves, but the study basically showed that talent is unnecessary so long as the pitcher is at least MLB replacement-level caliber.  A pitcher like that will produce 70% of the saves that the cream of the crop will produce.  Talent is important, but opportunity is exponentially more important.
       
      Guile is an interesting thing, and I wonder if it simply comes down to noise.  The same as it seems like some players perform better in the first half than other players.  But if players are normally distributed, 4% of them are always going to be at least two standard deviations from the mean.  Oftentimes we, as analysts, like to view those outliers as something more than they are.  In some cases they might be, but without any real way to tell, how can we say which belong there and which are there simply due to random variation?
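
      That 4% is just the two-sided tail of the normal curve; a quick check:

      ```python
      from scipy.stats import norm

      # share of a normal population at least two standard deviations from the mean
      print(2 * norm.sf(2))  # ~0.0455, roughly the 4% cited above
      ```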
       

      You can (in fact must) pay for guys with track records, but when it comes to guys without a history of saves success, no matter how much talent and opportunity they appear to have, they come with more risk the more you pay for them.
       

      I'm not sure this is true.  The third article in the series showed a relatively weak correlation between prior closer experience and success.  What makes you say that it's a necessity to pay for track records?

      …something bothers me about the ERA relationship with the number of saves for Sole Closers to start the season. If the proof there is that designated sole closers, regardless of skills, are fairly reliable, then I have no quarrel (though the difference between 25 and 35 saves is significant). But isn't the real issue trying to identify situations where, regardless of what the manager says, there is less certainty? And then pick the winners there?

      I think the first half of that is correct, but I agree that it can be important to identify the situations you mention in the second half.  How do we do so, though?  What do we look at that I didn't look at in this series?  There probably (in fact, certainly) are things I didn't look at, but I think the questions are "what are they?" and "how can we test them?"  Aside from trying to read between the lines of a manager's quotes and divine what they're thinking, what else can we do?  What else separates a closer who will succeed from a closer who will fail?  One variable that might be important, and that I wanted to test but couldn't due to data constraints, is a manager's hook: how quick is any particular manager to pull a struggling closer?

      • Peter Kreutzer says:

        Since xSV is a projection of saves, I would think the test would be to see how it correlates with the real world results. If it does the best job of producing accurate projections for saves then it's the best gauge for the purposes of your study. 
         
        My real issue was your ranking of the expectations for the closers on your untested scale, and then using their performance against that scale to draw conclusions about what happened. This was the issue with the first chart, which is sorted by xSV rather than $cost in the draft.
         
        I think the problem is that, as we all know, a pitcher who has the closer role (regardless of talent), is likely to get saves. Since Rodney, Francisco and Thornton were apparently given the jobs, they rank highly in your model. But I think that's a bug, not a feature.
         
        Talent evaluation would tell you that Rodney was wholly unsuited for the job.  I thought Thornton would do a good job, but the bottom line is that the leash is fairly short for new guys. If they don't show Guile right away, they are replaced. 
        I use the word Guile ironically. I was suggesting in the previous post that Guile is a function of performing successfully and earning confidence. It isn't measurable, but Ryan Franklin had it. Until he lost it. I'm pretty sure this was how Shandler was using it. 
        In these borderline cases, as you learned this year, the anointed closer ain't necessarily so. And when one guy fails, another blooms. Which means that any money you spend on a failure hurts twice, because someone else gets a closer on the cheap. Santos makes your Thornton buy that much worse.
         
        The point is that this happens every year. So you shouldn't pay up for guys who have the job but have no track record of success as closers. Unless they're great. Not because the guy you pay for may fail, but because his failure means that someone else is going to get saves cheap.

        • Derek Carty says:

          I'm not sure I buy all this, Peter.  It seems that you're saying Rodney was a bad play because he was a terrible pitcher, but the second article showed that this is of relatively minor importance.  Knowing nothing about Rodney aside from 1) he was a closer and 2) he was terrible, we'd still expect him to save 25 games.  I think the question is, if Rodney was a bad pick, what else was there to lead us to that decision?  Being a bad pitcher isn't enough.

          As for Thornton, you said that he was a good pitcher but that he had a short leash because he was a new closer.  Article 3 tested this hypothesis, though, and again found that a closer with good skills and zero closing experience would still be expected to save close to 30 games.  So the question here becomes, what else did we know about Thornton that makes him a bad pick?

          You said that having Rodney/Thornton/Francisco at 25+ xSV is a bug, not a feature, so I'm asking where the bug is.  What else should be included that would show us that these three weren't good bets for 25+ saves?  There will always be guys losing jobs; no model will nail it perfectly.  This year it happened to be these three.  Was it coincidental, or is there more to the story?  Any closer who loses a job will be a double whammy because someone else will pick up his saves on the cheap, but if we can't pick out who will fail, that's simply going to be a part of the game.

          As far as correlating with real-world results, the factors that went into xSV produced an r-squared of 0.26 on real world results.  That doesn't necessarily tell us anything without something to compare it to, but since it incorporates most of the things we all consider when drafting closers and combines them optimally, what do we think would perform better?

          • Peter Kreutzer says:

            I'm not saying that your model isn't the best for projecting saves (I simply don't know), but have you tested the r-squared on other simple ways to make a projection? How about last year's saves? A two or three year average? It would be good to have confidence that your model is right.
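
            One quick way to run that comparison (a sketch with placeholder numbers; in practice the arrays would be the 2001-2010 closer-season columns):

            ```python
            import numpy as np

            def r_squared(pred, actual):
                # squared Pearson correlation between a candidate predictor and actual saves
                return np.corrcoef(pred, actual)[0, 1] ** 2

            # Placeholder data, purely illustrative
            actual_saves = np.array([38, 5, 30, 22, 0, 41])
            candidates = {
                "xSV": np.array([33, 25, 28, 26, 12, 35]),
                "last_year_saves": np.array([45, 0, 20, 37, 3, 43]),
            }

            for name, pred in candidates.items():
                print(name, round(r_squared(pred, actual_saves), 2))
            ```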
             
            That said, your statement in the third article interests me: 
             
            DC: "Given that, on the whole, less than 50 percent of pitchers who begin the season as a closer end the year closing games, deciding which closers to choose for our fantasy teams is a tricky subject."
             
            This seems to be the essential question, and it's the one that xSV can't answer more accurately because one of the inputs is whether a guy is a closer or not.  That's the source of the bug, I suspect. If not even half of the guys who start the season closing end the season closing, and we don't know who they are, preseason classifying seems like a likely cause of the problem.
            I know, you found that players designated as "sole closer" at the start of the year were far more reliable than players with the other designations, and I trust that you worked hard to make accurate assessments, but I wonder how much bias crept in anyway. 
             
            Did you also run the regression using draft-day price as a proxy? (One CR issue in 2011 was the early date of the auction; there was a lot we didn't know at that point about roles on the various teams with open situations. Still, the market did a pretty good job.) My loose categorization:
            Solid closer ($15-$22) Six of eight were closers all year. Bailey and Nathan have had issues, but are closers now. (.76)
            Likely sharer ($8-$14) Only five guys fell in this group, only Gregg prospered. (2.03)
            Backup sharer ($4-6) League and Farnsworth succeed, five others fail. (.55)
            CIW ($1-4) Capps, Walden, Fuentes and Santos win. (.26)
             
            The number in parens is how much each save cost thus far in each group (assuming all money spent on relievers was to buy saves).  Obviously this is a very rough approximation, but if it was repeated for a number of years maybe we would find a pattern. 
             
            There's one here:
            Grp | Spent | ActSV | xSV
              1 |   148 |   195 | 251
              2 |    61 |    30 |  93
              3 |    33 |    60 |  47
              4 |    27 |   103 |   0
             
            I hope the formatting doesn't make that impossible to read, but clearly the best closers were reliable last year, while the guys who were designated closers (but hadn't done the job before) were not. I have no idea if this is a one-year blip, but since it throws into question your assertion that, regardless of what quality they had, sole closers were solid closers, I think we need to test all these paths more closely.
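
            The cost-per-save figures in parentheses above are just dollars spent divided by actual saves; a quick check against the table:

            ```python
            # (spent, actual saves) per price group, from the table above
            groups = {1: (148, 195), 2: (61, 30), 3: (33, 60), 4: (27, 103)}

            for grp, (spent, saves) in groups.items():
                print(grp, round(spent / saves, 2))  # .76, 2.03, .55, .26
            ```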
            One thing I did was run correlation coefficients for CR draft prices for relievers, xSV, and pkSV (my preseason projection) versus the actual saves put up by the drafted relievers this year.
             
            I'm emailing it to you, Derek, because xSV doesn't test well there, but it's possible I did something wrong, so I'd like you to double check it. I can report that the CR prices for relievers showed an r^2 of .61 and my 3/31 projections showed a .63 versus this year's results. This is in line with projection results I've tracked since the early 90s, when I started doing this. 
             
             



  5. Andrew says:

    Great writeup Derek. I agree w/ a lot of what Eric said. I have a hard time justifying $8-15 on guys that can so easily lose their value by losing the job. If you have the time, I think the best approach is to get a good grasp on up and coming pitchers and take a flier on them. Sergio Santos is a great example. I went into the draft knowing I wanted him. I live in Chicago and see a ton of White Sox games. I knew that the closing spot was far from locked up and that Santos had the tools to be a closer. We were able to take a flier on him and get a solid return. Of course, there was plenty of luck there too. Thornton struggled far worse than I expected he would. We also had to dodge Chris Sale. Sale was a younger guy with less experience and it seemed unlikely he would get the spot over Santos. He went for a few more $ though. 
    I've never done a hard analysis on it, but I suspect a lot of teams spend far too much of their budget on saves only to end up in the middle of the saves pack. I'd rather spend my money elsewhere and hope to get lucky on a guy or two. If you do that, you can stay out of the cellar in saves and then eventually flip the closers for another need.


  6. Paul Jones says:

    Great stuff, Derek.  
    As far as League goes, it was more a result of picking up value where I saw it than targeting him.  As I recall, he was nominated very early in the auction.  He was the temporary fill-in at that point.  Word was that Aardsma would miss about a month, but he hadn't picked up a ball yet.
    That struck me as a good spot to be in for League–especially as shaky as Aardsma was last year, even when healthy.  I had him projected for 8 saves, with plenty of room for more.  Even if he only had the full-time job for a month and spent the rest of the season as primary setup man and backup closer, there was still value there at $6.  (My projections had him earning $9.)
    Like I said, I wasn't necessarily targeting him walking into the draft.  (Jason and I had actually talked about grabbing Walden late, but I was beaten to him in dollar days.)  But when I saw a guy with the skills to close who is starting the season with the job–even temporarily–stalling out at $5, I'll say "$6" pretty much every time.


  7. Derek Carty says:

    This seems to be the essential question, and it's the one that xSV can't answer more accurately because one of the inputs is whether a guy is a closer or not.  That's the source of the bug, I suspect. If not even half of the guys who start the season closing end the season closing, and we don't know who they are, preseason classifying seems like a likely cause of the problem.

    I'm not sure we're asking the same question here, Peter.  The purpose of these articles was mostly to determine what makes a closer successful, given that he's actually a closer to begin with.  It doesn't really make an attempt to identify preseason non-closers like Santos and Walden who will ultimately end up saving games (though I have a feeling, given that primary closers will save 25 games, expected saves for those guys will be quite small).  While 50+% start the year closing but don't end the year closing, far greater than 50% will still save the majority of their team's games.  That's an important distinction.

    I know, you found that players designated as "sole closer" at the start of the year were far more reliable than players with the other designations, and I trust that you worked hard to make accurate assessments, but I wonder how much bias crept in anyway.

    It's entirely possible, but I worked very hard not to let that happen.  Back in the early part of the decade I didn't play fantasy baseball at all, so it would have been very difficult for bias to creep in there; I've only been playing seriously since 2008.  I used a combination of the prices you gave me, and if I didn't know for absolute certain that a guy was the closer (like Mariano Rivera), I searched Google until I found an article or two from that year's Spring Training that explicitly said who the closer was.

    Did you also run the regression using draft day price as a proxy?

    No, since you said that the numbers you gave me were taken from different league types and weren't really comparable from year-to-year, and since some guys were missing prices.

    I have no idea if this is a one year blip, but since it throws into question your assertion that regardless of what quality they had, sole closers were solid closers, I think we need to test all these paths more closely.

    This is exactly why I pointed out the Thornton/Rodney/Francisco thing in this article.  From 2001 to 2010, those kinds of guys did fine, on average.  This year they didn't.  I'd think this is a one-year blip given the 10 years where it held true, but it's possible something else is at work.  I think it's possible certain managers are more tolerant of first-year closers than others, and this may have a sizeable effect.

    I simply don't know, but have you tested the r-squared on other simple ways to make a projection? How about last year's saves? A two or three year average? It would be good to have confidence that your model is right.

    I'll test this.

    • Peter Kreutzer says:

      Derek went through the spreadsheet I sent him and found the problem with xSV and the R value I was deriving for it, which had to do with whether a sample had a 0 or a blank cell. Once a 0 was put in all blank cells something really boring happened:
       
      cols   R     R^2   description
      C:B   .62   .39    saves vs. CR price
      C:D   .62   .39    saves vs. xSV
      C:E   .64   .41    saves vs. pkSV
      C:F   .69   .48    saves vs. pk$
      The R and R^2 values show how much of the output is predicted by the input. If the two were identical, both would be 1. If they were totally opposite, R would be -1 (and R^2 would still be 1).
      That last comparison matches actual saves to my predicted draft-day prices for relievers. These had an advantage over the CR$ in that they were made on March 31, not our early-March draft day, and over my projected saves, because I'm more diligent with the prices than the actual projections (because there is only so much time).
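
      The blank-versus-zero problem generalizes beyond spreadsheets; in pandas terms (a sketch with made-up numbers), correlation silently drops missing rows unless you fill them:

      ```python
      import numpy as np
      import pandas as pd

      # Two relievers never got an xSV; blank cells load as NaN and
      # are silently excluded by .corr() instead of counting as 0 saves.
      df = pd.DataFrame({
          "saves": [38, 5, 30, 0, 41],
          "xSV":   [33, 25, 28, np.nan, np.nan],
      })

      print(df["saves"].corr(df["xSV"]))            # NaN rows dropped
      print(df["saves"].corr(df["xSV"].fillna(0)))  # blanks treated as 0
      ```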
       
      I guess I got off on this tangent because I was asking a different question. Derek wanted to know if he'd screwed up taking Thornton and Francisco, or if he was justified by history. He was justified.
       
      I think the interesting question is what is the best way to buy closers, and I think the numbers show that there are a variety of approaches that can work, as other commenters pointed out. We'll have to keep working with the historical sample to see if any of them are actually better. In 2011 paying between 10 and 15 bucks for a closer was a losing strategy, but that isn't true every year.
       
      In the end, the importance of our discussion may be that it demonstrates that our draft-day prices for relievers are pretty much a proxy for our projected saves for relievers, so those prices will probably serve as a better (and, once cleaned up, easier-to-work-with) measure of the collective opinion in the preseason than anything else.
       
      As for combining different league formats, I've been compiling startup league prices for the Fantasy Baseball Guide for seven or eight years now. The sample always includes Tout and LABR prices. Sometimes I have some other startup leagues in the sample, usually 5×5 since that's the game most play these days. These are not pure, of course, but for our purposes they reflect a useful set of evaluations of players, quantified. A 4×4 league, as LABR was until a few years ago, will draft more relievers than starters, and closers will go for a few more dollars than they will in 5×5 leagues, but the order and ranking will be pretty similar.
       
      The data I sent Derek was compiled on the fly and some pieces were missing. My first order of business is to clean all that data up and ensure its accuracy, so that we can use it for research when it's the right tool.
       
      I'd love to find other sources for prices for startup roto leagues from the past 30 years. Contact me at askrotoman at gmail dot com.
