(Originally published on March 9, 2022)
This is part of a series of posts on sabermetrics and the mathematics of baseball. You can find more here.
My favorite thing about playing baseball as a kid was stealing bases. I thought I was fast, since I did it a lot and didn't get caught, but it probably had more to do with young catchers having poor throwing arms. Even now as an adult just watching baseball on TV, stolen base attempts never fail to inject some excitement.
But how much is a stolen base really worth? How confident should a manager be in a runner before signaling a steal? And how has this part of the game changed over the past century?
In a previous post, I briefly outlined the idea of run expectancy and used it to compare the value of a single to a double. To recap, we start with a run expectancy matrix for a given year or era. This tells us the average runs scored by baseball teams in a given base-out situation, i.e. with a certain number of outs and runners on base. A given play moves us from one base-out state to another, and we value this play in terms of the difference in expected runs at the start and end of the play, plus any runs that scored: \[RE_{end} - RE_{start} + RS.\] For a more complete explanation, check out this one on FanGraphs. Most of the run expectancy values I'll be using today come from Greg Stoll's Win Expectancy Finder.
This makes valuing a stolen base pretty easy. Let's say there's no outs and a speedy runner on first base who wants to steal second. Using Greg Stoll's calculations with 2021 data, the average run expectancy is 0.94. That is, averaging over all situations with no outs and a runner on first in MLB games in 2021, the batting team scored 0.94 runs by the end of the inning. After stealing second, there are still no outs but a runner on second, giving an average run expectancy of 1.17. Taking the difference, since no runs scored on this play, the stolen base changed the run expectancy by +0.23 runs!
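To make this concrete, here's the same arithmetic as a couple of lines of Python (a minimal sketch; the variable names are mine):

```python
# Run value of a play: change in run expectancy, plus any runs that scored.
# The numbers are the 2021 values quoted above from the Win Expectancy Finder.
re_start = 0.94   # no outs, runner on first
re_end = 1.17     # no outs, runner on second
runs_scored = 0   # nobody scores on a straight steal of second
print(f"{re_end - re_start + runs_scored:+.2f} runs")  # +0.23 runs
```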
This approach alone suggests that for every four or five steals of second base, a team will score about one more run. The problem is that it fails to take into account the risk in attempting the steal. In 2021, MLB runners were successful in their stolen base attempts only 75.7% of the time! If the runner in our example above had been thrown out, there would be one out with the bases empty, which lowers the average run expectancy to 0.28, for a change of -0.66 runs.
As a first attempt to value this risk, let's try to evaluate the impact of the stolen base attempt instead, using the same run expectancy framework. If \(RE_{end}\) is our new run expectancy after the attempted steal, then its expected value \(E(RE_{end})\) is given by \[E\left( RE_{end} \right) = SB\% \cdot RE_{success} + (1 - SB\%) \cdot RE_{caught}\] where \(SB\%\) is the chance of success on the stolen base attempt. Thus the expected value, in change in run expectancy above average, of a stolen base attempt is given by \[E\left( RE_{end} \right) - RE_{start}.\] In the case of no outs and a runner on first, this becomes \[SB\% \cdot RE^0_{020} + (1 - SB\%) \cdot RE^1_{000} - RE^0_{100}.\] Here the shorthand \(RE^i_{xyz}\) indicates the run expectancy with \(i\) outs and \(xyz\) denoting the baserunner configuration (e.g. 000 for bases empty, 100 for a runner on first, 020 for a runner on second, etc).
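As a sketch, that expected value might look like the following in Python (the function and argument names are mine):

```python
def expected_re_change(sb_pct, re_start, re_success, re_caught):
    """Expected change in run expectancy for a steal attempt
    that succeeds with probability sb_pct."""
    return sb_pct * re_success + (1 - sb_pct) * re_caught - re_start

# No outs, runner on first, with the 2021 values quoted above: success
# leaves a runner on second (1.17); getting caught leaves the bases empty
# with one out (0.28). At the league-wide 75.7% success rate:
print(f"{expected_re_change(0.757, 0.94, 1.17, 0.28):+.3f}")  # +0.014
```

At the league-wide success rate, the expected change is just barely positive --- which hints at the breakeven calculation coming next.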
Of course, we don't know what the chance of success is --- but that's not the point. We'll use this run expectancy calculation to determine what value of SB% we need in order to make stealing worthwhile, from a run expectancy point of view. For now, "worthwhile" means that we expect a positive change in run expectancy. So, we set up a linear inequality and solve for SB%, \[SB\% \cdot RE^0_{020} + (1 - SB\%) \cdot RE^1_{000} - RE^0_{100} > 0\] \[\implies SB\% > 74.2\%.\] We'll call this the breakeven success rate for stealing bases with no outs and a runner on first. What it tells us is that if we are at least 74.2% sure that our runner will be successful in the stolen base attempt, then we expect sending the runner to have a positive effect on run expectancy. Applying this strategy over and over in this situation (assuming the true success rate is above 74.2% and we're in the same run environment) should lead to more runs scored over time.
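For the record, rearranging the inequality gives a closed form that works for any steal situation (the denominator is positive, since success is always worth more than getting caught): \[SB\% > \frac{RE_{start} - RE_{caught}}{RE_{success} - RE_{caught}} = \frac{0.94 - 0.28}{1.17 - 0.28} \approx 74.2\%.\]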
Of course, we don't have a great way of estimating the true rate \(SB\%\). A good starting point might be a particular runner's season or career success rate, but this may not reflect future success in all situations. We might try to fit a runner's sprint speed to their success rate, but this sounds like a problem for another time, and it wouldn't allow us to study more than the few years back for which we have such data. It also doesn't take into account differences in pitcher motion, pitch selection (fastballs reach the plate sooner and are harder to steal on), and catcher ability. Instead, I think of this as a manager's tool for understanding the risks and rewards associated with stealing second base (with no outs, in this run environment). Using all of the information available to them in that moment, they should be about 74.2% confident in the runner just to break even.
I went ahead and computed the breakeven success rate for several stolen base situations below, using the same 2021 run expectancy data. Here's what I found:
Runner(s) | 0 out | 1 out | 2 out |
---|---|---|---|
100 | 74.2% | 72.6% | 72.7% |
020 | 75.4% | 70.4% | 84.6% |
120 | 61.2% | 55.9% | 78.3% |
103 | 67.3% | 79.0% | 86.7% |
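If you'd like to reproduce the table, here's a sketch of the calculation in Python. The transition states encode the assumptions described in this post (for instance, on a failed double steal only the lead runner is out), and the run expectancy dictionary contains just the three 2021 values quoted earlier --- the remaining base-out states would need to be filled in from the Win Expectancy Finder. All names here are mine:

```python
# (outs, runners) -> run expectancy, with runners in the post's shorthand
# ("100" = runner on first, "020" = runner on second, etc). Only the three
# 2021 values quoted earlier are filled in.
RE_2021 = {
    (0, "100"): 0.94,
    (0, "020"): 1.17,
    (1, "000"): 0.28,
    # ... remaining states from the Win Expectancy Finder
}

# For each starting runner configuration: (state if safe, state if caught).
# A caught runner always adds an out; on the failed double steal, the
# runner from first still takes second.
TRANSITIONS = {
    "100": ("020", "000"),  # steal second
    "020": ("003", "000"),  # steal third
    "120": ("023", "020"),  # double steal, lead runner out at third
    "103": ("023", "003"),  # runner on first steals second
}

def breakeven(re, outs, runners):
    """Success rate at which a steal attempt breaks even in run expectancy."""
    safe, caught = TRANSITIONS[runners]
    return (re[(outs, runners)] - re[(outs + 1, caught)]) / (
        re[(outs, safe)] - re[(outs + 1, caught)]
    )

print(f"{breakeven(RE_2021, 0, '100'):.1%}")  # 74.2%
```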
Less risky, but not necessarily easier. For a runner on second base stealing third with zero or one out, the breakeven success rate is comparable to that of stealing second. This means we should have about the same amount of confidence in the runner to ask them to steal third base. Now, this doesn't tell us if it's any harder or easier to steal third base versus second base. It just says that the balance of risk and reward is similar to the previous situation of stealing second, in terms of run expectancy. This came as something of a surprise to me from a baseball perspective, as the runner on second base is already in scoring position, ready to score on most hits to the outfield. It does make sense that doing so is more risky with two outs than one or none, since with two outs there is no possibility of a sacrifice fly scoring the runner after they've stolen third. See this ESPN article from 2010 for some anecdotes about stealing third.
Where are the double steals? In the third row of the table (labeled "120" above), we have runners at first and second and consider a double steal --- if successful, the runners will be at second and third. Note that here I'm assuming the catcher attempts to throw out the runner heading to third, so if caught, there will be a runner at second base only. This is not a maneuver I see often (though I don't have any data to back that up), so it's surprising to see the breakeven success rate with zero or one out come in considerably lower than in the other situations! It's actually less risky from an expected runs point of view than asking a lone runner on second to steal third base! I wonder why more teams aren't double stealing with runners on first and second.
Don't make the last out at third. I remember hearing this one when playing as a kid. If there's already two outs, there's no sense in taking the risk in advancing to third (on a stolen base or otherwise) if you can stay put at second and hope for a hit to score you later. The data actually seems to support this, as the breakeven success rate is significantly higher for a runner on second (or first and second) with two outs than with none or one out. I might go so far as to say don't make the last out when there's a runner in scoring position, as there is also quite a jump for the situation of runners at the corners with two outs.
Across MLB, stolen bases have dropped to their lowest level in about 50 years, at 0.46 steals per team per game (see Baseball Reference). In that timeframe the peak appears to be around 1987, when teams stole about 0.85 bases per game. That's nearly a 50% drop! What's more, getting caught stealing has become rarer as well: in 2021 there were 0.15 caught stealings per game, for about 0.6 steal attempts per game and the aforementioned success rate of 75.7%. In 1987, teams attempted steals about 1.2 times per game and were successful only about 70% of the time, so it's certainly not the case that modern players have become slower.
To better understand this trend, let's consider the possibility that the game of baseball itself has changed over this period. We can blame the emphasis on the home run or steroids or whatever else we want to, but perhaps MLB teams and coaches understand their run environments, and adjust their strategies accordingly. If this is the case, then we might expect that in years when more steal attempts were made, there was a greater reward for the risk of doing so, as measured in run expectancy. That is, I would expect that the breakeven stolen base success rates were lower in the 1980s than in 2021.
The good news is that we can test this! Again using run expectancy data from Greg Stoll's Win Expectancy Finder for the years 1980 - 1989, I calculated the breakeven success rates below.
Runner(s) | 0 out | 1 out | 2 out |
---|---|---|---|
100 | 70.6% | 71.9% | 68.8% |
020 | 78.7% | 67.9% | 86.5% |
120 | 61.2% | 53.8% | 72.9% |
103 | 75.5% | 78.2% | 83.0% |
The result is not quite as clear-cut as I imagined, but it does tell a story. In eight of the twelve situations above, the breakeven success rate is in fact lower for the 1980s than for 2021. In particular, with a runner on first base stealing second, the breakeven rate was lower in all three out situations, so this does suggest to me that stealing bases was more favorable in the run environment of the 1980s. The biggest difference in the other direction appears to be with runners on the corners and no outs. This arises from the fact that the \(RE_{success}\) value for runners on second and third with no outs is about 1.96 in the 1980s and 2.11 in 2021, while the other run expectancy values are quite close --- put more simply, there was similar risk and more reward in 2021 for stealing second with runners on the corners and no outs! Overall though, several similar patterns emerge here, including the low breakeven success rates for double steals with zero or one out and the high breakeven success rates when there are two outs and runners already in scoring position.
One additional caveat to consider: how do the steals themselves affect the run environment and the average run expectancy data we're using for these calculations? The chance that a runner on first could steal second base, making it more likely for them to score, is built into the calculation of the \(RE^i_{100}\) values, possibly inflating them. On the other hand, the chance they get caught is also built into this calculation, providing some balance. Either way, I expect that even at the rate of 1.2 steals per game in 1987, these represent a small enough fraction of the total plays with a runner on first base that they don't have an outsize effect on the run expectancy calculations (but if you're really worried about this and want to figure out a way to recalculate the run expectancy by excluding all stolen bases, or otherwise eliminating any potential bias, please do so and let me know!).
I tried this with a few different eras and recorded the results in this Google sheet. The results are color coded: shades of red indicate a breakeven success rate above 75% (when you need to be the most sure for it to be worthwhile), while shades of green are used for rates below 70% (when you can afford to be less sure). I also included some run expectancy data from Tom Tango to compare it with the Win Expectancy Finder data. Both the run expectancy matrices and the breakeven stolen base success rates compare closely for the two data sets, so I'm not too worried about any methodology or data reporting differences (like the fact that the Win Expectancy Finder provides only two decimal places of precision) having an impact.
With all this talk of breakeven success rates, who was actually successful at stealing bases? Rickey Henderson is MLB's all-time career leader in stolen bases with 1406. However, he is also the all-time leader in times caught stealing, with 335. Calculating his success rate as \[SB\% = \frac{SB}{SB + CS},\] Rickey Henderson's career success rate is 80.8%. This is higher than most of the values in the tables above, so I'd say it was likely worth him attempting all those steals from a run expectancy perspective. How does this success rate stack up to his MLB peers?
To find out, I queried the 2019 version of the Lahman baseball database with the following SQL command:
SELECT playerID,
       sum(SB) AS Career_SB,
       -- the 1.0 factor guards against integer division in dialects like SQLite
       1.0 * sum(SB) / (sum(SB) + sum(CS)) AS Career_PCT
FROM batting
WHERE CS IS NOT NULL  -- only seasons with caught stealing data
GROUP BY playerID
HAVING sum(SB) > 99   -- i.e. at least 100 career SB in those seasons
ORDER BY Career_PCT DESC;
This command adds up a player's stolen bases and computes their success rate over certain seasons, namely those in which we have caught stealing data. (This data is missing from the early years in the database, so I decided to just ignore all seasons in which it is missing.) We'll also require the players to have at least 100 stolen bases total (in years for which we have CS data) to be included. This number is totally arbitrary, but 505 players match these criteria, which seems like a reasonable number! (Baseball Reference uses a threshold of 80 stolen base attempts, and as such gets a slightly different list.)
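If you have a SQLite copy of the Lahman database handy, a few lines of Python will run the query (the file name below is hypothetical; point it at your local copy):

```python
import sqlite3

conn = sqlite3.connect("lahman2019.sqlite")  # hypothetical file name
query = """
    SELECT playerID,
           sum(SB) AS Career_SB,
           1.0 * sum(SB) / (sum(SB) + sum(CS)) AS Career_PCT
    FROM batting
    WHERE CS IS NOT NULL
    GROUP BY playerID
    HAVING sum(SB) > 99
    ORDER BY Career_PCT DESC
"""
for player_id, career_sb, career_pct in conn.execute(query):
    print(f"{player_id}: {career_sb} SB, {career_pct:.1%}")
conn.close()
```

The top of the resulting list is below.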
Name | SB | SB% |
---|---|---|
1. Chase Utley | 154 | 87.5% |
2. Carlos Beltran | 312 | 86.4% |
3. Jayson Werth | 132 | 85.2% |
4. Jarrod Dyson† | 250 | 85.0% |
5. Kazuo Matsui | 102 | 85.0% |
6. Eric Byrnes | 129 | 84.9% |
7. Mike Trout† | 200 | 84.8% |
8. Nate McLouth | 133 | 84.7% |
9. Pokey Reese | 144 | 84.7% |
10. Tim Raines | 808 | 84.7% |
⋮ | ||
40. Rickey Henderson | 1406 | 80.8% |
⋮ | ||
505. Babe Ruth* | 110 | 48.5% |
*Babe Ruth had 13 stolen bases between 1914 and 1919, for which no CS data was available. CS data is also missing for his final 1935 season.
†Jarrod Dyson and Mike Trout are currently active, and the numbers above are through the end of the 2019 season. As of the end of the 2021 season, they both have success rates above 84.4% and thus would remain in this top ten (so long as no one else has made this list in the meantime!).
Let's start at the bottom, which is my favorite part of this table. Babe Ruth was caught more often than not in his career (in the years for which I have data, anyway). This includes four straight years from 1922 - 1925, when he was still relatively young and not as full of hot dogs. I have to wonder why the Babe was still trying to steal, even making four attempts in 1934, his final full season (he was caught three out of four times). If he were with us today, he might protest "the run environment was different then!" but I checked: between 1918 and 1930, breakeven stolen base success rates were well above 50% across all situations, so he probably should've stayed put.
At the other end of the table, we have...Chase Utley? The unluckiest man in baseball (in early 2015, anyway)? I don't particularly remember him for his speed, and indeed, according to Baseball Savant his sprint speed was slightly above average at best. His highest stolen base total for a season was 23 in 2009, a year in which he wasn't caught once, for a 100% success rate. In fact, of these top ten names, only Tim Raines (four times from 1981 - 1984) and Mike Trout (in 2012, his first full season) ever led their league in stolen bases. Utley and Werth never even had a season in the top ten of base stealers!
It's fun to speculate about how players like Utley, and the even slower Werth, maintained their high success rates over their careers. My best guess is that they were good at identifying situations in which they would have a high probability of success: a pitcher who is slow to the plate or not holding them on, a catcher with a weak arm, fielders poorly positioned to receive a throw. Perhaps they weren't the fastest, but they got great jumps to make up for it. Or, noticing that they were teammates from 2007 - 2010, was it that the Phillies had exceptional base coaches during this time?
All of this suggests more questions than it answers about evaluating the impact of a stolen base and the stealing prowess of players throughout history, so I'll end this post with one that I'm especially intrigued by. How can we measure the impact of a stolen base on the game outcome? This requires something different from the run expectancy analysis we did earlier, since that analysis doesn't take into account the score of the game at the time of the steal. A stolen base in the third inning of a two-run game just doesn't seem as impactful as a stolen base in the ninth inning of a tie game (think Dave Roberts' memorable steal of second base in the 2004 ALCS). Is there a way to look at each stolen base (in a season, a player's career, or all of MLB history) and quantify how the steal (or caught stealing!) affected the game's outcome? What would such an analysis suggest in terms of strategy --- do teams benefit from being more conservative or aggressive in these different game environments? Which players had the most game impact on the basepaths throughout their careers, and are they the players one might expect?