
W-L% for starters and relievers

Posted by Andy on March 17, 2009

Some of our discussions from last week spurred me to dig a little deeper into some numbers.

Using a few Pitching Game Finder searches, I determined the total number of wins and losses registered by starters and relievers in each year since 1956. For any individual year, you can get this data from league-wide splits, such as the splits for 2008. From that link, you can see that both starters and relievers were nearly .500 during last season: starters went 1682-1671 and relievers went 746-757. But what does it look like for the last 50 years?

Here's a plot:

[Figure: starters' and relievers' W-L% by season, 1956-2008]

So, wow, we can see that things have changed dramatically and quite consistently. Back in the 1950s, starting pitchers won only about 48% of their decisions while relievers won about 56% of theirs.

First, a point of explanation. In case you're wondering why the starters' and relievers' W-L% numbers don't average out to exactly .500, it's because the starters have many more decisions than the relievers. (Remember from above that starters had over 3,300 decisions in 2008 while relievers had about 1,500.)
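
To make that concrete, here's a quick arithmetic check using the 2008 split totals quoted above (the totals are from the post; the snippet itself is just illustrative Python):

    # 2008 decision totals from the league-wide splits quoted above
    starter_w, starter_l = 1682, 1671
    reliever_w, reliever_l = 746, 757

    starter_pct = starter_w / (starter_w + starter_l)       # ~0.502
    reliever_pct = reliever_w / (reliever_w + reliever_l)   # ~0.496

    # The plain average of the two percentages is not exactly .500...
    print((starter_pct + reliever_pct) / 2)                 # ~0.499

    # ...but the average weighted by number of decisions is exactly .500,
    # because every win charged to one pitcher is a loss charged to another.
    total_w = starter_w + reliever_w
    total_l = starter_l + reliever_l
    print(total_w / (total_w + total_l))                    # 0.500

The gap between the unweighted and weighted averages is tiny here because the two 2008 percentages are so close; back in the 1950s, with starters around 48% and relievers around 56%, it was much more noticeable.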

Anyway, we see that the numbers have trended pretty steadily towards a 50/50 split. 2008 was very different from 2007, and I wonder whether 2009 will see another reversal or not.

The biggest questions are: what do these numbers mean, and why have they changed?

I think that, generally, the numbers reflect a rough sort of league-wide pitching efficiency. The corollary is that they also show how much offense there is. When offense has something to do with a graph, I always notice two things: an aberration in 1987 (which this graph shows) and different behavior in the period from 1993 to 2005 (also known as the Steroids Era).

Notice that in 1993, starters' W-L% jumped up to 49.7%, the 4th-highest value between 1956 and 1993. Then, other than slight drops in 1995 and 1996, we see a long run of numbers very close to 50%, capped by the first crossing above 50% in 2005. As for what has happened in the last few years, I think we'll need another 5-10 years of data to understand that trend.

So why has this happened? Why has more offense led to starters winning more games?

For our explanation, let's go back to the 1950s. Back then, most starters' victories were complete-game wins; even many of their losses were complete games. But think about how relievers were used during this era. For the most part, they came into games when the starter had been knocked out and their team was losing. This means that, generally at least, relievers usually came in when their team was already behind, making it more likely that they'd earn a win (if their team rallied while they were the pitcher of record) than a loss (which would require their team to rally, and then for the reliever to blow that lead). I think this clearly explains why relievers had a higher W-L% during earlier periods.

The really simplified way of looking at it is this: each starting pitcher throws a complete or nearly-complete game, so the two starters split 1 win and 1 loss between them, for a .500 record. But sometimes the starter on the losing end would get taken out, and then his team would rally against the other team's starter. That earns the starter who had the lead a LOSS, the starter who was trailing a NO DECISION, and the reliever a WIN. This is why the starters' numbers sit a little below .500 and the relievers' a bit above .500.
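
If you want to see how that simplified scenario produces the skew, here's a rough Monte Carlo sketch. The probabilities are made-up illustrative values rather than anything fit to real data, and the model ignores every other way a reliever could enter a game, so it exaggerates the effect; the point is just the direction of the bias, not the 48%/56% figures themselves.

    import random

    def simulate(n_games=100_000, p_pull=0.35, p_rally=0.25, p_blow=0.20, seed=1):
        # Toy model of 1950s usage: both starters go the distance unless the
        # trailing starter is knocked out, and a reliever only ever enters
        # for the team that is already behind.
        random.seed(seed)
        starters = {"W": 0, "L": 0}
        relievers = {"W": 0, "L": 0}
        for _ in range(n_games):
            if random.random() < p_pull:              # trailing starter knocked out
                if random.random() < p_rally:         # his team rallies with the reliever in
                    if random.random() < p_blow:      # ...but the reliever gives the lead back
                        starters["W"] += 1            # the other starter still gets the win
                        relievers["L"] += 1
                    else:
                        relievers["W"] += 1           # reliever is the pitcher of record
                        starters["L"] += 1            # formerly-leading starter takes the loss
                    # the knocked-out starter gets a no-decision either way
                else:                                  # no comeback
                    starters["W"] += 1                # leading starter wins
                    starters["L"] += 1                # knocked-out starter still takes the loss
            else:                                      # both starters complete the game
                starters["W"] += 1
                starters["L"] += 1
        for role, rec in (("starters", starters), ("relievers", relievers)):
            pct = rec["W"] / (rec["W"] + rec["L"])
            print(f"{role}: {rec['W']}-{rec['L']} ({pct:.3f})")

    simulate()

Because the reliever in this toy world only ever inherits a deficit, about the only way he can be charged with a loss is to take a lead his team hands him and give it right back, which is exactly the asymmetry described above.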

Over time, relievers have been used more and more. They come in when games are tied, or oftentimes when their team is in the lead but the manager has decided to pull the starter after a certain number of pitches or innings. This means that these days, relievers enter games more often when the outcome is less certain, so their average W-L% has gotten closer to 50%.

We can also look at this from an offensive standpoint. When more runs are scored, games can be decided at different times and with less predictability. These days, it's not too rare to see a team put up 8 runs in the first 3 innings and win a game. It's also not that rare to see a team rally for 6 runs in the 7th and 8th innings and come back to win. More offense means that more rallies are happening, so it's more random whether a starter or a reliever happens to be pitching at the time. More randomness tends to even things out, which is why we've seen the push towards 50%. This increased randomness also comes from the fact that starters are pitching fewer and fewer innings per start: whatever forces kept the starters' W-L% all the way down at 48% are weakened as starters go less deep into games.

Over the next couple of days, I'm going to post more graphs with related data and we'll tease apart these issues in more detail.

3 Responses to “W-L% for starters and relievers”

  1. Stat of the Day » More on fraction of decisions going to starters Says:

    [...] but that gap has closed to nothing in recent years. (This is the same conclusion drawn from my post two days ago where the starters’ historical win percentage has climbed from 48% to [...]

  2. Jgeller Says:

    I think it's related to the starters not going the distance anymore. But it's not that relievers are being used more. It's that because starters are going fewer innings and closers are becoming 1-inning specialists, we have more sub-par relievers pitching in the 6th, 7th, and 8th innings of games. Because there are more sub-par relievers pitching, there are more blown leads, leading to more losses for relievers.

  3. JohnnyTwisto Says:

    Inferior relievers vs. tired starters. I'm pretty sure the % of leads lost in the late innings has not changed significantly over the last few decades. I think this can be searched on B-R but I don't remember how and don't have time to find it right now. Will try to do it later.