I've often heard it asserted after a team's close loss that it will "bounce back harder next week". With a little work, that's a testable claim.
First, we need to define some terms, specifically what we mean by "close" and what constitutes "bouncing back". Today I'll test three definitions of close - a margin of under 6 points, under 12 points, and under 18 points - and I'll define bouncing back as having a higher probability of winning the next game than might otherwise be expected, or as producing a better-than-expected game margin in that next game.
While I'm looking for evidence of next-game resilience, I might as well look for evidence of its opposite too - that is, for any evidence that teams enjoying a close win might suffer a performance decline relative to expectations in the subsequent game.
The search for evidence will draw, as is customary, on the data for all games from the start of season 2006 up to and including the most recent result (which is the regrettable 2014 Grand Final).
The hypothesis being tested is whether a team's performance in a particular game is influenced by its suffering a close loss (or enjoying a close win) in the immediately preceding game. To properly isolate the effects of such a narrow win or loss we need to control for other factors that might reasonably be thought to influence the victory chances and ultimate game margin of that ensuing game. These factors are:
- The strength of the teams involved (which I'll proxy by my team MARS Ratings and, for some models, the Bookmaker's Implicit Home Team Probability - here the Risk Equalising version)
- The game venue (which I'll proxy by team Venue Experience and the game's Interstate Status)
- Whether or not the team's opponents are themselves playing after having narrowly won or narrowly lost their previous game. This status will be reflected in the models by a pair of dummy variables taking on the value 1 when the relevant status is true (eg the opponents had a narrow loss in their previous game) and 0 when it's false.
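As a concrete sketch of how such close-result dummies might be built from a per-team game log, here's one possible construction. The column names and the tiny example data are my own assumptions, not the actual MAFL dataset:

```python
import pandas as pd

# Hypothetical per-team game log: one row per team per game, in date order.
games = pd.DataFrame({
    "team":   ["A", "A", "A", "B", "B", "B"],
    "round":  [1, 2, 3, 1, 2, 3],
    "margin": [5, -20, 2, -5, 20, -2],   # this game's margin from the team's view
})

CLOSE = 6  # one definition of "close": a result decided by under 6 points

# The dummies describe the *previous* game, so shift within each team.
prev = games.groupby("team")["margin"].shift(1)
games["close_win_last"]  = ((prev > 0) & (prev <  CLOSE)).astype(int)
games["close_loss_last"] = ((prev < 0) & (prev > -CLOSE)).astype(int)

print(games)
```

The first game of each team's sequence has no previous result, so both dummies are 0 there; a drawn previous game (margin of exactly 0) also leaves both dummies at 0.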
(For an explanation of many of these variables, see the What Variables Are Used in MAFL Statistical Models? section of the MOS Primer.)
For each of the three definitions of "close" I've fitted two models:
- A binary logit, with the game result from the Home team's perspective as the dependent variable (and excluding all drawn games from the data)
- An OLS regression, with the game margin from the Home team's perspective as the dependent variable
Initially I fitted all of these models excluding the Bookmaker Home Team Implicit Probability variable, on the assumption that, were there any "close last game" effects, these might already be incorporated in the Bookmaker's pricing. Including the Bookmaker variable then serves to account for any such expected last-game effects, and also further controls for the relative quality of the competing teams to the extent that this is not achieved via my own team Ratings and the other, venue-related variables.
Looking firstly at the modelling outputs for the Binary Logit formulations in which we are assessing the effects of narrow wins and losses on the subsequent victory probability of a team we find scant evidence for any "last game effects", regardless of which definition of "close" we adopt.
The models on the left in each block are those where the Bookmaker Implicit Probabilities have been excluded, and they all include small and statistically non-significant coefficients on the Close Win and Close Loss dummy variables. In other words, a team playing after a close win or a close loss in the preceding round is not, statistically speaking, any more or less likely to win. Ignoring statistical significance, we can say that teams playing after a close win are slightly less likely to win in the next round (adjusting for other factors), regardless of the definition of close, while teams playing after a close loss are actually LESS likely to win too, unless we define a close loss as one by less than 3 goals. For a team that would otherwise be a 50% chance, even the largest coefficient in absolute terms implies only a reduction in that probability of 1.4%, to 48.6%.
Including the Bookmaker Implicit Probability variable does nothing to change the signs of the Close Win and Close Loss coefficients, though it does increase their absolute magnitudes - not, though, by enough to make them statistically significant, and, even in the case of the most negative coefficient, only by enough to drop an otherwise 50:50 proposition to 46.7%. So: not statistically significant in any of the cases, and probably only practically significant in the most extreme case.
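The probability arithmetic quoted above is just the logistic transform: a dummy-variable coefficient shifts the log-odds, and for a team that would otherwise be a 50% chance the log-odds start at zero. The coefficient values below are back-solved illustrations consistent with the quoted probabilities, not the fitted coefficients themselves:

```python
import math

def shifted_prob(base_prob, coef):
    """Apply a logit-scale coefficient to a baseline probability."""
    log_odds = math.log(base_prob / (1 - base_prob)) + coef
    return 1 / (1 + math.exp(-log_odds))

# With a 50% baseline the log-odds are zero, so the shifted
# probability is simply logistic(coef).
print(round(shifted_prob(0.5, -0.056), 3))  # about 0.486
print(round(shifted_prob(0.5, -0.132), 3))  # about 0.467
```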
On that basis I think it's fair to assert that a team's close loss or close win in one week has no meaningful influence on the likelihood of its winning its next game after we adjust for the quality of the opponent it's facing and the venue of the subsequent contest.
Perhaps though the effect is more subtle, and teams don't become more likely to win but merely more likely to score or concede a few more points. That's the hypothesis tested via the OLS Regression formulation, the results for which appear next.
Once again, if we're searching for statistically significant effects, we've come to the wrong hypothesis. Excluding the Bookmaker Implicit Probability variable, we find promising coefficient values only when we define a close result as one where the margin is less than a goal. In that case we find that teams backing up after a close loss score 4.3 more points than we'd expect, and teams backing up after a close win score 1.3 points fewer than we'd expect.
Including the Bookmaker Implicit Probability variable produces this same pattern - losing teams subsequently scoring more, and winning teams subsequently scoring less, after close results - regardless of the definition of close. Still, however, none of the coefficients is statistically significant. There is perhaps some solace in the fact that, as we narrow the definition of close, the absolute size of the coefficients increases. So, the narrower the loss (and the narrower the win), the bigger the subsequent effect.
A generous interpretation of these results would be that there might be a small effect - that teams suffering a narrow loss one week score slightly more in the following week and that teams enjoying a narrow win one week score slightly less in the following week, but that the effect is so small (or variable) that we've insufficient sample to confidently proclaim its existence.
Even if such an effect does exist in terms of the ultimate game margin, however, the results of the binary logit models mostly suggest that it's insufficiently large to affect teams' winning and losing chances, and serves instead only to reduce the size of a loss or increase the size of a victory.
The main conclusion is that there's no statistically significant evidence for either of the hypotheses we wound up testing. Teams that suffer narrow losses don't do better (or worse) than they otherwise might have been expected to in the subsequent games, and teams that enjoy narrow wins don't subsequently do worse (or better).
Ignoring statistical significance, there is some small evidence that very narrow wins and losses (ie those by fewer than 6 points) have about a half-goal effect on the score in the ensuing game in the direction hypothesised. We need more data, though, to distinguish between the existence of a genuine but small effect and a highly variable but mean-zero effect.
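To give a rough sense of how much more data "more data" means, here's a standard known-variance power calculation for detecting a roughly half-goal (3-point) shift in game margins. The 36-point margin standard deviation is my own ballpark assumption, not a figure from the models above:

```python
import math

def required_n(delta, sigma, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per group to detect a mean shift of delta,
    two-sided alpha = 0.05 with 80% power, for a known-variance z-test."""
    return math.ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

# A ~3-point effect against ~36-point margin noise needs over a
# thousand qualifying games - many more close-result games than
# nine seasons are likely to supply.
print(required_n(delta=3, sigma=36))  # 1129
```

The quadratic dependence on sigma/delta is why small effects buried in noisy margins are so hard to confirm: halving the effect size quadruples the sample required.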