Modelling Game Outcomes In-Running

Way back in 2010 I developed a model to estimate the Home team's chances of victory in-running (ie during the course of a game) based on the lead it held at the time and the time remaining in the game. In a subsequent post I investigated ways of combining the in-running projections of that model with the TAB Bookmaker's pre-game assessments in something of a proto-Bayesian way.

At the time I was constrained by having scores only at the end of each quarter, which meant I had just four useful data points per game.

Recently, Paul, who looks after the afltables site, provided me with a data set of every scoring and end-of-quarter event for every game from the start of season 2008 to the end of the home-and-away season of 2013. So, instead of just four points per game, I now have as many data points for each game as there were scoring events in it.

For this blog I've used that data to re-estimate the original model, this time including the time-varying influence of the pre-game Bookmaker probability directly within the model rather than incorporating it as an add-on.

Specifically, I fitted a probit model (the probit actually fits a little better than the logit I used in the original blog), with the target variable being the binary outcome of the game from the Home team's viewpoint (drawn games excluded) and with the regressors being:

• The Event Time in Game, which is the time at which the relevant scoring event took place, expressed as a proportion of the total game length. For this purpose, each quarter was assumed to occupy 25% of the total game time and each event that occurred within a quarter was expressed as a proportion of the total time for that quarter. So, for example, an event that occurred exactly half-way into the 1st Quarter would be assigned a time value of 0.125, whereas one that occurred 80% of the way through the 3rd Quarter would be assigned a time value of 0.5 + 0.8 x 0.25 = 0.7.
• The Home Lead at the time of the event (ie Home Score less Away Score)
• The Pre-Game Home Probability, which is based on the TAB Bookmaker's pre-game prices and calculated using the Risk-Equalising approach, so that Pre-Game Home Probability = 1 / Home Price - (1 / Home Price + 1 / Away Price - 1) / 2
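The two derived regressors above can be sketched as follows. This is a minimal illustration of the definitions in the bullets, assuming each quarter occupies exactly 25% of total game time; the function names are mine, not from the original analysis.

```python
def event_time_in_game(quarter, fraction_of_quarter):
    """Time of a scoring event as a proportion of total game length,
    assuming each quarter occupies 25% of the game."""
    return (quarter - 1) * 0.25 + fraction_of_quarter * 0.25

def risk_equalising_home_prob(home_price, away_price):
    """Pre-game Home probability under the Risk-Equalising approach:
    the raw Home probability less half the bookmaker's total overround."""
    overround = 1 / home_price + 1 / away_price - 1
    return 1 / home_price - overround / 2

# The worked example from the text: an event exactly half-way into
# the 1st Quarter gets a time value of 0.125.
print(event_time_in_game(1, 0.5))  # 0.125
```

Note that with equal Home and Away prices the Risk-Equalising adjustment always returns exactly 50%, whatever the overround, which is one of its appealing properties.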

Since each game has as many rows in the fitted data set as there were scoring events in that game, high-scoring games (or games where the teams had very low scoring shot conversion rates) will tend to be more highly represented in the data than low-scoring games. An alternative, which would equalise the number of points in the data set for every game, would have been to "sample" the score from every game at a number of fixed points - say every 0.5% of the game - and then use that data for modelling purposes. Some other day, perhaps.
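For what it's worth, the fixed-point sampling alternative amounts to a forward-fill of the score at grid times, which is only a few lines of code. The data below is made up purely for illustration.

```python
import bisect

def sample_lead_at(times, leads, grid):
    """Forward-fill the Home lead at fixed grid times: at each grid
    point, take the lead after the most recent scoring event (zero
    before the first event). Times must be sorted ascending."""
    out = []
    for t in grid:
        i = bisect.bisect_right(times, t)
        out.append(leads[i - 1] if i > 0 else 0)
    return out

# Hypothetical scoring events: (time in game, Home lead after event)
times = [0.05, 0.12, 0.30]
leads = [6, 5, 11]
grid = [0.0, 0.1, 0.2, 0.3]
print(sample_lead_at(times, leads, grid))  # [0, 6, 5, 11]
```

Sampling this way would give every game the same weight in the fitted data regardless of how many scoring events it contained.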

THE MODEL

We end up with 67,892 data points after discarding games that ended in draws, and from these we fit the model whose coefficients appear in the table at right. All of the coefficients are highly statistically significant, partly a consequence of the huge number of data points. Their significance might also be inflated, however, by an underestimation of their standard errors, a result of having multiple observations, each from the same game, with undoubtedly correlated errors. Today though I'm not particularly interested in significance but, instead, in predictive ability. (* Sound of rug being lifted and a broom sweeping furiously ...*)

With this predictive aim in mind, I have for the current model experimented with different exponents on some of the terms. In the original formulation, the first two non-intercept terms shown above carried square roots as exponents (ie exponents of one-half), as per the original Stern paper that inspired my first post. The exponents shown here provide a superior fit based on the AIC measure.
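To make the general shape of such a model concrete, here's a sketch of how an in-running probability falls out of a probit with Stern-style terms: the lead scaled up as time remaining shrinks, and the pre-game probability's weight decaying towards zero. The coefficients and exponents below are purely illustrative placeholders, since the fitted values live in the table rather than the text.

```python
from math import erf, sqrt

def probit_cdf(z):
    """Standard normal CDF, ie the inverse of the probit link."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def in_running_home_prob(time_in_game, home_lead, pre_game_prob,
                         b0=0.0, b_lead=0.08, b_prob=3.0):
    """Illustrative Stern-style probit: the lead is divided by a power
    of time remaining (so a given lead counts for more late in the
    game), and the pre-game probability's weight decays to zero by
    the final siren. Coefficients are made up for illustration."""
    time_left = max(1.0 - time_in_game, 1e-6)
    lead_term = home_lead / time_left ** 0.5
    prob_term = (pre_game_prob - 0.5) * time_left
    return probit_cdf(b0 + b_lead * lead_term + b_prob * prob_term)

# Early on, the pre-game price dominates a level score ...
print(in_running_home_prob(0.1, 0, 0.3))
# ... but late in the game a three-goal lead swamps it.
print(in_running_home_prob(0.9, 18, 0.3))
```

Experimenting with the exponents, as described above, then amounts to varying the powers on `time_left` and comparing the resulting fits on AIC.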

One practical way of intuiting the fit of this model is to estimate how well it predicts the final result of a contest at various points within it. To this end I estimated the confusion matrices that appear at right, which provide some standard binary model metrics in relation to our model's performance on all of the scoring events from a particular quarter.

So, for example, the Acc (ie Accuracy) metric for the Q1 data tells us that, were we to have projected the final result of the relevant game just after any of the 16,619 scoring events that occurred in 1st Quarters, we'd have been right 73.1% of the time. As you might expect, this Accuracy metric rises for later quarters - for events in 4th Quarters, our Accuracy rises to 92.8%.
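The metrics in those confusion matrices can be computed directly from the model's projections, classifying a projected Home win whenever the estimated probability exceeds 50%. A minimal sketch, with toy actual/predicted vectors standing in for the real per-quarter data:

```python
def binary_metrics(actual, predicted):
    """Standard binary classification metrics from 0/1 vectors of
    actual outcomes and predicted outcomes (1 = Home win)."""
    tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
    tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))
    fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
    fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))
    return {
        "Acc":  (tp + tn) / len(actual),
        "Sens": tp / (tp + fn),   # Home wins correctly called
        "Spec": tn / (tn + fp),   # Home losses correctly called
        "PPV":  tp / (tp + fp),
        "NPV":  tn / (tn + fn),
    }

actual    = [1, 1, 1, 0, 0, 1, 0, 1]   # toy outcomes
predicted = [1, 1, 0, 0, 1, 1, 0, 1]   # toy projections
print(binary_metrics(actual, predicted)["Acc"])  # 0.75
```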

The other metrics shown here are Sensitivity, Specificity, Positive Predictive Value and Negative Predictive Value, for information about which the interested reader is referred to the Wikipedia page linked. The fact that the Sensitivity metric is higher than the Specificity metric for all four quarters tells us that the model is better at identifying Home team wins than Home team losses.

APPLYING THE MODEL

We can, of course, apply this model prospectively to any future game where we'll have the pre-game Bookmaker prices and real-time score updates. I plan to do this for the 2014 Grand Final.

For today though, to give an idea of what the model's in-running projections look like, I'm going to apply the model retrospectively to each of the Grand Finals from 2008 to 2013. As well, to estimate the relative importance of the Bookmaker's pre-game prices, I'm going to rerun the model for each game assuming that the pre-game prices were slightly different - specifically, that they implied pre-game Home team victory probabilities 10% lower or 10% higher than they actually did.

Below are the outputs for the 2008 GF. The top half of the chart maps the Home team's (here Hawthorn's) lead at each point in the game, while the bottom half maps the corresponding model assessment of the Home team's victory probability at the same time.

2008 Grand Final : Hawthorn v Geelong

The grey line in the bottom chart marks the pre-game Home team probability implied by the TAB Bookmaker's pre-game prices. In this game that probability was a little over 30%, as the Hawks were priced as \$2.65 underdogs on the Wednesday before the game. As we'd expect, the base model (ie the model that uses the actual pre-game Home team probability as an input, shown in black) has an initial assessment of Hawthorn's chances roughly equal to that value.

We can track the diminishing importance of the pre-game Home team probability assessment by noting the narrowing of the gap between the green line (which uses a pre-game probability 10% points higher than the base line) and the red line (which uses a pre-game probability 10% points lower than the base line). In this game, by about two-thirds of the way through the 3rd Quarter, by which time the Hawks led by about 3 goals, the difference in the three estimated probabilities is only about 5% points. By Three-Quarter time it's less than 4% points.
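That narrowing gap can be reproduced with any probit of the general shape described earlier. The self-contained sketch below (illustrative coefficients, not the fitted ones) evaluates the same model with the pre-game probability shifted down and up by 10% points at a few made-up game states, and shows the gap between the two runs shrinking as the game progresses:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def home_prob(t, lead, pre, b_lead=0.08, b_prob=3.0):
    """Illustrative in-running probit: lead scaled by time remaining,
    pre-game probability weight decaying to zero."""
    left = max(1.0 - t, 1e-6)
    return phi(b_lead * lead / left ** 0.5 + b_prob * (pre - 0.5) * left)

base = 0.32  # roughly the Hawks' pre-game probability in 2008
for t, lead in [(0.0, 0), (0.5, 12), (0.75, 18)]:  # made-up game states
    lo = home_prob(t, lead, base - 0.10)
    hi = home_prob(t, lead, base + 0.10)
    print(f"time={t:.2f}  gap between +10% and -10% runs: {hi - lo:.3f}")
```

Because the pre-game term carries a weight that shrinks with time remaining, the ±10% runs are pulled towards the base run as the final siren approaches, regardless of the lead.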

Next we have the 2009 Geelong v St Kilda Grand Final, for which the Cats were \$1.60 favourites pre-game. Their baseline probability assessment dipped below 50% when they trailed during the first half of the 2nd Quarter but surged above 50% as they built a lead later in that same Quarter.

2009 Grand Final : Geelong v St Kilda

The Cats' probability then tracked below 50% for most of the 2nd Half until a few late goals saw them take the lead and their probability spike upwards. Again, the model shows a diminishing contribution from the Bookmaker's pre-game probability assessment as the game progresses. The difference between the base and the +10% scenario is only 7% points at Half Time and 4% points at Three-Quarter Time.

2010 Grand Final Replay : Collingwood v St Kilda

2011 Grand Final : Geelong v Collingwood

2012 Grand Final : Sydney v Hawthorn

2013 Grand Final : Hawthorn v Fremantle

The 2012 GF is an interesting example of a game where the influence of pre-game Bookmaker probabilities diminished relatively rapidly. In that game the difference between the base model assessment and the assessment of the model with a 10% point higher pre-game starting point was only 5% points by Half Time and 3% points by Three-Quarter Time. In 2013, the influence was smaller still, the Half-Time difference measuring just 3.5% points, and the Three-Quarter Time difference measuring just 2.7% points.

SUMMARY

We've now a model that allows us to estimate the Home team's victory probability with what seems to be a reasonable level of accuracy, in-game. What remains is to apply this model during the course of a series of games to see how it fares post-sample, which is something I plan to do over the remainder of this season and over the initial stages of 2015.