Estimating AFL Player Value

A few months back I had a first look at incorporating player data into predictive models, and found that we could knock about 0.4 points per game off the mean absolute error (MAE) of game margin predictions across the 2011 to 2018 seasons by valuing players solely on their Super Coach (SC) scores.

The methodology I used there for converting SC scores to estimated player values was quite simplistic and involved only averaging and regularising recent SC scores. Probably the most obvious shortcoming of this approach was that it treated very recent games no differently to those played as much as 12 months ago. It was, in effect, form agnostic.

As the first step in today’s analysis I’m going to deploy a more complex approach.

METHODOLOGY

The broad-brush elements of today’s post will be very similar to those of that first post. We will, for example, again take the available MoSHBODS and SuperCoach data for the nine seasons from 2010 to 2018, and split it 50:50 into the training and test sets - to facilitate comparisons, the same sets, in fact, as we used in the previous blog. With that data we will, once again, build linear regressions on the training set to explain game margins - the difference between the home and the away team final scores - and we’ll measure the final performance of our models on the test set.

To help estimate a player’s value (as measured by SC scores) we’ll use the ets function from the R forecast package, which will allow us to treat the string of scores that a player has recorded as a time series, thereby recognising the order in which scores were registered. More specifically, we’ll estimate a one period ahead forecast SC score for a player based on his most recent N games. We’ll try a range of values for N, and for the other parameters described in the following paragraphs, and select a joint set of values that provides the best fit for our regression model, described below.

(Technical note: we use players’ actual SC scores and do not adjust them for the time spent on the ground because we’re interested in each player’s actual “output” as measured by his SC score, not what he might have produced had he been on the field for every minute of the contest. Implicitly, we assume that a player’s average minutes per game in the sample data is a reasonable proxy for his minutes per game in future matches. In treating the string of SC scores as a time series, we also ignore the time period that elapses between subsequent games. A player’s second-most recent game is treated as having occurred twice as long ago as his most-recent, regardless of how distant in time it was compared to his most-recent.)

The ets function requires that we provide a number of parameters. One of these is alpha, a smoothing parameter; another defines the model type we are fitting. We’ll set that latter parameter to “ZNN”, the two N’s of which mean that we’re assuming the underlying SC data has no trend or seasonality, and the Z of which means that we’re allowing the algorithm to determine the error type (refer to the previously linked PDF for more details).

When we run the ets function using a player’s SC history for a specified alpha, we’ll get an estimate - a forecast - of that player’s next SC score. We can think of that as an estimate of his current value, as measured by his most likely next SC score.
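To make the mechanics concrete, here is a minimal sketch of that step in R (the function and variable names are mine, not from the original code, and the defaults shown for N and alpha are the optimised values reported in the Results section below):

```r
library(forecast)

# One-period-ahead forecast of a player's next SC score, treating his
# score history (ordered oldest to newest) as a time series
ets_estimate <- function(sc_scores, N = 100, alpha = 0.029) {
  recent <- tail(sc_scores, N)                      # at most the last N games
  fit <- ets(recent, model = "ZNN", alpha = alpha)  # no trend, no seasonality;
                                                    # error type chosen by ets
  as.numeric(forecast(fit, h = 1)$mean)             # the most likely next score
}
```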

Now for some players, this estimate will be based on very few games and so will be highly variable. To deal with this reality we will, again, employ some regularisation, this time assuming that players who’ve played fewer than Y games would have recorded a score of S in the “missing” games.

The final estimated value of a player will then be a simple weighted average of the estimate we get from ets and this assumed score for the missing games.

In other words:

For players who have played Y games or more

Estimated Player Value = ets estimate

For players who have played fewer than Y games

Estimated Player Value = (ets estimate x number of games played + (Y - number of games played) x S) / Y
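Expressed as a small R function (the names are mine; the defaults for Y and S are the optimised values reported in the Results section below):

```r
# Weighted average of the ets estimate and an assumed score S for the
# (Y - games_played) "missing" games, per the formulas above
estimated_player_value <- function(ets_est, games_played, Y = 22, S = 45) {
  if (games_played >= Y) return(ets_est)
  (ets_est * games_played + (Y - games_played) * S) / Y
}
```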

Having calculated an estimated value for every player, we will then calculate the mean of these values for each team going into a contest. That average will be a proxy for the team’s strength and will be used in the following regression model:

  • Home Team Game Margin = Constant + k x MoSHBODS Expected Score + m x Difference in Mean Estimated Player Value

We’ll choose the optimal values of all the available parameters by seeking to minimise the MAE produced by this model on the training data.
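In R, scoring a single candidate set of parameter values amounts to something like the following sketch, where train is a training-set data frame with columns margin, moshbods_expected and value_diff (all placeholder names, not those from the actual analysis):

```r
# Fit the margin model on the training set and compute its MAE (sketch)
fit <- lm(margin ~ moshbods_expected + value_diff, data = train)
mae <- mean(abs(train$margin - fitted(fit)))
```

The parameter search simply repeats this calculation for each candidate combination of N, alpha, Y and S, and keeps the combination with the smallest training MAE.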

RESULTS

In all, we had four parameters to optimise, the final values for which were as follows:

  • N (the maximum number of games to include): 100 (roughly speaking then, for a player who regularly takes the field for his team, we’ll be including the last four to five seasons).

  • Alpha (the smoothing parameter): 0.029 (this produces forecasts that are quite slow to respond to the most recent games. It makes the forecasts behave somewhat like a raw average, though older SC scores are assigned slightly less weight. You can get an idea of the speed with which estimated values respond to SC scores from the chart at right for selected, relatively high-performing players.)

  • Y (the number of games at or above which regularisation is no longer employed): 22

  • S (the assumed SC score for “missing” games): 45 (this tends to lower estimated values for players in their first year or two of their careers, unless they consistently produce sub-45 SC scores. It also provides the value that is used for debutants.)

Using the estimated player values that these parameters produce yields the following regression equation:

  • Estimated Home Team Margin = 3.06 + 0.72 x MoSHBODS Expected Score + 2.42 x Difference in Mean Estimated Player Values
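As a quick illustration of how the fitted equation combines its two inputs (the input values here are invented):

```r
# The fitted equation, written out as a simple prediction function
predict_margin <- function(moshbods_expected, value_diff) {
  3.06 + 0.72 * moshbods_expected + 2.42 * value_diff
}
predict_margin(10, 2)  # a 10-point MoSHBODS edge plus a +2 value edge: ~15.1 points
```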

The MAE performance of this model on the training data is shown at left. For comparison purposes, we also show the MAE for a model that uses MoSHBODS data alone, and the results for the best model that we built using player SC data in the earlier blog.

Though we should always be cautious when interpreting the in-sample fit of any model, the new model, labelled here as “Current Model”, looks encouraging. It produces superior MAEs in six of the eight seasons and, overall, an MAE 0.35 points per game lower than the best model from the previous blog.

Compared to using MoSHBODS forecasts alone, our hybrid model that uses MoSHBODS and our player value estimates yields forecasts with an MAE over 1 point per game lower.

Far more important is what we get when we apply the model to the test set, which, vitally, was not used at all in determining the optimal values of our various parameters.

Again, the Current Model is superior in six of the eight seasons, losing out to the best model from the previous blog only in 2013 (and to MoSHBODS in 2016).

Overall, it finishes over a point per game better than MoSHBODS alone, and about two-thirds of a point per game better than the best model from the previous blog.

That’s pretty comprehensively superior.

VALUE ABOVE REPLACEMENT (VAR)

The regression model provides us with a simple formula for converting estimated player values into points: every 1 point increase in a team’s average player value is worth about 2.4 points of margin. A single player represents 1/22 of that average, so we can restate the relationship by saying that a player with an estimated value 10 points higher than an average player’s is worth about an extra 1 point to his team (ie 10/22 x 2.4 ≈ 1.1).
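That back-of-the-envelope conversion, written out:

```r
# Extra margin points contributed by a single player whose estimated value
# exceeds the average player's by value_edge points
points_from_value_edge <- function(value_edge, m = 2.42, team_size = 22) {
  value_edge / team_size * m
}
points_from_value_edge(10)  # about 1.1 points
```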

That calculation segues us nicely into the concept of Value Above Replacement (or Value Over Replacement, as it is sometimes called), which, broadly, is about the contribution a player makes to a team relative to some notional “replacement”. Operationalising the concept requires that we have two things: a measure of value (which we’ve just created), and a method for determining what constitutes a replacement and his or her value.

In the context of AFL, what complicates matters a little if we use SC scores to estimate value is that average scores vary non-trivially by position. (This is not an issue that’s unique to SC scores, by the way: a similar phenomenon occurs with AFL Player Ratings.)

For the 2018 season, we have the following average SC scores by position:

  • Ruck: 91.0

  • Midfielder: 87.3

  • Midfielder/Forward: 78.0

  • Small/Medium Defender: 71.9

  • Key Forward: 71.1

  • Key Defender: 67.1

  • Small/Medium Forward: 63.0

We might ask then:

  • Is a typical Small/Medium Forward truly only “worth” about 70% of a typical Ruck?

  • Is this true only of the crop of players currently in these positions, or has it always been the case?

  • Do SC scores not capture the things that measure the true worth of a Small/Medium Forward? More generally, is this true to a greater or lesser extent for all positions?

Those questions are largely unanswerable for now, but understanding that different positions have - in some cases, substantially - different average SC scores raises important practical questions about determining the value of a replacement player for a given player. Most obviously, it forces us to decide whether we should assume that a player will be replaced by a generic player from any position or by a player from the same position.

Having answered that, we need also to decide whether the replacement player should be assumed to come from the pool of players from the same club or from some broader pool.

No clearly superior answers to these questions exist, I’d contend, so it comes down to making some reasonable choices and seeing what they produce.

METHODOLOGY

We’ll be calculating VARs for players from the 2018 season and will use, as the estimate of a replacement player’s value, the average estimated season-end value of all players in the same position from any club. So, ruckmen will be assumed to be replaced by an average 2018 ruckman, midfielders by an average 2018 midfielder, and so on.

That has the effect of reducing the differences in estimated VARs across players from different positions relative to differences in their raw estimated values, but seems fair in that it doesn’t expect, for example, a Key Defender to perform at extraordinarily high levels relative to his peers in the same position just to record the same VAR as an average Midfielder.
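In code, this amounts to group-wise de-meaning of the estimated values; here’s a sketch using dplyr, with placeholder data frame and column names:

```r
library(dplyr)

# VAR = a player's estimated value minus the average estimated season-end
# value of all 2018 players in the same position (drawn from any club)
players_2018 <- players_2018 %>%
  group_by(position) %>%
  mutate(VAR = est_value - mean(est_value)) %>%
  ungroup()
```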

Note that it does mean, however, that it’s possible for a team’s average estimated value to increase even though a player with a higher VAR is replaced by someone with a lower VAR, if that replacement comes from a position with a higher average estimated value. For example, replacing a Midfielder/Forward with a VAR of +2 with a Midfielder with a VAR of -1 would increase a team’s average estimated value, because the base for the Midfielder/Forward is 78.0 but for the pure Midfielder is 87.3 (the players’ estimated values being 80.0 and 86.3 respectively).

The key point to remember is that we’re adjusting estimated values when we calculate VARs to account for the fact that SC scores vary markedly by position. A far better solution, if we assume that all positions contribute equally to a team’s performance, would be to create a scoring system that produced roughly equal average scores by position across time, but we’re stuck with what we have for now.

Below is a chart of VARs calculated in this way for all players who played a minimum of 10 games in 2018 and who have played 30 or more games in their career.

You might have noticed in the chart above the relative preponderance of players with positive VARs and wonder where all the sub-zero VAR players are.

They’re found far more often amongst the players omitted from this chart - those who played fewer than 10 games in 2018 or who have played fewer than 30 games in their careers. Under our methodology, most of these players will still be subject to regularisation of their estimated values, and many of them will have relatively low raw SC scores as well. The average SC score for these players in 2018 was just 59.3, which contrasts with the 79.4 average for the players in the chart above. The chart for them appears below.

FINAL THOUGHTS

Philosophical and mathematical complications of calculating VARs aside, the key message from this analysis for me is that appropriately summarised SC scores can, as a practical matter, be used to estimate player values that, in turn, can be combined with MoSHBODS forecasts to create margin predictions far more accurate than any we’ve had previously.

That, if nothing else, makes 2019 a tantalising prospect.

I’ve no doubt that further refinements are possible - revisiting the wisdom of choosing a fixed 45 as the “missing” SC score for players from all positions being one obvious area for future exploration - but this second attempt appears to have produced something of genuine value.

As ever, I genuinely enjoy getting your feedback.