More red ink for Investors this weekend, though a Saints victory in the last game of the round would have produced a roughly break-even outcome across the eight games. The Saints, though, fell well short, eventually going down by 52 points, leaving the Head-to-Head Fund down by about 3% on the round and the Line Fund down by just over 6%.
At the end of Round 3 the Head-to-Head Fund is down by about 7% despite an 8 and 5 record, and the Line Fund is down by 5% despite an 11 and 8 record. That performance by the Line Fund would have been good enough for a 9.5% profit had I continued last season's policy of level-staking Line Fund predictions at 5% of the Fund per bet; the move to Kelly-staking, coupled with the Line Fund's relatively poor probability calibration so far this season, has reduced the return by almost 15 percentage points. The Fund has wagered 41% of its capital on the 8 losing wagers and another 41% on the 11 successful wagers, which means that the average losing wager has been considerably larger than the average winning one.
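For readers unfamiliar with the distinction: level-staking risks a fixed 5% of the Fund on every bet, while Kelly-staking sizes each bet according to the assessed edge. Here's a minimal sketch of the standard (full) Kelly fraction for a single bet at decimal odds - an assumption about the flavour of Kelly being used, since the exact staking formula isn't spelled out here:

```python
def kelly_fraction(p, price):
    """Kelly stake as a fraction of the fund for a single bet.

    p     : assessed probability of the bet winning
    price : decimal price on offer (e.g. 1.90 for typical line bets)

    A negative result means the assessed edge is negative: no bet.
    """
    b = price - 1.0  # net odds returned per unit staked
    return (p * b - (1.0 - p)) / b

# e.g. a 55% assessment at $1.90 suggests staking 5% of the fund
stake = kelly_fraction(0.55, 1.90)
```

Because the stake scales with the assessed edge, poorly calibrated probabilities produce oversized bets precisely in the games where the model is most overconfident - which is consistent with the Fund placing larger average stakes on its losers than on its winners.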
It seems I might have been a little premature in diagnosing overfitting in our neural network-based tipsters. They now fill 1st and equal 2nd positions on head-to-head tipping, 1st and 4th on MAPE, and Combo_NN_2 has selected 17 of 24 line betting winners. That's post-sample performance any way you care to look at it.
(Please click on the image for a larger version.)
Combo_NN_2, which is the last tipster I created for this season and which is based on just 5 inputs - the TAB Sportsbet prices and MARS Ratings for the teams and a binary variable to denote whether or not the game is an interstate fixture - leads the head-to-head tipping race on 17 from 24 (71%).
It's a tip ahead of Combo_NN_1, its far more algorithmically complex brother, and of Bookie_9 which, despite being based solely on the TAB Sportsbet head-to-head prices, has tipped slightly better than BKB and Bookie_3, each of which uses exactly the same inputs. It's not enough to know that a particular variable is predictive; you have to know exactly how it's predictive.
Amongst the Heuristic Tipsters, revamped for this season, BKB and Short Term Memory I and II lead out on 15, one tip ahead of most other heuristics and two ahead of Easily Impressed I and II.
Combo_NN_2 also leads the margin prediction competition with an MAPE of 25.88 points per game, just 0.02 points better than Bookie_3. There's then a gap of almost a full point back to Combo_7 on 26.85, and then another 1 point to Combo_NN_1 on 27.78.
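As I'm using it here, MAPE is the mean absolute margin prediction error measured in points per game (not a percentage). As a quick sketch:

```python
def mape(predicted_margins, actual_margins):
    """Mean absolute prediction error in points per game.

    Margins are from the home side's perspective, so a predicted
    +10 against an actual -4 is a 14-point error.
    """
    errors = [abs(p - a) for p, a in zip(predicted_margins, actual_margins)]
    return sum(errors) / len(errors)
```

So a 0.02 points-per-game gap between Combo_NN_2 and Bookie_3 amounts to less than half a point of total error across the 24 games so far.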
This week I've added some new statistics to the Margin Prediction section that provide more detailed information about the performance of each Margin Predictor. The column headed "<6" records the proportion of games in which the Margin Predictor's margin forecast was within 6 points of the eventual margin. The columns headed "6<12", "12<18", and so on up to ">42" are defined similarly.
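As a concrete reading of those column definitions, here's a sketch that buckets absolute margin errors into 6-point bands - assuming the intermediate columns continue in 6-point (one goal) steps, with everything of 42 points or more lumped into the top band:

```python
def error_bands(abs_errors):
    """Proportion of games falling in each absolute-error band.

    Bands are one goal (6 points) wide: <6, 6<12, ..., 36<42, with
    all errors of 42+ points collected in the top ">42" band.
    """
    labels = ["<6", "6<12", "12<18", "18<24", "24<30", "30<36", "36<42", ">42"]
    counts = {label: 0 for label in labels}
    for e in abs_errors:
        band = min(int(e // 6), 7)  # errors of 42+ all land in the top band
        counts[labels[band]] += 1
    n = len(abs_errors)
    return {label: c / n for label, c in counts.items()}
```

A predictor that's within a goal 25% of the time but more than 7 goals out in another 25% - Combo_7's profile, roughly - would show tall bars at both ends of this distribution.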
Combo_NN_2's and Combo_7's strong MAPE performances have been driven by their ability to predict the final margin to within 1 goal in 25% of games. Combo_7, however, has been undone by being more than 7 goals away from the final margin in another 25% of games.
Below the table in the Margin Prediction section that presents the performance data numerically is a section that conveys the same information graphically. Scanning the stacked bar chart in the middle of this section, the data for Combo_NN_1 stands out immediately. The three leftmost bars, which pertain to predicting the correct margin to within 1 goal, 1-2 goals, and 2-3 goals respectively, are noticeably shorter for this Margin Predictor than for any other. What's preserved the overall performance of Combo_NN_1 has been its ability to better avoid the MAPE-destroying games in which the predicted margin is more than 7 goals different from the actual margin. It's only had 4 games where its margin prediction has been wrong by more than 42 points, largely because of its willingness to predict large victory margins, and to predict them for the right team.
The statistically devastating effect of very-wrong predictions is evidenced by the fact that the 6 Predictors with the largest proportion of predictions more than 42 points in error are also the 6 Predictors with the worst MAPEs.
On the left of the stacked bar chart is a variance chart that depicts how much better (in green) or worse (in red) each Predictor's MAPE is relative to an arbitrary "acceptable" performance of 30 points per game. At this point, seven Margin Predictors have sub-30 MAPEs.
Finally in this section I've shown the line betting performance of each Margin Predictor. Combo_NN_2, ProPred_3 and ProPred_7 lead on this metric with 71%, and all but Bookie_9 have better-than-chance scores (ie above 50%). The variance chart below the line betting data depicts the performance of each Predictor relative to a chance (ie 50%) performance.
The Head-to-Head Probability Predictors all had strong rounds, moving them comfortably into better-than-chance territory. ProPred's performance is particularly gratifying given that it was designed to be a strong probability predictor. I do, however, get nervous about its willingness to make probability predictions of 90% or higher, which it's done in 8 games already this season for 7 wins and 1 draw. The draw hurt - it knocked a whole 1 point off ProPred's probability score - but a loss would be seriously ugly.
As noted above, the Line Fund's probability predictions have proven to be poorly calibrated so far this season. Indeed, at this point the Line Fund's probability score is worse than what could have been obtained by assigning a 50% probability to the home team's winning on line betting in every contest.
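To make the 50% benchmark concrete, here's a sketch of one common log probability scoring rule - 1 plus the base-2 log of the probability assigned to the actual result. The treatment of draws (averaging the two non-draw scores) is my assumption for illustration, not necessarily the exact rule in use:

```python
import math

def log_prob_score(p_home, result):
    """Log probability score for a single game: 1 + log2 of the
    probability assigned to the actual result.

    p_home : assessed probability of the home side winning (or covering)
    result : "win", "loss" or "draw" from the home side's perspective

    Draws are scored as the average of the two non-draw scores - an
    assumption made here for illustration.
    """
    if result == "draw":
        return 1.0 + 0.5 * (math.log2(p_home) + math.log2(1.0 - p_home))
    p = p_home if result == "win" else 1.0 - p_home
    return 1.0 + math.log2(p)
```

Under this rule a constant 50% prediction scores exactly 0 per game, so a predictor with a negative average score - as the Line Fund currently has - is doing worse than a coin flip. It also shows why 90%+ predictions are nervy: a correct one earns a little under 0.85, but a draw (let alone a loss) costs far more than that.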