The committee lies

So the final 2023 CFP rankings have come and gone, and it's time to reflect on the model's performance. There are three different ways we can look at what happened: computer best teams, computer most deserving teams, and the playoffPredictor.com method with committee bias.

Computer best teams involves carefully looking at margin of victory and using that to deduce the best teams. Close wins are punished (a 1-point win gives an m=-0.8) and blowout wins are rewarded (wins of 35 points or more give an m=+0.5). This is detailed in the full method paper. I know it works because its results correlate well with other computers (which churn through *much* more data than I do), and because the results are excellent: I finished the 2023 regular season at 52.3% ATS, which was 6th out of 28 computers overall and 2nd out of the 17 computers that picked every game this season. From a best-team perspective (meaning these are the teams to bet on in Vegas to win on a neutral field), here is what we have after week 14:

Teams that were ultimately selected by the committee as the 4 “committee best” are highlighted in yellow.

We see that 2 of the 4 “committee best” are nowhere close to the “computer best”. The computer best top 4 includes 2-loss Oregon and 1-loss Ohio State. These teams were dominant. For example, Oregon beat top-25 Oregon State by 24 points, while Washington beat them by only 7. Computers and bookmakers agree: if Oregon and Washington were to meet a third time this season, Oregon would still be a very slight favorite. The fact that Washington already beat them twice has nothing to do with where the money would land. Oregon, Ohio State, Georgia: all really, really good teams.
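As a concrete illustration, here is a minimal sketch of that margin-of-victory mapping in Python. Only the two endpoints come from the description above (a 1-point win gives m=-0.8, and a 35+ point win gives m=+0.5); the linear ramp between them is my assumption, and the real curve is in the full method paper.

```python
def m_best(margin: int) -> float:
    """Margin-of-victory adjustment for the "computer best" rating.

    Only the endpoints are from the post: a 1-point win is punished
    (m = -0.8) and a 35+ point win is rewarded (m = +0.5). The linear
    interpolation between them is an assumption for illustration.
    """
    if margin >= 35:
        return 0.5
    if margin <= 1:
        return -0.8
    # Assumed: a straight line between the two published endpoints.
    # Under this assumption Oregon's 24-point win over Oregon State
    # maps to roughly m = +0.08.
    return -0.8 + (margin - 1) * (0.5 - (-0.8)) / (35 - 1)
```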

Now look at it from the computer “most deserving” standpoint. Most deserving involves slightly punishing 1-point and 2-point wins, and giving slight bonuses to blowouts of 25 points and again at 35+ points. It is a formula that is well correlated to human polls, with an end-of-season η around 1.2. Again, see the full paper.
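Since the paper has the real numbers, the sketch below only mirrors the shape of that adjustment; the magnitudes are placeholders of mine, not the paper's values.

```python
def m_deserving(margin: int) -> float:
    """Margin adjustment for the "most deserving" rating.

    The post says 1- and 2-point wins get a slight penalty and that
    blowouts of 25 and 35+ points get slight bonuses. The magnitudes
    below are placeholders for illustration, not the paper's values.
    """
    if margin >= 35:
        return 0.10   # placeholder "slight" bonus
    if margin >= 25:
        return 0.05   # placeholder "slight" bonus
    if margin <= 2:
        return -0.05  # placeholder "slight" penalty
    return 0.0
```

Most deserving in 2023 were: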

The computer got it exactly right, including #4 Alabama and #5 FSU. But that is not the method I have used for 10 years. What I have used is a committee bias: basically, take the differences between the computer most deserving and the committee's own rankings each week, and use the average of those differences to predict the final poll. That method cratered this year. The 2023 final predictions were:

The model of computer most deserving + bias predicted #2 Georgia and #4 Ohio State. Both were wrong.
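For the record, here is how I'd sketch that bias bookkeeping. It is a minimal sketch under two assumptions of mine: that the committee's view and the computer's view can be put in the same rating units (the bias values quoted later in the post, like 0.11, read like rating gaps rather than rank gaps), and that the average runs over the in-season poll weeks. The exact bookkeeping is in the full paper.

```python
from collections import defaultdict

def average_bias(weeks):
    """Average committee bias per team over the in-season poll weeks.

    weeks: one dict per committee poll week (assumed weeks 9-13),
    mapping team -> (committee_rating, computer_rating). Assumes both
    ratings are expressed in the same units.
    """
    totals, counts = defaultdict(float), defaultdict(int)
    for week in weeks:
        for team, (committee, computer) in week.items():
            totals[team] += committee - computer  # how much more the committee likes them
            counts[team] += 1
    return {team: totals[team] / counts[team] for team in totals}

def predict_final(computer_ratings, bias):
    """Predicted final committee view: computer rating plus average bias."""
    return {team: rating + bias.get(team, 0.0)
            for team, rating in computer_ratings.items()}
```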

What happened?

Well, to put it bluntly, the committee lied. They lie every week from week 9 right through week 13; in week 14 (the final week) they just pick who they want. Take Georgia for example: the committee had them #2 in weeks 9-10, and #1 in weeks 11-13. Rece Davis even asked the CFP chair point blank before the Bama game, “is Georgia unequivocally one of the 4 best teams?” Translation: have they already done enough to get into the playoff? The votes in weeks 9-13 would say yes, even though the computer had Georgia as low as #9 in week 9 (they had not beaten anyone). Georgia only rose to #3 in the computer most deserving by week 13, once they finally had blowout wins over Tennessee, Ole Miss, and Missouri. But the committee bias was so strong (as high as 0.11 in week 9 and always above ~0.05) that even with a loss to Bama, Georgia should have been one of the best 4 in the final poll. Otherwise, the committee was lying all those weeks. In week 9, when Georgia was #2 behind Ohio State, their best wins were over Kentucky, Auburn, and Florida: teams that would finish 7-5, 6-6, and 5-7 respectively. They beat Auburn by 7 when New Mexico State beat Auburn by 21. If the committee knew something and elevated Georgia to #2 back then, they sure sold them down the river in the final poll, acting like they had never had UGA in the top 5.

So what to do going forward? I could change the formula to drop the bias and simply use the computer most deserving, but that does not work in all years. Look at this over the last 10 years:

To read this, take 2022 as an example. The computer most deserving after week 14 was Georgia, Michigan, Ohio State, and TCU. To the committee that was #1, #2, #4, and #3 (the committee had TCU #3 and Ohio State #4). With the correction for bias, the method produced #1 #2 #3 #4: all 4 teams correct and in the correct order.
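To see how the correction can flip an order like that, here is the 2022 swap run through the sketch above. The ratings and bias values are invented purely for illustration; only the resulting order matches what actually happened.

```python
# Invented numbers, for illustration only: the computer has Ohio State
# just ahead of TCU, but the committee consistently rated TCU higher.
computer = {"Georgia": 0.95, "Michigan": 0.90, "Ohio State": 0.86, "TCU": 0.84}
bias     = {"Georgia": 0.00, "Michigan": 0.00, "Ohio State": -0.03, "TCU": 0.02}

predicted = predict_final(computer, bias)
ranking = sorted(predicted, key=predicted.get, reverse=True)
print(ranking)  # ['Georgia', 'Michigan', 'TCU', 'Ohio State'] -- the committee's order
```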

Look at 2023. The computer most deserving was #1 #2 #3 #4. But add in the bias and you get #1 #3 #5 #7. Not good. Again, the committee lied.

But look at 2017. The computer most deserving was #2 #8 #1 #6: bad. But add in the bias and you get #1 #2 #3 #5. Not perfect, but good. 2017 was the year UCF went 12-0, and the committee was never going to get them into the top 4 even though the computer most deserving had them at #3.

So, I'll keep my method the same for the 12-team playoff next year, but take it with a grain of salt. The method is sound, the computer is accurate, but the committee lies.