Starting “seed”

Some BCS supporters hailed the January 2011 BCS championship game as proof that where a team starts the season doesn’t matter. Both Oregon and Auburn began outside the top 10, Oregon at 11 and Auburn at 22 in the USA Today Coaches preseason poll. This was supposedly big news: any team could make the championship game, so preseason rankings were not important. But look further into this claim. In 2010, Oregon and Auburn were the only BCS-conference teams to go undefeated, which allowed them to jump the teams in front of them. TCU didn’t lose either and started the season at 7, but the Horned Frogs were not a member of one of the “Big Six” BCS conferences…but that is another issue. TCU is the only team to start in the top 10, go undefeated, and NOT play in the championship game. Outside of that year, there were three other seasons in which teams went undefeated but were left out of the BCS championship game: 2004 (Auburn, Utah, and Boise State), 2006 (Boise State), and 2008 (Utah). In each case, those teams started the season either at the bottom of the top 25 or unranked.
Looking back, from 2002 until 2010 only twice did a team reach the championship game after starting outside the top 10 of the USA Today preseason poll: Ohio State in 2002 (preseason 12) and LSU in 2003 (preseason 15). Ohio State went undefeated while every team in front of them lost at least once, except for Miami, whom they played in the title game. LSU lost only once, but every team in front of them lost at least twice, allowing them to move up. The exception was USC, who also lost only once; with the SEC championship game, however, LSU played one more game.
In years where the records were identical, the team with the higher preseason rank was the one that ended up in the game. For instance, in 2007 Ohio State was 11-1 while LSU, Oklahoma, Virginia Tech, and Missouri were all 11-2 (Hawaii was 12-0). The championship game pitted Ohio State against LSU. In order of preseason rank these teams were LSU (2), Oklahoma (8), Virginia Tech (9), and Hawaii (24); Missouri was unranked. The only time a team ranked in the preseason top 10 went undefeated and did not make the championship game was 2010, when TCU started the season ranked seventh.
What does this mean going forward? For one, the American Athletic Conference (formerly the Big East) and the Big 12 are fighting an uphill battle without a conference championship game. If they want a team in the championship, they had better hope to produce one with fewer losses than all but one other team.
Second, if you aren’t in the preseason top 10 and/or aren’t in a BCS automatic-qualifying conference, your chances are practically nil. Your only hope is to have one of the two best overall records. For example, if you are Oklahoma and you go undefeated, you had better hope there is only one other undefeated team, or that the other undefeated team is from a non-automatic conference. If you are Boise State, you pretty much need everybody else to lose.
How can this be corrected? Simple: don’t rank teams until several weeks into the season. For this year, that would mean waiting until after games such as Georgia-Clemson, Notre Dame-Michigan, and Florida-Miami.
Who is minding the computers?

Football fans in general understand the human polls: participants cast their votes. Of course, having those ballots made public would be a nice touch. However, nobody seems to talk about the computer methods, which make up one-third of the rankings. A possible explanation is “who could understand them anyway?”, but another is that some may believe that, being computers, they have no bias. Yet these systems are programmed by humans and are therefore subject to human error. Given the millions of dollars being tossed around, you would think (hope?) someone is checking these systems. With the developers keeping their algorithms in a vault, the public is not provided an opportunity to critically review the methods. However, by reading through some of the methodology descriptions provided by the authors at their websites and conducting a quick check, some flaws are exposed.
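For readers unfamiliar with how that one-third gets computed, here is a small sketch of the averaging scheme as commonly described in BCS coverage (this formula is my assumption from public descriptions, not something stated in this article): each team's BCS score averages three equal parts, the Harris poll share, the Coaches poll share, and a computer component that drops each team's best and worst of the six computer rankings.

```python
# Sketch of the commonly described BCS averaging scheme (assumed, not
# confirmed by this article). A computer rank of n is worth 26 - n points
# (25 points for #1); the best and worst of the six are dropped, and the
# remaining four point totals are divided by 100 to get a percentage.

def computer_component(ranks):
    """Six computer ranks -> percentage after dropping best and worst."""
    points = sorted(26 - r for r in ranks)
    return sum(points[1:-1]) / 100  # drop lowest and highest point totals

def bcs_score(harris_pct, coaches_pct, computer_ranks):
    """Mean of the three equally weighted components."""
    return (harris_pct + coaches_pct + computer_component(computer_ranks)) / 3

# Hypothetical team: 95% of Harris points, 93% of Coaches points,
# ranked 1, 1, 2, 2, 3, 5 by the six computers.
print(round(bcs_score(0.95, 0.93, [1, 1, 2, 2, 3, 5]), 4))  # → 0.9467
```

The point of the sketch is simply that the two human polls outvote the computers two-to-one, yet the computer component is the only piece whose inner workings the public cannot inspect.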
Take, for example, the Billingsley rankings (http://www.cfrc.com/). He explains, in part, that his system works as follows:
“My rankings are in effect, a “power rating” …however, I’m not as concerned about predicting future outcomes as I am honoring what transpired most recently on the field of play. Let me give you a general example. If #35 Texas Tech beats #10 Texas (regardless of the score as margin of victory is not a consideration), and both teams have an identical record of 5-1, then my philosophy dictates the Red Raiders should be ranked ahead of Texas in my next poll, regardless of whether the odds are they would win again if they played the next week. The results may not hold true for more than one week, but that’s OK because if a team EARNED that position, they deserve the ranking, regardless of what happens in the next week of play.”
In short, if Team A beats Team B and the two have identical records following the game (or, I would assume, if Team A has the better record), then Team A should be ranked ahead of Team B in the subsequent Billingsley poll. This would seem to follow a simple set of programming rules. A quick history check, though, indicates an error in either the programming or Mr. Billingsley’s description.
As recently as week 2 of the current Billingsley poll, Washington State (1-1) was ranked behind USC (1-1) despite the Cougars’ victory over USC at USC. Further research shows that in week 2 of 2009 the Billingsley poll had a 2-0 USC ranked #1 and a 1-1 Washington ranked #96. The two played in week 3, with Washington winning. That brought both teams to 2-1, and according to his ranking scheme Washington should have been placed in front of USC in his week 3 poll. Instead, USC was #16 and Washington was #69 (see http://www.cfrc.com/Ratings_2009/WK_2.htm and http://www.cfrc.com/Ratings_2009/WK_3.htm). Another example occurred in 2010 with Boise State and Nevada. On November 26, 2010, a 10-1 Nevada team took on and beat 10-0 Boise State. Before the game, Mr. Billingsley had Nevada at #21 and Boise State at #3. After Nevada’s win, the records stood at 11-1 for Nevada and 10-1 for Boise State. By his method Nevada should have been ranked in front of Boise State, since Nevada won and actually sported the better overall record. His rankings for the week following this game had Boise State at #5 and Nevada at #16.
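The rule as Mr. Billingsley states it is simple enough to express as a one-function consistency check. The sketch below is mine, not his code; the ranks and records are taken from the Boise State/Nevada example above:

```python
# Consistency check for the stated head-to-head rule: if the winner's
# record after the game is at least as good as the loser's, the winner
# should be ranked ahead (lower rank number) in the next poll.

def violates_head_to_head(winner_rank, loser_rank, winner_record, loser_record):
    """Return True if next week's poll contradicts the stated rule.

    Ranks are next-week poll positions (lower is better); records are
    (wins, losses) tuples after the game.
    """
    winner_at_least_as_good = (winner_record[0] >= loser_record[0]
                               and winner_record[1] <= loser_record[1])
    return winner_at_least_as_good and winner_rank > loser_rank

# Nevada (11-1) beat Boise State (10-1); the next poll had
# Nevada #16 and Boise State #5 - a violation of the stated rule.
print(violates_head_to_head(16, 5, (11, 1), (10, 1)))  # → True
```

Running such a check against each week's published rankings is exactly the kind of audit the public could do if the full algorithms were open.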
This raises a question about his methods: does head-to-head really matter? I am confident Mr. Billingsley would say these exceptions occur because the winning team’s power rating remains below that of the team it beat. Fair enough; but then why mention anything about head-to-head as part of the methodology? The ranking would simply be based on one’s power rating, and head-to-head would not be a factor. Possibly he mentions it to play along with the BCS power brokers, who faced much criticism, and deservedly so, back in 2000 when a Miami team beat a Florida State team that eventually played for the national championship. Both teams finished 11-1, with FSU going on to play Oklahoma for the title. Of course, what is lost in this debate is that Washington also went 11-1 that season, handing Miami their only loss!
Does winning margin really matter?
The BCS committee preaches sportsmanship above all else, and to support this it demands that the computer systems ignore margin of victory. We assume it wants the human pollsters to do the same, but human nature being what it is, that expectation seems unreasonable. The computers are a different matter because they can be programmed to ignore it…directly. Indirectly, however, is another story. By this we mean scoring average, i.e., do higher-scoring teams have an advantage over lower-scoring teams?
For example, in 2008 the Big 12 had three teams with prolific offenses: Oklahoma, Texas, and Texas Tech. After 12 weeks each had one loss, each having lost to one of the other two. Meanwhile, Alabama and Utah were undefeated. Three of the six computer polls (Massey, Sagarin, and Wolfe) listed, in varying order, the three Big 12 schools ahead of the unbeatens. Why? A possible and reasonable explanation is that points scored figure into a team’s rank, i.e., their models favor strong offensive teams. However, it is very likely that high point totals are also related to margin of victory, with more points scored leading to larger victory margins. If true, this goes against the very “sportsmanship” banter on which the BCS prides itself.
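To make the indirect link concrete, here is a small illustrative sketch with made-up numbers (not real 2008 data): when points per game and average victory margin move together across teams, any model that rewards scoring is rewarding margin of victory through the back door.

```python
# Illustrative correlation between scoring and victory margin.
# The (points-per-game, average-margin) pairs below are invented
# for illustration; they are not real BCS-era statistics.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

ppg    = [45, 42, 38, 31, 27, 24, 21]  # hypothetical points per game
margin = [22, 18, 15, 10, 7, 4, 3]     # hypothetical average victory margin

print(round(pearson(ppg, margin), 3))
```

With numbers like these the correlation is near 1.0, which is the whole objection: a ranking system can obey the letter of the no-margin-of-victory rule while violating its spirit.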
On one hand, BCS directives explicitly state that no computer method may invoke margin of victory. However, no other constraints are given: a methodology may or may not include head-to-head results, strength of schedule, and so on. So what we get at the end of the conference championships are two teams scheduled to play in the BCS Championship. Once that game is played, however, there is no final BCS poll: the champion is decided on the field. This means the computers and humans who are “qualified” to determine who plays in the game are no longer qualified to choose the champion. Up to December 5, 2010, when the final BCS poll was presented, head-to-head competition was not necessarily a factor in determining the top two teams, yet it is the only factor in determining the overall champion. Why are the computer and human polls so vitally important in deciding which teams should play for the championship but not in deciding the champion? There was no final computer poll for the BCS after the January 11 title game; just the final Coaches Poll reflecting the championship result. And maybe this is why: in 2008, the final computer rankings used by the BCS would have selected the one remaining undefeated team, Utah. How embarrassing would that have been!