BCS
Who can tell me how the BCS rankings are constructed? Much was made this year about Oregon and Auburn making the championship game without having started in the preseason top 10. Oregon was 11th and Auburn 22nd in the USA Today/Coaches preseason poll. This was supposedly big news, as it showed that any team could make the big game; preseason rankings were not important. But look further into this claim. In 2010, Oregon and Auburn went undefeated, allowing them to jump those in front of them. Wait, that's not entirely true, as TCU didn't lose either and started at 7. TCU is the only team to start in the top 10, go undefeated, and NOT play in the championship game. Is it just a coincidence that they are not in one of the BCS conferences? Outside of this year there were three other years in which a team went undefeated but was left out of the BCS championship game: 2004 (Auburn and Boise State), 2006 (Boise State), and 2008 (Utah). In the last two of these instances, they were the only undefeated teams. In all cases those teams started either near the bottom of the top 25 or unranked.

If you consider the other years, looking back to 2002 on espn.com, an interesting result appears: any team with a loss that reached the championship was the one that started highest in the preseason! In some cases this was due to one team (an SEC team with a conference championship game, for instance) playing one more game than the other and thus being, say, 12-1 instead of 11-1, but in cases where the records were identical, the team with the higher preseason rank was the one that ended up in the game. See 2007, when OSU was 11-1 while LSU, OU, Va Tech, and Missouri were all 11-2 (Hawaii was 12-0). The championship game pitted OSU against LSU. In order of preseason rank these teams respectively were 2, 8, 9, receiving votes outside the top 25, and 24. Two things to note: Hawaii was good enough to begin at 24 and go undefeated, yet had a two-loss Missouri team jump them into the top 10. The other was this year, with TCU being good enough to start at 7 yet getting jumped despite never losing.

What about the computers?

http://www.cfrc.com/ (Richard Billingsley home page)

http://www.cfrc.com/Archives/Dynamics_08.htm (explains his system)

http://www.cfrc.com/html/rankings.html (2010 rankings; see Weeks 12 and 13)

http://www.cfrc.com/Ratings_2009/WK_3.htm

http://www.cfrc.com/Ratings_2009/WK_2.htm

This college football season, like so many in the recent past, involved controversy over who should be number one and, more importantly, who should be playing for the title. This year TCU was the school left on the outside looking in, yet BCS supporters made certain to point out how both championship participants reached the ultimate game without starting in the preseason top 10. This apparently was to be interpreted as a demonstration of how “fair” the system is. Much of the discussion, in both the written and spoken press, revolves around the human portion of the BCS formula. For instance, you will hear or read comments about coaches having their subordinates cast ballots, or about some member of the Harris poll putting a sub-.500 team in the top 25. Yet little is written or discussed about the computer pieces of the BCS formula. It is this portion of the BCS that I find most intriguing and somewhat disturbing.


For example, in 2008 the Big 12 had three teams with prolific offenses: Oklahoma, Texas, and Texas Tech. After 12 weeks, with those three teams each having one loss while Alabama and Utah were both undefeated, three of the six computer polls (Massey, Sagarin, and Wolfe) listed, in varying order, the three Big 12 schools ahead of the two unbeatens. Why? A possible and reasonable explanation is that points scored play a role in determining a team’s rank, i.e. the modeling methods favor strong offensive teams. However, it is very likely that high point totals are also related to margin of victory, with more points scored leading to larger victory margins. If true, this goes against the very “sportsmanship” banter on which the BCS prides itself. (To verify these standings, see, from the BCS home page: http://www.footballfoundation.org/pdf/BCS2008/BCS_LONG.Week7.TSACTJRMOB.11.30.08.pdf)
Another computer glitch seems to befall Richard Billingsley’s poll. On his website, http://www.cfrc.com/, Mr. Billingsley explains that in the dynamics of his system a team will be given, at least for one week, a higher ranking than a team it beat if the two teams share a common record. For instance, if a 5-1 Texas Tech ranked #35 defeats a 6-0 Texas ranked #4, then with both teams sporting 6-1 marks his system will place Texas Tech in front of Texas for at least that week. Simple enough, except his dynamic appears to be flawed. I’ll explain.
On November 26, 2010, a 10-1 Nevada team took on and beat 10-0 Boise State. Prior to the game, Mr. Billingsley had Nevada at #21 and Boise State at #3. Following Nevada’s win the records were 11-1 for Nevada and 10-1 for Boise State. By his method Nevada should be ranked in front of Boise State, since Nevada won and the teams sported similar, albeit not identical, records; in this instance, Nevada’s record was actually better! Yet in his rankings for the week following this game he had Boise State at #5 and Nevada at #16. This would appear to be in direct violation of his stated system dynamics, but I hear nothing from anyone regarding this error. The computer systems need to be made public, or at least evaluated by a competent group of educated individuals. Another example of this error occurred in 2009. In Week 2, the Billingsley poll had a 2-0 USC ranked #1 and a 1-1 Washington ranked #96. The two then played, with Washington winning. This brought both teams to 2-1, and according to the ranking scheme Washington should have been ranked in front of USC in Week 3. Instead USC was #16 and Washington was #69.
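The dynamic as Mr. Billingsley describes it is simple enough to state as a rule, so both cases can be checked mechanically. Here is a minimal sketch; the function name and record representation are my own, not anything from his site, and I read "common record" loosely enough to cover the Nevada case, where the winner's record was strictly better:

```python
def should_swap(winner_rank, loser_rank, winner_record, loser_record):
    """Billingsley's stated dynamic, as paraphrased above: if a lower-ranked
    team beats a higher-ranked team and the two then share a common record
    (or the winner's record is outright better), the winner should be ranked
    ahead of the loser for at least the following week.
    Ranks: smaller number = better. Records: (wins, losses) AFTER the game.
    """
    winner_was_behind = winner_rank > loser_rank
    record_common_or_better = (winner_record[0] >= loser_record[0]
                               and winner_record[1] <= loser_record[1])
    return winner_was_behind and record_common_or_better

# Nevada (#21 before the game, 11-1 after the win) beat Boise State (#3, 10-1):
print(should_swap(21, 3, (11, 1), (10, 1)))  # True: Nevada should be ranked ahead

# Washington (#96, 2-1 after the win) beat USC (#1, 2-1) in 2009:
print(should_swap(96, 1, (2, 1), (2, 1)))  # True: Washington should be ranked ahead
```

Both games satisfy the rule's conditions, so under his stated dynamic the winner should have been ranked ahead of the loser the following week in each case.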
As to Auburn and Oregon making the championship game without having started in the preseason Top 10 that, too, is worth reviewing.
From www.espn.com one can review rankings from 2002 forward. While it is true that neither of this year’s finalists started in the Top 10, the reason they made the final game is that all other automatic qualifiers ranked in front of them lost. Looking back, since 2002 only two teams have reached the championship game from outside the Top 10 of the USA Today preseason poll: Ohio State in 2002 (started #12) and LSU in 2003 (#15). In the case of OSU, they went undefeated with all teams in front of them losing at least once, except for Miami, whom they played. As for LSU, they lost only once, but all teams in front of them lost at least twice, allowing them to move up. The exception was USC, who also lost only once; but with the SEC championship game, LSU had played one more game. The only time a team ranked in the preseason Top 10 went undefeated and did not make the championship game was this year, when TCU started the season ranked seventh. In 2004 Auburn and Boise State both went undefeated, but neither team was ranked in the preseason top 10 (#18 and #21, respectively). So it would appear that although two teams made it to the championship game from outside the preseason top 10, this was only due to extenuating circumstances: all the teams in front of them lost except for one, and that team wasn’t from an AQ conference. The perfect recipe needed for a non-top-10 team to reach the championship.
Other considerations for these polls put some weight on strength of schedule, but there is no consensus on how to calculate it, and some sites do not offer any insight into how it is calculated. Looking at USC, for example, prior to the bowl games Billingsley ranked their schedule 15th, Jeff Sagarin 40th, and Anderson and Hester 68th. How can three different techniques evaluating strength of schedule vary so greatly? No one can say, because none of them provide clear information on how these are calculated.
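To illustrate why published strength-of-schedule figures can diverge so much, here is one plausible convention: the RPI-style weighting used officially in other NCAA sports, two-thirds opponents' winning percentage plus one-third opponents'-opponents' winning percentage. This sketch is purely illustrative; it is not the actual method of any BCS computer, and the schedule data are made up:

```python
def win_pct(record):
    """Winning percentage from a (wins, losses) tuple."""
    wins, losses = record
    return wins / (wins + losses)

def sos_rpi_style(opponent_records, opp_opp_records):
    """One possible strength-of-schedule: 2/3 opponents' winning percentage
    plus 1/3 opponents'-opponents' winning percentage (the RPI weighting).
    Other systems weight these pieces differently, adjust for home/away, or
    use their own power ratings instead of raw records, which is one reason
    published SoS ranks for the same team can disagree so widely."""
    owp = sum(win_pct(r) for r in opponent_records) / len(opponent_records)
    oowp = sum(win_pct(r) for r in opp_opp_records) / len(opp_opp_records)
    return (2 * owp + oowp) / 3

# Hypothetical schedule: opponents went 8-4, 10-2, and 5-7;
# their opponents were 6-6 in aggregate.
print(round(sos_rpi_style([(8, 4), (10, 2), (5, 7)], [(6, 6)]), 3))  # 0.593
```

Swap the 2/3-1/3 weights for something else, or replace raw winning percentage with a power rating, and the same schedule produces a very different number, which is exactly the kind of unexplained variation seen in the USC example above.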
One final note is the overall irony of the BCS. On one hand, BCS directives explicitly state that no computer method can invoke margin of victory. However, no other constraints are given: a methodology may or may not include head-to-head results, strength of schedule, and so on. So what we get at the end of the conference championships are two teams scheduled to play in the BCS Championship, which this year were Auburn and Oregon. But once the game has been played there is no final BCS poll: the champion is decided on the field by whoever wins. That is, up to December 5, 2010, when the final BCS poll was presented, head-to-head competition was not necessarily a factor in determining the two top teams, yet it is the only factor in determining the overall champion. Why is it that the computer and human polls are so vitally important in deciding which teams should play for the championship but not who the champion is? There was no final computer poll for the BCS on January 11 following this last game; just the final Coaches Poll based on the championship game. For what it is worth, had the BCS calculated a final poll that included the computers, you might find it interesting that for the 2008 season Utah would have been ranked number one by the computer formula portion of the BCS.
