
The JCI College Hoops Rankings: An Overview

12/14/2005

I will begin posting my JCI College Rankings on Monday 12/19, but for now here's an overview of how the system works, why it's far superior to the RPI used by the NCAA Selection Committee, and why it's better than all the other ranking systems out there.

The tournament selection committee has the tough job of selecting and seeding the 65 tournament teams every March.  Unfortunately, they have been hindered by using the very poorly constructed RPI.  I could go on and on about the flaws in the RPI, but I'll save that for future postings.  For now, let's just say it was designed to give the committee an overview of a team's strength of schedule and a measure of performance against that schedule, and in my opinion it does an extremely poor job of meeting even this very basic objective.  It's unfortunate that they continue to use it when other, far better ranking systems exist (the Sagarin rankings, for example).  Plus, the changes they made last year to give a bonus for road wins made the rankings even more misleading.

The basic decision that the Committee is making is: "Is Team A's tournament resume (or 'body of work', although I hate that term) of wins and losses superior to Team B's tournament resume of wins and losses?"  Although they do like to look at recent performance and other mitigating factors (still up for debate whether they should or not), that's really all there is to it.  And the tourney chair states this year after year during the selection show, whether they truly follow it or not.

Any ranking system, either explicitly or implicitly, is trying to do one thing: make the most sense of the college basketball season.  There are usually around 5,000 games over the course of the regular season, and any given ranking system is making the statement: "based on the data available from those 5,000 games, Team A is #1, Team B is #2, etc."  The underlying data that is utilized may be different (some use margin of victory, for example), but the output statement is essentially the same.

This is really where the RPI fails and the JCI shines.  The #40-ranked RPI team may or may not have a better tournament resume than the #45-ranked RPI team; all the ranking really says is that the #40 team has a higher value of the RPI formula (25% winning percentage with home/road adjustments, 50% opponents' winning percentage, and 25% opponents' opponents' winning percentage) than the #45 team.  Even if those three components are correlated with the quality of a team, how can the RPI help the Committee make the already tough decision of who's in and who's out if the relative ranking doesn't mean anything?
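
For readers who like to see the arithmetic, here is a minimal sketch of that weighting.  The component values plugged in below are made-up examples, and the home/road adjustment to the winning percentage is left out since its exact form isn't covered in this post.

    # Minimal sketch of the stated RPI weighting: 25% winning percentage,
    # 50% opponents' winning percentage, 25% opponents' opponents' winning
    # percentage.  The input values are made-up examples.

    def rpi(wp: float, owp: float, oowp: float) -> float:
        return 0.25 * wp + 0.50 * owp + 0.25 * oowp

    # A team that wins 75% of its games, whose opponents win 55% of theirs,
    # and whose opponents' opponents win 52% of theirs:
    print(round(rpi(0.750, 0.550, 0.520), 4))  # 0.5925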

The JCI, on the other hand, is designed to measure one team's season against another's.  So you can definitively say that, based on that team's wins and losses combined with all the other games from the season, the 40th-ranked JCI team has a better tourney resume than the 41st-ranked JCI team.  It does what the Committee is trying to do.

The underlying premise is this.  Hypothetically, imagine that there is a 'true' ranking out there (assume God has it) for each Division I team, and when two teams play each other, the outcome is based on the difference between the two teams' rankings, the home-court advantage, and some random component to account for upsets.  Each game during the course of the season is one piece of the puzzle getting us closer to that 'true' ranking.  The problem is the season is relatively short and teams all play different schedules, so we only get around 30 glimpses of varying information for each team (which is one reason the usefulness of computer rankings is limited in the BCS football rankings, since you're only getting 11-12 bits of info).  If all 326 Division I teams played a round-robin, you would have no need for any ranking system or a selection committee; you would simply take the teams with the best records.  In other words, you'd have a complete picture.
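
If it helps to make that premise concrete, here's a toy simulation of the idea.  The home-court edge and the size of the random 'upset' component below are numbers picked purely for illustration, not values used anywhere in the JCI.

    import random

    # Toy version of the premise: each team has a hidden 'true' rating, and a
    # game outcome is driven by the rating gap, home-court advantage, and a
    # random term that allows for upsets.  The constants are illustrative only.
    HOME_EDGE = 3.5    # assumed home-court advantage, in rating points
    NOISE_SD = 10.0    # assumed game-to-game randomness

    def home_team_wins(home_rating, away_rating):
        margin = (home_rating - away_rating) + HOME_EDGE + random.gauss(0, NOISE_SD)
        return margin > 0

    # With only ~30 noisy observations like this per team, every ranking system
    # is trying to recover the hidden ratings from an incomplete picture.
    wins = sum(home_team_wins(85.0, 80.0) for _ in range(1000))
    print(f"The better team wins at home about {wins / 10:.0f}% of the time")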

So the JCI is built to take those 30-odd pieces of information on each team and combine them with all of the data across the season to make the most sense of the information.  Looking at it another way, a team has about 30 chances to prove whether it's a good team or not, and the wins and losses over the season will bear that out.  The JCI starts with the same three basic RPI components, optimizes their weights, and then adds a performance measure that rewards wins and penalizes losses relative to expectations.  Through a series of iterations, the overall rankings are optimized to make the most sense of the data, so by definition you have the best measure of a team's success out there: any change to the ranking would make it less optimal.
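
Since I'm not spelling out the full JCI machinery in this post, the snippet below is only a rough sketch of the iterative idea: start every team from the same baseline, then repeatedly nudge ratings up or down based on how results compare to expectations given the opponents' current ratings.  The expectation model, step size, and team names are placeholders, not the actual JCI.

    # Rough sketch of an iterative, expectation-based refinement.  Everything
    # here (the logistic expectation model, the step size) is a placeholder
    # standing in for the real JCI machinery.

    def expected_win_prob(r_a, r_b):
        # Placeholder expectation model: logistic in the rating gap.
        return 1.0 / (1.0 + 10 ** (-(r_a - r_b) / 400))

    def refine(ratings, games, step=8.0, iterations=50):
        for _ in range(iterations):
            for winner, loser in games:
                # Beating a strong team (a surprising result) moves you more
                # than beating a weak one.
                surprise = 1.0 - expected_win_prob(ratings[winner], ratings[loser])
                ratings[winner] += step * surprise
                ratings[loser] -= step * surprise
        return ratings

    # Tiny example: start everyone equal and feed in a few results.
    teams = {"Duke": 1500.0, "Gonzaga": 1500.0, "UNC": 1500.0}
    results = [("Duke", "Gonzaga"), ("Duke", "UNC"), ("Gonzaga", "UNC")]
    print(sorted(refine(teams, results).items(), key=lambda kv: -kv[1]))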

So, that's the JCI in a nutshell.  Make sense?  It'll probably become clearer in the context of some results, and I hope we'll get some ongoing dialog during the course of the season that should answer a lot of questions.  My goal is not to get the JCI implemented by the Committee, but to get the sad truth about the RPI out to the masses and show that there is a slew of superior ranking tools out there at the Committee's disposal.  The Committee will continue to defend the RPI on one hand while belittling its role in the selection process (not true...more to follow) on the other, but any reliance on substandard information will eventually lead to bad decision making despite the Committee's best intentions.  As a basketball fan, wouldn't you want to give the most deserving teams the chance to play for the national championship?  As a basketball player or coach, wouldn't you want to know that if you take care of your business on the court, you'll have nothing to worry about come Selection Sunday?  Why should we settle for anything less?
