I did the same thing back in October 2006 to compare how well various projections did for the 2005-06 season. In that analysis, I looked at how well each projection did against a simple benchmark: just using the prior year's per-game data. This is the starting point for any projection. Any time a projection deviates from the prior year's numbers, you have to assume the prognosticator has a good reason for moving that number. Because of this, we can measure how well a given projection does relative to the benchmark in each category...or put another way, how much information value each projection provides on average.
The 2005-06 results showed that two projections...Rotowire.com (which you had to pay for) and CBS Sportsline...actually performed worse than the prior year's numbers. That's hard to do. Basketballmonster.com and NBA.com (provided by Talented Mr Roto) performed just slightly better than the previous year's data, and my initial projections were about 5% better than any of the competitors.
So let's see how well everybody did this year. For the projections, I looked at a dataset of 230 players common to every projection set. For each category, the error between the projection and the actual was squared (hence the term sum of squared errors!) and summed over the 230-player pool. Each projection was then compared to the benchmark 2005-06 per-game data...the benchmark was converted into an index equal to 100, so a projection with a rating of 80 provided a 20% improvement in that category relative to the benchmark. [Note: for 8 rookies, the consensus view of 7 projections (my original plus the others) was used.]
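The scoring method above can be sketched in a few lines. This is a minimal illustration, not my actual spreadsheet, and the player numbers are made up:

```python
# Sketch of the rating method: for one category, sum the squared errors
# over the player pool, then index each projection against the prior-year
# baseline (baseline = 100; lower is better).

def sum_squared_error(projected, actual):
    """Sum of squared per-player errors for one category."""
    return sum((p - a) ** 2 for p, a in zip(projected, actual))

def index_vs_baseline(projection_sse, baseline_sse):
    """100 = baseline; a rating of 80 means a 20% improvement."""
    return 100 * projection_sse / baseline_sse

# Toy example: points per game for a three-player pool (made-up numbers)
actual     = [21.3, 15.0, 9.8]
baseline   = [19.5, 17.2, 11.0]   # prior year's per-game numbers
projection = [20.8, 15.9, 10.4]

base_sse = sum_squared_error(baseline, actual)
proj_sse = sum_squared_error(projection, actual)
print(round(index_vs_baseline(proj_sse, base_sse), 1))
```

Repeat per category (points, rebounds, assists, and so on) and you get the category-by-category ratings in the table below.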
Here are the candidates (if you know of others, send them in and I'll add them to the mix):
Jim - Original: My original projections dated October 24, 2006 and available for free
Jim - Preseason: Based on an analysis I did here, I adjusted my projections based on player-by-player preseason numbers
Jim - Global Preseason: In the same analysis, I made global adjustments based on overall preseason trends. Designed to account for changes caused by the much ballyhooed new game ball.
CBS Sportsline: Free projections available with their fantasy package.
Rotofreak.com: Free projections from a dude with a website
Fantasy Sports Central: Free projections
THE Talented Mr Roto: You have to pay $9.99 for a draft kit to get these projections.
Basketballmonster.com: You get the projections if you pay $12 for full-access to their site features (well worth it in my book).
Rotowire.com: More pay-for projections. I think these are $14.99. This is what you get if you sign up for Yahoo's draft kit.
And here are the results:
My projections did very well again, performing over 20% better than the baseline and 7% better than the next closest competitor, Rotofreak. The preseason numbers do provide some information value, similar to what we found last year. The global projections might have held up better if David Stern hadn't switched back to the old ball in the middle of the season.
Bringing up the rear again we have Talented Mr Roto (seriously, who's paying for these?) and CBS Sportsline who gets the unique distinction for performing worse than the prior year baseline two years in a row. Whoever is doing their projections should be fired.
Another way of looking at it is this...for each player, the projection that comes closest gets a score of '10' and the projection that does worst gets a '1'. Summing across all 230 players, we get the following results:
JIM - PRESEASON: 1,526
JIM - ORIGINAL: 1,460
JIM - GLOBAL: 1,460
FANTASY SPORTS CENTRAL: 1,306
TALENTED MR ROTO: 1,166
CBS SPORTSLINE: 1,045
2005-06 BASELINE: 1,033
Hey, CBS inched ahead of the baseline! The lesson here: be careful what you pay for.
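For anyone curious, the per-player scoring above works like a simple ranking. Here's a minimal sketch with made-up numbers, where the closest projection scores N points (10 in the real contest) and the furthest scores 1:

```python
# Sketch of the per-player ranking score: rank projections by absolute
# error; the closest gets N points, the furthest gets 1. The per-player
# scores are then summed over the 230-player pool.

def rank_scores(projections, actual):
    """projections: {name: projected value} -> {name: score}."""
    n = len(projections)
    ranked = sorted(projections, key=lambda name: abs(projections[name] - actual))
    return {name: n - i for i, name in enumerate(ranked)}

# Toy example: three projections of one player's points per game
scores = rank_scores({"A": 18.0, "B": 21.5, "C": 25.0}, actual=20.0)
print(scores)  # closest (B) gets 3, furthest (C) gets 1
```

Note this rewards being closest regardless of margin, which is why it's a useful second lens alongside the squared-error index.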
The Wegoblogger / Talented Mr Roto NBA Projection Challenge
Back in October, I also looked at 50 player projections where I differed the most from Talented Mr Roto and put together a little projection challenge. From the results above, you can probably guess how this is going to end up, but for closure's sake, let's post the results.
Again, the methodology is sum of squared errors across the eight categories; the lowest number wins for each player. The results aren't even close. Overall, I win 36-14. Among the players I projected higher, I won 17-8. Among the players TMR projected higher, I won 22-3. And of the 10 players where we ranked the player similarly but differed on individual categories, I won 7-3. For these 50 players, my projections were 30% better on average. Not much of a challenge after all.
Speaking of TMR, this was about the time of year last season when he reviewed his 'signature' Guys I Love/Guys I Hate list and gave himself flying colors (despite overwhelming evidence to the contrary). I don't know if he's brave enough to do the same this year, so let's do it for him. His basic premise is that his 'Guys I Love' will exceed expectations and vice versa. We can evaluate how he did using the consensus of the projections above as the baseline. If a guy he loves outperformed consensus expectations, he gets a win. Simple enough.
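The grading rule is as simple as it sounds. A quick sketch, with made-up player values, assuming we compare each pick's actual production to the consensus projection:

```python
# Sketch of the grading rule: a 'Guys I Love' pick is a win if the
# player's actual value beat the consensus projection, a loss otherwise.

def grade_picks(picks):
    """picks: {player: (consensus, actual)} -> (wins, losses)."""
    wins = sum(1 for consensus, actual in picks.values() if actual > consensus)
    return wins, len(picks) - wins

# Toy example (hypothetical values, not the real table below)
wins, losses = grade_picks({
    "Player A": (16.0, 18.2),   # beat consensus -> win
    "Player B": (14.5, 12.1),   # fell short -> loss
})
print(wins, losses)  # 1 1
```

For the 'Guys I Hate' list the rule flips: falling short of consensus counts as a win for TMR.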
TMR: GUYS I LOVE
[Table: 'Guys I Love' — Consensus, TMR Proj, Actual, and W/L for each player]
TMR did a great job selecting Barbosa, Warrick, Okafor, Josh Smith and Josh Howard, but this was far outweighed by missing the mark on Kirilenko, Claxton (he kept pimping Speedy all year hoping for a turnaround), Marquis Daniels, Channing Frye, etc. Overall, I have him at 22-26 on his 'Guys I Love' list, and on average a Berry-recommended player UNDERPERFORMED consensus expectations by 4.5%.
One other thing worth noting is that while TMR recommended a whole slew of players, a number of his 'Guys I Love' had much lower than consensus numbers in the projections that he was selling on his site! Check out Josh Smith, Chris Bosh, Eddy Curry, Barbosa, Tony Parker and Gerald Wallace. Why the disconnect? Keep this in mind if he decides to write a column next week and calls Joe Johnson a win.
TMR: GUYS I HATE
[Table: 'Guys I Hate' — Consensus, TMR Proj, Actual, and W/L for each player]
TMR was spot on with Shaq and Telfair. Way to go! However, his own projections had much higher numbers for Shaq, Jalen Rose and Corey Maggette. Berry completely missed the boat on Zach Randolph, Tyson Chandler, Caron Butler (a big miss), Carlos Boozer and Mike Miller. I've got him at 10-17 on his 'Guys I Hate' list, and a guy on this list OUTPERFORMED consensus expectations by 2.9% on average.
All in all, not good. We'll see how TMR grades himself if he decides to write that column again this year. In the words of Berry himself: "When you are an “Expert” – that is to say, when you are paid for your writing and predictions – especially in fantasy sports, you are only as good as your prognostications." What does it say when his recommended players underperform and his avoid players outperform?