I sense more screwing of anybody not a P5 team. I especially loved this part: "The NCAA Division I Men’s Basketball Committee announced that beginning with the 2020-21 season, the NCAA Evaluation Tool will be changed to increase accuracy . . ." Now explain to me how it's ever "accurate". We don't know which teams are better; that's what the metric is trying to shed some light on. Stating it will be more accurate requires the NCAA to have already decided what the answer should be and then adjusted the formula until it produced the answer they determined was accurate. I'd love to see the NCAA publish the 2019-20 NET rankings under the old system and the new system side by side and see who gets affected and how. I would bet a ton of money that P5 teams go up and others go down overall. I think one thing that really annoyed the big boys this year was that Dayton and Gonzaga were going to be 1 seeds (at worst 2) with gaudy records but didn't compete in a P5 conference. I am sure this was "inaccurate", since they all "know" the best teams must be in the P5.
Also got a kick out of this one: "In addition, the overall and non-conference strength of schedule has been modernized to reflect a truer measure for how hard it is to defeat opponents. The strength of schedule is based on rating every game on a team's schedule for how hard it would be for an NCAA tournament-caliber team to win. It considers opponent strength and site of each game, assigning each game a difficulty score. Aggregating these across all games results in an overall expected win percentage versus a team's schedule, which can be ranked to get a better measure of the strength of schedule."
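Reading between the lines, that paragraph seems to describe scoring each game by how likely a "tournament-caliber" team would be to win it, then averaging those chances into an expected win percentage. Here's a rough sketch of that idea; the rating scale, the home-court adjustment, and the logistic formula are all my own assumptions, not anything the NCAA has actually published.

```python
# Hypothetical sketch of the SOS approach quoted above: score each game by how
# likely a tournament-caliber team would be to win it, then average those
# probabilities into an expected win percentage. All constants are assumptions.
import math

TOURNEY_CALIBER_RATING = 85.0   # assumed strength of a "tournament-caliber" team
HOME_EDGE = 3.5                 # assumed home-court advantage, in rating points
SCALE = 10.0                    # assumed rating-gap-to-probability scaling

def win_probability(opponent_rating, site):
    """Chance a tournament-caliber team wins this game (site: 'home', 'away', 'neutral')."""
    edge = {"home": HOME_EDGE, "away": -HOME_EDGE, "neutral": 0.0}[site]
    margin = TOURNEY_CALIBER_RATING + edge - opponent_rating
    return 1.0 / (1.0 + math.exp(-margin / SCALE))

def expected_win_pct(schedule):
    """Average win probability across the schedule; lower means a tougher slate."""
    return sum(win_probability(rating, site) for rating, site in schedule) / len(schedule)

# A schedule is just a list of (opponent_rating, site) pairs.
tough = [(90, "away"), (88, "neutral"), (84, "home")]
soft = [(70, "home"), (65, "home"), (72, "away")]
print(expected_win_pct(tough))  # lower number -> "harder" schedule
print(expected_win_pct(soft))   # higher number -> "easier" schedule
```

Notice that everything in there is about the opponents and the sites; nothing about what you actually did in those games, which is exactly my gripe below.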
So I am quite certain that all P5 conference games will be "assigned" (whatever that means) a high "difficulty" score while virtually no games not involving a P5 opponent will measure up! I hate strength of schedule metrics. I get that going 22-8 against a "tough" schedule is (probably) better than going 22-8 against a "weaker" schedule, but I'm not sure how to measure which team is better in that scenario, only which schedule was tougher. Take this example (it's totally hypothetical and wouldn't really happen, but it illustrates the problem with SOS): Team A and Team B play 20 games against exactly the same opponents in the same places (i.e., home or away), and those 20 teams are a mix of great, good, OK, and bad. Both A and B go 12-8 in those games, winning and losing the same ones. In their other 10 games, Team A plays 10 patsies (from their weak conference) and wins them all. Team B plays 10 better (but not great or even good) teams and wins them all as well. Team B would have a higher SOS and a higher NET, but it's entirely possible, even probable based on the results of the 20 identical games, that Team A would beat all of those teams as well. Nothing in those results really says Team B is better, only that Team B played better (less bad?) teams.
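Just to put toy numbers on that hypothetical: the ratings below are made up purely for illustration, but they show how two identical 22-8 records, with identical results in the 20 common games, still produce different SOS numbers just because of who filled out the back end of the schedule.

```python
# Toy numbers for the Team A / Team B hypothetical above. Ratings are invented
# for illustration only; the point is that identical records and identical
# results in the common games still yield different strength-of-schedule values.
common_opponents = [92, 90, 88, 85, 80, 78, 75, 74, 72, 70,
                    68, 66, 64, 62, 60, 58, 55, 52, 50, 48]  # same 20 games for both

team_a_extra = [30] * 10   # ten "patsies" from a weak league, all wins
team_b_extra = [60] * 10   # ten middling-but-not-good opponents, all wins

def avg_opponent_rating(schedule):
    """Crude SOS proxy: the average rating of everyone on the schedule."""
    return sum(schedule) / len(schedule)

sos_a = avg_opponent_rating(common_opponents + team_a_extra)
sos_b = avg_opponent_rating(common_opponents + team_b_extra)
print(f"Team A SOS: {sos_a:.1f}")   # lower, despite the same 22-8 record
print(f"Team B SOS: {sos_b:.1f}")   # higher, yet nothing shows B is the better team
```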
I think it has to count for something when a team goes 18-12 against a tough schedule and another team goes 23-7 against a weaker schedule, but I'm just not sure how you measure it. The real problem is that the metric BY DEFINITION does not measure your performance; it measures the performance of the teams you played, and then assumes that because someone played better teams than you, they must be a better team than you. There is no metric that measures your performance against that schedule.
I'd love to see a metric that ranks strength of schedule on winning percentage against certain levels of teams, so it's what you did with your chances against top teams that counts, not how many chances you had. In my example, if in the 20 common games both teams went 1-3 against "great" teams, 3-5 against "good" teams, 4-0 against OK teams, and 4-0 against "bad" teams, and Team A beat all "bad" teams in its other 10 while Team B beat all "OK" teams in its other 10, their NET would be the same because their winning percentages in each group (great, good, OK, and bad) would be the same. This has its flaws too, though. If a team plays no games against great teams (for example), what's its percentage for that group? And is it fair if a school plays only one game against a "good" team and wins it, so it gets a 1.000 winning percentage for that category, while another team goes 8-2 against good teams (for an .800 winning percentage)? Not sure I actually believe 1-0 is better than 8-2 against good teams. (A rough sketch of this tiered idea follows.)
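Here's what that tier-based idea might look like in code, using the hypothetical records above. It isn't any real metric, just a sketch: group each result by opponent tier, compute the winning percentage per tier, and note how it rates Team A and Team B identically, plus how the small-sample flaw (1-0 vs. 8-2) shows up.

```python
# Rough sketch of the tier-based idea: rate teams by winning percentage against
# each quality tier, not by how many strong opponents they happened to schedule.
from collections import defaultdict

def tier_win_pcts(results):
    """results: list of (tier, won) pairs -> {tier: winning percentage}."""
    wins, games = defaultdict(int), defaultdict(int)
    for tier, won in results:
        games[tier] += 1
        wins[tier] += int(won)
    return {tier: wins[tier] / games[tier] for tier in games}

# Team A: 1-3 great, 3-5 good, 4-0 OK, 14-0 bad (the 20 common games plus 10 patsies)
team_a = ([("great", True)] + [("great", False)] * 3 +
          [("good", True)] * 3 + [("good", False)] * 5 +
          [("OK", True)] * 4 + [("bad", True)] * 14)
# Team B: 1-3 great, 3-5 good, 14-0 OK, 4-0 bad (same common games plus 10 OK teams)
team_b = ([("great", True)] + [("great", False)] * 3 +
          [("good", True)] * 3 + [("good", False)] * 5 +
          [("OK", True)] * 14 + [("bad", True)] * 4)

print(tier_win_pcts(team_a))  # identical percentage in every tier...
print(tier_win_pcts(team_b))  # ...so this metric rates A and B the same

# The sample-size flaw: 1-0 against "good" teams scores 1.000, while 8-2 scores .800,
# even though the 8-2 team has proven far more against that tier.
print(tier_win_pcts([("good", True)]))                              # {'good': 1.0}
print(tier_win_pcts([("good", True)] * 8 + [("good", False)] * 2))  # {'good': 0.8}
```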
But back to my main point: I guarantee you the new formula will be more "accurate" because it ranks Power 5 teams higher in general than the old way did!