Whenever teams join a new conference, commentators debate whether their "styles of play" can succeed in the new league. This has become a clichéd debate this past decade as a plethora of universities have ditched their old ties to pursue new conferences, in search of as much television revenue as possible. But for all the conference-change chatter, no one actually gets past anecdata in discussing whether teams perform any differently after changing leagues. So we figured we'd give it a crack.
Though recent seasons have seen the most realignment, the shifting of the modern college football landscape really began in 2004, when Miami and Virginia Tech left the Big East for the ACC. So we tracked how teams performed after realignment from 2004-2013.
We measured on-field performance using Ed Feng's The Power Rank. It takes into account strength of schedule, margin of victory (with diminishing returns for blowouts), offensive and defensive stats adjusted by opponent, and home field advantage. The index gives each team a rating that reflects their expected margin of victory against an average team.
Since schools are evaluated across multiple recent seasons by their prospective conference suitors, we averaged each team's ratings over its last four years in its old conference to give us a baseline. We then compared that baseline to how teams fared in their first four years in their new conference. To gauge the marginal change in each new-conference season, we compared each year's rating to the preceding year's. We grouped teams by which conference they joined and by which conference they exited. To be included, teams had to play in the FBS for at least four seasons prior to a conference shift.
How it works
Miami had a rating of 23.2 during their last four years in the Big East (2000-2003), which means they were more than three touchdowns better than the average team. However, in their first year in the ACC they rated 18, and in their second year they rated 15.3. For their first year in the ACC, they would be given a score of -5.2 (18 minus 23.2), and for their second year a score of -2.7 (15.3 minus 18).
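For the spreadsheet-inclined, the scoring above boils down to a few lines of code. This is just a sketch of the arithmetic we described, using the Miami figures from the example (the function name and inputs are ours, not anything from The Power Rank itself):

```python
# Sketch of the year-over-year scoring described above. The baseline is the
# team's average rating over its last four years in its old conference; the
# first new-conference season is scored against that baseline, and each
# season after is scored against the season before it.
def conference_scores(old_conf_baseline, new_conf_ratings):
    """Return each new-conference season's rating change vs. the prior year."""
    scores = []
    prev = old_conf_baseline
    for rating in new_conf_ratings:
        scores.append(round(rating - prev, 1))
        prev = rating
    return scores

# Miami: 23.2 baseline from the Big East (2000-2003),
# then ratings of 18 and 15.3 in their first two ACC seasons.
print(conference_scores(23.2, [18, 15.3]))  # [-5.2, -2.7]
```

A positive score means the team improved on the previous season; a negative score, like both of Miami's here, means it got worse.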
This reflects that after two years in the ACC Miami was only about two touchdowns better than the average opponent, which means Miami got worse immediately after joining the ACC. The team the ACC got was not the same one that played for national championships in 2001 and 2002 representing the Big East. Then-head coach Larry Coker lost three games in his three seasons in the Big East. In his three seasons in the ACC he lost 12 games before getting fired.
As with all other schools in this dataset, there is a chicken-and-egg thing going on. Miami may have had trouble adjusting to play in the ACC, which may have been slightly more competitive than the Big East. But it's likely the team was de-improving anyhow, probably because Coker had a diminishing supply of players recruited by previous head coach Butch Davis. Although the causality of changes in performance after shifting conferences is likely different for each team, this analysis can at least tell us if teams actually do any better or worse when playing a new set of teams.
We found teams don't really play any better or worse after changing conferences. Overall, there was less than a one-point shift in any of the four seasons after teams joined a new conference, with two seasons positive and two negative, leaving teams in about the same position they were in prior to changing leagues.
Those that join Power 5 conferences tend to perform slightly worse, but these effects are driven by a few teams. Colorado and Utah got even worse after joining the Pac-12. Nebraska has gotten a tad worse each season after joining the Big Ten. On the other hand, Missouri and Texas A&M improved after joining the SEC.
The teams that improved most were those joining the Mountain West or MAC. These teams come from the shittiest of all places—Sun Belt, C-USA, independents, and the WAC. After joining the MAC, Temple (who previously was an independent and before that, a Big East member) improved a hell of a lot under Al Golden. The Mountain West got its biggest boost from perpetual-conference-shifter TCU, who dominated the Mountain West after leaving C-USA. Those Andy Dalton-led teams got the school into the Big 12, where the Horned Frogs have fared worse. But that probably has more to do with their subpar quarterback play since Dalton left and less to do with schedule changes.
Below is a graph and accompanying table showing how teams fared after joining new conferences. For simplicity the American Athletic Conference (AAC) and the Big East are one unit, since it's really the same conference despite the basketball schools winning name usage rights. (Don't ask if you don't know.)
| Conference | Year 1 | Year 2 | Year 3 | Year 4 |
Some conferences do not have four bars because they haven't had new teams for four seasons. For example, West Virginia and TCU have only been in the Big 12 for two seasons, which gives the conference only two data points. Also, the sample size for each data point can change depending on how long conferences have had some of their new members. The ACC has three schools contributing to its data points for years three and four (Boston College, Miami, Virginia Tech) but five schools contributing to its data point for year one, since Pittsburgh and Syracuse have only one full year of ACC play under their belts.
But the conference teams join is only half of the story. We also grouped teams by which conferences they left.
The teams that improved most were those that quit being independent and those leaving C-USA. Teams that left the MAC, Mountain West, and Big East tended to get worse. Other conferences didn't see much change, or saw year-to-year swings of teams improving and de-improving that resulted in little net change.
Here is a graph and accompanying table showing how teams fared after leaving their old conferences.
| Conference | Year 1 | Year 2 | Year 3 | Year 4 |
When broken out by conference, fluctuations should be expected, since sample sizes hover near five on average. But grouping all schools together gives us a somewhat larger pool (of 47) to prevent outliers from distorting everything. We found that, on average, teams don't change much one way or the other. As for the anecdata involving teams performing better or worse in new conferences? The reasons are likely multiple and beyond the scope of merely playing different teams. If a conference's "style of play" is indeed a concern, schools are adapting pretty damn fast to their new leagues. Which is to say that having good players and coaches is good, and not having them is not, and if you have a theory past that to explain why teams do well or poorly, it would take a lot of work to prove it.
Charts by Sam Woolley