Boyd's World-> Breadcrumbs Back to Omaha-> Month-to-Month Correlations About the author, Boyd Nation

Month-to-Month Correlations

Publication Date: November 7, 2000

Brrr?

One of the major arguments put forward by defenders of the NCAA's pro-Northern tournament selection practices is that the Northern teams are hurt by the cold weather early in the season (which is certainly true; a quick look at the EFI's shows as much), but that they improve steadily over the course of the season and deserve their higher-than-their-record-would-call-for seedings by season's end. This argument is somewhat convenient, since virtually all games between Northern teams and those from the big conferences take place early in the season, but I'd like to see whether there's any factual basis for it.

I'm not, of course, arguing that Northern teams are not damaged by their lack of early-season practice time outdoors; that would be ridiculous. However, I don't see any evidence that they tend to improve over the course of the season, at least relative to the rest of the country. You could probably make that argument based on the lack of postseason success, but the lack of data points there suggests that another approach might be more convincing.

Unrelated to this issue, but also addressed by the research I describe below, is the notion that some teams should be taken less seriously because of the shape of their season -- teams that win more early than late tend to be regarded as having had a worse season than teams with the opposite pattern, regardless of the existence of counterexamples like Texas in 2000.

Month-to-Month Correlations

If the Northern teams do tend to improve over the course of the season, then that should show up in their results. What I've done is to take the season and divide it into five periods of approximately one month each:

Period     Dates

   1     1/20-2/14
   2     2/15-3/14
   3     3/15-4/14
   4     4/15-5/14
   5     5/15-6/17
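Splitting games into these periods is just a matter of bucketing each game by its date. Here's a minimal sketch of that step; the `PERIODS` table and the 2000 season year are taken from the dates above, while the function name and structure are my own illustration, not the author's actual code.

```python
from datetime import date

# Period boundaries from the table above, pinned to the 2000 season.
PERIODS = [
    (1, date(2000, 1, 20), date(2000, 2, 14)),
    (2, date(2000, 2, 15), date(2000, 3, 14)),
    (3, date(2000, 3, 15), date(2000, 4, 14)),
    (4, date(2000, 4, 15), date(2000, 5, 14)),
    (5, date(2000, 5, 15), date(2000, 6, 17)),
]

def period_of(game_date):
    """Return the period number (1-5) for a game date,
    or None if the date falls outside the season."""
    for number, start, end in PERIODS:
        if start <= game_date <= end:
            return number
    return None
```

With a game log in hand, each game's result would then be counted only toward its own period's ratings.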

I then computed the ISR's separately for each of these periods. If teams did change quality drastically over the course of the season, there should be relatively little correlation between the periods from month to month. Here are the month-to-month correlations, followed by the correlation of each period with the year as a whole:

      2      3      4      5
1    .29    .26    .27    .22
2           .73    .73    .50
3                  .77    .45
4                         .58

Period   Full Season

   1       .34
   2       .87
   3       .90
   4       .91
   5       .68
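The article doesn't say which correlation measure was used; assuming a standard Pearson correlation over teams' per-period ratings, the computation is straightforward. The rating lists below are made-up numbers for illustration, not real ISR values.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists
    of ratings (one entry per team, same team order in both)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy example: hypothetical ratings for five teams in two periods.
period_3 = [104.2, 99.8, 101.5, 96.0, 103.1]
period_4 = [103.5, 100.2, 102.0, 95.5, 102.8]
r = pearson(period_3, period_4)
```

A value near 1 would mean the two periods rank the teams nearly identically; values like the .2-.3 range seen for period 1 would mean the early results tell you relatively little about the rest of the season.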

In interpreting these, it's important to note that periods 1 and 5 include fewer games than the other periods, which cuts badly into their accuracy as standalone data points. Looking at the remaining periods, however, there appears to be no major difference in predictiveness among the middle three.

In other words, there's not much to either of the arguments mentioned in the first section. Late February and early March results seem just as predictive of success over the rest of the season as those from early May. If the NCAA wants to boost the Northern programs, there are things it can do, such as changing the season dates, but it shouldn't pretend that those programs have already improved to the point of deserving high seedings.
