Yesterday, philosopher of science Bruce Gordon interviewed physicist Jed Macosko and law professor Jeff Stake about how to read college rankings. What, exactly, lies behind those numbers, especially the ones from the iconic U.S. News & World Report? Are they something you can bank on or something you should know more about first?
Macosko and Stake think you should know more. As Gordon’s introduction puts it, rankings are big business and can lead to outright fraud:
A recent stark example of the financial implications of college and university rankings is the case of Moshe Porat, former dean of Temple University’s Fox Business School. Porat was convicted on November 29, 2021, of engaging in a fraudulent scheme to move the business school to the top of U.S. News & World Report’s national rankings. “He, along with two of his subordinates,” writes Paul Caron, “had for years knowingly embellished the data they were sending on Fox’s students to the magazine U.S. News & World Report, allowing its online MBA program to achieve its No. 1 ranking for four straight years. The distinction helped Fox more than double its enrollment for the program between 2014 and 2017, raking in millions in tuition payments from students and donor dollars.”

Bruce Gordon, “The Unreasonable Sway of College and University Rankings: An Interview with Jeffrey Stake and Jed Macosko” at Expensivity (December 7, 2021)
That could have been the tuition money you borrowed, based on rankings. Fox may be a good business school anyway but the rankings didn’t tell you what you needed to know. Some rankings watchers are fighting the corruption by developing alternative types of assessments, using the new tools of machine learning.
Jed Macosko, who got involved with an alternative approach called Academic Influence, recounts an experience at the university where he teaches that set him thinking about all this:
I remember, in my second year at Wake Forest University, we had a planning session that all faculty were invited to attend. I was in a small group with our provost and said something naïve. I think it was, “If we can just do such and such, then our university will really shoot up in the US News rankings.” The provost gently set me straight and told me that the top-30 spot that Wake Forest held among so-called national universities was something that we could all be grateful for but not something that could improve much by our efforts. At the time, I didn’t really understand what he was saying, but, since he was so much older and wiser than me, I knew that what he was saying must be correct. It was only later, probably after working with AcademicInfluence.com for a year or two, that I understood what he meant.

Bruce Gordon, “The Unreasonable Sway of College and University Rankings: An Interview with Jeffrey Stake and Jed Macosko” at Expensivity (December 7, 2021)
As Bruce Gordon asked yesterday, what did the provost mean? Wake Forest was already #26. Here’s where Jeff Stake could shed some light:
Starting with the US News ranking this year, the difference between schools ranked 13 and 12 is twice the difference between schools ranked 42 and 40. It would be easier for a law school to change rank from 86 to 60 than for a law school to change rank from 8 to 3.

Bruce Gordon, “The Unreasonable Sway of College and University Rankings: An Interview with Jeffrey Stake and Jed Macosko” at Expensivity (December 7, 2021)
So if we are not looking at the top tier, school rankings are much more fluid than we might have supposed. School 60 and school 86 could change places over the few years a student attends.
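Stake's point is that rank positions hide the underlying score gaps, which are wider near the top. A minimal sketch with invented scores (these are not US News data; the numbers are chosen only to match the proportions Stake describes):

```python
# Hypothetical overall scores (0-100) for a handful of ranked schools.
# Invented for illustration; gaps between adjacent ranks widen near the top.
scores = {3: 92.0, 8: 85.0, 12: 80.0, 13: 78.5,
          40: 62.0, 42: 61.25, 60: 55.0, 86: 52.0}

gap_12_13 = scores[12] - scores[13]   # 1.5 points
gap_40_42 = scores[40] - scores[42]   # 0.75 points: half the 12-13 gap

# Climbing from rank 86 to 60 needs only a 3-point score gain here,
# while climbing from rank 8 to 3 needs 7 points.
climb_low = scores[60] - scores[86]
climb_high = scores[3] - scores[8]
print(gap_12_13, gap_40_42, climb_low, climb_high)
```

The same one-place move, in other words, costs very different amounts of improvement depending on where in the list a school sits.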
But that’s not the main problem. The main problem is the perverse incentives that a ranking system necessarily creates. As Stake goes on to explain, the people who do the rankings have internalized last year’s list so when they are asked again …
Because the yearly ranking is important to many of them, it cannot help but have some impact on the thoughts going through their heads as they reply to that survey, especially when they do not have a lot of other information about schools they are evaluating. An economist colleague, Michael Alexeev, and I were able to show that this echo effect is real. The views of law school personnel, lawyers, and students applying to law schools are all influenced by the US News ranking. And it’s a real problem for our system of higher education.

Bruce Gordon, “The Unreasonable Sway of College and University Rankings: An Interview with Jeffrey Stake and Jed Macosko” at Expensivity (December 7, 2021)
So the people doing the ranking know more than the rest of us but maybe not as much as we might have thought. But, Gordon asked, what difference does it really make to the prospective student? In Macosko’s view:
The bottom line is that our students are best served when the schools they attend do their absolute best to lower costs and add value… The consumers hope that they will get the best value for the lowest cost. Usually, free market forces will dictate that this kind of thing happens, more or less. But when the “value” of one’s education is largely determined by where it ranks on a list, and when that list is self-reinforcing with the “echo” that Jeff just described, then it short-circuits all the helpful free market forces which would normally push schools to provide a greater value for a smaller price.

Bruce Gordon, “The Unreasonable Sway of College and University Rankings: An Interview with Jeffrey Stake and Jed Macosko” at Expensivity (December 7, 2021)
Unfortunately, despite these limitations, students often treat widely known rankings as an infallible guide. Last month, Macosko told Mind Matters News, citing Stake,
Most people would say that they don’t use just one ranking. But in the end, that’s exactly what many people do. One person I interviewed, Jeff Stake, a law professor who in some ways “trolled” US News for their unhealthy rankings on his website called The Ranking Game, said that prospective law students would walk up to various law school tables set up at law school fairs, and in their hands would be the US News ranking issue, opened to the page that listed the law schools. They would look at the name of the law school, check their magazine, and then either walk away or stay and chat, depending on where the school’s ranking fell in their assessment of what schools they thought they could get into.

News, “US News’ law school rankings are losing ground, analyst says” at Mind Matters News
Macosko thinks that Big Data, generated by machine learning, can help end the tyranny of the Single Number, mainly by offering many different ways of ranking colleges according to a student’s own needs. His own group, AcademicInfluence.com, uses a ranking system developed by programmer Erik J. Larson, author of The Myth of Artificial Intelligence (2021). It applies an algorithm to Wikipedia entries to determine which schools are the most influential. Many measures of influence depend on rough values like numbers of hits on pages. Larson’s approach is much subtler. Here’s an example of how he uses Wikipedia searches:
[y]ou start counting from the beginning of the description of the topic to the occurrence of the name of the person, that distance measured, combined with the original person page distance measured, in other words, person to topic, and then from that topic to person, as a weighted statistical combination of those two will give you a rough idea of how influential Wikipedia, and therefore, via sort of reasonable extension, the world, thinks that person is to that topic

News, “How Erik Larson hit on a method for deciding who is influential” at Mind Matters News (April 30, 2021)
So if the student is looking for a school that is influential, Larson’s method, which depends on Big Data searches, offers a clearer picture than hits alone. As noted elsewhere, a two-headed kitten can generate millions of hits and have absolutely no influence. A school whose name comes up quickly in most discussions of a discipline probably has a lot of influence in that area, even if it is not a household name.
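As Larson describes it, the score combines two distances: how early the topic appears in the person's page, and how early the person appears in the topic's page. A minimal sketch of that idea, using hypothetical page snippets and an invented equal weighting (the actual AcademicInfluence.com algorithm is not spelled out in the interview):

```python
def mention_distance(page_text: str, name: str) -> float:
    """Position of the first mention of `name` in `page_text`,
    normalized by page length; 1.0 if the name never appears."""
    pos = page_text.find(name)
    if pos == -1:
        return 1.0
    return pos / len(page_text)

def influence_score(person_page: str, topic_page: str,
                    person: str, topic: str, w: float = 0.5) -> float:
    """Weighted combination of the two normalized distances.
    Earlier mentions mean smaller distances and a higher score.
    The 50/50 weighting is an assumption for illustration."""
    d_person_to_topic = mention_distance(person_page, topic)
    d_topic_to_person = mention_distance(topic_page, person)
    combined = w * d_person_to_topic + (1 - w) * d_topic_to_person
    return 1.0 - combined  # invert so "mentioned earlier" = "more influential"

# Hypothetical snippets standing in for Wikipedia pages:
person_page = "Jane Doe is a physicist known for quantum optics research."
topic_page = ("Quantum optics studies light-matter interaction. "
              "Jane Doe made early contributions.")

score = influence_score(person_page, topic_page, "Jane Doe", "quantum optics")
print(score)
```

The design choice worth noting is the normalization: position divided by page length, so a mention in the first sentence of a long article counts for more than the same offset in a stub.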
Blame Goodhart’s Law, if you want
The underlying problem is a principle called Goodhart’s Law: once a ranking becomes a target, it loses information. That’s just how incentives work. As in physics, the simple act of measuring disturbs what you are trying to measure. For example, if we make “Healthy Food Choices” a school ranking criterion, soon most schools will be offering food that students like but that they have managed to get labeled as “Healthy Food Choices.”
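The mechanism can be sketched in a few lines: a proxy metric that starts out tracking real quality stops doing so once schools optimize the proxy directly. The data here is invented purely to illustrate Goodhart's Law:

```python
# Ten hypothetical schools. Initially the measurable proxy (say, a
# "Healthy Food Choices" score) equals true quality, so ranking by
# the proxy is the same as ranking by quality.
schools = [{"quality": q, "proxy": float(q)} for q in range(10)]

def ranking(key):
    """Qualities listed in descending order of the given metric."""
    return [s["quality"] for s in sorted(schools, key=lambda s: -s[key])]

assert ranking("proxy") == ranking("quality")  # proxy is informative at first

# Goodhart's Law: once the proxy becomes the target, it gets gamed.
# Suppose lower-quality schools game hardest (they have the most to gain):
for s in schools:
    s["proxy"] += 2 * (9 - s["quality"])  # relabeling effort, not improvement

print(ranking("proxy"))  # now ordered by gaming effort, not by quality
```

Once gaming dominates, the proxy ordering actually inverts here: the metric still produces a tidy list, but the list no longer says what it claims to say.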
It’s not just school rankings either. Business studies professor Gary Smith has written on this problem as it applies to peer review and also to academic publishing: “In becoming a target, publication has ceased to be a good measure” because “publication counts and citation indexes are too noisy and too easily manipulated to be reliable.”
Macosko should be glad to know that AcademicInfluence.com’s influence-based ranking is less vulnerable to Goodhart’s Law than most systems because its algorithm depends not on surveys but on publicly available large databases that cannot be easily adjusted so as to change the rankings.
None of the analysts is saying that rankings are useless, but rather that students should look at different systems and remain aware of Goodhart’s Law. Their message may be getting through. Recently, business mag Forbes profiled AcademicInfluence.com, noting that its algorithm “uses artificial intelligence technology to search massive databases and measure the impact of work by [those] who’ve been affiliated with colleges and universities throughout the world.” That’s information we may not learn from surveys of academics. But Big Data can, in principle, unearth it.