What You Need to Know about College Rankings
Last week, the Wall Street Journal and Times Higher Education released their second annual college rankings. While I appreciate some differences in their methodology from what we see each year from U.S. News & World Report, I didn't have to look much further than their page introducing the Top 10 to be reminded of the many reasons I feel frustrated by college rankings in general. The way university rankings are covered as a horse race (MIT moves up to second, Stanford falls to sixth) belies the reality that colleges and universities move much more like oil tankers than thoroughbreds; they stay on a course without much change except over very long periods of time. Annual rankings, which purport to show changes in institutional quality year over year, can't really say much when the methodology draws heavily on factors that are largely fixed, save for microscopic changes in hiring or in student survey responses. This is why schools in the Top 10 are nearly always in the Top 10, and why you'll never see a school jump 20 slots in a single year.
Below are quotes from the WSJ/THE introduction to this year's rankings that illustrate some of the shortcomings of any standardized, methodological system for helping you find your best-fit college.
“The University of California, Los Angeles is still the top public university, holding on to 25th place…”
I've spoken with enough students in Southern California to know that UCLA is the dream school. It would be impossible for many of them to even name 24 schools, much less to identify 24 that are "better" than UCLA. But here the WSJ/THE rankings take the position that UCLA simply is #25, leading students to believe their dream college is somehow lesser than other schools they know little about. In my experience, this makes students feel less excited about a school they've always loved, as though an outsider's ranking makes their own perspective less authentic. Inherent in this quote is also a perpetuation of the belief that private schools are somehow better than public schools (24 of one before we get to the first of the other), which misses many of the reasons families choose public colleges. Among them: lower cost, greater access, and diversity more representative of the state in which they are situated.
“…Cornell University slips out of this elite group, dropping one place to 11th.”
The idea that Cornell is somehow outside of an “elite group” because it’s ranked at #11 instead of #10 is laughable to me. Is there really that big a difference?
Rankings are necessarily delivered on an ordinal scale: #12 is not as good as #11, both are not as good as #10, and being included in the Top 10 is a big deal only because of our arbitrary love of round numbers. But an ordinal scale also gives the impression that the gaps in institutional quality are evenly spaced: that the difference between #12 and #11 is the same as the difference between #11 and #10, and that the relationship holds for any three numbers you choose anywhere across the rankings. That seems impossible.
What's probably closer to the truth is that there is a small handful of schools (about 15 to 20) that will always sit at the top of the rankings, another group of about 20 to 30 in the next tier, and a further group of 50 or so to round out the "top 100." When we don't have access to the underlying scores, we can't see how close Cornell (#11) is to Harvard (#1), or how close #99 actually is to #63 once you really dig into the data. This especially hurts schools that are only marginally "worse" than their peers. Dumped deep on the third or fourth page of a rankings website, they'll never be seen by families who use the rankings as their primary research tool, and they may never draw the attention of a student who would truly succeed there.
The biggest problem I have with the rankings, though, is their circularity. When we use metrics like "reputation for excellence" to rank schools, we're necessarily begging the question. Many of those charged with determining institutional excellence for the sake of the rankings look to the rankings to determine institutional excellence. The same phenomenon occurs with students. As long as the "top" students in the country continue to be drawn to the schools at the top of the list, those schools will remain at the top of those lists. You won't be surprised to hear that the schools that "produce" the best graduates are the schools that attract the best freshmen.
In this way, "top" schools always have worked, and always will work, with better ingredients than their competitors. What is most interesting to me, as someone who helps students find their best-fit colleges every day, is asking where an individual student can see the greatest improvement above and beyond their expected performance. What school does the best job of making you a better and more successful version of yourself than you would have been if you'd gone somewhere else? The problem with the answer to that question is that it depends on who you are, on what you want out of your college experience, and on the ways you'll avail yourself of the opportunities provided to you. Unfortunately for publishers in the rankings business, that answer won't sell any magazine subscriptions, because the methodology behind it is entirely up to you.