Since 1983, U.S. News & World Report’s annual ranking of colleges and universities has been a driving force in American higher education. But recently, the ground beneath the rankings has begun to shift. In November, Yale and Harvard jolted the academic world when they announced that they would yank their law schools from the list altogether — a decision that has had a domino effect on other prestigious law and medical programs. Yale’s decision came on the heels of persistent criticism about how U.S. News & World Report calculates its rankings, and whether its results present a distorted picture of what schools really offer their students. (The magazine has adjusted its formula for law schools to address some of these concerns.) Among the most prominent critics over the years has been Colin Diver. A former University of Pennsylvania law-school dean and president of Reed College — which stopped participating in the U.S. News list in the mid-’90s — Diver has written extensively about what he sees as the wrongheadedness of college rankings, including publishing a book on the topic, Breaking Ranks, last year. I spoke with him about the rankings’ profound influence on higher learning, why they’re facing a reckoning now, and whether they’re in danger of extinction anytime soon.
Last fall, a group of the top law schools, including Yale, Harvard, Columbia, and Stanford, pulled out of the U.S. News & World Report rankings. And then in recent weeks, a similar thing has happened with medical schools: Harvard, Stanford, Columbia, and Penn have all dropped out. In a New York Times op-ed you wrote in November, you pointed out that the rankings have faced decades of criticism from public officials and university presidents, and yet they have remained as influential as ever through all of it. But, you wrote, “something tells me this time is different.” I’m wondering why you think this momentum shift is happening now.
I think there has been a lot of mounting public exposure of the failings and flaws of rankings in the last year or so. Just in terms of news stories — you have the dean of the business school at Temple getting not only prosecuted, but convicted of federal wire fraud and then going to jail for lying to U.S. News & World Report. You have a whole bunch of universities either being accused of or admitting to fudging their data. And then you have the Columbia story, which was huge because Columbia is a respected, influential, prominent, internationally known university and one of the Ivy League greats. And here they were being very credibly accused by a statistics professor on their own faculty of inflating five of the different measures that went into the U.S. News formula, and admitting in the end to inflating two of them. I don’t know about the other three, but I read Michael Thaddeus’s report. In fact, he shared it with me before he went public with it, and I think it’s very credible. It’s based on serious scholarly exploration of what the underlying data were.
Michael Thaddeus being the Columbia professor who accused the school of fudging its data.
That story generated a huge amount of press and attention. And the fact that U.S. News then pulled them out of the rankings for the rest of 2022, and then unceremoniously dumped them from No. 2 to No. 18 for the 2023 rankings, itself showed just how unreliable and malleable the rankings are. So there was that.
The publication of my book, I think, deserves some of the credit. I know from conversations with some of the law-school deans that they paid attention to it, and they also have paid attention to the example of little Reed College out in Portland. But the point is that you can dismiss Reed College dropping out, but you can’t dismiss Yale Law School dropping out. You can’t dismiss Harvard Medical School dropping out. Yale is widely regarded as the best law school in the country. It’s the richest, most well-endowed law school in the country. It has a reputation that is very secure. And they’re very frustrated with the rankings, and all the law-school deans I ever talked to felt the same way. Yale felt that this movement toward trying to emphasize public service in various ways, both by admitting more lower-income students and then by trying to encourage and subsidize them going into public-service work — those things were being thwarted and undermined by the rankings formula that U.S. News uses.
And so they decided for all these reasons, “Let’s just pull out.”
And that sent a real shock wave through the system.
And Harvard made the decision almost simultaneously, although I think they announced it a day afterwards. Then all these other law schools felt emboldened to take the risk and say, “We’re going to pull out too.” The calculation was, “We might actually get punished by U.S. News in some fashion or another. We’re less likely to get punished if we’re at the very top and if there’s a whole bunch of us doing this. But we get the advantage of taking the high road, of being associated with a noble cause,” which is the cause of emphasizing that preparation for the profession of law should be focused on doing justice and serving the public good, rather than just privileging the privileged and feeding the big private law-firm industry.
So why now? Rankings had been discredited in the public eye in the last year or so to a degree that I think exceeded the discrediting in the public eye in previous years. Yale was in a unique position where they could lead the charge without really worrying about whether people would follow or not. And then the same thing happened in medicine. It raises the interesting question: What about the colleges?
Yeah, I was going there — when does this trend hit undergraduate colleges?
If it’s going to happen in the colleges, I think it has to happen with Princeton. I know the president of Princeton — I respect him. I appreciate the fact that he’s written a couple of op-eds highly critical of the rankings in recent months. And yet he’s remained silent on this crucial issue. So he publishes an op-ed in the Washington Post criticizing the rankings, but he does not say at the end, “And therefore, I’m going to pull Princeton out of the rankings.” Princeton is the functional equivalent of the Yale Law School. They’ve been top-ranked for a long, long time. They are, on a per-capita basis, I think probably the richest undergraduate institution in the history of humanity. And so they could lead the charge, and if they did, I suspect you’d see a similar phenomenon, a parade of virtue that would go down to around No. 15 or so.
Yale Law or Princeton’s reputation will be pristine regardless of what happens, and they’ll get flooded with applications regardless. That also applies to the top-tier schools that have joined them. You could imagine a situation where the A1 group is gone, but the second-tier schools, which need the rankings more, stay put. What do you see as the probability that a large cohort would follow the most prestigious places?
What you described is what’s happening. Most of the top-15 law schools jumped on the bandwagon and within two, three, four weeks, they announced they were going to cease cooperation. But then a bunch of the schools that follow behind that, the next tier, either remained quiet and just continued to cooperate, or they publicly said, “We’re going to keep cooperating.” The school where I used to teach, Boston University, is a good example. Their dean publicly said, in essence, that they were sticking with U.S. News because it provided useful free advertising for them. That’s a school that has in recent years moved up from around 25th to this year’s 17th. And they’re thinking, “We have more to lose if we drop out than we have to gain. We might get punished. We don’t necessarily get the same PR benefit from associating ourselves with noble causes. And there’s a possibility that a few of the ones that are above us, the boycotters that are just above us, might get punished by U.S. News and we can move up a few notches.”
And I can see the deans of the next-tier schools saying that. The same is probably going to be true with the medical schools. You’ve got a somewhat different pattern there, because first of all, there’s two rankings, one for research schools and one for primary-care schools. But I think most people regard the research-schools rankings as the most prestigious and the highest status. So you’ve got a bunch of the top research schools having now quickly followed in Harvard’s footsteps. But others, I think, are going to bide their time and decide what to do and make their own calculation. And in both cases, I think they’re going to wait to see what happens to the boycotters.
In the past, U.S. News has punished schools that refused to cooperate. Reed College was the most famous example of that, back in 1995, 1996. But people think even Columbia just got punished by U.S. News, and there’s some other examples that one could point to. And I think a lot of people are going to say, “Let’s wait and see what happens to these boycotters. There will be rankings coming out in the spring and we’ll find out, and then maybe we’ll join or maybe we won’t.”
You’ve been on the anti-college-rankings train for a long time — you’ve written about this issue at least as far back as 2005. What initially made you think that the rankings were a bad idea, and have your reasons changed over the years?
What got me started was my experience as a law-school dean, at the University of Pennsylvania. U.S. News started to rank law schools in my second year, back in 1990, and my life changed as a result in important ways. First of all, the rankings started to provide a very visible, prominent measure of the performance of our law school and all of our competitors’ law schools. Applicants became very, very finely attuned to the rankings. Our alumni became almost obsessed with them, and I also noticed that the behavior of law-school administrators changed. And to some extent, even my behavior changed.
One example, which I gave in the book, is that U.S. News gave a very heavy weight to the average LSAT score of entering students in their formula. That caused our competitors and us to give more weight to LSAT scores in our admissions process, more weight than I really felt comfortable with. I always saw the LSAT as a useful indicator of aptitude, but it didn’t really tell you much about attitude. And that was, to me, the more important factor: Did somebody have fire in the belly? Did they really care about the intellectual pursuit of understanding the law? And did they have a whole set of personal characteristics that I felt were essential for practicing lawyers to have?
So we ended up giving much less weight to those qualities. Then I saw lots of other things starting to change. Some of our competitors started to reduce the size of their first-year entering class so as to be able to further raise their average LSAT scores and the average college GPAs of their entering students. And they made up for it by increasing the number of transfer students they admitted, and they started to market themselves and started trying to pick off our best students. And then I would hear about misreporting, or fudging, or massaging of data. I would hear about schools that reported 99 percent placement success in years when I knew that that was impossible. So that got me going.
But I think over the years, I’ve become more aware of the way in which rankings are not only foolish and stupid, but actually harmful. They distort the behavior of educators in ways that I think are often harmful to education, and they distort the understanding of applicants.
Before the rankings, schools like Harvard and Yale still had remarkable reputations, but they weren’t so much molding themselves to certain standards.
Yeah, that’s exactly right. In the book, I quote an article written by Henry Hansmann, a longtime law professor at Yale, who said in print what I understood at that time. Which is that of the people who applied to Harvard and Yale law schools, about half went to Yale and half went to Harvard. And the half that went to Yale went there because it had a reputation for being a very academic institution, a good place to prepare for a career in teaching, for example. And people went to Harvard because they wanted to go to work in big law firms. And after the rankings started, and Yale was ranked No. 1 every year, Yale would get 80 percent or 85 percent of the students who were admitted to both schools. And they went there just because Yale was ranked No. 1. So I think it had the effect of subtly transforming these institutions, and making them much more homogeneous because there was a single measure of academic excellence that people started to believe.
There are other rankings of colleges out there — for instance, one that measures schools by social mobility, where state colleges do well. Do you favor any kind of ranking system at all, or do you think the whole idea is toxic?
I don’t have a very strong objection to specialized rankings that focus on a single criterion or dimension. So a good example would be social-mobility rankings. And Third Way has just published, and I guess is going to continue to publish, social-mobility rankings. There are rankings that list schools based on their racial diversity, which are fine. There are rankings by wealth, rankings by endowment per student. There are rankings by graduation rates. There’s a lot of specialized rankings. I have no great objection to those because they can give a potential applicant useful information.
My objection is focused overwhelmingly on what I call “Best College” rankings, which take multiple criteria of educational performance and excellence, smush them together formulaically into a single number, and claim that that number, and the ranking that goes with it, is the key to determining relative quality. I don’t care what formula you use, what data you use, what criteria you use; that approach seems to me to be just so fundamentally flawed. And the reason is that there are so many different kinds of institutions.
It’s not a one-size-fits-all situation.
The genius of American higher education is that it’s a bottom-up system that has grown up to meet multiple demands. It features institutions with all kinds of different missions, goals, and characters, and it serves a constituency that has an enormous variety of needs and wants and preferences in terms of what they’re looking for in college. So a single template, a single measure, is just impossible. And that’s my objection. I’d love to see that wither. So for sure, let the specialized rankings flourish.
So you basically want to go back more to the way it was in 1982, when students had to do a bit more research without the benefit of rankings. This kind of research would be easier now with the internet, but that’s what you’re advocating, right?
In 1982, there was frankly very little publicly accessible information about higher education.
The whole thing was a lot more low-key than it is now.
You could go to a library in 1980, and you wouldn’t find a lot of data. I give U.S. News credit for making a lot of that data more public, but I also give credit to a lot of other people, including the U.S. Department of Education. They have required any institution that gets any kind of federal aid, which is almost all of them, to report hundreds and hundreds of data points. And then all of that data is available to the public for free. And the colleges themselves, most of them, make the data available on their website, although it takes a little digging.
So we’re not going to go back to 1980. I mean, the world has changed fundamentally. Back in 1980, there were hardly any rankings on anything. I suppose the argument that somebody like U.S. News would make is that it costs money to put together the specialized rankings, and you’ve got to pay for that mostly by having advertisers on your website, and so you’ve got to draw eyeballs to those websites. And the only proven way to draw eyeballs to the website is to have a “Best College” ranking, which has, in flashing neon lights, numbers associated with every college. And once people are on the website, they can poke around and look at all the other information there.
You don’t think that’s a good enough justification.
I don’t buy it. I think information is way cheaper nowadays, thanks to the internet. And there are lots and lots of organizations that have, like Third Way, produced credible rankings based on a single factor and published them on the internet. And that’s readily available. Just do a Google search for college rankings, or rankings of social mobility, or whatever, and you get it. So I don’t buy the argument that we still have to have “Best College” rankings.
This interview has been edited for length and clarity.