Rank Irrelevance
How Academia Lost Its Way
Ranking universities might seem like intellectual inside baseball, an academic game of interest only to professors and to prospective students. But these rankings are more important than most people realize, particularly since institutions of higher education are meant both to engage in the disinterested pursuit of knowledge and to serve a broader societal purpose—in the case of international relations, to inform good policies.
The National Research Council (NRC) rankings of graduate programs are today’s gold standard; climbing them is among the most important goals a university can pursue. But the NRC approach siloes academic disciplines and discourages real-world relevance among scholars. It encourages political science departments, in particular, to slight the subfield of international relations and other policy-relevant areas in favor of narrower academic concerns.
This trend worries some distinguished political scientists, including MIT’s Stephen Van Evera, who deplores that much of academia is now in thrall to a “cult of the irrelevant”; New York University’s Lawrence Mead, who bemoans the “new scholasticism” of much of contemporary political science; and Yale’s Ian Shapiro, who fears a “flight from reality” in the social sciences more generally. But it should also concern those outside the ivory tower that the discipline is pushing international relations scholars to the sidelines of policy debates, ignoring the concerns of policymakers, members of Congress, and the citizens whose taxes ultimately fund university research.
Graduates from Columbia University's School of Journalism cheer during the university's commencement ceremony in New York, May 16, 2012. (Keith Bedford / Reuters)
MAKE THE GRADE
It is impossible to abandon rankings outright, since the impulse to grade things seems hard-wired into human nature. Rankings also serve an important bureaucratic purpose. University administrators crave simple metrics of performance, which help guide decisions on where to invest scarce resources. Rankings steer students and their parents toward some institutions and away from others. Finally, they help governments and philanthropists decide where to award lucrative grants and donations. In other words, rankings save work, eliminating the time-consuming tasks of reading book manuscripts or carefully learning the substance of academic fields.
The ease of using them explains, in part, why university rankings are such big business. Today, there is a veritable cottage industry for them, running the gamut from the simple reputational surveys of U.S. News & World Report to the elaborate NRC assessment. University rankings have also gone global: foreign scholars, new private companies such as Quacquarelli Symonds, and long-standing publications such as The Times Higher Education Supplement have all entered the rankings market to tell professors where they sit in the global intellectual pecking order.
To distinguish itself, the NRC recently sought to replace simple “reputational” rankings such as those of U.S. News & World Report with a more “scientific” assessment of doctoral programs. But there were serious problems with what that assessment included and left out. The most recent round of NRC rankings was completed in 2006 but not released until 2010, a clear sign of trouble. Critics, admittedly some from institutions that did not fare well, found many errors in the data. But even a member of the NRC’s own Committee on the Assessment of Research-Doctorate Programs, former Columbia Provost Jonathan Cole, resigned in protest, arguing in The Chronicle of Higher Education that “the report’s quality was not worthy of publication.”
If the NRC rankings had been merely analytically flawed, that would be only a minor problem. But they were also systematically biased.
For one, the NRC measured academic excellence by looking at a variety of parochial measures, including publications per faculty member and citations per publication. But it counted only work published in disciplinary journals indexed in the Web of Science, the best-known index of its type, excluding books and non-peer-reviewed magazines (like Foreign Affairs). In addition, the NRC considered the percentage of faculty with grants, awards per faculty member, the percentage of interdisciplinary faculty, measures of ethnic and gender diversity, average GRE scores for admitted graduate students, the level of financial support for them, the number of Ph.D.s awarded, the median time to degree, and the percentage of students with academic plans, among other factors.
The NRC then analyzed these characteristics in two ways. The first was a survey of faculty, which weighted the importance of each characteristic for the overall ranking. The second was a reputational measure, smuggled back in by asking faculty to rank a subset of departments, whose characteristics were then used as a benchmark for positioning other departments. After all of this, the results of both were reported not as a straightforward ranking but as a series of overlapping bands of 95 percent confidence intervals, which, ironically, showed that the differences among most doctoral programs are substantively quite small.
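To see why weighted composites can produce such overlapping bands, consider a minimal sketch in Python. This is not the NRC’s actual procedure: the departments, characteristics, and survey weights below are invented, and the uncertainty is mimicked simply by jittering the weights, but the exercise shows how even modest uncertainty in the weighting blurs the apparent distance between programs.

```python
# A minimal sketch (not the NRC's actual procedure) of how a composite
# ranking built from weighted program characteristics ends up as overlapping
# 95 percent confidence bands rather than a crisp ordering.
# Department names, characteristics, and weights are invented.
import random

# Hypothetical standardized scores on three NRC-style characteristics.
departments = {
    "Dept A": {"pubs_per_faculty": 0.9, "citations_per_pub": 0.8, "grants": 0.7},
    "Dept B": {"pubs_per_faculty": 0.8, "citations_per_pub": 0.9, "grants": 0.6},
    "Dept C": {"pubs_per_faculty": 0.5, "citations_per_pub": 0.6, "grants": 0.9},
}

# Survey-derived importance weights (illustrative only).
weights = {"pubs_per_faculty": 0.5, "citations_per_pub": 0.3, "grants": 0.2}

def composite(scores, w):
    """Weighted sum of a department's characteristic scores."""
    return sum(scores[k] * w[k] for k in w)

def confidence_band(scores, w, trials=1000, noise=0.1):
    """Perturb the weights to mimic uncertainty and report a 95 percent band."""
    draws = []
    for _ in range(trials):
        jittered = {k: max(0.0, w[k] + random.gauss(0, noise)) for k in w}
        total = sum(jittered.values()) or 1.0
        jittered = {k: v / total for k, v in jittered.items()}
        draws.append(composite(scores, jittered))
    draws.sort()
    return draws[int(0.025 * trials)], draws[int(0.975 * trials)]

for name, scores in departments.items():
    low, high = confidence_band(scores, weights)
    print(f"{name}: composite score roughly {low:.2f} to {high:.2f}")
```

On these made-up numbers, the bands for the two closely matched departments overlap, which is the pattern the NRC’s published intervals showed across most programs.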
The NRC’s methodology biased its rankings against two kinds of scholarship: international relations scholarship, which is often book-oriented, and policy-relevant scholarship, which often appears in non-peer-reviewed outlets. That leads to a vast undervaluation of many leading scholars and, accordingly, of their schools. As an illustration, the late international relations scholar Kenneth Waltz’s hugely influential book Theory of International Politics accounts for 64 percent (roughly 3,200) of his nearly 5,000 hits in the Social Science Citation Index, the most commonly used indicator of scholarly impact. To exclude his books from any ranking of impact is hard to justify. It also discourages ranked programs from promoting authorship of in-depth, policy-relevant work.
Given its focus on research-doctorate programs, the NRC rankings also discounted the achievements of scholars at policy schools, as well as those who teach only undergraduates. That means that even excellent international relations scholars at undergraduate colleges such as Dartmouth, and at master’s programs such as Harvard’s Kennedy School, Johns Hopkins’ SAIS, Georgetown’s Walsh School, and George Washington’s Elliott School, get left out in the cold. In turn, that gives universities little incentive to invest in policy scholarship, policy school teaching, or undergraduate teaching. That is unfortunate, because many of the leading scholars of international relations now teach at such schools.
Further, despite giving credit for interdisciplinary faculty, the NRC’s ranking of disciplinary departments ignores interdisciplinary programs, supposedly the academic wave of the future. This discourages universities from nurturing such programs, instead fostering ever more specialized scholarship conducted in isolated intellectual silos. Such siloing only reinforces the strictly disciplinary orientation of scholarship and steers scholars away from work that does not fall neatly within one scholarly approach.
MEASURE ME
To illustrate the sensitivity of the rankings to the measures included, and to provide an alternative way to size up departments’ broader impact, we ranked ten leading universities using purely disciplinary measures of productivity. Then we re-ranked them using some alternative measures of scholarly excellence specific to the subfield of international relations and indicators of broader policy relevance. The results are a jumble.
Rank of universities using disciplinary and alternative measures of productivity. (Campbell and Desch)
When scores are updated based on publications in scholarly international relations journals, such as International Security, International Organization, and World Politics, Princeton, Stanford, MIT, Harvard, Ohio State, Chicago, Columbia, and UC San Diego move up the ladder (IR Ex h-index). Schools such as Georgetown, which have a substantial policy focus, also fare better.
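For readers unfamiliar with the metric behind labels like “IR Ex h-index,” the h-index is the largest number h such that an author (or, aggregated, a faculty) has h publications with at least h citations each. The short sketch below, using invented citation counts, shows the computation.

```python
# The h-index: the largest h such that an author has h publications with
# at least h citations each. Citation counts below are invented.

def h_index(citations):
    """Return the h-index for a list of per-publication citation counts."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# A hypothetical scholar with eight journal articles.
print(h_index([120, 45, 30, 12, 9, 6, 3, 1]))  # prints 6
```

Because only journal articles indexed in services such as the Web of Science feed these counts, a book-heavy scholar’s h-index understates his or her influence, which is precisely the bias described above.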
When books are added to the equation, the rankings are scrambled once again, with Brown capturing the top spot, joined in the top ten for the first time by Indiana Bloomington, UC Santa Barbara, UC Berkeley, and Cornell.
One cut at ranking programs by their faculty’s presence in non-academic publications involved factoring in Foreign Affairs and Foreign Policy articles. In the first case, new entrants into the top ten include Georgetown, Johns Hopkins, and Virginia. In the second, institutions such as Michigan State and Pennsylvania move up to join Harvard, Princeton, Georgetown, Stanford, and Columbia.
A slightly different measure of policy relevance was a ranking of IR programs by the number of their faculty who have won the Council on Foreign Relations’ International Affairs Fellowships, which allow them to serve for a year in a government position. Berkeley leads in this category.
Finally, it is possible to rank the relevance of political science departments based upon the number of times their faculty testify before Congress, controlling for the size of the department. Here, UC Santa Barbara comes out on top, followed by Georgetown, Virginia, Maryland, Columbia, George Washington, UC Irvine, MIT, Stanford, and Princeton.
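The size adjustment in this last measure is a straightforward per-capita normalization. The sketch below, with invented department names and figures, illustrates the calculation; dividing by faculty size keeps large departments from dominating simply by headcount.

```python
# A minimal sketch of the kind of size-adjusted comparison described above:
# ranking departments by congressional testimonies per faculty member.
# The department names and figures are invented for illustration.

departments = [
    # (name, total testimonies, number of faculty)
    ("Dept A", 18, 45),
    ("Dept B", 12, 20),
    ("Dept C", 9, 30),
]

# Normalize by department size, then sort from most to least relevant.
per_capita = sorted(
    ((name, testimonies / faculty) for name, testimonies, faculty in departments),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, rate in per_capita:
    print(f"{name}: {rate:.2f} testimonies per faculty member")
```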
These are just a small sample of the rankings that we tallied. With the generous support of the Carnegie Corporation of New York, we have ranked the top fifty political science departments based on 37 different measures of scholarly excellence and broader policy relevance of their international relations faculty. We have done the same thing for the 442 individual scholars in that group.
Simply put, when you rank political science departments by disciplinary, subfield, and broader relevance criteria, you get very different results. Given that, we believe that broader criteria of scholarly excellence and relevance ought to be part of how all departments are ranked. We are not advocating junking traditional criteria for academic rankings; rather, we urge that those narrow, discipline-focused criteria be balanced with some consideration of the distinctive features of international relations and of the broader impact of scholarly work.
INTELLECTUAL ISOLATIONISM
There are at least three good reasons why we should care if academic political science departments define excellence narrowly and thereby rank themselves into irrelevance.
First, despite all of the incentives in the academy for professors to retreat into intellectual isolationism and otherwise hide in the ivory tower, there remains much useful academic work that should be of interest to policymakers and the rest of society. Reflecting this, former Secretary of Defense Robert M. Gates launched the Minerva Initiative in April 2008 on the premise that “throughout the Cold War, universities had been vital centers of new research” and that U.S. national security policymakers had tapped intellectual “resources outside of government” to assist them in formulating policy. One can point to instances -- the wars in Vietnam and Iraq -- in which, had the scholarly consensus against involvement informed policy, the country’s strategic interest would have been better served.
Second, if civic-mindedness is not sufficient to encourage scholars to think about how they can become more relevant, self-interest should be. In tight economic times, federal support for universities is coming under renewed congressional scrutiny. Senators Tom Coburn and John McCain recently amended an appropriations bill to eliminate most political science funding from the budget of the National Science Foundation. Congressional scrutiny of political science funding is nothing new: in the mid-1990s, Democratic Senator Barbara Mikulski of Maryland raised uncomfortable questions about why the NSF did not support more “strategic and applied” scholarship, particularly given the concern at the time about reducing the federal budget deficit.
Third, if political science cannot persuade politicians and the American public that what it offers has the potential to improve their lives -- perhaps even to save as many lives as a cure for cancer would -- it risks losing government support. International relations should be able to make this case, if only the rest of the discipline would stop ignoring its unique contributions. Ironically, scholarship contributing to U.S. national security was one of the two exceptions to the Coburn amendment; it is precisely such policy-relevant work that the NRC rankings devalue.
If we are right, three steps follow: First, academics need to recognize that current approaches to rankings give professors incentives to navel-gaze. Employing a broader set of criteria should encourage them to look up from their desks occasionally and out from the ivory tower.
Second, stakeholders within and outside academia should take all rankings with a grain of salt. Even the most sophisticated ones have flaws and biases, and they capture important things such as creative thinking and exciting teaching only indirectly and poorly. Rankings of all kinds should be downgraded in university decision-making. Of course, this means that university faculty and administrators will have to put in the hard work of familiarizing themselves with the substance of the academic fields they oversee. But doing so will ultimately produce better scholarship that also speaks to audiences outside university walls.
Finally, ranking universities using a one-size-fits-all template and looking strictly at academic concerns fosters a guild mentality that violates academia’s implied social contract to also address the concerns of the wider community. Our ranking mania should not lead scholars to neglect this obligation.