Racial Inequality After Racism
How Institutions Hold Back African Americans
Last summer, the killings of two unarmed African American men—Eric Garner in Staten Island, New York, and Michael Brown in Ferguson, Missouri—by white police officers reignited the national conversation about racial inequality in the United States. In both cases, grand juries declined to indict the officers involved. The rulings provoked a wave of protest marches, rallies, and road blockades across the country, as demonstrators of all skin colors proclaimed to the nation and to the world that “black lives matter.”
The upheaval has stood in stark contrast to the promise of a transformation in race relations that President Barack Obama’s inauguration appeared to hold six years ago. For many of Obama’s supporters, his election represented a milestone in U.S. history, marking the dawn of a “postracial” society—a new era in which skin color would no longer stand as a barrier to opportunity or achievement. Obama himself embraced this imagery, insisting that “there’s not a black America and white America and Latino America and Asian America; there’s the United States of America.” Although he acknowledged the country’s history of racial division and conflict, he clearly envisioned a future in which racial distinctions would fade into insignificance, and he promoted himself as an avatar of that future.
Such lofty rhetoric already seems dated. But even as the recent protests over race affirmed racial inequality as a defining feature of American life, they also offered a reminder of just how much the racial landscape in the United States has changed since the mid-twentieth century. Analyzing U.S. race relations in 1944, the Swedish economist Gunnar Myrdal identified what he called “an American dilemma”: the wide gap between the American ideals of liberty and equality and the actual conditions of African American life. In Myrdal’s view, racism was the root cause of the problem. Myrdal found that white Americans’ support for segregation sprang from a widespread belief in black inferiority and that blacks’ disadvantaged status tended to reinforce this sentiment. For Americans to resolve this clash between ideals and reality, Myrdal argued, something had to give: either whites’ racial attitudes had to change, allowing for fairer treatment of blacks, or the circumstances of African American life had to improve, triggering a change in attitudes.
In the subsequent decades, one of Myrdal’s prescriptions did come true—but only one: white Americans’ attitudes toward race have indeed been revolutionized. Yet across a wide range of measures—including income, employment, education, health, housing, and criminal justice—African Americans and other minorities of color have continued to lag behind whites, with severe consequences not only for those disadvantaged groups but also for American society as a whole. African Americans, for example, are nearly three times as likely as non-Hispanic whites to be poor, almost six times as likely to be incarcerated, and only half as likely to graduate from college. The average wealth of white households in the United States is 13 times as high as that of black households.
Disparities of this sort have remained a permanent feature of American life even after the civil rights movement of the 1950s and 1960s removed the two most egregious means of reproducing racial inequality: state-sanctioned segregation and explicit discrimination. Today, overt expressions of prejudice are almost universally considered illegitimate. Mass belief in biological or sociological theories that imply the superiority of some racial groups over others has nearly vanished. Americans now broadly accept, at least in principle, the basic premises of legal, social, and political equality across racial lines. Most Americans celebrate diversity in workplaces, schools, and public settings. And the antidiscrimination laws of the 1960s, especially the Civil Rights and Voting Rights Acts, have succeeded at establishing the idea, if not the practice, of racial integration as a public good.
Yet half a century after the peak of the civil rights movement, its gains continue to be undercut by multiple sources of inequality that reinforce one another and place barriers in the way of progress. These patterns discredit the notion of a postracial society. Rather, the United States is a postracist country—a place where the role of race is more subtle and hidden from view than before, but no less potent. And a new American dilemma—the continuity of racial inequality in the midst of racial change—has confounded policymakers and commentators alike.
Neither the left nor the right has produced convincing explanations of this predicament, much less solutions to it. The two sides’ shortfalls stem from a common problem: a focus on individuals rather than institutions, which obscures the powerful role that history has played in shaping today’s inequalities. Historical legacies are the key reason numerous civic, social, and economic institutions continue to affect marginalized communities in deeply unequal ways, even though these institutions appear to be race neutral on the surface.
It will take new approaches to uncover and dismantle such mechanisms. One promising route stems, surprisingly, from the financial sector. In the wake of the 2008 financial crisis, regulators in the United States and Europe have compelled major banks to conduct assessments known as stress tests, which diagnose the banks’ unseen weaknesses and vulnerabilities to shocks. Many of the same hidden forces that financial stress tests reveal—faulty assumptions, a lack of internal safeguards, unrecognized biases—are also at work in a broad range of public and private institutions in the United States in ways that contribute to racial inequality. Policymakers should consider adapting the stress-test model to help identify and counteract such forces by designing what would amount to stress tests for institutional racism.
Many on the political right believe that the United States has become a genuinely colorblind society—a level playing field that rewards discipline and hard work. In this view, racial disparities reflect deficiencies of merit on the part of less successful individuals and groups rather than flawed institutions that perpetuate unequal outcomes. By this logic, the failure of African Americans and other minorities to prosper stems from personal failings, such as a lack of initiative. The conservative commentator John McWhorter, for instance, believes that the lack of group progress by black Americans derives from a “post-colonial inferiority complex.” “Like insecure people everywhere,” he writes, blacks “are driven by a private sense of personal inadequacy to seeing imaginary obstacles to their success supposedly planted by others.”
Such arguments reflect the belief shared by many white Americans that racism has by now become close to irrelevant. As the sociologist Lawrence Bobo has documented, white Americans generally assume that the end of state-sponsored segregation and the legal prohibition of discrimination removed structural barriers to African American advancement. In a 2013 Gallup poll, for instance, 83 percent of white Americans said that factors other than discrimination were to blame for African Americans’ lower levels of employment, lower incomes, and lower-quality housing. Indeed, white Americans often attribute racial inequality to the incapacity of minorities themselves—a phenomenon that Bobo calls “laissez-faire racism.” Similarly, the political scientist Martin Gilens has shown that the chronic poverty of African American communities reinforces common stereotypes held by whites about blacks, whom whites often view as lazy and irresponsible. Widespread assumptions of this kind eroded white support for the New Deal welfare state after the 1960s and paved the way for the welfare cutbacks of the 1990s, which disproportionately hurt people of color.
But the idea that American society is now colorblind neglects a crucial fact: apparently race-neutral practices often mask deeply unequal arrangements. For example, Jim Crow–era voting restrictions, such as poll taxes, literacy tests, and the grandfather clause, were adopted precisely because they appeared to be race neutral and therefore compliant with the 15th Amendment, which prohibits race-based voting discrimination. And even today, seemingly race-neutral policies often reinforce racial inequalities, although what is different now is that the officials behind them may harbor no racist intentions. To take one well-known example, federal drug laws based on the principle of colorblindness—distinguishing not between black and white defendants but between crimes involving crack cocaine and those involving powder cocaine—have resulted in grossly disproportionate punishments for black offenders: until Congress narrowed the disparity in 2010, sentencing rules treated a given quantity of crack, the form of the drug more common among black defendants, as severely as 100 times that quantity of powder.
A second strain of conservative thinking finds the root of racial disparities in the dysfunctional behavioral patterns prominent in certain African American communities. To many right-wing observers, the characteristics of black poverty—joblessness, broken families, welfare dependency, drugs, crime, violence—suggest a set of pathological cultural traits that black communities share. Republican Representative Paul Ryan of Wisconsin, for example, recently attributed urban poverty to a “tailspin of culture, in our inner cities in particular, of men not working and just generations of men not even thinking about working or learning the value and the culture of work.” By this logic, government-funded social assistance, especially the welfare programs of the War on Poverty that President Lyndon Johnson launched in the 1960s, has only fostered self-destructive behavior by encouraging dependency and isolation.
Taken to the extreme, arguments about the collective flaws of particular groups assume a dark pseudoscientific cast, exemplified by The Bell Curve, the now-infamous 1994 book by the psychologist Richard Herrnstein and the political scientist Charles Murray. The book argued that intelligence is significantly influenced by hereditary factors, suggesting that genetics might partly explain differences in IQ scores and disparities in the social and economic success of white and black Americans. Although such arguments have mostly vanished from mainstream discourse, even on the right, they often lurk in the background of debates about race, especially those relating to education.
The fundamental problem with these conservative explanations is not that they blame inequality on social maladies such as unemployment, family disruption, and violence; these conditions certainly weigh heavily on many minority communities. Rather, the right-wing arguments falter by focusing too heavily on the beliefs and behavior of individuals and subgroups in explaining collective outcomes and by disregarding institutional and historical factors. Scholars such as the sociologist William Julius Wilson have shown that the problems plaguing minority communities result not from innate group traits but from broader structural dynamics. For example, as U.S. industry declined in the second half of the twentieth century, the industrial jobs that had long nourished a broad middle class disappeared from U.S. cities. They were increasingly replaced by service industry jobs, which feature a sharp dividing line between well-paid positions for the highly educated and more meager opportunities for those with fewer skills. As a result, the postindustrial U.S. economy has saddled many urban neighborhoods with limited job prospects and few realistic economic alternatives, producing many of the dysfunctional behaviors prominent today. Although working-class whites were also affected by deindustrialization, their better connections to job networks and relatively greater access to family wealth helped buffer some of the social dislocation that devastated poor and working-class communities of color.
When it comes to explaining racial inequalities, conservatives are not alone in ascribing too much influence to the beliefs and behavior of individuals; liberals tend to make the same error. Whereas conservatives focus on individual black Americans and African American communities, liberals question the motives and actions of white individuals and groups. Specifically, many on the left detect pervasive negative attitudes toward blacks among whites, often hidden from public view.
Few liberals argue that a large number of white Americans consciously wish to hold back blacks and other minorities. The common left-wing position is somewhat more nuanced, suggesting that although explicit expressions of prejudice are now frowned on, racial stereotypes remain a powerful framing device in social and political interactions and in policy debates, often in ways that remain concealed behind the rhetoric of colorblindness. Liberals sometimes refer to this framing device as “dog whistle” messaging: intentionally stoking white uneasiness about minorities while maintaining the ability to plausibly deny any specific racist intent.
One familiar example is the infamous “Willie Horton” ad, aired in 1988 by an independent group backing then Vice President George H. W. Bush’s presidential campaign, which attacked Massachusetts Governor Michael Dukakis, the Democratic candidate, for a prison furlough program that had allowed a convicted murderer to commit another gruesome crime. Although the ad’s script referred to the crime in neutral terms, the ad’s true centerpiece was Horton’s menacing mug shot, which invoked stereotypical racial imagery to send a clear, unspoken message: Dukakis wasn’t tough enough to stand up to scary black men. Images of this kind activate not the conscious, explicit racism that was once common but rather implicit biases that operate on the subconscious level.
As evidence that white racism remains widespread, liberals also point to revelations that well-known figures, such as the celebrity chef Paula Deen and the former owner of the Los Angeles Clippers basketball team, Donald Sterling, occasionally use racist language or harbor racial prejudices. Dwelling on such instances of bigotry, however, tends to exaggerate the power that any lingering racial bias still holds over American society as a whole. For much of the twentieth century, a belief in black inferiority was indeed a mainstream view among white Americans, and much prejudice remains today. But this sentiment is now unquestionably less widespread and virulent than it once was. In the mid-twentieth century, most Americans polled by opinion researchers routinely expressed opposition to school integration and interracial marriage. Today, Americans’ views have moved decisively in favor of racial equality: approval of interracial marriage, for example, has climbed from a dismal four percent in 1958 to nearly 90 percent today.
Discrimination, too, has gone from a social norm to an outlawed and widely despised relic of the past. The exclusion of African Americans from schools, workplaces, and neighborhoods was once a core feature of American life, sanctioned by the Supreme Court in Plessy v. Ferguson, the infamous 1896 case that upheld the doctrine of “separate but equal.” This exclusion occurred not only in the South, where white supremacy was firmly entrenched, but also in the North, where public policy, white prejudice, and biased business practices often combined to prevent African Americans from making social progress. Such practices, of course, occasionally find echoes today, for example, in the form of the mortgage-lending procedures that disproportionately exposed minority borrowers to the risky subprime loans that triggered the financial collapse of 2008 and led to widespread foreclosures in minority communities. But despite the persistence of some explicit discrimination, it is increasingly difficult to argue that the deliberate discriminatory actions of individuals can alone account for contemporary patterns of exclusion and disadvantage.
Moreover, a wide array of laws and protective mechanisms now exist to block and neutralize intentional discrimination. On the whole, such bulwarks have been remarkably successful. The multifront battle that African Americans waged against exclusion over the course of the twentieth century has largely been won. Overt discrimination has declined dramatically in employment, education, housing, government contracting, and voting. In large part, this progress resulted from the implementation of the Civil Rights Act of 1964, the Voting Rights Act of 1965, and the Fair Housing Act of 1968.
To be fair, government enforcement of these acts has varied in intensity, with more significant advances achieved in some areas than in others. In employment, for example, the United States has developed a comparatively strong enforcement regime. Antidiscrimination protections and diversity norms have become a new standard for public and private employers, including large corporations, universities, and the military. This standard is reinforced by the courts, federal and state administrative authorities, and private litigation. And at the ballot box, the full enfranchisement of African Americans has transformed the U.S. electorate and empowered minority political participation. Although protections of the right to vote were substantially weakened by Shelby County v. Holder, the 2013 Supreme Court decision that gutted the enforcement powers of the Voting Rights Act, access to voting remains a universally recognized citizenship right.
By contrast, efforts to desegregate the U.S. educational system have achieved only limited success. Since 1955, when the U.S. Supreme Court ordered the desegregation of public schools to proceed “with all deliberate speed,” repeated rollback attempts have managed to keep some divisions in place. In 1974, for example, in Milliken v. Bradley, the Supreme Court struck down a metropolitan busing plan in Detroit that aimed to transport children to schools outside their immediate neighborhoods. And in a 2007 case that overturned plans to integrate public schools in Seattle, Washington, and Louisville, Kentucky, Chief Justice John Roberts proclaimed that “the way to stop discrimination on the basis of race is to stop discriminating on the basis of race.” This position represented an ironic reversal of the Court’s 1978 decision in Regents of the University of California v. Bakke—one of the earliest rulings to sanction race-conscious remedies—in which Justice Harry Blackmun admonished: “In order to get beyond racism, we must first take account of race. There is no other way. And in order to treat some persons equally, we must treat them differently.” When it comes to desegregating housing, too, efforts at reform have achieved only limited success, as the federal government’s attempts to integrate neighborhoods have often met with local resistance and judicial override.
For all these shortfalls, however, antidiscrimination laws have clearly worked to reduce racial gaps in multiple ways. Economists such as James Heckman have found that the Civil Rights Act measurably improved black employment opportunities and wages in the South in the 1960s and 1970s. And the economist Roland Fryer has shown that civil rights reforms have produced a positive, albeit modest, impact on African American economic and social outcomes across the board, including in employment, health, and incarceration rates. Although significant racial disparities remain in all these areas, Fryer found that the influence of direct discrimination has fallen dramatically vis-à-vis other factors, such as unequal educational opportunities. He estimates, for example, that differences in educational achievement accounted for more than 70 percent of the black-white wage disparity for young men in 2006 and that, controlling for education, young black women actually earned more than young white women. Discrimination, of course, remains a substantial barrier for racial minorities, especially in employment and particularly for young African Americans from the inner cities, who are frequently the targets of stereotyping. But lingering racist sentiments are no longer powerful enough to account for the full breadth of the divide.
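Fryer's accounting exercise can be illustrated with a toy decomposition in the spirit of the Oaxaca–Blinder method, which splits a gap in average outcomes into an "explained" portion attributable to measured characteristics (such as years of schooling) and a residual. Every number below is hypothetical, chosen only to make the arithmetic visible; Fryer's actual estimates come from large survey datasets, not figures like these.

```python
# Toy Oaxaca-Blinder-style decomposition of an average wage gap.
# All inputs are hypothetical illustrations, not real estimates.

def explained_share(gap, edu_gap, return_per_year):
    """Fraction of a wage gap attributable to an education gap,
    assuming a common linear return to each year of schooling."""
    return (edu_gap * return_per_year) / gap

# Hypothetical inputs: a $4.00/hour wage gap, a 1.4-year schooling gap,
# and a $2.00/hour return to each additional year of schooling.
share = explained_share(gap=4.0, edu_gap=1.4, return_per_year=2.0)
print(f"share of gap explained by education: {share:.0%}")  # prints 70%
```

The residual 30 percent in this toy setup is what would remain for other factors, including discrimination, to explain.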
The shortcomings of conservative and liberal arguments mean that American political discourse lacks a compelling explanation for what the sociologist Eduardo Bonilla-Silva has called “racism without racists”: the ways in which systematic patterns and practices of American society often advance the interests of whites and harden racial disparities. What’s missing is an understanding of how race works at the level of institutions. How do institutions that are, on their face, scrupulously race neutral nevertheless produce racially imbalanced outcomes?
A focus on institutions was once a staple of civil rights activism. Stokely Carmichael, a leader in the Black Power movement in the 1960s, coined the phrase “institutional racism” to describe discrimination that “originates in the operation of established and respected forces in the society, and thus receives far less public condemnation than [individual racism],” as Carmichael and the political scientist and fellow activist Charles Hamilton wrote in 1967. This kind of thinking became something of an article of faith for some on the left, but it did not develop much beyond the polemical rhetoric of activists. As the social sciences increasingly embraced an understanding of human behavior that focused intensively on individuals and their beliefs and habits, no serious exploration of the institutional racism hypothesis ever got off the ground. As a result, policymakers have been left with a poor understanding of the institutional mechanics that perpetuate racially disparate conditions.
A renewed focus on how racism survives through social and political institutions would draw on the numerous advances that the social sciences have achieved in recent decades, illuminating how institutional forces shape personal decisions and identities and how the interaction of individuals with their surroundings systematically influences behavior. But which institutions deserve the most scrutiny?
The most obvious and important candidate is the U.S. criminal justice system, which today incarcerates about 900,000 African Americans—a number that accounts for close to half of all the inmates in the United States. No aspect of contemporary American life presents starker racial disparities. People of color (including blacks, Hispanics, and other minorities) represent about 40 percent of the U.S. population but account for around 60 percent of those imprisoned. The U.S. Bureau of Justice Statistics estimates that one in three African American men will go to prison at some point in his life. According to the American Civil Liberties Union, one in every 15 African American men is incarcerated, as opposed to only one in every 106 white men.
Liberals (and some conservatives) often argue that these disparities stem from the “war on drugs” of the 1980s, which introduced policing tactics and sentencing rules that hit African Americans particularly hard. But in fact, the problem has its roots in the immediate aftermath of the civil rights movement and the racial unrest that led to riots in most major U.S. cities in the late 1960s and early 1970s. Although the rates of incarceration among blacks and whites were nearly the same for much of the Jim Crow era, racial differences began to emerge and widen during the early 1970s. At a time when African Americans were beginning to benefit from civil rights gains—as the black middle class grew, poverty rates declined, and thousands of blacks ascended to public office—rising black incarceration rates functioned as a stealth counterweight to political and economic progress.
This was partly by design. The law-and-order policies pursued by some conservative lawmakers and officials, including President Richard Nixon, were framed as a deliberate reaction to the disorder that many white voters associated with the civil rights movement. The Nixon era introduced a bundle of new laws at the federal and state level that lengthened sentences, allowed juveniles to be tried as adults, relaxed prohibitions on the electronic surveillance of alleged criminal activity, and created perverse incentives for local police to win more federal funding by increasing arrest rates. In the 1970s, at the height of U.S. industrial decline, these laws took a disproportionate toll on minority communities in hollowed-out inner cities. These patterns have created ripple effects across African American neighborhoods, not least by producing a phenomenon known as intergenerational incarceration: the tendency of the children of convicted felons to grow up in foster homes, engage in violence and crime, and ultimately fall into poverty and homelessness or end up being incarcerated themselves.
High unemployment rates—partly the result of deindustrialization—only compounded these trends, sparking a damaging feedback loop. As the sociologist Devah Pager has shown, black men with criminal records have a particularly difficult time finding work, as supposedly race-neutral hiring practices often allow racial stereotypes to enter into employers’ decision-making. She has found, for example, that whereas white job applicants with criminal records are half as likely to receive a job interview as equally qualified whites with no records, black applicants with criminal records pay a much greater penalty: they are one-third as likely to be interviewed as other black candidates.
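Pager's finding can be made concrete with a small tabulation. The callback counts below are invented for illustration (her actual evidence came from matched-pair audit studies in Milwaukee and New York); the point is how the within-group "record penalty" is computed and compared across racial groups.

```python
# Sketch of an audit-study tabulation in the spirit of Pager's work.
# The counts are hypothetical, not her published data.

def callback_rate(callbacks, applications):
    return callbacks / applications

# Hypothetical audit results: (callbacks, applications) per tester profile.
audits = {
    ("white", "no record"): (34, 100),
    ("white", "record"):    (17, 100),
    ("black", "no record"): (15, 100),
    ("black", "record"):    (5, 100),
}

rates = {profile: callback_rate(*counts) for profile, counts in audits.items()}

# The "criminal record penalty" within each racial group:
white_penalty = rates[("white", "record")] / rates[("white", "no record")]
black_penalty = rates[("black", "record")] / rates[("black", "no record")]
print(f"white applicants with records get {white_penalty:.2f}x the callbacks")  # 0.50x
print(f"black applicants with records get {black_penalty:.2f}x the callbacks")  # 0.33x
```

Comparing penalties within each group, rather than raw rates across groups, is what isolates how much more a criminal record costs black applicants.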
The criminal justice system is hardly the only part of U.S. society where institutional racism operates. The broader economy is full of often hidden forces that combine to exclude people of color from well-functioning, fair economic markets and connect them instead to markets that are distorted or broken. The historian Devin Fergus has demonstrated, for example, that seemingly obscure tax and insurance policies often serve to penalize neighborhoods with large minority populations. Fergus has chronicled the consequences of a California law that allowed Zip Code–based profiling in underwriting car insurance, which effectively sorted black drivers into higher-risk categories that required higher payments. As a result, geographic location mattered more than one’s driving record in calculating premiums—a policy that Fergus calls a “ghetto tax.” In effect, as Fergus puts it, “the premiums of rural and suburban motorists were underwritten by central and inner-city motorists.” Although the particular California law described by Fergus has since been repealed, many others like it across the country are still in force. Profiling of this kind mirrors other racially skewed practices, such as neighborhood-based discrimination in real estate, known as redlining, in which banks and financial firms limit the loans and other financial services available to residents of neighborhoods with large minority populations.
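The mechanism Fergus describes—a race-neutral input carrying racially correlated information—can be sketched in a few lines. The premiums and residential shares below are invented; the only point is that a pricing rule that never consults race still yields racially skewed average prices when geography itself is segregated.

```python
# Minimal sketch of "proxy discrimination." All figures are invented.

# Race-neutral rule: the premium depends only on ZIP-code category.
premium_by_zip = {"urban": 2400, "suburban": 1200}

# Hypothetical residential distribution (share of each group per category),
# reflecting historical segregation rather than driving records.
residence = {
    "black": {"urban": 0.8, "suburban": 0.2},
    "white": {"urban": 0.2, "suburban": 0.8},
}

def average_premium(group):
    """Expected premium for a group under the ZIP-only pricing rule."""
    return sum(share * premium_by_zip[zip_type]
               for zip_type, share in residence[group].items())

for group in ("black", "white"):
    print(group, round(average_premium(group)))
# Identical drivers end up with different average premiums purely
# because of where segregation has placed them.
```

A stress test of the kind proposed later in this article would flag exactly this pattern: no racial variable appears anywhere in the rule, yet group outcomes diverge sharply.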
Similarly, researchers including the psychologist Naa Oyo Kwate have illustrated how apparently apolitical marketing practices, such as the advertising of fast food and alcohol, involve de facto racial profiling in their targeting of minority-populated urban neighborhoods. Kwate has documented how retail companies search consumer databases for certain groups of names in an effort to identify minority consumers and then market particular types of products to them. On the surface, this practice might appear to simply harness market forces, as companies use information about their current customers to search for new ones. But that simplistic explanation presumes that there is something natural about consumer categories and ethnic or racial patterns of taste and behavior. In fact, the opposite is true: Kwate has shown that consumer preferences are strongly shaped and constrained by patterns of advertising and marketing, leaving minority neighborhoods with limited choices and often contributing to poor health outcomes.
Finally, education is yet another arena in which a variety of forces combine to produce inequality in indirect ways. Long after official desegregation, U.S. public schools are actually more segregated today than they were in the 1960s. Schools reflect and reinforce unequal outcomes rooted in de facto residential segregation and economic inequality. Disparities between the educational opportunities enjoyed by whites and those open to minorities reduce the equalizing potential of education.
These are the kinds of complex issues that Obama likely had in mind when, in the midst of public protests in Staten Island and Ferguson, he told an interviewer that racial disparities in the United States amounted to a deeply ingrained “systemic problem,” not amenable to one-dimensional diagnoses or treatments. Proponents of racial equality should, of course, continue to root out prejudice and fight to strengthen antidiscrimination laws. But simply eliminating prejudice is no longer enough. Nor is it realistic to expect that a rising tide will lift all boats and that gradual economic growth will ultimately bring minority communities out of poverty. Racial divisions appear entrenched, and existing approaches have proved incapable of loosening their hold. The postracist epoch demands new tools for uncovering the structural forces that allow racial inequality to persist.
One idea that holds promise is the introduction of rigorous diagnostic methods similar to the stress tests that banks and financial firms use to understand their vulnerabilities and weaknesses. Financial stress tests, a legacy of the 2010 Dodd-Frank Act, which sought to bolster financial regulation, were developed to evaluate banks’ capital and liquidity and their ability to withstand shocks to the financial system. Washington now requires banks with assets valued at $10 billion or more to submit to annual reviews—either internally directed or sponsored by the government—that evaluate the degree to which these banks might be exposed to risks in times of crisis or as a result of global economic shocks. The assessments provide an early warning that can protect taxpayers from the costs of bailing out collapsing banks.
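At its core, the logic of a financial stress test reduces to a simple check: apply a hypothetical loss scenario to the balance sheet and ask whether the bank's capital cushion survives it. The figures below are illustrative, and the 4.5 percent floor merely echoes the Basel III common-equity minimum; this is a sketch of the idea, not an actual regulatory calculation.

```python
# Stylized capital check at the heart of a bank stress test.
# All figures are hypothetical.

def stressed_capital_ratio(capital, assets, loss_rate):
    """Capital ratio after writing down assets by loss_rate,
    with losses absorbed first by capital."""
    losses = assets * loss_rate
    return (capital - losses) / (assets - losses)

MINIMUM_RATIO = 0.045  # illustrative floor, in the spirit of Basel III

# A bank with $80 of capital against $1,000 of assets, hit by a 5% loss:
ratio = stressed_capital_ratio(capital=80, assets=1000, loss_rate=0.05)
print(f"stressed capital ratio: {ratio:.1%}")  # prints 3.2%
print("pass" if ratio >= MINIMUM_RATIO else "fail: raise capital")
```

The test's value lies in surfacing a weakness before the shock arrives, which is precisely the property worth borrowing for other institutions.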
A similar assessment method—racial-equality stress tests—could become an effective tool for identifying institutional mechanisms that perpetuate racial inequality. Tools of this kind would permit scholars and policymakers to investigate whether nominally race-neutral policies are in fact masking practices that contribute to racial inequality. Such stress tests would allow regulators to place a wide array of institutions under scrutiny. For example, a test applied to large employers might reveal whether their hiring practices treat white and black ex-felons unequally. Other candidates for stress tests would be school districts that practice so-called zero-tolerance policies: punishing students through long-term suspensions or handing over the task of enforcing school discipline to the juvenile criminal justice system. Using the new assessment methods, policymakers could evaluate the extent to which these practices accelerate the path to future criminality and incarceration. Stress tests could also estimate the extent to which interventions such as restorative justice programs, which allow students to settle disputes through conflict-resolution techniques, could break the so-called school-to-prison pipeline in black communities.
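What might a single component of such a test look like? One long-established benchmark that could serve as a building block is the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures, which flags a hiring practice for review when one group's selection rate falls below 80 percent of another's. The sketch below applies it to hypothetical callback numbers for ex-felon applicants; everything here is illustrative, not a proposed regulatory standard.

```python
# Sketch of a disparate-impact screen using the EEOC four-fifths rule.
# Applicant counts are hypothetical.

def selection_rate(selected, applicants):
    return selected / applicants

def four_fifths_check(rate_a, rate_b, threshold=0.8):
    """Flag adverse impact if the lower selection rate is below
    `threshold` (80%) of the higher one."""
    low, high = sorted((rate_a, rate_b))
    return low / high < threshold

# Hypothetical callback outcomes for ex-felon applicants by race:
white_rate = selection_rate(selected=24, applicants=100)
black_rate = selection_rate(selected=9, applicants=100)

if four_fifths_check(white_rate, black_rate):
    print("flag: hiring practice warrants closer review")
```

A full stress test would go further—controlling for qualifications and testing alternative practices—but a simple screen of this kind shows that the basic machinery already exists in employment law.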
Similarly, stress tests could be used to assess the adverse effects of distorted markets on African Americans—treating certain retail sectors, rather than individual companies, as institutions. For instance, tests could estimate the health costs of living in a community saturated with fast-food restaurants and the impact of potential policy interventions on reducing such vulnerabilities. This line of inquiry could help policymakers assess the likely effects of specific interventions, such as state or federal tax incentives that might encourage more grocery stores and restaurants that serve healthy food to open in specific neighborhoods. Alternatively, officials could consider regulations that would require a certain ratio of grocery stores to fast-food outlets within certain areas or zoning policies that would explicitly limit the concentration of fast-food chains.
Putting racial-equality stress tests into practice would take effort. The federal government could mandate stress tests for institutions within its jurisdiction, such as courts and prisons, and could encourage their use by offering block grants to states that pledged to conduct such tests on their own. State governments could take the same approach: mandating stress tests for institutions they directly regulate, such as school districts, and providing funding for voluntary testing at the local level. And of course, there is no reason to rely solely on government action; individual institutions could voluntarily administer stress tests to themselves.
None of this will be possible without the concerted support of citizens and advocacy groups. And even if governments and institutions decided to adopt such an approach, their efforts would need to be backed by rigorous research. Universities and think tanks can start paving the way, incubating ideas about racial-equality stress tests, so that if and when their political moment arrives, policymakers won’t have to start from scratch.
Disrupting the forces that reproduce racial inequality in American life will be a generational challenge; there are no quick fixes. The idea of large-scale stress-testing of institutions might seem unrealistic—but it is hardly any less realistic than simply hoping that in the absence of fresh thinking or new approaches, Americans will suddenly wake up one day to find that they live in a blissfully postracial country.