
Peering into the Future

INTELLIGENCE ESTIMATES AFTER THE COLD WAR

As the Soviet Union, Germany and the Middle East have recently reminded us, no one knows the future. Yet, consciously or not, foreign policymakers constantly make predictions. Will a foreign leader act rationally? Will an allied country be reliable? The consequences of wrong guesses can be catastrophic, so policymakers turn to national intelligence for help. Despite the end of the Cold War, the need for good intelligence estimates continues.

Intelligence analysts sift through reams of information, trying to sort the accurate from the erroneous, and when not enough facts are available, estimating what the picture would look like if all the facts were available. Current intelligence, intelligence about current events, is mainly reportorial and interpretive: "Saddam Hussein lambasted the U.S. government again yesterday. He seems to be trying to drive a wedge between Washington and Paris." While the line often blurs, estimative intelligence is more concerned with what might be or might happen: "Is Iraq still hiding weapons of mass destruction? Will Saddam still be in power a year from now?" Like all kinds of intelligence, estimative intelligence starts with the available facts, but then it trespasses into the unknown and the unknowable, the regions where we simply lack facts. Is it any wonder that national intelligence estimates are sometimes wrong?

Why take the risks? Why not stick strictly to the facts? One reason is that facts about crucial international issues are rarely conclusive. There is often enough evidence to indict, rarely enough to convict. Yet policymakers are under enormous pressure to make decisions. In some cases they can wait for more information, but in others waiting is itself a decision with irreversible consequences. In the words of a White House official, "Insight is more scarce than information." To help policymakers interpret the available facts, to suggest alternative patterns those facts might fit, and to provide informed assessments of the range and likelihood of possible outcomes: these are the roles of estimative intelligence.

THE COLD WAR RECORD

President Harry S. Truman created a civilian Central Intelligence Agency in 1947, but neither it nor military intelligence predicted the North Korean invasion in 1950. General Douglas MacArthur’s Tokyo headquarters consistently misestimated North Korean and Chinese behavior. In response, when General Walter Bedell Smith became director of the CIA in October 1950, he created a new art form called National Intelligence Estimates (NIEs), to be agreed on at the highest levels in the intelligence community.

The NIEs are produced by the National Intelligence Council, which represents the entire intelligence community and reports to the director in his role as head of the community rather than as head of the CIA. (Roughly half the NIC’s national intelligence officers come from the CIA, a quarter from other parts of the government, such as the State, Defense and Energy departments, and a quarter from universities or private nonprofit organizations.) The NIC coordinates estimative views from the CIA, the Defense Intelligence Agency, the four military services, the National Security Agency, the State Department’s Bureau of Intelligence and Research, and the intelligence units of Energy, Treasury and the FBI. The heads of those organizations constitute the National Foreign Intelligence Board. They review and approve each estimate before it is published and sent to the president and other top officials.

How well did the estimative process do during the Cold War? It is the nature of intelligence that successes often remain hidden while failures become public, so the ledger cannot be balanced until the documentary records are fully available to future historians. Nonetheless, after comparing a selection of National Intelligence Estimates with such open sources as The Economist, The New York Times and other papers, historian Ernest R. May concluded that the estimates came out reasonably well. They gave policymakers and their staffs information not found in the press and focused on longer-term questions journalists usually slight. Analytic success, however, does not ensure policy impact. Pessimistic estimates about Vietnam in the 1960s, for example, were analytic successes but unwelcome downtown and failed to prevent disastrous policy choices.

There were, of course, some notable failures, such as the 1962 estimate that Soviet leader Nikita Khrushchev would not place missiles in Cuba, the 1973 failure to foresee the Yom Kippur War, the analytical disarray in 1978 that prevented the drafting of any estimate about the fall of the shah of Iran, and the 1989 prediction that Saddam Hussein would not make trouble for the next three years. The central estimative issues during the Cold War, however, concerned the Soviet Union. Critics complain that the intelligence community consistently overestimated Soviet military strength, but the record is not so simple. While intelligence was good at predicting the development of new Soviet weapons, it was sometimes wrong about the quantities and qualities produced and deployed. The spurious bomber and missile gaps of the 1950s reflected Soviet deception and exaggeration in the days before reconnaissance satellites. In the late 1960s and early 1970s, intelligence underestimated the buildup of Soviet strategic forces. So the errors were not all in one direction. Moreover, the formal estimative process provided a means for agencies that disagreed with the overall intelligence community view to make their alternative conclusions known to decision-makers.

Some critics go further. Senator Daniel Patrick Moynihan (D-N.Y.) has argued that the failure to predict the demise of the Soviet Union led to a decade of wasted military expenditures in the 1980s, and that this should be grounds for abolishing the CIA and giving its functions to the State Department. Again, the record is more complex. The intelligence community accurately reported a slowdown in the Soviet economy, although it did not adequately estimate the rapidity of economic collapse. And the questions posed by policymakers were not about some abstract future, but about whether even a weakening Soviet economy could support a formidable military threat. The intelligence community estimated correctly that the Soviets could.

As for the timing, the intelligence analysts were not alone. Almost everyone (including Mikhail Gorbachev) failed to predict that the Soviet Union would collapse in 1991. The exact timing was probably an accident of history. Had the Politburo picked a less activist and more conservative general secretary in 1985, it is quite plausible that the Soviet Union would have declined more gradually through the end of the century. And a declining empire with nuclear weapons could have posed a significant military threat.

If anything, the experience with predicting the demise of the Soviet Union should make one wary of too much consensus and of reducing the number of sources of analysis. It should also make one wary of abolishing the CIA. Eliminating the community’s chief source of nondepartmental analysis would weaken estimates. In policy circles, the old adage is that where you stand depends on where you sit. In intelligence, what you foresee is often affected by where you work. The primary duty of departmental analysts is to respond to the needs of their organizations. Diplomats are supposed to negotiate solutions. Even in apparently hopeless situations, they tend to press departmental analysts for the one chance in a hundred that might permit success. Generals are supposed to win battles. Even in hopeful situations, they tend to press their intelligence analysts for estimates of what they will have to face if worst comes to worst. Thus one type of departmental analysis tends toward optimism, the other toward pessimism. It is not a matter of intellectual dishonesty but of analysts simply doing their jobs.

The best solution to such human and bureaucratic problems is multiple points of view that are brought together in one place so policymakers can see the sources of differences and make their own assessments. During the Cold War, the CIA provided nondepartmental assessments with which departmental assessments could be compared. Estimates reflected the consensus of the community, if there was one. If not, agencies that disagreed with the majority’s conclusions could insert their own views. No intelligence agency had a corner on the truth, but this process helped policymakers thread their way between wishful thinking and worst-case scenarios during a long Cold War with a dangerous and deceptive adversary.

MYSTERIES AND SECRETS

Even though there is no longer one overriding threat, the need for estimative intelligence continues. In a world where rapid change has become the norm, uncertainties abound. The current threats to American security are not entirely new, but they are more diverse. And they are complicated by the "return of history," the thawing of ethnic and religious conflicts that had been partly frozen by Cold War blocs. What are the prospects that transnational terrorists will perpetrate another attack like that on the World Trade Center? Where and how quickly will weapons of mass destruction spread? Will economic and social turmoil in Russia or Ukraine lead to the loss of nuclear weapons? Will friendly countries be torn apart by ethnic conflicts and demands for self-determination? What forces and weapons will American troops confront in future peacekeeping operations or regional conflicts?

Some problems threaten our national welfare rather than traditional national security. Policymakers also need intelligence about the transnational drug trade, about whether foreign governments are using bribes to cheat American businesses, and about whether they are meeting their commitments to protect the world’s atmosphere, oceans and endangered species. In response, the National Intelligence Council has created a new national intelligence officer for global and multilateral issues to develop estimates on such topics.

One problem for intelligence in the post-Cold War world is knowing where to invest diminishing analytic resources. Skilled analysts are needed to deal with new questions, but personnel are being cut 17 percent over four years, and nearly 25 percent over this decade. In such a setting, how many Somali-speaking analysts should be retained? And will the intelligence community preserve a surge capacity when CNN or something else suddenly puts the next Somalia on the agenda?

Behind these management issues lies a larger problem: understanding the structure of world politics that underlies estimative analysis. During the Cold War the world was bipolar, with most political issues influenced by the United States and the U.S.S.R. Today the structure of power is like a three-dimensional chess game. The top, military board is unipolar, with the United States the only country capable of projecting military force globally. The middle, economic board is tripolar. The United States, the European Union and Japan account for two-thirds of the world economy. China’s dramatic economic growth may make this board quadripolar by the turn of the century. The bottom board consists of diverse transnational relationships outside the control of governments, including financial flows, drug trafficking, terrorism and degradation of the ozone layer. On this board, there are no poles.

Greater complexity in the structure of power means greater uncertainty in estimating the future. Polities have always undergone dramatic, nonlinear change, but such changes have become much more frequent than during the Cold War. In the 1980s, for example, if one were estimating the number of nuclear weapons South Africa would have in the 1990s, one would have calculated what its uranium enrichment plant could produce and answered "six or seven." But the correct answer today turns out to be zero, because of the radical political discontinuities associated with the transition to majority rule and the end of the Cold War. Similarly, if one were to estimate today how many nuclear weapons a country with no nuclear facilities might have in five years, the linear answer would be zero. But that answer would change if the country were able to purchase stolen nuclear weapons on the transnational black market.

Yet another complication for estimators after the Cold War is the increase in the ratio of mysteries to secrets in the questions that policymakers want answered. A secret is something concrete that can be stolen by a spy or discerned by a technical sensor, such as the number of SS-18 missiles in the Soviet Union or the size of their warheads. A mystery is an abstract puzzle to which no one can be sure of the answer. For example, will President Boris Yeltsin be able to control inflation in Russia a year from now? No one can steal that secret from Yeltsin. He does not know the answer. He may not even be in office a year from now.

RESPONSES TO UNCERTAINTY

The National Intelligence Council has tried to cope with this uncertainty in various ways. Most important, it has increased its emphasis on alternative scenarios rather than single-point predictions. The job, after all, is not so much to predict the future as to help policymakers think about the future. No one can know the future, and it is misleading to pretend to. At the same time, merely telling policymakers how complex things are invites Harry Truman’s plea for a one-armed analyst: "No more ‘on the one hand and on the other hand.’" Analysts owe policymakers a forthright appraisal of the best estimate.

Rather than predict the future, National Intelligence Estimates describe the range of possible outcomes, including relatively unlikely ones that could have a major impact on American interests, and indicate which outcomes the analysts think are most likely and why. They then estimate the likelihood of each outcome, mindful that they tread on very uncertain ground.

Rather than use vague words like "possibly" or "small but significant chance," where feasible the estimates present judgments of likelihood as numerical percentages or bettor’s odds. To be sure, this approach is controversial; it is often impossible to explain precisely why something is one chance in two rather than one chance in three. Even so, policymakers are better served than if the NIC simply tells them something is "possible," which amounts to saying there is anywhere from a 1 to 49 percent chance it will happen; that is not much help to someone trying to make an important decision. Moreover, if the intelligence community is genuinely uncertain about the likelihood of an outcome, or if agencies disagree over that likelihood, the easiest way to convey that to a busy reader is to present a range of probabilities, saying, for example, that there is a 30 to 50 percent chance something will happen.

After the most likely scenarios have been constructed and presented, analysts must ask another set of questions before the estimate is done. What would it take for this estimate to be dramatically wrong? What could cause a radically different outcome? This task is not the same as a worst-case analysis. If the most plausible scenarios are pessimistic, the analysts must ask what it would take to produce a favorable result. What would such an outcome look like, and how would they know if events were heading in that direction?

Experts often resist this exercise. Since they know their country or region and have already presented all the plausible scenarios, why waste effort on scenarios that are by definition highly unlikely? The answer is that such questions help alert policymakers to low-probability but high-impact contingencies against which they might plan. They also point intelligence collectors toward obscure indicators on which they should be gathering information.

Perhaps if estimators of Soviet strength in the 1980s had asked explicitly what it would take to greatly weaken the Soviet Union and what such a stricken colossus would look like, analysts and policymakers would have been more attentive to offbeat indicators and less surprised by the outcome. One reason Royal Dutch Shell weathered the 1973 oil crisis better than other companies, for example, is that its planners did not merely produce best estimates of future oil prices but also contemplated scenarios of dramatic price changes that seemed highly unlikely at the time.

Good analysts will also explicitly identify their key assumptions and uncertainties, so that policymakers are aware of the foundations of the estimate. Obviously it is impossible to identify all the assumptions behind the NIC’s analyses. Everyone assumes that the future will more or less resemble the past; for instance, all expect the sun to rise in the east. Someday that might not be true, but it will probably remain true for the time frame of today’s estimates. Other assumptions might seem obvious but nonetheless be worth highlighting. In the 1980s, for example, if one were estimating Iraq’s ability to build a nuclear weapon, one could reasonably have assumed that Baghdad would use only the most modern and efficient techniques. But U.S. intelligence missed a critical part of Iraq’s program that included electromagnetic isotope separation, an antiquated technique the United States abandoned in the 1940s. Had the assumption been explicit, some analyst or policymaker might have thought to ask what Saddam’s program might look like if the assumption were relaxed.

Estimates start with a section that highlights assumptions and end, where appropriate, with a section that highlights key uncertainties. After all is said and done, what are the biggest gaps in U.S. intelligence? This exercise not only helps alert policymakers to the limits of estimates, but also informs intelligence collectors of the needs for further information. In fact, one job of national intelligence officers is to serve as issue coordinators, to identify gaps in the community’s knowledge and provide that information to the director of the CIA and the executive committee of the intelligence community to help them plan collection programs.

Another way to enrich national intelligence estimates is to explore the reasons agencies hold different views on specific issues. Providing alternative views is better than suppressing them in favor of vague or ambiguous consensus; yet alternative views have often been presented without much explanation of the basis of the disagreement. Such explanations can be illuminating. Are the facts in dispute? Are agencies and their staffs using different conceptual frameworks? Is it a cup-half-full versus cup-half-empty dispute? Policymakers are most helped by estimates that indicate clearly what all agencies agree is known and not known, what they disagree about and what the evidence is for each position. Indeed, differing interpretations do not need agency sponsors. In the always foggy estimative arena, analysts within agencies often differ on how to interpret sparse or ambiguous material. The most responsible course in such cases is to describe the various plausible interpretations and lay out the evidence for each.

As for the problem that estimative questions now involve a greater proportion of mysteries to secrets, the solution lies in paying more attention to outside and open sources of information. A high proportion of the information needed to analyze Cold War subjects involved secrets that had to be clandestinely collected, while open sources often provided little help. That is still true today for closed societies such as Iraq or North Korea. But on many key issues, clandestine sources may provide only a small, though still useful, portion of the relevant information. Open sources provide the context. The combination provides a unique resource that policymakers could not obtain merely by reading the journals, assuming they had time to do so.

In a sense, intelligence analysts are like people assembling a jigsaw puzzle who hold a few crucial pieces but need to see the picture on the box’s cover to understand how they fit. Those pictures are drawn by outsiders in universities, think tanks, businesses, nongovernmental organizations and the press. National estimates on many subjects today benefit greatly from the insights of outside analysts. It is important for intelligence analysts to keep up with the open literature. Managers should also look to outside training and use consultants and conferences. And in estimates, it helps to describe the range of academic views so that policymakers can calibrate where the intelligence community stands. In some cases, outside experts may even answer key estimative questions or produce parallel estimates.

Innovations and enhancements aside, it does not matter how good the National Intelligence Council’s estimates are unless it gets them into the minds of policymakers. Most high-level policymakers are swamped with information and have little time to read. They spend their days drinking from a fire hose of information. The basic paradox of government is that it rests on a sea of paper, but the higher you go, the more it becomes an oral culture. The finest analytic work that is too long to read, or that arrives when its issue is not on the front burner, is likely to be placed in a pile on the back corner of the desk that is reserved for papers too interesting to throw away but not urgent enough to read immediately. Every few weeks or months, most of that pile is discarded unread.

To respond to this situation, the NIC has devised two new estimative art forms. Following the example of Britain’s Joint Intelligence Committee, which produces three-page estimates for the cabinet, the NIC has developed a short "President’s Summary" designed explicitly for top policymakers. The complete version of the estimate, with details and justifications, remains a useful tool for staffs and lower levels of the policy bureaucracies. Moreover, estimating must be seen as a continuous process: when significant new information or developments come in, recipients of estimates are sent a short memorandum that updates the original.

Good estimating requires constant contact with policymakers so that written products are keyed to their agendas. Policymakers are often too distracted to ask for estimates, but they will read or listen if the timing is right. The production of some estimates must be geared to upcoming events such as the visit of a foreign prime minister or a presidential trip abroad. When warranted, estimates or special NIC memos can be produced in a matter of days.

Even efficient and timely publication is not the whole answer. The estimative process involves contact with decision-makers before and after publication. The purpose of estimating is not publication, but getting ideas into policymakers’ minds. Oral estimating is another important way of doing that. Listenership is sometimes more important than readership.

In short, estimation is a process that requires constant interaction between national intelligence officers and policymakers both before and after publication. Such contact raises the red flag of politicization, of consciously or unconsciously crossing the line between objective analysis and a statement of policy preferences. National intelligence organizations must constantly be alert to that danger. Fortunately, the taboo against trespassing into policy is so deeply ingrained in the intelligence culture that reminders of the danger are frequent. In addition, estimators often present unpopular information. With particularly sensitive estimates that could undermine a policy or a foreign leader if leaked to the press, the NIC is prepared to limit distribution to a narrow list of people with a need to know, but not to change the nature of the conclusions.

MAKING BETTER CHOICES

Estimates focused heavily on the Soviet Union during the Cold War. In its wake, policymakers still need estimative intelligence to help them understand the more diffuse and ambiguous threats and opportunities they face. Ideological divisions are less likely to obstruct analysis, but greater uncertainties make analysis more difficult. The greater the uncertainty, the greater the scope and need for estimative intelligence. But the task is not simple prediction. Estimators are not fortune-tellers; they are educators. Rather than trying to predict the future, estimators should deal with heightened uncertainty by presenting alternative scenarios. To be useful, estimates must describe not only the nature and probability of the most likely future paths but also explore significant excursions off those paths and identify the signposts that would tell us we are entering such territory.

Estimates are ways of summarizing what is known and structuring the remaining uncertainties. Sometimes they will be wrong; sometimes, even when correct, they will be ignored. As in the case of the 1990 estimate that correctly predicted the violence in the former Yugoslavia, policymakers can draw a variety of conclusions about whether or not to intervene. Often estimates will be unpopular when they cast doubt on preferred options or put awkward new issues on the policy agenda. But, properly conceived and effectively presented, estimative intelligence can help policymakers make better choices in a future that will contain a more complex mix of threats and opportunities.
