Richard K Betts. Foreign Affairs. Volume 81, Issue 1. January/February 2002.
The Limits of Prevention
As the dust from the attacks on the World Trade Center and the Pentagon was still settling, the chants began: The CIA was asleep at the switch! The intelligence system is broken! Reorganize top to bottom! The biggest intelligence system in the world, spending upward of $30 billion a year, could not prevent a group of fanatics from carrying out devastating terrorist attacks. Drastic change must be overdue. The new conventional wisdom was typified by Tim Weiner, writing in The New York Times on October 7: “What will the nation’s intelligence services have to change to fight this war? The short answer is: almost everything.”
Yes and no. A lot must, can, and will be done to shore up U.S. intelligence collection and analysis. Reforms that should have been made long ago will now go through. New ideas will get more attention and good ones will be adopted more readily than in normal times. There is no shortage of proposals and initiatives to shake the system up. There is, however, a shortage of perspective on the limitations that we can expect from improved performance. Some of the changes will substitute new problems for old ones. The only thing worse than business as usual would be naive assumptions about what reform can accomplish.
Paradoxically, the news is worse than the angriest critics think, because the intelligence community has worked much better than they assume. Contrary to the image left by the destruction of September 11, U.S. intelligence and associated services have generally done very well at protecting the country. In the aftermath of a catastrophe, great successes in thwarting previous terrorist attacks are too easily forgotten—successes such as the foiling of plots to bomb New York City’s Lincoln and Holland tunnels in 1993, to bring down 11 American airliners in Asia in 1995, to mount attacks around the millennium on the West Coast and in Jordan, and to strike U.S. forces in the Middle East in the summer of 2001.
The awful truth is that even the best intelligence systems will have big failures. The terrorists that intelligence must uncover and track are not inert objects; they are living, conniving strategists. They, too, fail frequently and are sometimes caught before they can strike. But once in a while they will inevitably get through. Counterterrorism is a competitive game. Even Barry Bonds could be struck out at times by a minor-league pitcher, but when a strikeout means people die, a batting average of less than 1.000 looks very bad indeed.
It will be some time before the real story of the September 11 intelligence failure is known, and longer still before a reliable public account is available. Rather than recap the rumors and fragmentary evidence of exactly what intelligence did and did not do before September 11, at this point it is more appropriate to focus on the merits of proposals for reform and the larger question about what intelligence agencies can reasonably be expected to accomplish.
Spend a Lot to Get a Little
One way to improve intelligence is to raise the overall level of effort by throwing money at the problem. This means accepting additional waste, but that price is paid more easily in wartime than in peacetime. Unfortunately, although there have certainly been misallocations of effort in the past, there are no silver bullets that were left unused before September 11, no crucial area of intelligence that was neglected altogether and that a few well-targeted investments can conquer. There is no evidence, at least in public, that more spending on any particular program would have averted the September 11 attacks. The group that carried them out had formidable operational security, and the most critical deficiencies making their success possible were in airport security and in legal limitations on domestic surveillance. There are nevertheless several areas in which intelligence can be improved, areas in which previous efforts were extensive but spread too thinly or slowed down too much.
It will take large investments to make even marginal reductions in the probability of future disasters. Marginal improvements, however, can spell the difference between success and failure in some individual cases. If effective intelligence collection increases by only five percent a year, but the critical warning indicator of an attack turns up in that five percent, gaining a little information will yield a lot of protection. Streamlining intelligence operations and collection is a nice idea in principle but risky unless it is clear what is not needed. When threats are numerous and complex, it is easier to know what additional capabilities we want than to know what we can safely cut.
After the Cold War, intelligence resources went down as requirements went up (since the country faced a new set of high-priority issues and regions). At the end of the 1990s there was an uptick in the intelligence budget, but the system was still spread thinner over its targets than it had been when focused on the Soviet Union. Three weeks before September 11, the director of central intelligence (DCI), George Tenet, gave an interview to Signal magazine that now seems tragically prescient. He agonized about the prospect of a catastrophic intelligence failure: “Then the country will want to know why we didn’t make those investments; why we didn’t pay the price; why we didn’t develop the capability.”
The sluice gates for intelligence spending will open for a while. The challenge is not buying some essential element of capability that was ignored before but helping the system do more of everything and do it better. That will increase the odds that bits and pieces of critical information will be acquired and noticed rather than falling through the sieve.
Another way to improve intelligence is to do better at collecting important information. Here, what can be improved easily will help marginally, whereas what could help more than marginally cannot be improved easily. The National Security Agency (NSA), the National Imagery and Mapping Agency (NIMA), and associated organizations can increase “technical” collection—satellite and aerial reconnaissance, signals intelligence, communications monitoring—by buying more platforms, devices, and personnel to exploit them. But increasing useful human intelligence, which everyone agrees is the most critical ingredient for rooting out secretive terrorist groups, is not done easily or through quick infusions of money.
Technical collection is invaluable and has undoubtedly figured in previous counterterrorist successes in ways that are not publicized. But obtaining this kind of information has been getting harder. For one thing, so much has been revealed over the years about U.S. technical collection capabilities that the targets now understand better what they have to evade. State sponsors of terrorism may know satellite overflight schedules and can schedule accordingly activities that might otherwise be observable. They can use more fiber-optic communications, which are much harder to tap than transmissions over the airwaves. Competent terrorists know not to use cell phones for sensitive messages, and even small groups have access to impressive new encryption technologies.
Human intelligence is key because the essence of the terrorist threat is the capacity to conspire. The best way to intercept attacks is to penetrate the organizations, learn their plans, and identify perpetrators so they can be taken out of action. Better human intelligence means bolstering the CIA’s Directorate of Operations (DO), the main traditional espionage organization of the U.S. government. The DO has been troubled and periodically disrupted ever since the evaporation of the Cold War consensus in the late stage of the Vietnam War provoked more oversight and criticism than spies find congenial. Personnel turnover, tattered esprit, and a growing culture of risk aversion have constrained the DO’s effectiveness.
Some of the constraint was a reasonable price to pay to prevent excesses, especially in a post-Cold War world in which the DO was working for the country’s interests rather than its survival. After the recent attacks, however, worries about excesses have receded, and measures will be found to make it easier for the clandestine service to operate. One simple reform, for example, would be to implement a recommendation made by the National Commission on Terrorism a year and a half ago: roll back the additional layer of cumbersome procedures instituted in 1995 for gaining approval to employ agents with “unsavory” records—procedures that have had a chilling effect on recruitment of the thugs appropriate for penetrating terrorist units.
Building up human intelligence networks worldwide is a long-term project. It inevitably spawns concern about waste (many such networks will never produce anything useful), deception (human sources are widely distrusted), and complicity with murderous characters (such as the Guatemalan officer who prompted the 1995 change in recruitment guidelines). These are prices that can be borne politically in the present atmosphere of crisis. If the sense of crisis abates, however, commitment to the long-term project could falter.
More and better spies will help, but no one should expect breakthroughs if we get them. It is close to impossible to penetrate small, disciplined, alien organizations like Osama bin Laden's al Qaeda, and especially hard to find reliable U.S. citizens who have even a remote chance of succeeding in the attempt. Thus we usually rely on foreign agents of uncertain reliability. Despite our huge and educated population, the base of Americans on which to draw is small: there are very few genuinely bilingual, bicultural Americans capable of operating like natives in exotic reaches of the Middle East, Central and South Asia, or other places that shelter the bin Ladens of the world.
For similar reasons there have been limitations on our capacity to translate information that does get collected. The need is not just for people who have studied Arabic, Pashto, Urdu, or Farsi, but for those who are truly fluent in those languages, and fluent in obscure dialects of them. Should U.S. intelligence trust recent, poorly educated immigrants for these jobs if they involve highly sensitive intercepts? How much will it matter if there are errors in translation, or willful mistranslations, that cannot be caught because there are no resources to cross-check the translators? Money can certainly help here, by paying more for better translators and, over the long term, promoting educational programs to broaden the base of recruits. For certain critical regions of the world, however, there are simply not enough potential recruits waiting in the wings to respond to a crash program.
Money can buy additional competent people to analyze collected information more readily than it can buy spies who can pass for members of the Taliban—especially if multiplying job slots are accompanied by enhanced opportunities for career development within intelligence agencies to make long service attractive for analysts. Pumping up the ranks of analysts can make a difference within the relatively short time span of a few years. The U.S. intelligence community has hundreds of analysts, but also hundreds of countries and issues to cover. On many subjects the coverage is now only one analyst deep—and when that one goes on vacation, or quits, the account may be handled out of the back pocket of a specialist on something else. We usually do not know in advance which of the numerous low-priority accounts might turn into the highest priority overnight (for example, Korea before June 1950, or Afghanistan before the Soviet invasion).
Hiring more analysts will be a good use of resources but could turn out to have a low payoff, and perhaps none at all, for much of what they do. Having half a dozen analysts on hand for some small country might be a good thing if that country turns out to be central to the campaign against terrorists, but those analysts need to be in place before we know we need them if they are to hit the ground running in a crisis. In most such cases, moreover, those analysts would serve their whole careers without producing anything that the U.S. government really needs, and no good analyst wants to be buried in an inactive account with peripheral significance.
One option is to make better use of an intelligence analyst reserve corps: people with other jobs who come in to read up on their accounts a couple of days each month to maintain currency, and who can be mobilized if a crisis involving their area erupts. There have been experiments with this system, but apparently without enough satisfaction to institutionalize it more broadly.
Of course, the quantity of analysts is less important than the quality of what they produce. Postmortems of intelligence failures usually reveal that very bright analysts failed to predict the disaster in question, despite their great knowledge of the situation, or that they had warned that an eruption could happen but without any idea of when. In fact, expertise can get in the way of anticipating a radical departure from the norm, because the depth of expert knowledge of why and how things have gone as they have day after day for years naturally inclines the analyst to estimate that developments will continue along the same trajectory. It is always a safer bet to predict that the situation tomorrow will be as it has been for the past dozen years than to say that it will change abruptly. And of course, in the vast majority of cases predictions of continuity are absolutely correct; the trick is to figure out which case will be the exception to a powerful rule.
A standard recommendation for reform—one made regularly by people discovering these problems for the first time—is to encourage “outside the box” analyses that challenge conventional wisdom and consider scenarios that appear low in probability but high in consequence. To some, this sort of intellectual shake-up might well have led the intelligence system, rather than Tom Clancy, to anticipate the kamikaze hijacking tactic of September 11.
All well and good. The problem, however, lies in figuring out what to do with the work this great analysis produces. There are always three dozen equally plausible dangers that are possible but improbable. Why should policymakers focus on any particular one of these hypothetical warnings or pay the costs of taking preventive action against all of them? One answer is to use such analysis to identify potential high-danger scenarios for which low-cost fixes are available. If President Bill Clinton had gotten a paper two years before September 11 that outlined the scenario for what ultimately happened, he probably would not have considered its probability high enough to warrant revolutionizing airport security, given all the obstacles: vested interests, opposition to particular measures, hassles for the traveling public. He might, however, have pushed for measures to allow checking the rosters of flight schools and investigating students who seemed uninterested in takeoffs and landings.
Another problem frequently noted is that the analytical corps has become fully absorbed in current intelligence, leaving no time for long-term research projects that look beyond the horizon. This, too, is something that more resources can solve. But as good a thing as more long-range analysis is, it is uncertain how productive it would be for the war on terrorism. The comparative advantage of the intelligence community over outside analysts is in bringing together secret information with knowledge from open sources. The more far-seeing a project, the less likely secret information is to play a role in the assessment. No one can match the analysts from the CIA, the Defense Intelligence Agency (DIA), or the NSA in estimating bin Laden’s next moves, but it is not clear that they have a comparative advantage over Middle East experts in think tanks or universities when it comes to estimating worldwide trends in radical Islamist movements over the next decade. Such long-term research is an area in which better use of outside consultants and improved exploitation of academia could help most.
The War at Home
There is a world of difference between collecting intelligence abroad and doing so at home. Abroad, intelligence operations may break the laws of the countries in which they are undertaken. All domestic intelligence operations, however, must conform to U.S. law. The CIA can bribe foreign officials, burglarize offices of foreign political parties, bug defense ministries, tap the phones of diplomats, and do all sorts of things to gather information that the FBI could not do within the United States without getting a warrant from a court. Collection inside the United States is the area where loosened constraints would have done most to avert the September 11 attacks. But it is also the area in which great changes may make Americans fear that the costs exceed the benefits—indeed, that if civil liberties are compromised, “the terrorists will have won.”
A Minnesota flight school reportedly alerted authorities a month before September 11 that one of its students, Zacarias Moussaoui, was learning to fly large jets but did not care about learning to take off or land. Moussaoui was arrested on immigration charges, and French intelligence warned U.S. officials that he was an extremist. FBI headquarters nevertheless decided against seeking a warrant for a wiretap or a search, reportedly because of complaints by the chief judge of the Foreign Intelligence Surveillance Court about other applications for wiretaps. After September 11, a search of Moussaoui’s computer revealed that he had collected information about crop-dusting aircraft—a potential delivery system for chemical or biological weapons. U.S. officials came to suspect that Moussaoui was supposed to have been the fifth hijacker on United Airlines flight 93, which went down in Pennsylvania.
In hindsight, the hesitation to mount aggressive surveillance and searches in this case—hesitation linked to a highly developed set of legal safeguards rooted in the traditional American reverence for privacy—is exactly the sort of constraint that should have been loosened. High standards for protecting privacy are like strictures against risking collateral damage in combat operations: those norms take precedence more easily when the security interests at stake are not matters of your country’s survival, but they become harder to justify when national security is on the line.
There have already been moves to facilitate more extensive clandestine surveillance, and there have been reactions against going too far. There will be substantial loosening of restraint on domestic intelligence collection, but how far it goes depends on the frequency and intensity of future terror attacks inside the United States. If there are no more that seem as serious as September 11, compromises of privacy will be limited. If there are two or three more dramatic attacks, all constraint may be swept away.
It is important to distinguish between two types of constraints on civil liberties. One is political censorship, like the suppression of dissent during World War I. There is no need or justification for this; counterterrorism does not benefit from suppression of free speech. The other type involves compromises of individual privacy, through secret surveillance, monitoring of communications, and searches. This is where pressing up to the constitutional limits offers the biggest payoff for counterterrorist intelligence. It also need not threaten individuals unnecessarily, so long as careful measures are instituted to keep secret the irrelevant but embarrassing information that may inadvertently be acquired as a by-product of monitoring. Similarly, popular but unpersuasive arguments have been advanced against the sort of national identification card common in other democratic countries. The U.S. Constitution does not confer the right to be unidentified to the government.
Even slightly more intrusive information-gathering will be controversial, but if it helps to avert future attacks, it will avert far more draconian blows against civil liberties. Moreover, Americans should remember that many solid, humane democracies—the United Kingdom, France, and others—have far more permissive rules for gathering information on people than the United States has had, and their citizens seem to live with these rules without great unease.
Red Tape and Reorganization
In a bureaucracy, reform means reorganization; reorganization means changing relationships of authority; and that means altering checks and balances. Five days after September 11, Tenet issued a directive that subsequently was leaked to the press. In it he proclaimed the wartime imperative to end business as usual, to cut through red tape and “give people the authority to do things they might not ordinarily be allowed to do. … If there is some bureaucratic hurdle, leap it. … We don’t have time to have meetings about how to fix problems, just fix them.” That refreshing activism will help push through needed changes. Some major reorganization of the intelligence community is inevitable. That was the response to Pearl Harbor, and even before the recent attacks many thought a major shake-up was overdue.
The current crisis presents the opportunity to override entrenched and outdated interests, to crack heads and force the sorts of consolidation and cooperation that have been inhibited by bureaucratic constipation. On balance, reorganization will help—but at a price: mistakes will increase, too. As Herbert Kaufman revealed in his classic 1977 book Red Tape, most administrative obstacles to efficiency do not come from mindless obstructionism. The sluggish procedures that frustrate one set of purposes have usually been instituted to safeguard other valid purposes. Red tape is the warp and woof of checks and balances. More muscular management will help some objectives and hurt others.
The crying need for intelligence reorganization is no recent discovery. It is a perennial lament, amplified every time intelligence stumbles. The community has undergone several major reorganizations and innumerable lesser ones over the past half-century. No one ever stays satisfied with reorganization because it never seems to do the trick—if the trick is to prevent intelligence failure. There is little reason to believe, therefore, that the next reform will do much better than previous ones.
Reorganizations usually prove to be three steps forward and two back, because the intelligence establishment is so vast and complex that the net impact of reshuffling may be indiscernible. After September 11, some observers complained that the intelligence community is too regionally oriented and should be organized more in terms of functional issues. Yet back in the 1980s, when William Casey became President Ronald Reagan’s DCI and encountered the functional organization of the CIA’s analytical directorate, he experienced the reverse frustration. Rather than deal with functional offices of economic, political, and strategic research, each with regional subunits, he shifted the structure to one of regional units with functional subunits. Perhaps it helped, but there is little evidence that it produced consistent improvement in analytical products. There is just as little evidence that moving back in the other direction will help any more.
What about a better fusion center for intelligence on counterterrorism, now touted by many as a vital reform? For years the DCI has had a Counter-Terrorism Center (CTC) that brings together assets from the CIA’s directorates of operations and intelligence, the FBI, the DIA, the State Department, and other parts of the community. It has been widely criticized, but many believe its deficiencies came from insufficient resources—something reorganization alone will not cure. If the CTC’s deficiencies were truly organizational, moreover, there is little reason to believe that a new fusion center would not simply replace those problems with different ones.
Some believe, finally, that the problem is the sheer complexity and bulk of the intelligence community; they call for it to be streamlined, turned into a leaner and meaner corps. Few such proposals specify what functions can be dispensed with in order to thin out the ranks, however. In truth, bureaucratization is both the U.S. intelligence community’s great weakness and its great strength. The weakness is obvious, as in any large bureaucracy: various forms of sclerosis, inertia, pettiness, and paralysis drive out many vibrant people and deaden many who remain. The strength, however, is taken for granted: a coverage of issues that is impressively broad and sometimes deep. Bureaucratization makes it hard to extract the right information efficiently from the globs of it lying around in the system, but in a leaner and meaner system there will never be much lying around.
Some areas can certainly benefit from reorganization. One is the integration of information technologies, management systems, and information sharing. Much has been done within the intelligence community to exploit the potential of information technology in recent years, but it has been such a fast-developing sector of society and the economy in general that constant adaptation may be necessary for some time.
Another area of potential reorganization involves making the DCI's authority commensurate with his or her responsibility. This is a long-standing source of tension, because roughly 80 percent of the intelligence establishment (in terms of functions and resources) has always been located in the Defense Department, where primary lines of authority and loyalty run to the military services and to the secretary of defense. The latest manifestation of this problem was the increased priority given during the 1990s to the mission of support for military operations (SMO)—a priority levied not only on Pentagon intelligence agencies but on the CIA and others as well. Such a move was odd, given that military threats to the United States after the Cold War were lower than at any other time in the existence of the modern intelligence community, while a raft of new foreign policy involvements in various parts of the world was coming to the fore. But the SMO priority was the legacy of the Persian Gulf War and the problems in intelligence support felt by military commanders, combined with the Clinton administration's unwillingness to override strong military preferences.
Matching authority and responsibility is where the test of the most immediate reform initiative—or evidence of its confusion—will come. Early reports on the formation of the Office of Homeland Security indicated that the new director, Tom Ridge, will be responsible for coordinating all of the agencies in the intelligence community. This is odd, because that was precisely the function for which the office of Director of Central Intelligence was created in the National Security Act of 1947. The position of DCI was meant to centralize oversight of the dispersed intelligence activities of the military services, the State Department, and the new Central Intelligence Agency, and to coordinate planning and resource allocation among them.
As the community burgeoned over the years, adding huge organizations such as the NSA, the DIA, and NIMA, the DCI remained the official responsible for knitting their functions together. The DCI's ability to do so increased at times, but it was always limited by the authority of the secretary of defense over the Pentagon's intelligence agencies. Indeed, hardly anyone but professionals within the intelligence community understands that there is such a thing as a DCI. Not only the press, but presidents and government officials as well never refer to the DCI by that title; they always speak instead of the "Director of the CIA," as if that person were simply an agency head, forgetting the importance of the larger coordination responsibility.
Is Ridge to become the central coordinating official in practice that the DCI is supposed to be in principle? If so, why will he be better positioned to do the job than the DCI has been in the past? The DCI has always had an office next to the White House as well as at the CIA, and Ridge will have to spend most of his time on matters other than intelligence. A special review by a group under General Brent Scowcroft, the new head of the President’s Foreign Intelligence Advisory Board, has reportedly recommended moving several of the big intelligence agencies out of the Defense Department, putting them under the administrative control of the DCI. That would certainly give the DCI more clout to back up the responsibility for coordination. Such a proposal is so revolutionary, however, that its chances of adoption seem slim.
The real problem of DCIs in doing their jobs has generally been that presidents have not cared enough about intelligence to make the DCI one of their top advisers. Assigning coordination responsibility to Ridge may work if the president pays more attention to him than has been paid to the DCI, but otherwise this is the sort of reform that could easily prove to be ephemeral or unworkable—yet advertised as necessary in the short term to proclaim that something significant is being done.
From Age-Old to New-Age Surprise
The issue for reform is whether any fixes at all can break a depressing historical pattern. After September 11, intelligence officials realized that fragmentary indicators of impending action by bin Laden’s network had been recognized by the intelligence system but had not been sufficient to show what or where the action would be. A vague warning was reportedly issued, but not one that was a ringing alarm. This is, sadly, a very common occurrence.
What we know of intelligence in conventional warfare helps explain why powerful intelligence systems are often caught by surprise. The good news from history is that attackers often fail to win the wars that they start with stunning surprises: Germany was defeated after invading the Soviet Union, Japan after Pearl Harbor, North Korea after attacking the South in 1950, Argentina after taking the Falkland Islands, Iraq after swallowing Kuwait. The bad news is that those initial attacks almost always succeed in blindsiding the victims and inflicting terrible losses.
Once a war is underway, it becomes much harder to surprise the victim. The original surprise puts the victim on unambiguous notice. It shears away the many strong reasons that exist in peacetime to estimate that an adversary will not take the risk of attacking. It was easier for Japan to surprise the United States at Pearl Harbor than at Midway. But even in the midst of war, surprise attacks often succeed in doing real damage: recall the Battle of the Bulge or the Tet offensive. For Americans, September 11 was the Pearl Harbor of terrorism. The challenge now is to make the next attacks more like Midway than like Tet.
Surprise attacks often succeed despite the availability of warning indicators. This pattern leads many observers to blame derelict intelligence officials or irresponsible policymakers. The sad truth is that the fault lies more in natural organizational forces, and in the pure intractability of the problem, than in the skills of spies or statesmen.
After surprise attacks, intelligence postmortems usually discover indicators that existed in advance but that were obscured or contradicted by other evidence. Roberta Wohlstetter’s classic study of Pearl Harbor identified this as the problem of signals (information hinting at the possibility of enemy attack) getting lost in a crescendo of “noise” (the voluminous clutter of irrelevant information that floods in, or other matters competing for attention). Other causes abound. Some have been partially overcome, such as technical limitations on timely communication, or organizational obstacles to sharing information. Others are deeply rooted in the complexity of threats, the ambiguity of partial warnings, and the ability of plotters to overcome obstacles, manipulate information, and deceive victims.
One reason surprise attacks can succeed is the “boy who cried wolf” problem, in which the very excellence of intelligence collection works against its success. There are often numerous false alarms before an attack, and they dull sensitivity to warnings of the attack that does occur. Sometimes the supposed false alarms were not false at all, but accurate warnings that prompted timely responses by the victim that in turn caused the attacker to cancel and reschedule the assault—thus generating a self-negating prophecy.
Attacks can also come as a surprise because of an overload of incomplete warnings, a particular problem for a superpower with world-spanning involvements. In the spring of 1950, for example, the CIA warned President Harry Truman that the North Koreans could attack at any time, but without indications of whether the attack was certain or when it would happen. “But this did not apply alone to Korea,” Truman noted in his memoirs. The same reports also continually warned him of many other places in the world where communist forces had the capability to attack.
Intelligence may correctly warn of an enemy’s intention to strike and may even anticipate the timing but still guess wrong about where or how the attack will occur. U.S. intelligence was warning in late November 1941 that a Japanese strike could be imminent but expected it in Southeast Asia. Pearl Harbor seemed an impractical target because it was too shallow for torpedo attacks. That had indeed been true, but shortly before December the Japanese had adjusted their torpedoes so they could run in the shallows. Before September 11, similarly, attacks by al Qaeda were expected, but elsewhere in the world, and not by the technical means of kamikaze hijacking.
The list of common reasons why attacks often come as a surprise goes on and on. The point is that intelligence can rarely be perfect and unambiguous, and there are always good reasons to misinterpret it. Some problems of the past have been fixed by the technically sophisticated system we have now, and some may be reduced by adjustments to the system. But some can never be eliminated, so future unpleasant surprises are a certainty.
Reorganization may be the proper response to failure, if only because the masters of intelligence do not know how else to improve performance. The underlying cause of mistakes in performance, however, does not lie in the structure and process of the intelligence system. It is intrinsic to the issues and targets with which intelligence has to cope: the crafty opponents who strategize against it, and the alien cultures that are not transparent to American minds.
Reform will happen and, on balance, should help. But for too many policymakers and pundits, reorganization is an alluring but illusory quick fix. Long-term improvements are vaguer and less certain, and they reek of the lamp. But if the United States is going to have markedly better intelligence in parts of the world where few Americans have lived, studied, or understood local mores and aspirations, it is going to have to overcome a cultural disease: thinking that American primacy makes it unnecessary for American education to foster broad and deep expertise on foreign, especially non-Western, societies. The United States is perhaps the only major country in the world where one can be considered well educated yet speak only the native tongue.
The disease has even infected the academic world, which should know better. American political science, for example, has driven area studies out of fashion. Some “good” departments have not a single Middle East specialist on their rosters, and hardly any at all have a specialist on South Asia—a region of more than a billion people, two nuclear-armed countries, and swarms of terrorists. Yet these same departments can afford a plethora of professors who conjure up spare models naively assumed to be of global application.
Reforms that can be undertaken now will make the intelligence community a little better. Making it much better, however, will ultimately require revising educational norms and restoring the prestige of public service. Both are lofty goals and tall orders, involving general changes in society and professions outside government. Even if achieved, moreover, such fundamental reform would not bear fruit until far in the future.
But this is not a counsel of despair. To say that there is a limit to how high the intelligence batting average will get is not to say that it cannot get significantly better. It does mean, however, that no strategy for a war against terror can bank on prevention. Better intelligence may give us several more big successes like those of the 1990s, but even a .900 average will eventually yield another big failure. That means that equal emphasis must go to measures for civil defense, medical readiness, and “consequence management,” in order to blunt the effects of the attacks that do manage to get through. Efforts at prevention and preparation for their failure must go hand in hand.