Ever since Ronald Reagan promised to make “free enterprise and personal responsibility” a central theme of his presidency, talk about responsibility has become an important part of American political life. After Reagan, similar appeals quickly entered the vocabulary of liberal politicians and even egalitarian philosophers. Despite their deep political differences, all took to defending their favorite public policies in the name of responsibility. And despite their widely diverging ideas about who should be held accountable for their actions—Are the poor at fault for being in need? Can someone who was traumatized as a child be responsible for the crimes he commits as an adult?—each believes that the state is justified in discriminating between those it deems to have acted responsibly and those it deems to have acted irresponsibly.
Perhaps the most striking example of this transformation is the influence that notions of responsibility have had on the American welfare state. When Congress decided to “end welfare as we know it” with bipartisan support in 1996, the desire to punish people who fail to live up to their responsibilities was an important goal of both parties. For that reason, a work requirement that excluded people who weren’t employed from most forms of state assistance—even if they were raising children or couldn’t find a job because of a recession—became the cornerstone of the most significant reform of the American welfare state since the war on poverty. It should hardly come as a surprise, then, that welfare reform quite literally bore its appeal to responsibility on its sleeve: Its official name was the “Personal Responsibility and Work Opportunity Reconciliation Act.”
These developments are most pronounced in the United States, a country in whose political firmament self-reliance has always shone a little more brightly than solidarity. But the idea has also spread to countries like the United Kingdom, Spain, and Germany. It is therefore no exaggeration to say that responsibility has quickly become central to the political imagination throughout North America, Western Europe, and beyond. Economic questions that might have been settled from the point of view of structural considerations—say, the likely macroeconomic effect of making welfare payments conditional—now turn on the deeply moralized determination of individual action and culpability. Meanwhile, those who are dependent on state assistance now need to demonstrate that they are not in a state of need because they have made frivolous choices. As a result, they spend a good part of their time on such activities as taking drug tests to prove they are clean or applying for jobs they have little chance of getting to demonstrate they are actively seeking work.
This punitive focus on the actions of particular individuals is deeply pernicious. It leads us to underestimate what we owe our fellow citizens irrespective of their choices. It encourages us to disregard the larger structural factors that shape the most important economic outcomes. Perhaps most importantly, it blinds us to other important political values, like the desire to live in a society of equals—one in which the poor, rather than being mere objects of pity, actually enjoy equal standing in the eyes of their more affluent compatriots.
The rise of the far-right populists—from Donald Trump in the United States and Marine Le Pen in France to Viktor Orbán in Hungary—has given us a taste of what happens to liberal democracies when large swaths of the population feel that they are no longer making economic progress; when they fear that their standard of living is liable only to deteriorate in the future; and when they are furious that (to add insult to injury) political and economic elites seem to be blaming them for their troubles. Stemming the rise of this populist tide will be the most urgent political task of the coming decades. In places like France, where the populists are still in the opposition, it means working hard to expose the easy solutions they propose to complex problems as cynical make-believe. In places like the United States, where they are already in power, it means fighting the repeal of basic social entitlements—and vigorously resisting any attempt to undermine democratic norms and independent power centers. But in both places, defending the liberal order will also require something more far-reaching: a vision for a reformed welfare state that can empower citizens facing the deep social upheaval of automation and globalization, and calm some of their well-founded economic anxiety. A growing swath of the population is starting to conclude that they may find it impossible to find decent employment no matter how hard they try. If we are to reestablish the foundations for the broad-based prosperity on which democracy’s stability has always depended, it’s all the more urgent that we overcome a single-minded focus on personal responsibility that has narrowed what kinds of policy ideas we take seriously, and what kinds of economic institutions we can envisage. It is time to rethink the meaning—and indeed the promise—of responsibility.
A Narrow Notion of Responsibility
Responsibility can mean many different things in its various everyday uses. It can be a call to help others, for example by serving one’s country, by taking care of one’s children, or simply by helping an old lady cross the street. It can be an admonishment to act morally in one’s private life, for example by being a good parent or an honest partner. Finally, it can also mean facing up to mistakes, taking ownership of poor decisions instead of quibbling them away with bad excuses. But while everyday uses of “responsibility” retain a great variety of meanings, the political meaning of the word has narrowed significantly over the years.
In his first inaugural address, John F. Kennedy famously challenged American citizens to “ask not what your country can do for you; ask what you can do for your country.” What he had in mind was an older notion of responsibility, one in which taking responsibility was directed outward as much as inward. Responsibility, in this view, might mean serving one’s community, becoming an officer in a local voluntary association, or running for public office. It was, in short, a duty to do good works.
Today, by contrast, politicians rarely talk about our responsibility to serve some cause or entity greater than ourselves. When they invoke personal responsibility, they mean something that might also have the urgency of an obligation, and yet is unambiguously directed inward. Citizens exercise personal responsibility when they do the hard work and make the prudent choices that will ensure they have enough money to take care of their own needs. Citizens fail to exercise personal responsibility, by contrast, when they are so lazy, or make such foolish choices, that they wind up asking for collective assistance. Responsibility, in short, has become a form of accountability for our actions.
This also helps to explain the central political distinction that is now made in cases when somebody is in need of help. In theory, a whole range of questions might determine how generous a state should be toward its neediest members. How much money does the state have at its disposal? What other important goals might suffer if more money is spent on social assistance? How much suffering would the proposed spending alleviate? And are its beneficiaries likely to profit from the assistance they receive, potentially putting them in a position to better their situation or rejoin the workforce in the future?
All of these important questions are either pushed into the background or altogether eclipsed by the dominant concern about responsibility-as-accountability. Instead, the key question that now determines whether somebody should receive public assistance boils down to an inquisition into the history of their actions: whether they are in need because of their own choices or for reasons beyond their control, such as a genetic disease or a car accident. If the former, we don’t owe them anything. If the latter, we might give them a hand.
A Welfare State for the Virtuous
The same distinction between the responsible and the irresponsible also helps to resolve a longstanding debate about recent developments in the welfare state. In the late 1980s, when growth slowed, globalization increased competition, and an aging population made public pension schemes like Social Security less sustainable, many political scientists foretold a complete erosion of the welfare state. But over the course of the 1990s, it became clear that even political leaders who were openly hostile to many social entitlements had barely managed to reverse the trend. Though the growth of the welfare state slowed under the leadership of Ronald Reagan in the United States and Margaret Thatcher in Great Britain, for example, its share of the GDP continued to rise. With few exceptions, the wholesale abolition of existing entitlement programs was either never attempted or failed in the face of widespread opposition. On the whole, as Paul Pierson has put it, the “irresistible forces” that were supposed to undermine the welfare state met “immovable objects” in popular attachment to the existing programs.
As scholars like Jacob Hacker (a frequent co-author of Pierson’s) have pointed out, however, this story may be overly optimistic. Though he acknowledges that very few welfare programs were abolished wholesale, Hacker also shows that the real degree of protection against social risks was significantly curtailed over the past decades. In some cases, important forms of social assistance did not keep up with inflation, losing a large part of their real value over the years. In other cases, forms of assistance that were once meant fully to protect citizens against major life risks became supplementary. Pensions are a great example: In lieu of the comparatively generous defined-benefit pensions that predominated in the postwar era, Hacker argues, most citizens now have to supplement their retirement income with private savings subsidized by tax-incentive schemes like the Roth IRA. The appearance of continuity, he thus concludes, actually conceals a lot of change.
The debate between Pierson and Hacker remained unsatisfyingly inconclusive. Looked at from the vantage point of the overall degree of social protection afforded by contemporary welfare state institutions, it is difficult to spot a clear narrative in the developments of the last decade: The welfare state hasn’t completely retrenched. But neither was it fully preserved. The changes that did take place seem to have no particular logic to them.
Looked at from the vantage point of personal responsibility, by contrast, the pieces of the puzzle start to fall into place. Those aspects of the welfare state that were perceived to be helping people irrespective of whether they had acted responsibly suffered large cuts: Welfare became workfare; upper limits were imposed on the receipt of unemployment benefits; food stamps and other social assistance programs of last resort became less generous; state pensions were reduced. In many countries, even social assistance programs in the form of non-cash payments were made conditional on good behavior: In the United Kingdom, for example, people can now be expelled from public housing if civil magistrates find that they have repeatedly engaged in forms of “anti-social behavior” like swearing, drinking in public, or making excessive noise.
At the same time, aspects of the welfare state that were perceived to be helping those people who were in need by no fault of their own were preserved or even expanded. Social assistance programs for the disabled are one good example: Compared to other forms of cash assistance, they have proven far more resilient to recent changes. The radical expansion of the earned income tax credit—and the introduction of similar programs in many countries in Western Europe and beyond—is an even more striking example: A very generous cash transfer to some of the poorest members of society, it is by definition restricted to people who are proving their willingness to “live up to their responsibility” by participating in the workforce.
The welfare state has neither straightforwardly shrunk nor straightforwardly grown, then. Instead, it has been reshaped to accord with the punitive assumptions of the age of responsibility. And so “responsibility-buffering” programs, which help citizens regardless of their past actions, have been cut. Meanwhile, “responsibility-tracking” programs, which reward citizens for the right actions, have been expanded. In the postwar era, visionaries like T. H. Marshall, the British sociologist who built much of the intellectual groundwork for the modern welfare state, conceived of entitlements as a social safety net that would help people as a matter of right, regardless of the reasons for their misfortune. Today, we are further away from Marshall’s vision than at any point since World War II.
What’s Wrong with the Age of Responsibility
John, let us imagine, loves to climb challenging mountains. He knows that he would need to take out special insurance to cover the costs of a possible mountain rescue. Though he could afford to do so, he declines to take out additional coverage. Do we owe him any assistance when he then injures himself and gets stranded on a remote peak?
From the point of view of personal responsibility, the answer is reasonably clear. John is responsible for putting himself in harm’s way. He himself decided to decline the necessary insurance. There can be little doubt that he is in need because of his own choices. Insofar as the responsibility framework is concerned, that seems to settle the matter: We don’t owe him our assistance. But is that all there is to it?
Seeing this situation as so black-and-white is overly simple for a number of reasons. The first and most obvious point is that we may believe that we have obligations toward John that aren’t defeated by the fact that he has acted foolishly. He was wrong to decline an affordable form of insurance that would have covered the costs of his rescue. But by the same token, we may be acting unjustly if we insist on letting him perish by the side of a mountain. Two wrongs do not always make a right. What’s more, even if we think that we don’t have an obligation of justice to come to John’s rescue—that there is no sense in which we would be flouting a strict moral obligation if we declined his desperate pleas—we may yet be moved to help him as a matter of charity. In short, it is far less clear than the advocates of personal responsibility insist that someone’s past irresponsibility should undermine his present entitlements.
A single-minded focus on personal responsibility not only blinds us to weighty moral reasons; it also makes it difficult to give important practical considerations their full due. John may have children who would suffer enormously from his death. He might have rare skills that could make a big contribution to the happiness of the community or the growth of the economy. These considerations are inaccessible to us if we only think about the way in which John’s past actions account for his present predicament.
Focusing purely on John’s moral desert thus blinds us to factors that seem important for both moral and practical reasons. Though we need not ignore the fact that he has acted foolishly in the past—indeed, this may, for example, be a perfectly legitimate consideration in determining how much of a contribution he should be required to make to cover the cost of the rescue effort—it would be both foolish and rather bizarre to focus exclusively on his one poor decision.
A similar set of conclusions emerges in the policy realm if we avoid the temptation of focusing on the individual level and start to think about aggregate outcomes instead. Around the world, government agencies tasked with helping welfare recipients back into work now spend a lot of their time assessing whether or not they are doing enough to find a job. But this may be more useful in coming to a judgment about individuals—and providing a bureaucratic justification for striking them off the welfare rolls—than in creating a true incentive for them to find employment.
In fact, a recent field experiment in England suggests that this punitive focus on responsibility may even be counterproductive. Like similar agencies elsewhere, the job centers in Loughton, Essex, have been asking applicants what they have done over the past two weeks to procure a job. In the experiment, agents were instead trained to partner with jobseekers in designing a concrete plan of action for the future. Working together, they devised specific steps the applicant would undertake in the next two weeks. The results were striking. Not only did the number of jobseekers who managed to procure employment increase significantly; interestingly, employees of the job center itself also reported much higher levels of job satisfaction. Broadening the focus of social policy from individuals’ past actions to the specific interventions that might help them take control of their lives in the future can make a real difference.
Why Denial Won’t Help
In a true society of equals, the state would tender help in a spirit of respect. In such a society, the institutions of the welfare state might be organized in such a way as to convey a message of solidarity: “I can see that you are in need of help,” an imaginary, well-meaning bureaucrat might say. “As a fellow citizen, you deserve our compassion. Hopefully, this will help you get back on your feet.”
Today, we are moving further and further away from such a society. Any offer of assistance is now conditional on an applicant’s ability to demonstrate that he needs help for reasons beyond his control. This has three perverse consequences. First, it means that anybody who seeks assistance has to undergo what Jonathan Wolff has aptly called “shameful revelation”: To prove that he did not act irresponsibly, an applicant for welfare benefits has to answer highly intrusive questions. Second, it gives rise to what Yeheskel Hasenfeld has named the “paradox of the welfare state”: Put off by the prospect of a painful interrogation, many citizens who have genuine claims to assistance will abstain from pursuing their rights. And third, even people who successfully jump through all these humiliating hoops are treated as less than equals: Having proven that there is nothing they could possibly have done to avoid needing to ask for welfare entitlements, they are condescendingly offered a form of charity reserved for those so hapless and devoid of marketable talent that no amount of effort will be enough to save them. A focus on personal responsibility turns fellow citizens with a just claim to assistance into social inferiors to whom we can, at best, extend our charity.
This goes for thinkers who locate themselves on the left end of the political spectrum nearly as much as for those who see themselves on the right. Indeed, it is striking to what extent left-wing critics of personal responsibility have failed to call the key normative premise of the contemporary discourse about responsibility into doubt. Instead of pointing out that we might owe fellow citizens assistance even when they have made bad choices, or that we need to focus on structural factors that have nothing to do with individual culpability, they have tried to shift the debate to what they considered more hospitable terrain: They began to argue that most people who find themselves in need cannot be held responsible for their fate. (This tendency has been expressed in its purest form among post-Marxist critics of capitalism like G. A. Cohen; but it is surprisingly prominent even in the thought of liberal or even left-libertarian philosophers like Philippe van Parijs.)
While egalitarian thinkers once cared about the overall distribution of goods in a society, they have increasingly come to believe that there is nothing pernicious about inequalities that flow directly from our past choices. A growing crop of so-called “luck egalitarians,” for example, believe that the very definition of justice consists in ensuring that any (and only) material differences between individuals that are due to factors outside their control be fully compensated. If you have much more than I do because of sheer good luck—for example, because you inherited a substantial amount of money from your great-aunt—then this is a manifest injustice. But if you have more money than I do because we made different choices—for example, because I squandered all of my money on climbing mountains—then this is perfectly just; even though directing additional resources to me would narrow the wealth gap between us, it would upset “true” equality.
The upshot of this way of thinking about justice and equality is that the question of what does or does not constitute a genuine choice becomes absolutely crucial. And since many left-wingers do fervently seek to preserve social programs that help the poor, they have increasingly justified their favored policies by what I call a “denial of responsibility”: Over the past decades, a lot of their philosophical and political effort has been directed toward pointing out that most people are not responsible for most of their choices. Is somebody addicted to drugs? Their genes are to blame. Is somebody out of work? Discrimination is at fault. Has somebody amassed a lot of debt? If only they had inherited more money . . .
This denial of responsibility starts from intuitive assumptions. It seems obvious, for example, that an inheritance is a matter of luck. But in trying to identify all the things that are, strictly speaking, beyond our control, the denial of responsibility quickly gets into territory that is far less intuitive. Aren’t our talents just a matter of luck? After all, it is surely beyond our control whether we are born with a high or a low IQ. And might closer inspection not reveal that some of the choices we seemingly make are themselves the result of factors for which we cannot claim true responsibility? After all, the real reason why you put more effort into your business than I do may be that you were raised by more hard-working parents.
Most philosophers in this tradition quickly conclude that we need radical redistribution. While they accept that we can forfeit our claim to assistance if we have made bad choices, for example, they doubt that the poor can ever be responsible for their fate. And yet, it is hardly surprising that this response to the predominance of personal responsibility has proven politically toothless. Many people are willing to concede that social advantages bestowed upon the children of the rich are unfair: If somebody is able to attend a better college or to start a business thanks to their parents’ riches, this clearly gives them an unfair advantage over children who are excluded from such opportunities. But they quickly balk when philosophers take this idea to what they consider to be its logical end point: It is, for example, unlikely that ordinary voters will ever brook the idea that we should not be allowed to profit from our propensity toward hard work simply because our parents helped to imbue that virtue in us, or that armed robbers shouldn’t be held responsible for the violent acts they commit because they had a bad childhood. If we are to overcome the pernicious consequences of a punitive focus on personal responsibility, we therefore need to challenge the current way of thinking about blame and desert in a more radical manner.
A Positive Notion of Responsibility
Responsibility does have value. In fact, a world without responsibility would be nothing short of dystopian: Because we could never think that we are taking responsibility for ourselves, it would deny us a sense of control over our own fate. Because we could never think that we are taking responsibility for others, it would leave us incapable of expressing what is so significant about the people and the projects that fill our lives with meaning. Perhaps most importantly, because we could never think of other human beings as full agents capable of responsibility for their own actions, it would ultimately leave us bereft of some of the deepest human bonds, making a mockery of the meaning of friendship and even love. So the solution to the age of responsibility cannot be to reject the idea of responsibility altogether. Instead, we need to reclaim and redefine the concept, making it part of a larger vision for how the state can empower its citizens.
Personal responsibility has shrunk to a narrow, punitive core. Figuring out just how responsibility could, instead, play a positive, multi-faceted role in our politics and society is a large undertaking. Here, I can only offer three initial pointers about the way in which a positive notion of responsibility might allow us to broaden our political imagination—and revive the original promise of the welfare state.
One of the stranger things about our current way of thinking about personal responsibility is that it turns the welfare state into a mere handmaiden for realizing an abstract notion of justice. As mentioned, the idea, roughly, is that there are those who are deserving (the responsible) and those who are undeserving (the irresponsible). The role of the welfare state is to reward the deserving and to punish the undeserving. John Rawls would have called this mode of reasoning “pre-political”: It takes a strong notion of individual desert as its starting point, and then presses political institutions into the service of making the world accord more closely with this pre-existing conception of justice.
In a first step, we should turn this reasoning on its head by adopting a “political” mode of reasoning. Instead of starting with some abstract notion of individual desert, we should begin by asking what larger political purposes our institutions serve. What were the institutions we created actually meant to accomplish? Is it the purpose of the welfare state to reduce suffering or to help the economy grow, to stabilize the political system or to create the social bases for a society of equals (or perhaps all of the above)?
Once we have an account of the values we seek to pursue as a political community, we can start to think about how to treat particular individuals in light of these commitments. The answer might not always turn out to be unambiguous, of course. But even then, one of the big advantages of this approach is that it captures the most significant considerations, allowing us to see what is really at stake in difficult situations. It allows us to see, for example, that the question of how generous we should be toward people who have made some bad choice in their lives should turn not on some extremely intricate metaphysical questions about whether or not they are truly responsible for having acted as they did—but rather on much more straightforward, and important, questions such as whether we want to prioritize avoiding unnecessary suffering or expanding the economy.
Second, this allows us a more nuanced way to think about whether to make aspects of the welfare state conditional. A positive conception of responsibility would reject the idea that we should withhold welfare benefits from people who have made bad choices in the past because their supposedly irresponsible behavior has now made them undeserving of our assistance. But, at the same time, it would recognize that our welfare state institutions are designed to serve real, important purposes—and that we should design those institutions in such a way that they can effectively pursue those goals.
Whether or not those rules should include strong forms of conditionality depends on context. A welfare state whose primary purpose it is to avert unnecessary suffering should be more circumspect about making assistance conditional than one that is mostly geared toward growing the economy. By the same token, a rich country that can easily afford some additional welfare payments has less reason to impose such rules than a poor country that needs every extra dollar to pursue equally pressing goals that might otherwise remain unachievable. This indeterminacy is as it should be: The point here is not to legislate a one-size-fits-all answer; it is to recover a vocabulary that allows us to think through the inevitable trade-offs in a way that captures the most pertinent values that are at stake.
A final insight, meanwhile, is likely to apply in virtually all contexts. Most people are keen to exercise personal responsibility. They want to have a sense of control over their fate and to care for others—to shape their own lives and to entwine them with the well-being of somebody, or something, larger than themselves. So, while a lot of the recent reforms of the welfare state have been designed on an adversarial model—in which legislators seek to create strong disincentives for indolence and bureaucrats are tasked with sanctioning offenders—policymakers should embrace a collaborative one instead. The mother in Texas who must spend her time clipping job ads to prove her worthiness at an unemployment office; the Florida man who must pee in a cup to receive food stamps; the student who has to scour the grocery store in Maine to find which foods have been deemed food-stamp eligible: These individuals have ironically been stripped of their ability to be responsible adults, instead living their lives within bumpers created by legislators.
A key task of the welfare state is to empower as many people as possible to exercise real agency in the world. Legislators should therefore seek to ensure that more people gain access to the material and educational resources they need to lead a life of responsibility. Meanwhile, bureaucrats involved in administering the welfare state should reconceive of themselves as partners who can help citizens in their pursuit of shared goals.
Once upon a time, responsibility was a noble ideal. It invoked a sense of duty and a striving for meaning. It called upon people to exercise solidarity toward their fellow citizens and to entwine their fate with something larger than themselves. There is something distinctly old-fashioned about all of this: Undoubtedly, it cannot be reimported wholesale into our very different era. And yet, there is no reason why we should not be able to recover the core of its appeal. Understood in a positive manner, personal responsibility is a deeply resonant value that promises to empower the “lowly” and to reconcile us to our political world. As such, it can play an important role in revitalizing our impoverished political and institutional imagination.
TWENTY YEARS AGO, Dwight Macdonald published a series of articles in Politics on the responsibility of peoples and, specifically, the responsibility of intellectuals. I read them as an undergraduate, in the years just after the war, and had occasion to read them again a few months ago. They seem to me to have lost none of their power or persuasiveness. Macdonald is concerned with the question of war guilt. He asks the question: To what extent were the German or Japanese people responsible for the atrocities committed by their governments? And, quite properly, he turns the question back to us: To what extent are the British or American people responsible for the vicious terror bombings of civilians, perfected as a technique of warfare by the Western democracies and reaching their culmination in Hiroshima and Nagasaki, surely among the most unspeakable crimes in history. To an undergraduate in 1945-46—to anyone whose political and moral consciousness had been formed by the horrors of the 1930s, by the war in Ethiopia, the Russian purge, the “China Incident,” the Spanish Civil War, the Nazi atrocities, the Western reaction to these events and, in part, complicity in them—these questions had particular significance and poignancy.
With respect to the responsibility of intellectuals, there are still other, equally disturbing questions. Intellectuals are in a position to expose the lies of governments, to analyze actions according to their causes and motives and often hidden intentions. In the Western world, at least, they have the power that comes from political liberty, from access to information and freedom of expression. For a privileged minority, Western democracy provides the leisure, the facilities, and the training to seek the truth lying hidden behind the veil of distortion and misrepresentation, ideology and class interest, through which the events of current history are presented to us. The responsibilities of intellectuals, then, are much deeper than what Macdonald calls the “responsibility of peoples,” given the unique privileges that intellectuals enjoy.
The issues that Macdonald raised are as pertinent today as they were twenty years ago. We can hardly avoid asking ourselves to what extent the American people bear responsibility for the savage American assault on a largely helpless rural population in Vietnam, still another atrocity in what Asians see as the “Vasco da Gama era” of world history. As for those of us who stood by in silence and apathy as this catastrophe slowly took shape over the past dozen years—on what page of history do we find our proper place? Only the most insensible can escape these questions. I want to return to them, later on, after a few scattered remarks about the responsibility of intellectuals and how, in practice, they go about meeting this responsibility in the mid-1960s.
IT IS THE RESPONSIBILITY of intellectuals to speak the truth and to expose lies. This, at least, may seem enough of a truism to pass over without comment. Not so, however. For the modern intellectual, it is not at all obvious. Thus we have Martin Heidegger writing, in a pro-Hitler declaration of 1933, that “truth is the revelation of that which makes a people certain, clear, and strong in its action and knowledge”; it is only this kind of “truth” that one has a responsibility to speak. Americans tend to be more forthright. When Arthur Schlesinger was asked by The New York Times in November, 1965, to explain the contradiction between his published account of the Bay of Pigs incident and the story he had given the press at the time of the attack, he simply remarked that he had lied; and a few days later, he went on to compliment the Times for also having suppressed information on the planned invasion, in “the national interest,” as this term was defined by the group of arrogant and deluded men of whom Schlesinger gives such a flattering portrait in his recent account of the Kennedy Administration. It is of no particular interest that one man is quite happy to lie in behalf of a cause which he knows to be unjust; but it is significant that such events provoke so little response in the intellectual community—for example, no one has said that there is something strange in the offer of a major chair in the humanities to a historian who feels it to be his duty to persuade the world that an American-sponsored invasion of a nearby country is nothing of the sort. And what of the incredible sequence of lies on the part of our government and its spokesmen concerning such matters as negotiations in Vietnam? The facts are known to all who care to know. The press, foreign and domestic, has presented documentation to refute each falsehood as it appears. 
But the power of the government’s propaganda apparatus is such that the citizen who does not undertake a research project on the subject can hardly hope to confront government pronouncements with fact.1
The deceit and distortion surrounding the American invasion of Vietnam is by now so familiar that it has lost its power to shock. It is therefore useful to recall that although new levels of cynicism are constantly being reached, their clear antecedents were accepted at home with quiet toleration. It is a useful exercise to compare Government statements at the time of the invasion of Guatemala in 1954 with Eisenhower’s admission—to be more accurate, his boast—a decade later that American planes were sent “to help the invaders” (New York Times, October 14, 1965). Nor is it only in moments of crisis that duplicity is considered perfectly in order. “New Frontiersmen,” for example, have scarcely distinguished themselves by a passionate concern for historical accuracy, even when they are not being called upon to provide a “propaganda cover” for ongoing actions. For example, Arthur Schlesinger (New York Times, February 6, 1966) describes the bombing of North Vietnam and the massive escalation of military commitment in early 1965 as based on a “perfectly rational argument”:
so long as the Vietcong thought they were going to win the war, they obviously would not be interested in any kind of negotiated settlement.
The date is important. Had this statement been made six months earlier, one could attribute it to ignorance. But this statement appeared after the UN, North Vietnamese, and Soviet initiatives had been front-page news for months. It was already public knowledge that these initiatives had preceded the escalation of February 1965 and, in fact, continued for several weeks after the bombing began. Correspondents in Washington tried desperately to find some explanation for the startling deception that had been revealed. Chalmers Roberts, for example, wrote in the Boston Globe on November 19 with unconscious irony:
[late February, 1965] hardly seemed to Washington to be a propitious moment for negotiations [since] Mr. Johnson…had just ordered the first bombing of North Vietnam in an effort to bring Hanoi to a conference table where the bargaining chips on both sides would be more closely matched.
Coming at that moment, Schlesinger’s statement is less an example of deceit than of contempt—contempt for an audience that can be expected to tolerate such behavior with silence, if not approval.2
TO TURN TO SOMEONE closer to the actual formation and implementation of policy, consider some of the reflections of Walt Rostow, a man who, according to Schlesinger, brought a “spacious historical view” to the conduct of foreign affairs in the Kennedy administration.3 According to his analysis, the guerrilla warfare in Indo-China in 1946 was launched by Stalin,4 and Hanoi initiated the guerrilla war against South Vietnam in 1958 (The View from the Seventh Floor, pp. 39 and 152). Similarly, the Communist planners probed the “free world spectrum of defense” in Northern Azerbaijan and Greece (where Stalin “supported substantial guerrilla warfare”—ibid., pp. 36 and 148), operating from plans carefully laid in 1945. And in Central Europe, the Soviet Union was not “prepared to accept a solution which would remove the dangerous tensions from Central Europe at the risk of even slowly staged corrosion of Communism in East Germany” (ibid., p. 156).
It is interesting to compare these observations with studies by scholars actually concerned with historical events. The remark about Stalin’s initiating the first Vietnamese war in 1946 does not even merit refutation. As to Hanoi’s purported initiative of 1958, the situation is more clouded. But even government sources5 concede that in 1959 Hanoi received the first direct reports of what Diem referred to6 as his own Algerian war and that only after this did they lay their plans to involve themselves in this struggle. In fact, in December, 1958, Hanoi made another of its many attempts—rebuffed once again by Saigon and the United States—to establish diplomatic and commercial relations with the Saigon government on the basis of the status quo.7 Rostow offers no evidence of Stalin’s support for the Greek guerrillas; in fact, though the historical record is far from clear, it seems that Stalin was by no means pleased with the adventurism of the Greek guerrillas, who, from his point of view, were upsetting the satisfactory post-war imperialist settlement.8
Rostow’s remarks about Germany are more interesting still. He does not see fit to mention, for example, the Russian notes of March-April, 1952, which proposed unification of Germany under internationally supervised elections, with withdrawal of all troops within a year, if there was a guarantee that a reunified Germany would not be permitted to join a Western military alliance.9 And he has also momentarily forgotten his own characterization of the strategy of the Truman and Eisenhower administrations: “to avoid any serious negotiation with the Soviet Union until the West could confront Moscow with German rearmament within an organized European framework, as a fait accompli”10—to be sure, in defiance of the Potsdam agreements.
But most interesting of all is Rostow’s reference to Iran. The facts are that there was a Russian attempt to impose by force a pro-Soviet government in Northern Azerbaijan that would grant the Soviet Union access to Iranian oil. This was rebuffed by superior Anglo-American force in 1946, at which point the more powerful imperialism obtained full rights to Iranian oil for itself, with the installation of a pro-Western government. We recall what happened when, for a brief period in the early 1950s, the only Iranian government with something of a popular base experimented with the curious idea that Iranian oil should belong to the Iranians. What is interesting, however, is the description of Northern Azerbaijan as part of “the free world spectrum of defense.” It is pointless, by now, to comment on the debasement of the phrase “free world.” But by what law of nature does Iran, with its resources, fall within Western dominion? The bland assumption that it does is most revealing of deep-seated attitudes toward the conduct of foreign affairs.
IN ADDITION to this growing lack of concern for truth, we find, in recent published statements, a real or feigned naiveté about American actions that reaches startling proportions. For example, Arthur Schlesinger, according to the Times, February 6, 1966, characterized our Vietnamese policies of 1954 as “part of our general program of international goodwill.” Unless intended as irony, this remark shows either a colossal cynicism, or the inability, on a scale that defies measurement, to comprehend elementary phenomena of contemporary history. Similarly, what is one to make of the testimony of Thomas Schelling before the House Foreign Affairs Committee, January 27, 1965, in which he discusses two great dangers if all Asia “goes Communist”?11 First, this would exclude “the United States and what we call Western civilization from a large part of the world that is poor and colored and potentially hostile.” Second, “a country like the United States probably cannot maintain self-confidence if just about the greatest thing it ever attempted, namely to create the basis for decency and prosperity and democratic government in the underdeveloped world, had to be acknowledged as a failure or as an attempt that we wouldn’t try again.” It surpasses belief that a person with even a minimal acquaintance with the record of American foreign policy could produce such statements.
It surpasses belief, that is, unless we look at the matter from a more historical point of view, and place such statements in the context of the hypocritical moralism of the past; for example, of Woodrow Wilson, who was going to teach the Latin Americans the art of good government, and who wrote (1902) that it is “our peculiar duty” to teach colonial peoples “order and self-control…[and]…the drill and habit of law and obedience….” Or of the missionaries of the 1840s, who described the hideous and degrading opium wars as “the result of a great design of Providence to make the wickedness of men subserve his purposes of mercy toward China, in breaking through her wall of exclusion, and bringing the empire into more immediate contact with western and Christian nations.” Or, to approach the present, of A.A. Berle, who, in commenting on the Dominican intervention, has the impertinence to attribute the problems of the Caribbean countries to imperialism—Russian imperialism.12
AS A FINAL EXAMPLE of this failure of skepticism, consider the remarks of Henry Kissinger in his concluding remarks at the Harvard-Oxford television debate on America’s Vietnam policies. He observed, rather sadly, that what disturbs him most is that others question not our judgment, but our motives—a remarkable comment by a man whose professional concern is political analysis, that is, analysis of the actions of governments in terms of motives that are unexpressed in official propaganda and perhaps only dimly perceived by those whose acts they govern. No one would be disturbed by an analysis of the political behavior of the Russians, French, or Tanzanians questioning their motives and interpreting their actions by the long-range interests concealed behind their official rhetoric. But it is an article of faith that American motives are pure, and not subject to analysis (see note 1). Although it is nothing new in American intellectual history—or, for that matter, in the general history of imperialist apologia—this innocence becomes increasingly distasteful as the power it serves grows more dominant in world affairs, and more capable, therefore, of the unconstrained viciousness that the mass media present to us each day. We are hardly the first power in history to combine material interests, great technological capacity, and an utter disregard for the suffering and misery of the lower orders. The long tradition of naiveté and self-righteousness that disfigures our intellectual history, however, must serve as a warning to the third world, if such a warning is needed, as to how our protestations of sincerity and benign intent are to be interpreted.
The basic assumptions of the “New Frontiersmen” should be pondered carefully by those who look forward to the involvement of academic intellectuals in politics. For example, I have referred above to Arthur Schlesinger’s objections to the Bay of Pigs invasion, but the reference was imprecise. True, he felt that it was a “terrible idea,” but “not because the notion of sponsoring an exile attempt to overthrow Castro seemed intolerable in itself.” Such a reaction would be the merest sentimentality, unthinkable to a tough-minded realist. The difficulty, rather, was that it seemed unlikely that the deception could succeed. The operation, in his view, was ill-conceived but not otherwise objectionable.13 In a similar vein, Schlesinger quotes with approval Kennedy’s “realistic” assessment of the situation resulting from Trujillo’s assassination:
There are three possibilities in descending order of preference: a decent democratic regime, a continuation of the Trujillo regime or a Castro regime. We ought to aim at the first, but we really can’t renounce the second until we are sure that we can avoid the third [p. 769].
The reason why the third possibility is so intolerable is explained a few pages later (p. 774): “Communist success in Latin America would deal a much harder blow to the power and influence of the United States.” Of course, we can never really be sure of avoiding the third possibility; therefore, in practice, we will always settle for the second, as we are now doing in Brazil and Argentina, for example.14
Or consider Walt Rostow’s views on American policy in Asia.15 The basis on which we must build this policy is that “we are openly threatened and we feel menaced by Communist China.” To prove that we are menaced is of course unnecessary, and the matter receives no attention; it is enough that we feel menaced. Our policy must be based on our national heritage and our national interests. Our national heritage is briefly outlined in the following terms: “Throughout the nineteenth century, in good conscience Americans could devote themselves to the extension of both their principles and their power on this continent,” making use of “the somewhat elastic concept of the Monroe doctrine” and, of course, extending “the American interest to Alaska and the mid-Pacific islands…. Both our insistence on unconditional surrender and the idea of post-war occupation…represented the formulation of American security interests in Europe and Asia.” So much for our heritage. As to our interests, the matter is equally simple. Fundamental is our “profound interest that societies abroad develop and strengthen those elements in their respective cultures that elevate and protect the dignity of the individual against the state.” At the same time, we must counter the “ideological threat,” namely “the possibility that the Chinese Communists can prove to Asians by progress in China that Communist methods are better and faster than democratic methods.” Nothing is said about those people in Asian cultures to whom our “conception of the proper relation of the individual to the state” may not be the uniquely important value, people who might, for example, be concerned with preserving the “dignity of the individual” against concentrations of foreign or domestic capital, or against semi-feudal structures (such as Trujillo-type dictatorships) introduced or kept in power by American arms. 
All of this is flavored with allusions to “our religious and ethical value systems” and to our “diffuse and complex concepts” which are to the Asian mind “so much more difficult to grasp” than Marxist dogma, and are so “disturbing to some Asians” because of “their very lack of dogmatism.”
Such intellectual contributions as these suggest the need for a correction to De Gaulle’s remark, in his Memoirs, about the American “will to power, cloaking itself in idealism.” By now, this will to power is not so much cloaked in idealism as it is drowned in fatuity. And academic intellectuals have made their unique contribution to this sorry picture.
LET US, HOWEVER, RETURN to the war in Vietnam and the response that it has aroused among American intellectuals. A striking feature of the recent debate on Southeast Asian policy has been the distinction that is commonly drawn between “responsible criticism,” on the one hand, and “sentimental,” or “emotional,” or “hysterical” criticism, on the other. There is much to be learned from a careful study of the terms in which this distinction is drawn. The “hysterical critics” are to be identified, apparently, by their irrational refusal to accept one fundamental political axiom, namely that the United States has the right to extend its power and control without limit, insofar as is feasible. Responsible criticism does not challenge this assumption, but argues, rather, that we probably can’t “get away with it” at this particular time and place.
A distinction of this sort seems to be what Irving Kristol, for example, has in mind in his analysis of the protest over Vietnam policy (Encounter, August, 1965). He contrasts the responsible critics, such as Walter Lippmann, the Times, and Senator Fulbright, with the “teach-in movement.” “Unlike the university protesters,” he points out, “Mr. Lippmann engages in no presumptuous suppositions as to ‘what the Vietnamese people really want’—he obviously doesn’t much care—or in legalistic exegesis as to whether, or to what extent, there is ‘aggression’ or ‘revolution’ in South Vietnam. His is a realpolitik point of view; and he will apparently even contemplate the possibility of a nuclear war against China in extreme circumstances.” This is commendable, and contrasts favorably, for Kristol, with the talk of the “unreasonable, ideological types” in the teach-in movement, who often seem to be motivated by such absurdities as “simple, virtuous ‘anti-imperialism,’” who deliver “harangues on ‘the power structure,’” and who even sometimes stoop so low as to read “articles and reports from the foreign press on the American presence in Vietnam.” Furthermore, these nasty types are often psychologists, mathematicians, chemists, or philosophers (just as, incidentally, those most vocal in protest in the Soviet Union are generally physicists, literary intellectuals, and others remote from the exercise of power), rather than people with Washington contacts, who, of course, realize that “had they a new, good idea about Vietnam, they would get a prompt and respectful hearing” in Washington.
I am not interested here in whether Kristol’s characterization of protest and dissent is accurate, but rather in the assumptions on which it rests. Is the purity of American motives a matter that is beyond discussion, or that is irrelevant to discussion? Should decisions be left to “experts” with Washington contacts—even if we assume that they command the necessary knowledge and principles to make the “best” decision, will they invariably do so? And, a logically prior question, is “expertise” applicable—that is, is there a body of theory and of relevant information, not in the public domain, that can be applied to the analysis of foreign policy or that demonstrates the correctness of present actions in some way that psychologists, mathematicians, chemists, and philosophers are incapable of comprehending? Although Kristol does not examine these questions directly, his attitude presupposes answers, answers which are wrong in all cases. American aggressiveness, however it may be masked in pious rhetoric, is a dominant force in world affairs and must be analyzed in terms of its causes and motives. There is no body of theory or significant body of relevant information, beyond the comprehension of the layman, which makes policy immune from criticism. To the extent that “expert knowledge” is applied to world affairs, it is surely appropriate—for a person of any integrity, quite necessary—to question its quality and the goals it serves. These facts seem too obvious to require extended discussion.
A CORRECTIVE to Kristol’s curious belief in the Administration’s openness to new thinking about Vietnam is provided by McGeorge Bundy in a recent issue of Foreign Affairs (January, 1967). As Bundy correctly observes, “on the main stage…the argument on Viet Nam turns on tactics, not fundamentals,” although, he adds, “there are wild men in the wings.” On stage center are, of course, the President (who in his recent trip to Asia had just “magisterially reaffirmed” our interest “in the progress of the people across the Pacific”) and his advisers, who deserve “the understanding support of those who want restraint.” It is these men who deserve the credit for the fact that “the bombing of the North has been the most accurate and the most restrained in modern warfare”—a solicitude which will be appreciated by the inhabitants, or former inhabitants of Nam Dinh and Phu Ly and Vinh. It is these men, too, who deserve the credit for what was reported by Malcolm Browne as long ago as May, 1965:
In the South, huge sectors of the nation have been declared “free bombing zones,” in which anything that moves is a legitimate target. Tens of thousands of tons of bombs, rockets, napalm and cannon fire are poured into these vast areas each week. If only by the laws of chance, bloodshed is believed to be heavy in these raids.
Fortunately for the developing countries, Bundy assures us, “American democracy has no taste for imperialism,” and “taken as a whole, the stock of American experience, understanding, sympathy and simple knowledge is now much the most impressive in the world.” It is true that “four-fifths of all the foreign investing in the world is now done by Americans” and that “the most admired plans and policies…are no better than their demonstrable relation to the American interest”—just as it is true, so we read in the same issue of Foreign Affairs, that the plans for armed action against Cuba were put into motion a few weeks after Mikoyan visited Havana, “invading what had so long been an almost exclusively American sphere of influence.” Unfortunately, such facts as these are often taken by unsophisticated Asian intellectuals as indicating a “taste for imperialism.” For example, a number of Indians have expressed their “near exasperation” at the fact that “we have done everything we can to attract foreign capital for fertilizer plants, but the American and the other Western private companies know we are over a barrel, so they demand stringent terms which we just cannot meet” (Christian Science Monitor, November 26), while “Washington…doggedly insists that deals be made in the private sector with private enterprise” (ibid., December 5).16 But this reaction, no doubt, simply reveals, once again, how the Asian mind fails to comprehend the “diffuse and complex concepts” of Western thought.
IT MAY BE USEFUL to study carefully the “new, good ideas about Vietnam” that are receiving a “prompt and respectful hearing” in Washington these days. The US Government Printing Office is an endless source of insight into the moral and intellectual level of this expert advice. In its publications one can read, for example, the testimony of Professor David N. Rowe, Director of Graduate Studies in International Relations at Yale University, before the House Committee on Foreign Affairs (see note 11). Professor Rowe proposes (p. 266) that the United States buy all surplus Canadian and Australian wheat, so that there will be mass starvation in China. These are his words:
Mind you, I am not talking about this as a weapon against the Chinese people. It will be. But that is only incidental. The weapon will be a weapon against the Government because the internal stability of that country cannot be sustained by an unfriendly Government in the face of general starvation.
Professor Rowe will have none of the sentimental moralism that might lead one to compare this suggestion with, say, the Ostpolitik of Hitler’s Germany.17 Nor does he fear the impact of such policies on other Asian nations, for example, Japan. He assures us, from his “very long acquaintance with Japanese questions,” that “the Japanese above all are people who respect power and determination.” Hence “they will not be so much alarmed by American policy in Vietnam that takes off from a position of power and intends to seek a solution based upon the imposition of our power upon local people that we are in opposition to.” What would disturb the Japanese is “a policy of indecision, a policy of refusal to face up to the problems [in China and Vietnam] and to meet our responsibilities there in a positive way,” such as the way just cited. A conviction that we were “unwilling to use the power that they know we have” might “alarm the Japanese people very intensely and shake the degree of their friendly relations with us.” In fact, a full use of American power would be particularly reassuring to the Japanese, because they have had a demonstration “of the tremendous power in action of the United States…because they have felt our power directly.” This is surely a prime example of the healthy, “realpolitik point of view” that Irving Kristol so much admires.
But, one may ask, why restrict ourselves to such indirect means as mass starvation? Why not bombing? No doubt this message is implicit in the remarks to the same committee of the Reverend R.J. de Jaegher, Regent of the Institute of Far Eastern Studies, Seton Hall University, who explains that like all people who have lived under Communism, the North Vietnamese “would be perfectly happy to be bombed to be free” (p. 345).
Of course, there must be those who support the Communists. But this is really a matter of small concern, as the Hon. Walter Robertson, Assistant Secretary of State for Far Eastern Affairs from 1953-59, points out in his testimony before the same committee. He assures us that “The Peiping regime…represents something less than 3 per cent of the population” (p. 402).
Consider, then, how fortunate the Chinese Communist leaders are, compared to the leaders of the Vietcong, who, according to Arthur Goldberg (New York Times, February 6, 1966), represent about “one-half of one percent of the population of South Vietnam,” that is, about one-half the number of new Southern recruits for the Vietcong during 1965, if we can credit Pentagon statistics.18
In the face of such experts as these, the scientists and philosophers of whom Kristol speaks would clearly do well to continue to draw their circles in the sand.
HAVING SETTLED THE ISSUE of the political irrelevance of the protest movement, Kristol turns to the question of what motivates it—more generally, what has made students and junior faculty “go left,” as he sees it, amid general prosperity and under liberal, Welfare State administrations. This, he notes, “is a riddle to which no sociologist has as yet come up with an answer.” Since these young people are well-off, have good futures, etc., their protest must be irrational. It must be the result of boredom, of too much security, or something of this sort.
Other possibilities come to mind. It may be, for example, that as honest men the students and junior faculty are attempting to find out the truth for themselves rather than ceding the responsibility to “experts” or to government; and it may be that they react with indignation to what they discover. These possibilities Kristol does not reject. They are simply unthinkable, unworthy of consideration. More accurately, these possibilities are inexpressible; the categories in which they are formulated (honesty, indignation) simply do not exist for the tough-minded social scientist.
IN THIS IMPLICIT DISPARAGEMENT of traditional intellectual values, Kristol reflects attitudes that are fairly widespread in academic circles. I do not doubt that these attitudes are in part a consequence of the desperate attempt of the social and behavioral sciences to imitate the surface features of sciences that really have significant intellectual content. But they have other sources as well. Anyone can be a moral individual, concerned with human rights and problems; but only a college professor, a trained expert, can solve technical problems by “sophisticated” methods. Ergo, it is only problems of the latter sort that are important or real. Responsible, non-ideological experts will give advice on tactical questions; irresponsible, “ideological types” will “harangue” about principle and trouble themselves over moral issues and human rights, or over the traditional problems of man and society, concerning which “social and behavioral science” has nothing to offer beyond trivialities. Obviously, these emotional, ideological types are irrational, since, being well-off and having power in their grasp, they shouldn’t worry about such matters.
At times this pseudo-scientific posing reaches levels that are almost pathological. Consider the phenomenon of Herman Kahn, for example. Kahn has been both denounced as immoral and lauded for his courage. By people who should know better, his On Thermonuclear War has been described “without qualification…[as]…one of the great works of our time” (Stuart Hughes). The fact of the matter is that this is surely one of the emptiest works of our time, as can be seen by applying to it the intellectual standards of any existing discipline, by tracing some of its “well-documented conclusions” to the “objective studies” from which they derive, and by following the line of argument, where detectable. Kahn proposes no theories, no explanations, no factual assumptions that can be tested against their consequences, as do the sciences he is attempting to mimic. He simply suggests a terminology and provides a facade of rationality. When particular policy conclusions are drawn, they are supported only by ex cathedra remarks for which no support is even suggested (e.g., “The civil defense line probably should be drawn somewhere below $5 billion annually” to keep from provoking the Russians—why not $50 billion, or $5.00?). What is more, Kahn is quite aware of this vacuity; in his more judicious moments he claims only that “there is no reason to believe that relatively sophisticated models are more likely to be misleading than the simpler models and analogies frequently used as an aid to judgment.” For those whose humor tends towards the macabre, it is easy to play the game of “strategic thinking” à la Kahn, and to prove what one wishes. For example, one of Kahn’s basic assumptions is that
an all-out surprise attack in which all resources are devoted to counter-value targets would be so irrational that, barring an incredible lack of sophistication or actual insanity among Soviet decision makers, such an attack is highly unlikely.
A simple argument proves the opposite. Premise 1: American decision-makers think along the lines outlined by Herman Kahn. Premise 2: Kahn thinks it would be better for everyone to be red than for everyone to be dead. Premise 3: if the Americans were to respond to an all-out countervalue attack, then everyone would be dead. Conclusion: the Americans will not respond to an all-out countervalue attack, and therefore it should be launched without delay. Of course, one can carry the argument a step further. Fact: the Russians have not carried out an all-out countervalue attack. It follows that they are not rational. If they are not rational, there is no point in “strategic thinking.” Therefore,….
Of course this is all nonsense, but nonsense that differs from Kahn’s only in the respect that the argument is of slightly greater complexity than anything to be discovered in his work. What is remarkable is that serious people actually pay attention to these absurdities, no doubt because of the facade of tough-mindedness and pseudo-science.
IT IS A CURIOUS and depressing fact that the “anti-war movement” falls prey all too often to similar confusions. In the fall of 1965, for example, there was an International Conference on Alternative Perspectives on Vietnam, which circulated a pamphlet to potential participants stating its assumptions. The plan was to set up study groups in which three “types of intellectual tradition” will be represented: (1) area specialists; (2) “social theory, with special emphasis on theories of the international system, of social change and development, of conflict and conflict resolution, or of revolution”; (3) “the analysis of public policy in terms of basic human values, rooted in various theological, philosophical and humanist traditions.” The second intellectual tradition will provide “general propositions, derived from social theory and tested against historical, comparative, or experimental data”; the third “will provide the framework out of which fundamental value questions can be raised and in terms of which the moral implications of societal actions can be analyzed.” The hope was that “by approaching the questions [of Vietnam policy] from the moral perspectives of all great religions and philosophical systems, we may find solutions that are more consistent with fundamental human values than current American policy in Vietnam has turned out to be.”
In short, the experts on values (i.e., spokesmen for the great religions and philosophical systems) will provide fundamental insights on moral perspectives, and the experts on social theory will provide general empirically validated propositions and “general models of conflict.” From this interplay, new policies will emerge, presumably from application of the canons of scientific method. The only debatable issue, it seems to me, is whether it is more ridiculous to turn to experts in social theory for general well-confirmed propositions, or to the specialists in the great religions and philosophical systems for insights into fundamental human values.
There is much more that can be said about this topic, but, without continuing, I would simply like to emphasize that, as is no doubt obvious, the cult of the experts is both self-serving, for those who propound it, and fraudulent. Obviously, one must learn from social and behavioral science whatever one can; obviously, these fields should be pursued as seriously as possible. But it will be quite unfortunate, and highly dangerous, if they are not accepted and judged on their merits and according to their actual, not pretended, accomplishments. In particular, if there is a body of theory, well-tested and verified, that applies to the conduct of foreign affairs or the resolution of domestic or international conflict, its existence has been kept a well-guarded secret. In the case of Vietnam, if those who feel themselves to be experts have access to principles or information that would justify what the American government is doing in that unfortunate country, they have been singularly ineffective in making this fact known. To anyone who has any familiarity with the social and behavioral sciences (or the “policy sciences”), the claim that there are certain considerations and principles too deep for the outsider to comprehend is simply an absurdity, unworthy of comment.
WHEN WE CONSIDER the responsibility of intellectuals, our basic concern must be their role in the creation and analysis of ideology. And, in fact, Kristol’s contrast between the unreasonable ideological types and the responsible experts is formulated in terms that immediately bring to mind Daniel Bell’s interesting and influential “The End of Ideology,” an essay which is as important for what it leaves unsaid as for its actual content.19 Bell presents and discusses the Marxist analysis of ideology as a mask for class interest, quoting Marx’s well-known description of the belief of the bourgeoisie “that the special conditions of its emancipation are the general conditions through which alone modern society can be saved and the class struggle avoided.” He then argues that the age of ideology is ended, supplanted, at least in the West, by a general agreement that each issue must be settled in its own terms, within the framework of a Welfare State in which, presumably, experts in the conduct of public affairs will have a prominent role. Bell is quite careful, however, to characterize the precise sense of “ideology” in which “ideologies are exhausted.” He is referring to ideology only as “the conversion of ideas into social levers,” to ideology as “a set of beliefs, infused with passion,…[which] …seeks to transform the whole of a way of life.” The crucial words are “transform” and “convert into social levers.” Intellectuals in the West, he argues, have lost interest in converting ideas into social levers for the radical transformation of society. Now that we have achieved the pluralistic society of the Welfare State, they see no further need for a radical transformation of society; we may tinker with our way of life here and there, but it would be wrong to try to modify it in any significant way. With this consensus of intellectuals, ideology is dead.
There are several striking facts about Bell’s essay. First, he does not point out the extent to which this consensus of the intellectuals is self-serving. He does not relate his observation that, by and large, intellectuals have lost interest in “transforming the whole of a way of life” to the fact that they play an increasingly prominent role in running the Welfare State; he does not relate their general satisfaction with the Welfare State to the fact that, as he observes elsewhere, “America has become an affluent society, offering place…and prestige…to the onetime radicals.” Secondly, he offers no serious argument to show that intellectuals are somehow “right” or “objectively justified” in reaching the consensus to which he alludes, with its rejection of the notion that society should be transformed. Indeed, although Bell is fairly sharp about the empty rhetoric of the “new left,” he seems to have a quite utopian faith that technical experts will be able to cope with the few problems that still remain; for example, the fact that labor is treated as a commodity, and the problems of “alienation.”
It seems fairly obvious that the classical problems are very much with us; one might plausibly argue that they have even been enhanced in severity and scale. For example, the classical paradox of poverty in the midst of plenty is now an ever-increasing problem on an international scale. Whereas one might conceive, at least in principle, of a solution within national boundaries, a sensible idea of transforming international society to cope with vast and perhaps increasing human misery is hardly likely to develop within the framework of the intellectual consensus that Bell describes.
THUS IT WOULD SEEM NATURAL to describe the consensus of Bell’s intellectuals in somewhat different terms from his. Using the terminology of the first part of his essay, we might say that the Welfare State technician finds justification for his special and prominent social status in his “science,” specifically, in the claim that social science can support a technology of social tinkering on a domestic or international scale. He then takes a further step, ascribing in a familiar way a universal validity to what is in fact a class interest: he argues that the special conditions on which his claim to power and authority are based are, in fact, the only general conditions by which modern society can be saved; that social tinkering within a Welfare State framework must replace the commitment to the “total ideologies” of the past, ideologies which were concerned with a transformation of society. Having found his position of power, having achieved security and affluence, he has no further need for ideologies that look to radical change. The scholar-expert replaces the “free-floating intellectual” who “felt that the wrong values were being honored, and rejected the society,” and who has now lost his political role (now, that is, that the right values are being honored).
Conceivably, it is correct that the technical experts who will (or hope to) manage the “industrial society” will be able to cope with the classical problems without a radical transformation of society. It is conceivably true that the bourgeoisie was right in regarding the special conditions of its emancipation as the only general conditions by which modern society would be saved. In either case, an argument is in order, and skepticism is justified when none appears.
Within the same framework of general utopianism, Bell goes on to pose the issue between Welfare State scholar-experts and third-world ideologists in a rather curious way. He points out, quite correctly, that there is no issue of Communism, the content of that doctrine having been “long forgotten by friends and foes alike.” Rather, he says,
the question is an older one: whether new societies can grow by building democratic institutions and allowing people to make choices—and sacrifices—voluntarily, or whether the new elites, heady with power, will impose totalitarian means to transform their societies.
THE QUESTION is an interesting one. It is odd, however, to see it referred to as “an older one.” Surely he cannot be suggesting that the West chose the democratic way—for example, that in England during the industrial revolution, the farmers voluntarily made the choice of leaving the land, giving up cottage industry, becoming an industrial proletariat, and voluntarily decided, within the framework of the existing democratic institutions, to make the sacrifices that are graphically described in the classic literature on nineteenth-century industrial society. One may debate the question whether authoritarian control is necessary to permit capital accumulation in the underdeveloped world, but the Western model of development is hardly one that we can point to with any pride. It is perhaps not surprising to find Walt Rostow referring to “the more humane processes [of industrialization] that Western values would suggest” (An American Policy in Asia). Those who have a serious concern for the problems that face backward countries, and for the role that advanced industrial societies might, in principle, play in development and modernization, must use somewhat more care in interpreting the significance of the Western experience.
Returning to the quite appropriate question, whether “new societies can grow by building democratic institutions” or only by totalitarian means, I think that honesty requires us to recognize that this question must be directed more to American intellectuals than to third-world ideologists. The backward countries have incredible, perhaps insurmountable problems, and few available options; the United States has a wide range of options, and has the economic and technological resources, though, evidently, neither the intellectual nor moral resources, to confront at least some of these problems. It is easy for an American intellectual to deliver homilies on the virtues of freedom and liberty, but if he is really concerned about, say, Chinese totalitarianism or the burdens imposed on the Chinese peasantry in forced industrialization, then he should face a task that is infinitely more important and challenging—the task of creating, in the United States, the intellectual and moral climate, as well as the social and economic conditions, that would permit this country to participate in modernization and development in a way commensurate with its material wealth and technical capacity. Large capital gifts to Cuba and China might not succeed in alleviating the authoritarianism and terror that tend to accompany early stages of capital accumulation, but they are far more likely to have this effect than lectures on democratic values. It is possible that even without “capitalist encirclement” in its various manifestations, the truly democratic elements in revolutionary movements—in some instances, soviets and collectives—might be undermined by an “elite” of bureaucrats and technical intelligentsia. But it is almost certain that capitalist encirclement itself, which all revolutionary movements now have to face, will guarantee this result. The lesson, for those who are concerned to strengthen the democratic, spontaneous, and popular elements in developing societies, is quite clear. 
Lectures on the two-party system, or even on the really substantial democratic values that have been in part realized in Western society, are a monstrous irrelevance, given the effort required to raise the level of culture in Western society to the point where it can provide a “social lever” for both economic development and the development of true democratic institutions in the third world—and, for that matter, at home.
A GOOD CASE CAN BE MADE for the conclusion that there is indeed something of a consensus among intellectuals who have already achieved power and affluence, or who sense that they can achieve them by “accepting society” as it is and promoting the values that are “being honored” in this society. It is also true that this consensus is most noticeable among the scholar-experts who are replacing the free-floating intellectuals of the past. In the university, these scholar-experts construct a “value-free technology” for the solution of technical problems that arise in contemporary society,20 taking a “responsible stance” towards these problems, in the sense noted earlier. This consensus among the responsible scholar-experts is the domestic analogue to that proposed, internationally, by those who justify the application of American power in Asia, whatever the human cost, on the grounds that it is necessary to contain the “expansion of China” (an “expansion” which is, to be sure, hypothetical for the time being)21—that is, to translate from State Department Newspeak, on the grounds that it is essential to reverse the Asian nationalist revolutions or, at least, to prevent them from spreading. The analogy becomes clear when we look carefully at the ways in which this proposal is formulated. With his usual lucidity, Churchill outlined the general position in a remark to his colleague of the moment, Joseph Stalin, at Teheran in 1943:
The government of the world must be entrusted to satisfied nations, who wished nothing more for themselves than what they had. If the world-government were in the hands of hungry nations there would always be danger. But none of us had any reason to seek for anything more…. Our power placed us above the rest. We were like the rich men dwelling at peace within their habitations.
For a translation of Churchill’s biblical rhetoric into the jargon of contemporary social science, one may turn to the testimony of Charles Wolf, Senior Economist of the Rand Corporation, at the Congressional Committee Hearings cited earlier:
I am dubious that China’s fears of encirclement are going to be abated, eased, relaxed in the long-term future. But I would hope that what we do in Southeast Asia would help to develop within the Chinese body politic more of a realism and willingness to live with this fear than to indulge it by support for liberation movements, which admittedly depend on a great deal more than external support…the operational question for American foreign policy is not whether that fear can be eliminated or substantially alleviated, but whether China can be faced with a structure of incentives, of penalties and rewards, of inducements that will make it willing to live with this fear.
The point is further clarified by Thomas Schelling: “There is growing experience, which the Chinese can profit from, that although the United States may be interested in encircling them, may be interested in defending nearby areas from them, it is, nevertheless, prepared to behave peaceably if they are.”
In short, we are prepared to live peaceably in our—to be sure, rather extensive—habitations. And, quite naturally, we are offended by the undignified noises from the servants’ quarters. If, let us say, a peasant-based revolutionary movement tries to achieve independence from foreign powers and the domestic structures they support, or if the Chinese irrationally refuse to respond properly to the schedule of reinforcement that we have prepared for them—if they object to being encircled by the benign and peace-loving “rich men” who control the territories on their borders as a natural right—then, evidently, we must respond to this belligerence with appropriate force.
IT IS THIS MENTALITY that explains the frankness with which the United States Government and its academic apologists defend the American refusal to permit a political settlement in Vietnam at a local level, a settlement based on the actual distribution of political forces. Even government experts freely admit that the NLF is the only “truly mass-based political party in South Vietnam”22; that the NLF had “made a conscious and massive effort to extend political participation, even if it was manipulated, on the local level so as to involve the people in a self-contained, self-supporting revolution” (p. 374); and that this effort had been so successful that no political groups, “with the possible exception of the Buddhists, thought themselves equal in size and power to risk entering into a coalition, fearing that if they did the whale would swallow the minnow” (p. 362). Moreover, they concede that until the introduction of overwhelming American force, the NLF had insisted that the struggle “should be fought out at the political level and that the use of massed military might was in itself illegitimate…. The battleground was to be the minds and loyalties of the rural Vietnamese, the weapons were to be ideas” (pp. 91-92; cf. also pp. 93, 99-108, 155f.); and, correspondingly, that until mid-1964, aid from Hanoi “was largely confined to two areas—doctrinal know-how and leadership personnel” (p. 321). Captured NLF documents contrast the enemy’s “military superiority” with their own “political superiority” (p. 106), thus fully confirming the analysis of American military spokesmen who define our problem as how, “with considerable armed force but little political power, [to] contain an adversary who has enormous political force but only modest military power.”23
Similarly, the most striking outcome of both the Honolulu conference in February and the Manila conference in October was the frank admission by high officials of the Saigon government that “they could not survive a ‘peaceful settlement’ that left the Vietcong political structure in place even if the Vietcong guerilla units were disbanded,” that “they are not able to compete politically with the Vietnamese Communists” (Charles Mohr, New York Times, February 11, 1966, italics mine). Thus, Mohr continues, the Vietnamese demand a “pacification program” which will have as “its core…the destruction of the clandestine Vietcong political structure and the creation of an iron-like system of government political control over the population.” And from Manila, the same correspondent, on October 23, quotes a high South Vietnamese official as saying that:
Frankly, we are not strong enough now to compete with the Communists on a purely political basis. They are organized and disciplined. The non-Communist nationalists are not—we do not have any large, well-organized political parties and we do not yet have unity. We cannot leave the Vietcong in existence.
Officials in Washington understand the situation very well. Thus Secretary Rusk has pointed out that “if the Vietcong come to the conference table as full partners they will, in a sense, have been victorious in the very aims that South Vietnam and the United States are pledged to prevent” (January 28, 1966). Max Frankel reported from Washington in the Times on February 18, 1966, that
Compromise has had no appeal here because the Administration concluded long ago that the non-Communist forces of South Vietnam could not long survive in a Saigon coalition with Communists. It is for that reason—and not because of an excessively rigid sense of protocol—that Washington has steadfastly refused to deal with the Vietcong or recognize them as an independent political force.
In short, we will—magnanimously—permit Vietcong representatives to attend negotiations only if they will agree to identify themselves as agents of a foreign power and thus forfeit the right to participate in a coalition government, a right which they have now been demanding for a half-dozen years. We well know that in any representative coalition, our chosen delegates could not last a day without the support of American arms. Therefore, we must increase American force and resist meaningful negotiations, until the day when a client government can exert both military and political control over its own population—a day which may never dawn, for as William Bundy has pointed out, we could never be sure of the security of a Southeast Asia “from which the Western presence was effectively withdrawn.” Thus if we were to “negotiate in the direction of solutions that are put under the label of neutralization,” this would amount to capitulation to the Communists.24 According to this reasoning, then, South Vietnam must remain, permanently, an American military base.
All of this is, of course, reasonable, so long as we accept the fundamental political axiom that the United States, with its traditional concern for the rights of the weak and downtrodden, and with its unique insight into the proper mode of development for backward countries, must have the courage and the persistence to impose its will by force until such time as other nations are prepared to accept these truths—or simply, to abandon hope.
IF IT IS THE RESPONSIBILITY of the intellectual to insist upon the truth, it is also his duty to see events in their historical perspective. Thus one must applaud the insistence of the Secretary of State on the importance of historical analogies, the Munich analogy, for example. As Munich showed, a powerful and aggressive nation with a fanatic belief in its manifest destiny will regard each victory, each extension of its power and authority, as a prelude to the next step. The matter was very well put by Adlai Stevenson, when he spoke of “the old, old route whereby expansive powers push at more and more doors, believing they will open until, at the ultimate door, resistance is unavoidable and major war breaks out.” Herein lies the danger of appeasement, as the Chinese tirelessly point out to the Soviet Union—which, they claim, is playing Chamberlain to our Hitler in Vietnam. Of course, the aggressiveness of liberal imperialism is not that of Nazi Germany, though the distinction may seem academic to a Vietnamese peasant who is being gassed or incinerated. We do not want to occupy Asia; we merely wish, to return to Mr. Wolf, “to help the Asian countries progress toward economic modernization, as relatively ‘open’ and stable societies, to which our access, as a country and as individual citizens, is free and comfortable.” The formulation is appropriate. Recent history shows that it makes little difference to us what form of government a country has so long as it remains an “open society,” in our peculiar sense of this term—that is, a society that remains open to American economic penetration or political control. If it is necessary to approach genocide in Vietnam to achieve this objective, then this is the price we must pay in defense of freedom and the rights of man.
In pursuing the aim of helping other countries to progress toward open societies, with no thought of territorial aggrandizement, we are breaking no new ground. In the Congressional Hearings that I cited earlier, Hans Morgenthau aptly describes our traditional policy towards China as one which favors “what you might call freedom of competition with regard to the exploitation of China” (op. cit., p. 128). In fact, few imperialist powers have had explicit territorial ambitions. Thus in 1784, the British Parliament announced: “To pursue schemes of conquest and extension of dominion in India are measures repugnant to the wish, honor, and policy of this nation.” Shortly after this, the conquest of India was in full swing. A century later, Britain announced its intentions in Egypt under the slogan “intervention, reform, withdrawal.” It is obvious which parts of this promise were fulfilled within the next half-century. In 1936, on the eve of hostilities in North China, the Japanese stated their Basic Principles of National Policy. These included the use of moderate and peaceful means to extend her strength, to promote social and economic development, to eradicate the menace of Communism, to correct the aggressive policies of the great powers, and to secure her position as the stabilizing power in East Asia. Even in 1937, the Japanese government had “no territorial designs upon China.” In short, we follow a well-trodden path.
It is useful to remember, incidentally, that the US was apparently quite willing, as late as 1939, to negotiate a commercial treaty with Japan and arrive at a modus vivendi if Japan would “change her attitude and practice towards our rights and interests in China,” as Secretary Hull put it. The bombing of Chungking and the rape of Nanking were unpleasant, it is true, but what was really important was our rights and interests in China, as the responsible, unhysterical men of the day saw quite clearly. It was the closing of the open door by Japan that led inevitably to the Pacific war, just as it is the closing of the open door by “Communist” China itself that may very well lead to the next, and no doubt last, Pacific war.
QUITE OFTEN, THE STATEMENTS of sincere and devoted technical experts give surprising insight into the intellectual attitudes that lie in the background of the latest savagery. Consider, for example, the following comment by the economist Richard Lindholm, in 1959, expressing his frustration over the failure of economic development in “free Vietnam”:
…the use of American aid is determined by how the Vietnamese use their incomes and their savings. The fact that a large portion of the Vietnamese imports financed with American aid are either consumer goods or raw materials used rather directly to meet consumer demands is an indication that the Vietnamese people desire these goods, for they have shown their desire by their willingness to use their piasters to purchase them.25
In short, the Vietnamese people desire Buicks and air-conditioners, rather than sugar refining equipment or road-building machinery, as they have shown by their behavior in a free market. And however much we may deplore their free choice, we must allow the people to have their way. Of course, there are also those two-legged beasts of burden that one stumbles on in the countryside, but as any graduate student of political science can explain, they are not part of a responsible modernizing elite, and therefore have only a superficial biological resemblance to the human race.
In no small measure, it is attitudes like this that lie behind the butchery in Vietnam, and we had better face up to them with candor, or we will find our government leading us towards a “final solution” in Vietnam, and in the many Vietnams that inevitably lie ahead.
Let me finally return to Dwight Macdonald and the responsibility of intellectuals. Macdonald quotes an interview with a death-camp paymaster who burst into tears when told that the Russians would hang him. “Why should they? What have I done?” he asked. Macdonald concludes: “Only those who are willing to resist authority themselves when it conflicts too intolerably with their personal moral code, only they have the right to condemn the death-camp paymaster.” The question, “What have I done?” is one that we may well ask ourselves, as we read each day of fresh atrocities in Vietnam—as we create, or mouth, or tolerate the deceptions that will be used to justify the next defense of freedom.