Ideology, Irrationality, and Collectively Self-defeating Behavior

Joseph Heath. Constellations: An International Journal of Critical & Democratic Theory. Volume 7, Issue 3, September 2000.

One of the most persistent legacies of Karl Marx and the Young Hegelians has been the centrality of the concept of “ideology” in contemporary social criticism. The concept was introduced in order to account for a very specific phenomenon, viz. the fact that individuals often participate in maintaining and reproducing institutions under which they are oppressed or exploited. In the extreme, these individuals may even actively resist the efforts of anyone who tries to change these institutions on their behalf. Clearly, some explanation needs to be given of how individuals could systematically fail to see where their interests lie, or how they might fail to pursue these interests once these have been made clear to them. This need is often felt with some urgency, since failure to provide such an explanation usually counts as prima facie evidence against the claim that these individuals are genuinely oppressed or exploited in the first place.

There is of course no question that this kind of phenomenon requires a special sort of explanation. Unfortunately, Feuerbach, Marx, and their followers took the fateful turn of attempting to explain these “ideological” effects as a consequence of irrationality on the part of those under their sway. While there are no doubt instances where practices are reproduced without good reason, the ascription of irrationality to agents is an explanatory device whose use carries with it significant costs, both theoretical and practical. In this paper, I will argue that many of the phenomena traditionally grouped together under the category of “ideological effects” can be explained without relinquishing the rationality postulate. Using the example of a collective action problem, I will try to show how agents can rationally engage in patterns of action that are ultimately contrary to their interests, and how they can rationally resist changing these patterns even when the deleterious or self-defeating character of their actions has been pointed out to them.

I think that an approach such as this, one that is sparing in its ascription of irrationality and error, has two principal advantages. First, it allows one to engage in social criticism while minimizing the tendency to insult the intelligence of the people on whose behalf the critical intervention has been initiated. This may reduce the tendency exhibited by some members of these groups to reassert their autonomy precisely by rejecting the critical theory that impugns their rationality. The second major advantage is also practical. The vast majority of oppressive practices, I will argue, are not reproduced because people have false or irrational beliefs. Because of this, simply persuading people to change their beliefs has no tendency to change the underlying mechanism through which the practices are reproduced. Thus the institutions acquire the appearance of being impervious to social criticism: people have been criticizing them for so long, and yet nothing has changed. Correctly diagnosing the mechanism through which they are reproduced has the potential to suggest new strategies for social change.

Ideology and Irrationality

One might begin by asking what the big problem is with the assumption that people are behaving irrationally. After all, everyone knows that people make mistakes, and do things without thinking. If people are acting in a way that is making their own lives miserable, it seems likely that they have fallen into some kind of error. When the peasant rallies to the defense of his feudal lord, or the hostage begins to promote the goals of her kidnappers, we are likely to think that these people are behaving in a muddle-headed way. We may even come up with a name for what has muddled them up, calling it “Christianity,” or “Stockholm syndrome.” When they persist in this behavior, even after having been told that they are suffering from one of these ailments, we begin to think that they are not just mistaken, but that they are in the grip of some deeper form of irrationality. This diagnosis seems fairly intuitive—so what is the big problem?

The problem is the one raised by Donald Davidson in his famous critique of “conceptual schemes.” Davidson’s argument is roughly as follows: there is no fact of the matter about what people mean by what they say. The meaning of their utterances is determined by the best interpretation that hearers confer upon them. However, the meaning that I ascribe to a person’s utterances depends in a crucial way upon the set of beliefs that I take that person to hold true. For instance, when people talk about meeting on “Thursday,” I can only figure out which day they are referring to by assuming that they share with me the belief that today is Tuesday. If I thought they believed that today was Monday, I would start to think that they meant Friday when they said “Thursday.” But since we can only find out what people’s beliefs are by asking them, and since they can only express their beliefs by putting them in the form of sentences that in turn require interpretation, any particular interpretation that we might confer upon a person’s utterances is massively underdetermined by the evidence available to us. There are an infinite number of ways to interpret anyone’s speech, each supported by the ascription of a different set of beliefs.

But then how do we ever understand one another? Davidson argues that all interpretations are constrained by a principle of charity. The best interpretation is the one that ascribes the most reasonable set of beliefs to the speaker, which is to say, the one that maximizes the number of true beliefs the speaker is thought to hold. From the standpoint of the hearer, this means that the best interpretation is the one that is consistent with the highest level of agreement between the speaker and the hearer. This requirement of charity is not a methodological assumption; it is a constitutive principle. To interpret someone is to interpret that person charitably—if you are not interpreting them charitably, then what you are doing simply does not count as interpretation.

To take a real-life example of this principle in action, consider the following episode from the history of ethnography. Lucien Lévy-Bruhl infamously suggested that he had discovered the existence of “pre-logical” cultures. He found that his subjects persistently made contradictory statements, incoherent observations, and generally believed false things. Later generations of ethnographers, of course, returned to these societies somewhat skeptical about this claim, and quickly discovered other ways of interpreting the kind of utterances that had stumped Lévy-Bruhl, interpretations that made the “natives” come out sounding a lot more reasonable. For example, by distinguishing between expressions that were meant literally and those that were meant metaphorically or figuratively, a substantial portion of the “contradictions” Lévy-Bruhl uncovered could be dismissed. The Davidsonian point is that these later interpretations were better than Lévy-Bruhl’s, not because they came closer to what the people “really” meant, but because they were more restrained in their ascription of error. They were right precisely because they made the natives sound reasonable. What other evidence could there be for the correctness of an interpretation?

The more general problem is this: suspending the assumption that people are by-and-large reasonable, and that their beliefs are predominantly true, removes the only constraint that prevents one from interpreting their utterances as meaning anything at all. The problem then is not that one can no longer construct a plausible explanation of their behavior, but that one can construct too many explanations, and it is hard to rule any of them out. This means that the critical theorist can only go so far in ascribing irrationality and error to people. Once she crosses a certain threshold, this ascription of error stops being an “exposé” of their mistakes, and starts to count as evidence against the proposed interpretation of their behavior. It starts to suggest that, rather than having uncovered a massive, all-encompassing ideology, she has simply failed to understand what it is that people are doing. The interpretation that appeals less heavily to ideology then wins, for that very reason—it appeals less heavily to ideology.

Much of the history of critical theory in the twentieth century can be seen as an attempt to work around this problem—to find a way of advancing radical (i.e., uncharitable) social criticism without having it backfire on the critic. Unfortunately, one of the things that has seldom been questioned is the very basic assumption that when people act in a way contrary to their interests, they must somehow be acting irrationally. I would like to suggest that this is often not the case. While people do sometimes make mistakes and get confused, this is more the exception than the norm. I will try to show that individuals often get outcomes they do not want, not because they have chosen wrongly, but because they have chosen instrumentally. Thus greater attention to the structure of interaction reduces the need for a theory of ideology.

Collective Action Problems

The most common error that critical theorists have made, in my view, is to mistake the outcome of a collective action problem for an effect of ideology. Collective action problems arise in situations where agents can best pursue their own goals and projects only by imposing some kind of a cost upon others. The prisoner’s dilemma is the classic example: each suspect can reduce his own expected jail time by turning in his partner. Doing so, however, increases the amount of jail time that his partner must serve. As a result, both suspects turn each other in, and so both serve more jail time than either would have had they remained silent.
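
The structure can be made explicit with a minimal sketch (the sentences below are illustrative numbers, not figures from any actual case). The point is that betraying one’s partner minimizes each suspect’s own jail time no matter what the other does, even though mutual betrayal leaves both worse off than mutual silence.

```python
# Prisoner's dilemma with illustrative sentences (years in jail; lower is better).
# jail[(my_move, partner_move)] gives my sentence.
jail = {
    ("silent", "silent"): 1,   # both stay quiet: light sentence each
    ("silent", "betray"): 10,  # I stay quiet, partner talks: I take the fall
    ("betray", "silent"): 0,   # I talk, partner stays quiet: I walk free
    ("betray", "betray"): 5,   # both talk: heavy sentence each
}

for partner_move in ("silent", "betray"):
    best = min(("silent", "betray"), key=lambda m: jail[(m, partner_move)])
    print(f"If my partner plays {partner_move!r}, my best reply is {best!r}")

# Betrayal is dominant, so both suspects betray and serve 5 years each,
# even though mutual silence would have cost them only 1 year each.
```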

Many interactions involving large numbers of people have precisely the same structure. For example, telephone companies did not always bill their customers for individual calls to directory assistance. Instead, customers paid for the directory assistance service as part of their basic monthly package. The problem with this arrangement is that it generates overuse of the service, since the cost of serving any individual caller is paid by all of the firm’s customers. So individuals who were too lazy to look up a number could get someone else to do it for them, while effectively displacing the cost of this action onto others. But when everyone does this to each other, everyone winds up using more directory service, and paying more for it, than anyone actually wants to. Thus when phone companies switched to charging individuals directly for calls to directory assistance, the volume of calls dropped dramatically. In a trial run in Cincinnati, the imposition of a $0.20 per-call charge reduced the average number of directory assistance calls from 80,000 to 20,000 per month. As a result, average residential telephone rates dropped by $0.65 per month.

The most significant thing about these collective action problems, from the standpoint of critical theory, is that agents often have a hard time getting out of them, even if they realize that they are engaging in collectively self-defeating behavior. The reason is that the mere recognition that the outcome is suboptimal does not change the incentives that each individual has to act in a way that contributes to it. If I am not being charged per call, then even if I realize that I should not overuse directory assistance, my phone bill will not get any lower if I stop. It is only if everyone stops that I will begin to see a difference. But I have no control over what everyone else does (and furthermore, if everyone else stops overusing the service, and I continue to do so, then I am even better off). This is why it is called a collective action problem: in order to change the interaction, everyone has to stop doing what they have been doing (and often, everyone must also believe that everyone is going to stop).
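
A minimal sketch shows why this recognition is idle. The pool of 100 customers and the $0.50 cost of answering a call are hypothetical; only the pooled-billing structure comes from the example above.

```python
# Pooled billing: the cost of every directory assistance call is split
# evenly across all customers, so each caller bears only a sliver of
# the cost of his or her own calls. (All figures are hypothetical.)
N_CUSTOMERS = 100
COST_PER_CALL = 0.50  # what it costs the company to answer one call

def my_bill_increase(my_extra_calls, everyone_elses_extra_calls):
    """My monthly bill rises only by my share of the total extra cost."""
    total_extra_calls = my_extra_calls + everyone_elses_extra_calls
    return total_extra_calls * COST_PER_CALL / N_CUSTOMERS

# If I alone make 10 needless calls, I pay almost none of the cost:
print(my_bill_increase(10, 0))                        # 0.05 (five cents)
# If all 100 of us make 10 needless calls each, we all pay in full:
print(my_bill_increase(10, 10 * (N_CUSTOMERS - 1)))   # 5.00
# And unilateral restraint barely helps; if everyone else keeps calling,
# giving up my 10 calls saves me only five cents:
print(my_bill_increase(0, 10 * (N_CUSTOMERS - 1)))    # 4.95
```

On these made-up numbers, my own restraint is worth five cents a month to me; the rest of the overuse disappears only if everyone stops at once.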

One very good clue that people are stuck in a collective action problem is when everyone knows that there is a problem, but nothing ever changes. For example, it is very common these days to hear complaints about the way a “media circus” develops around certain events or stories, such as the Lewinsky-Clinton scandal, or the O.J. Simpson trial. One of the most commonly criticized characteristics of this pattern is the way that coverage achieves a “saturation” level—the clearest instance being when every major network is showing exactly the same thing, whether it be the O.J. chase or Clinton’s deposition. This is clearly a suboptimal outcome; if one channel is providing 24-hour live coverage of a particular story, then there is no point having the others do the same. The same applies when every news program covers exactly the same five or six stories in its evening broadcast.

In any case, what is interesting about this criticism is that it is not just circulated in the broader public sphere. When the journalists who are actually providing the “saturation” coverage are asked for their views, they also often say that the situation is ridiculous, that there are interesting stories being ignored, etc. In other words, the problem is not that the members of “the media” have mistaken priorities, or a poor understanding of what should be on television. They can see perfectly well what the problems are. The pattern persists because they are stuck in a suboptimal equilibrium. Stations compete with one another for viewers. Given a choice between a small portion of a large audience and a large portion of a small audience, it will often be in the interest of broadcasters to choose the former. When all stations reason the same way, all will provide exactly the same coverage. The result is simply a waste of one or more broadcast frequencies.
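
This reasoning can be sketched as a simple two-station game. The audience shares are hypothetical; all that matters is that half of a large audience exceeds the whole of a small one.

```python
# Two stations choose what to cover. Assume (hypothetically) that the
# big story draws 85% of viewers and a neglected story draws 15%, and
# that stations covering the same story split its audience evenly.
BIG, OTHER = "big story", "other story"
AUDIENCE = {BIG: 85, OTHER: 15}

def share(my_choice, rival_choice):
    """My audience share, given both stations' coverage choices."""
    if my_choice == rival_choice:
        return AUDIENCE[my_choice] / 2
    return AUDIENCE[my_choice]

for rival_choice in (BIG, OTHER):
    best = max((BIG, OTHER), key=lambda c: share(c, rival_choice))
    print(f"If my rival covers the {rival_choice}, I should cover the {best}")

# Covering the big story is dominant (42.5 beats 15, and 85 beats 7.5),
# so both stations duplicate each other's coverage, and total viewership
# falls from 100 (differentiated coverage) to 85.
```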

Thus the mere fact that people know that a certain social change would be in their interest (broadly construed) does not mean that they will have an incentive to do anything about it. I may know that it is in our interest, as telephone ratepayers, to use directory assistance in moderation, but that does not make it in my interest, or your interest, not to do so. In the same way, journalists may recognize that their entire profession loses credibility when they behave as a pack or pursue lurid stories—and so it is not in their interest—but it may still be in the interest of each individual reporter, each individual news organization, to do so (since it is possible to increase one’s share of viewers even when the total number drops—exactly the same logic underlies “negative” campaign ads). From the outside, then, it may look as if people are confused about where their interests lie, that they are in the grip of some ideology. But upon closer examination, they turn out to be quite rational. They may even join the critical theorist in lamenting the sad consequences of their own actions.

This analysis invites us to look back on some of the classic cases of ideology to see if something similar might not be going on there as well. Take the working class, for instance. Once it was decided that workers would be better off under communism than under capitalism, many theorists simply assumed that workers would go out and overthrow the system. The fact that they failed to show up at the barricades was felt to require some explanation. Ideology was the most popular candidate. So Marx suggested that they were the victims of commodity fetishism: they mistook the social relations between individuals for objective relations between things, and so became convinced that the existing economic order was immutable. However, after half a century of Marxist critique, the working class still failed to make a revolution. Theorists began to suspect a deeper, more insidious form of ideology at work. The most popular diagnosis was consumerism: workers had become seduced by the materialistic values of late capitalism, and so failed to support the revolution because of a mistaken belief that they enjoyed living in suburban houses, using labor-saving appliances, eating TV dinners, etc.

These social critics simply failed to see the more obvious explanation. Revolutions are risky business. Setting up picket lines, not to mention barricades, is tiresome, difficult, often cold, and sometimes dangerous. Even if it were in the interests of the working class to bring about a socialist revolution, this does not make it in the interest of each individual worker to help out. There is no point going to the barricades unless thousands of your comrades intend to join you, but if thousands of your comrades are going anyhow, no one will miss you if you stay home. Revolutionary fervor can generate the solidarity needed to overcome this collective action problem in some instances. But in general there is no reason to think that workers will show any more solidarity with one another than phone company customers. And broad segments of the working class have consistently shown themselves willing to free-ride off each other’s collective achievements; this is why unions usually seek legally enforced “closed-shop” arrangements.
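
A sketch of the underlying logic, with hypothetical numbers for the turnout required, the value of success, and the cost of showing up:

```python
# Joining the revolution as a threshold game (all numbers hypothetical).
# The uprising succeeds only if at least THRESHOLD workers show up;
# success benefits every worker, but participation costs each
# participant something regardless of the outcome.
THRESHOLD = 1000
BENEFIT = 100   # value of success, to every worker
COST = 5        # cost of going to the barricades

def my_payoff(i_join, others_joining):
    turnout = others_joining + (1 if i_join else 0)
    benefit = BENEFIT if turnout >= THRESHOLD else 0
    return benefit - (COST if i_join else 0)

# If 5,000 comrades are going anyhow, no one will miss me if I stay home:
print(my_payoff(True, 5000), my_payoff(False, 5000))  # 95 vs. 100
# If only 200 are going, showing up is a pure loss:
print(my_payoff(True, 200), my_payoff(False, 200))    # -5 vs. 0
# My attendance matters only in the unlikely event that I am pivotal:
print(my_payoff(True, 999), my_payoff(False, 999))    # 95 vs. 0
```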

The Myth of the Beauty Myth

Now consider a more controversial example. We often hear the complaint that cosmetics companies, the diet industry, plastic surgeons, and so forth, exploit women. In the mid-1990s, women in the United States spent around $20 billion a year on cosmetics, a sum that could have been used to finance 400,000 day care centers, or 33,000 battered women’s shelters, or fifty women’s universities, and so on. This is clearly a suboptimal outcome. Furthermore, the fact that men (who earn more on average) spend only a fraction of this amount maintaining their appearance, and do not suffer much anxiety over their physical condition, adds insult to injury. The difference is also widely felt to perpetuate a set of gender roles that are disadvantageous to women. Thus feminists have for a long time argued that women need to free themselves from their dependence upon beauty, and the beauty industry.

But what has become most striking about this critique is that even though the vast majority of women accept it, it has little bearing on their personal conduct. Many women are perfectly capable of denouncing the objectifying male gaze and distorted body image, while at the same time counting calories and drinking skinny lattes. This observation has led many feminist theorists to suggest that there must be an even more insidious form of ideology at work. If women understand the structure of their oppression, and they can see how the cosmetics and fashion industries actively exploit them, then they must be out of their minds to drop a hundred dollars on the latest moisturizer. Naomi Wolf basically suggests as much, when she describes how, “to reach the cosmetics counter, [a woman] must pass a deliberately disorienting prism of mirrors, lights and scents that submit her to the ‘sensory overload’ used by hypnotists and cults to encourage suggestibility.” She claims that women experience an “unconscious hallucination,” that female minds have been “colonized,” that women have been “stunned and disoriented” by changing gender roles, etc. In short, they are not acting rationally. How could they be so dumb? The answer is ideology: “Women are ‘so dumb’ because the establishment and its watchdogs share the cosmetics industry’s determination that women are and must remain ‘so dumb.’”

However, the very fact that everyone has heard this critique a hundred times and yet nothing ever changes suggests that what we are dealing with is a collective action problem, not a problem of ideology. This is often overlooked in the case of beauty, because the literature has a tendency to focus on the role of ideals or archetypes in setting the standards. This distracts from the fact that beauty has an inherently competitive structure. Although standards of beauty vary from culture to culture, every culture has some kind of beauty hierarchy. People derive very significant material and social advantages from their position in this hierarchy. As a result, they can significantly improve their quality of life by moving up a few levels. This is where the “archetype” model of beauty proves misleading. The advantages of beauty do not flow to those who are beautiful in some absolute sense, but to those who are more beautiful than those around them. This is what generates the competitive structure: moving up the beauty hierarchy means bumping someone else down.

None of this would be a problem if people were unable to amplify their natural endowments. Unfortunately, cosmetics and plastic surgery make it possible to synthetically reproduce some of the characteristics that are considered beautiful. As a result, people have the ability to buy their way up the hierarchy. This generates a classic collective action problem. Consider the example of face-lifts. Many women seek to make themselves look younger through artificial means. However, how old a person looks is entirely relative. If a woman “looks 50,” it is only because, when compared to other 50-year-old women, she looks about the same. This means that when a 50-year-old woman gets a face-lift that makes her look 40, the action can be described in one of two ways. In a sense, she has made herself look younger. But in another sense, all she has done is make all the other 50-year-old women in the population look a little older. These women may then be motivated to get a face-lift just to retain their position. If this leads all 50-year-old women to go out and get face-lifts, then their behavior will be perfectly self-defeating. They will be right back where they started—all looking like 50-year-old women—except that now they will be paying a lot of money to look that way.
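
A sketch makes the relativity explicit. The ten-year effect of a face-lift and its price are hypothetical; the point is that apparent age is judged against the cohort, so universal face-lifts cancel out.

```python
# Apparent age is positional: a woman "looks 50" only because, next to
# other 50-year-olds, she looks about the same. (Figures hypothetical.)
ACTUAL_AGE = 50
LIFT_EFFECT = 10      # years a face-lift subtracts from apparent age
LIFT_COST = 15_000    # hypothetical price of the procedure

def apparent_age(got_lift):
    return ACTUAL_AGE - (LIFT_EFFECT if got_lift else 0)

def how_old_i_look(i_lifted, cohort_lifted):
    """My apparent age, judged against the cohort that defines 'looking 50'."""
    return ACTUAL_AGE + apparent_age(i_lifted) - apparent_age(cohort_lifted)

print(how_old_i_look(False, False))  # 50: no one lifts, all look their age
print(how_old_i_look(True, False))   # 40: I lift alone and gain ten years
print(how_old_i_look(False, True))   # 60: the cohort lifts, I fall behind
print(how_old_i_look(True, True))    # 50: everyone lifts, everyone pays
print(f"Price of looking 50 again: ${LIFT_COST:,} each")
```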

This is clearly the dynamic at work in a number of different areas (as any resident of California can attest). Many women would be glad to stop wearing makeup—as long as every other woman stopped too. What they are not willing to do is stop unilaterally, because the private cost would outweigh the private benefits. Wearing makeup is like standing up to get a better view at a ballgame. You may be able to see better, but only by blocking the person behind you. As a result, once one person stands up, soon everyone else does too. Naturally they would all be more comfortable sitting, and they would be able to see just as well. But sitting down while everyone else stays standing is hardly an option.

Conclusion

These examples are all of cases where a collective action problem generates the illusion that some kind of ideology is at work. There are many other social interaction patterns that can generate the same effect — e.g., Terrence Kelly’s examples of agents who conform to institutional norms that disadvantage them in order to maintain trust relations reveal a similar phenomenon. Thus the present analysis does not constitute a comprehensive theory. My goal has simply been to make a contribution to the project of weaning social critics from their attachment to the concept of ideology.

The stated motivation for this project has been the concern that, through an excessively uncharitable attitude toward their subjects, critical theorists have had a tendency to undermine the credibility of their own views. In the background, however, has been another concern. Many social critics succumb to a sort of tacit cultural determinism. This is reflected in the widespread assumption that social practices directly reflect people’s values, or that they express some set of beliefs about how one should act. If this were the case, then the key to changing social institutions would indeed be to change people’s values or beliefs. Unfortunately, while some social practices are directly “patterned” by the cultural system, many more are reproduced through very loosely constrained strategic action. These interactions are integrated only indirectly, and so the associated outcomes may not reflect any specific set of values or beliefs. In this case, social criticism alone will not change anything.

The more serious problem for critical theory arises as follows: after having presented the criticism, and having it widely accepted, the critic expects to see some kind of social change. When none is forthcoming, the critic begins to suspect, not that there is a practical problem preventing implementation of the desired improvements, but that the criticism itself was too superficial, that it did not get to the root of the problem. Perhaps the ideology was more pervasive than originally suspected. Perhaps the original criticism was insufficiently radical, because it used concepts that were in general circulation, and hence complicit in the ideological system. Perhaps the solution is to deconstruct these concepts and form an entirely new set.

Once this line of thinking has been engaged, the critical theory becomes increasingly baroque, increasingly obscure, and, of course, increasingly unlikely to change anything. This can generate a vicious cycle of theoretical self-radicalization, in which critics respond to the increasing irrelevance of their theories by further radicalizing them, making the entire apparatus more and more remote from the concerns and the vocabulary of everyday life. The goal of this paper has been to suggest one way in which critical theorists can engage in social criticism without generating this tendency to price themselves out of the market. Greater attention to the structure of social interaction, the practical mechanisms through which undesirable interaction patterns are reproduced, has the potential to generate more useful theoretical interventions.