Atomized
This week’s column is a dispatch from the most recent annual meeting of the American Economic Association, in sunny San Diego. It’s on a not-so-sunny subject: the striking rise in mortality rates among working-age white (non-Hispanic) Americans, especially those without college degrees, due to things like suicide and drug and alcohol abuse. The phenomenon has been labeled “deaths of despair” by two economists, Anne Case and Angus Deaton, and it isn’t hard to understand why. America’s opioid epidemic has made the crisis of mortality far worse than it (most likely) would have been. But as serious as the opioid situation has become, it is part of a broader, longer-lasting social mess which deserves far more study. Life expectancy in America has fallen for three consecutive years, something which hasn’t happened for a century. That is not the sign of a healthy society, in any sense of the word.
As the column notes, the discussion of deaths of despair at the conference included a contribution from Robert Putnam, whose name you may know from his famous book “Bowling Alone”. That book claimed to identify a significant and steady decline in America’s “social capital”. The strength of American civic society, he argued, was weakening, as evidenced by falling participation in community organizations of all sorts. Putnam has a new book out this year expanding on these themes, as the column explains:
“Zoom out, he said, and deaths of despair fit into a longer American cultural narrative. For a range of variables, including income equality, cross-party political collaboration, labour-union membership, community involvement and marriage rates, there was a rise from the beginning of the 20th century into the 1960s, followed by a plateau and decline. (The same arc is found for the use of the word “we” relative to “I” in books published in American English.) It seems possible, Mr Putnam said, that the challenges of the first half of the century, from the power of industrial monopolies to depression and war, prompted a cultural response in which Americans thought and acted more as a group. Over the past half-century, however, they seem to have reverted to a more atomised condition.”
The challenge, in social science, is always to distinguish between trends which are merely correlated and those which are somehow causally related. The challenge becomes all the greater when vague concepts like “culture” are introduced, which is one reason why economists tend to eschew such things. But perhaps you, like me, feel that Putnam is on to something, even if it isn’t precisely clear what. If there were something cultural going on in the background, having a significant effect on American society, what would that feel like?
Economists tend to assume that people make the decisions they make because they are more-or-less rational and mostly self-interested. Whether this assumption is made because it makes the analysis easier or because it’s supposedly how people actually think is often unclear, and may well vary from economist to economist or from time to time. What rational, self-interested decision-making means, to take one example, is that a person considering whether to go to college or not would gather information about things like the cost of university, the probability of completing a degree, the expected value of careers in different degree-requiring fields, the value of the best non-college option, and things of that nature. They would then compare the costs and benefits of different paths forward and decide what to do.
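The decision rule described above can be sketched as a simple expected-value comparison. Here is a minimal illustration in Python; every number in it is invented for the purpose (the tuition, completion probability, salaries and discount rate are all hypothetical, not data):

```python
# A sketch of the "rational, self-interested" college decision described
# above. All dollar figures and probabilities are purely illustrative.

def npv(annual_amount, years, discount_rate, start_year=0):
    """Present value of a constant annual stream."""
    return sum(annual_amount / (1 + discount_rate) ** (start_year + t)
               for t in range(years))

def expected_value_of_college(tuition_per_year, p_complete, grad_earnings,
                              dropout_earnings, discount_rate=0.05):
    # Four years of tuition outlays, then a 40-year career. (Foregone
    # earnings during college show up implicitly, because the non-college
    # path below earns from year zero.)
    cost = npv(tuition_per_year, 4, discount_rate)
    career_if_grad = npv(grad_earnings, 40, discount_rate, start_year=4)
    career_if_dropout = npv(dropout_earnings, 40, discount_rate, start_year=4)
    return (p_complete * career_if_grad
            + (1 - p_complete) * career_if_dropout
            - cost)

def value_of_best_alternative(earnings, discount_rate=0.05):
    # Work for all 44 years instead.
    return npv(earnings, 44, discount_rate)

college = expected_value_of_college(tuition_per_year=20_000, p_complete=0.8,
                                    grad_earnings=70_000,
                                    dropout_earnings=40_000)
alternative = value_of_best_alternative(45_000)
decision = "college" if college > alternative else "work"
```

Under these made-up numbers the college path comes out slightly ahead; but the point of what follows is precisely that almost nobody actually decides this way.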
But is that really how people decide whether to go to college or not? Or where to go and what to study? Of course there are economic considerations involved, like an awareness that college graduates tend to earn more than those without degrees, and the constraints on college investments imposed by a family’s finances. I would venture, however, that the way these concerns are mostly taken into account is through the application of what you might call cultural heuristics. In other words, teenagers and their families make judgments about college which are heavily informed by what they observe others doing—especially respected others whose lives seem both admirable and achievable. The world is a complicated place, and we’re all constantly asking ourselves what we’re supposed to do. We learn what we’re supposed to do by watching others and seeing how various forms of behavior translate into happy or unhappy lives, success or failure, esteem in the community or disdain. In this way, we collectively grope our way toward solutions to various problems. Maybe it would be better if we all sat down individually and undertook a rigorous application of cost-benefit analysis built around our own skill sets. I’m skeptical, but at any rate that’s not how the world works.
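For what it’s worth, the watch-and-imitate process described above is easy to caricature in code. In this toy simulation (every parameter invented for illustration), no agent ever computes costs and benefits; each period, a randomly chosen agent observes another and copies that person’s behavior if their life looks more successful. A slightly better-rewarded way of living spreads through the whole population without anyone optimizing:

```python
# A toy model of cultural heuristics: nobody calculates, everyone imitates.
# All parameters here are invented for illustration.
import random

random.seed(0)

N = 500
# Two ways of living one's life; "A" is slightly more rewarding.
REWARD = {"A": 1.1, "B": 1.0}

# Start with 10% of the population doing A.
agents = ["A" if random.random() < 0.1 else "B" for _ in range(N)]

for _ in range(20_000):
    i, j = random.randrange(N), random.randrange(N)
    # Agent i watches agent j; if j's life looks better, i copies j.
    if REWARD[agents[j]] > REWARD[agents[i]]:
        agents[i] = agents[j]

# After enough watching and copying, the better-rewarded behavior dominates.
share_A = agents.count("A") / N
```

The mechanism is crude, but it captures the claim in the paragraph: good-enough collective answers can emerge from imitation alone.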
Heuristics that are sufficiently strong and long-lived become embedded in culture. I would argue that among middle-class Americans, going to university is first and foremost an element of culture: a decision taken based on things like values and identity rather than calculations of economic cost and benefit. Cultural rules like this can be useful. Suppose that getting a college degree isn’t just good for the individual, but also provides benefits to society as a whole. Maybe graduates, having been exposed to a more cosmopolitan university environment, become more socially tolerant, to invent a potential example. In that case, society as a whole is better off because of the added incentive to go to college provided by culture than it would be if everyone made the college choice based on individual assessments of costs and benefits alone.
On the other hand, culture is persistent. It might change more slowly than technology or the structure of the economy, so some people might continue to attend college even after the economic logic for doing so has faded. Perhaps more relevant to today, people who have made college a part of their identity may resist efforts to change the institution, even when change is badly needed.
What does this all have to do with deaths of despair? I think a lot of the choices that we make, both economic and social, are influenced by cultural rules like this. They’re a big part of why a particular society is the way that it is. These rules shape how we decide to spend our time: do we get involved in community activities or do things that are more personally remunerative? They shape where we live: is it more of a mark of status to become a respected figure in one’s home town or to leave for a giant coastal metropolis? They shape how we judge different sorts of success: is it just about money, or is it about money earned through a prestige job, or is money less relevant than one’s broader contributions to society? We may not have a very good idea how these guidelines come to be established or how they subsequently erode, but we can remain in the dark about their origins and still recognize that they are an extremely important part of a country’s social infrastructure.
Having set all that out, let’s now tell a story. It’s just a story; any resemblance to actual events is completely unintentional. The idea is simply to illustrate how cultural change might affect broad social outcomes. Suppose that somehow a country came to have a certain set of cultural guidelines that might broadly be considered morally oriented and other-regarding—even, in some cases, collectivist. Who knows how it got that way. Maybe a series of national challenges made the emergence of such guidelines imperative for survival. Doesn’t matter; they got them.
Now, people observed the culture around them, and not everyone liked what they saw. Some thought it was stifling and conformist. Some found it worrying because they thought it likely to enable the rise of a tyrannical state. And some disliked it because it undermined economic dynamism: because it placed constraints on individual initiative and ambition and discouraged the accumulation of vast wealth for its own sake. Maybe the detractors built a movement with the intention of changing society, through persuasion and policy, into something which gave individuals more room to seek their fortune. Maybe they enjoyed a fair amount of success, some of which was on the whole welcome.
But cultural change is tricky and perhaps the process of cultural evolution took on a life of its own. Remember, culture forms and persists because we’re all constantly watching each other, learning what sorts of behaviors are rewarded and esteemed. So over time, as outsized individual achievement became a signal of status, and as high incomes and wealth became a marker of individual achievement, people adjusted their ideas about how to live their lives. Certain pro-social verities, certain values and feelings of social obligation, faded. That cultural change might then have directly influenced large-scale processes of social and economic change: by informing patterns of migration among skilled young people, for instance. It would also have been particularly important in the face of large-scale economic shocks: a community which might have responded in one way to an economic or social crisis came instead to respond in another way. Of course other things would have been going on simultaneously; this would have been one factor among many shaping the course of history. But it would have been a meaningful factor, one that people would eventually come to recognize and would struggle to understand. And one, they might worry, which could prove very difficult to reverse.
The really insidious thing about this sort of cultural change, our hypothetical society would come to realize, is that by stripping away the cultural infrastructure that once helped to enable collective action, it made itself very hard indeed to reverse. Group mobilization to undermine the pro-social flavor of culture proved relatively easy to realize, since the culture itself was good at facilitating such things. But in the more atomized culture that subsequently evolved? Well, good luck getting people to cooperate selflessly on grand social projects. So inhabitants of this imagined society might understand that there were costs to the loss of prevailing norms, and that the economy as it had come to behave delivered more socially costly outcomes than expected and fewer socially useful ones. They might imagine all sorts of changes in behavior or policy which could conceivably begin to set things right. But they would be at a total loss as to how to make collective social change happen.
If the story were right, what might a social scientist recommend that this benighted society do? I’m not at all sure. But one might start by encouraging more study of history. If there were once cultural guidelines that were valuable, then they had to come from somewhere, and perhaps an examination of history might reveal the sorts of things that people did to foster pro-social cultural rules. And perhaps also it would be important for people in high-status positions, with lots of social influence, to model different behaviors. Forsake the morally dubious payday. Speak truth to power even when it is personally inconvenient. Focus less on personal brand building and more on taking risks on behalf of those with less power and opportunity. At another panel at the AEA conference I heard Suresh Naidu, a really wonderful economist, describe the old labor-union tactic of recruiting the most charismatic and productive worker on the shop floor first, because once they’re on board the rest of the shop will warm to organization more readily. Of course, there’s nothing stopping the charismatic and economically successful from recruiting themselves to the cause. Except, I suppose, for deeply ingrained cultural ideas about what makes an important and successful person important and successful.
But I’m not sure that’s right. And at any rate it’s just a story. Any resemblance to actual events is completely unintentional.