Treatment of people as ethical animals
People are complex, or why economics should be really difficult and qualitative
It is now part of the dimly remembered ephemera of the distant past, but only a few weeks ago Twitter was all abuzz about fabricated data in a paper by a key figure in cognitive psychology and behavioral economics, Dan Ariely. The shenanigans were first brought to light in a fascinating analysis by the authors of the blog Data Colada. (Here’s The Economist’s recap.) Remember that? It was in August, several billion news cycles ago.
What made this an irresistible, if short-lived, story is the fact that the paper in question was on the subject of honesty, of all things. The authors claimed to find that asking people to sign a form at the top rather than the bottom led people to be more honest in reporting information on that form (the mileage on cars, in this example). Ariely built a whole career as a prominent public intellectual—complete with TED talks, popular books and partnerships with big companies—telling people just-so tales about human behavior based on the results of this paper and others which focused on similar subjects. As Alanis Morissette might say: you live, you learn.
When the story broke, it prompted some gesturing at broader problems with social science research. Examples of outright fraudulence are particularly galling and more common than you might think, but they’re only the tip of the iceberg when it comes to problematic research results. Attempts to replicate the findings of published papers routinely fail, and research results often show signs of p-hacking: cases in which researchers appear either to have plumbed their data for anything that looks significant (rather than stating a hypothesis and testing it) or to have subjected their data to a variety of tests until they find one which delivers the desired result. Replication problems turn up in “hard” sciences, but they’re especially common in behavioral sciences. That’s not ideal, because questionable results in fields like cognitive psychology and behavioral economics have a way of bursting out of the lab and into real world policy debates.
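The p-hacking mechanism is easy to demonstrate. Here is a minimal simulation, with every parameter invented for illustration (20 unrelated outcomes, 100 subjects per group, a simple z-test): a hypothetical researcher reports whichever comparison happens to clear p < 0.05, even though the “treatment” has no real effect on anything.

```python
# A minimal simulation of one form of p-hacking, with made-up parameters:
# a hypothetical researcher measures 20 unrelated outcomes and reports
# whichever comparison happens to come out "significant".
import math
import random
import statistics

random.seed(0)

def p_value(a, b):
    # Two-sample z-test approximation (reasonable at n = 100 per group).
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

trials, hits = 1000, 0
for _ in range(trials):
    for _outcome in range(20):
        control = [random.gauss(0, 1) for _ in range(100)]  # no true effect
        treated = [random.gauss(0, 1) for _ in range(100)]
        if p_value(control, treated) < 0.05:
            hits += 1
            break

print(f"'Significant' finding in {100 * hits / trials:.0f}% of null studies")
```

Each single test has a 5% false-positive rate under the null, but with 20 shots on goal roughly 1 − 0.95^20, about 64%, of these studies will turn up something “significant” to report.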
And so when things like the Ariely affair occur, or when studies finding major problems with replication are published, there often follows a discussion about how research and publication methods should be improved in order to make sure a better quality of science gets done. And many of these suggestions are very sensible; it is astonishing, for instance, that authors of published papers are not always required to make available the data underlying their work.
But I don’t know. Is that really enough? Maybe it’s worth taking a step back to think about what it is we’re doing here. Consider, for example, the theory of mind at work in the paper at the center of the Ariely contretemps:
Even subtle cues that direct attention toward oneself can lead to surprisingly powerful effects on subsequent moral behavior. Signing is one way to activate attention to the self. However, typically, a signature is requested at the end. Building on Duval and Wicklund’s theory of objective self-awareness, we propose and test that signing one’s name before reporting information (rather than at the end) makes morality accessible right before it is most needed, which will consequently promote honest reporting. We propose that with the current practice of signing after reporting information, the “damage” has already been done: immediately after lying, individuals quickly engage in various mental justifications, reinterpretations, and other “tricks” such as suppressing thoughts about their moral standards that allow them to maintain a positive self-image despite having lied. That is, once an individual has lied, it is too late to direct their focus toward ethics through requiring a signature.
This is...not how morality works? No one behaves dishonestly because they forgot to be ethical. One’s sense of what’s right is either powerful enough in a particular context that the possibility of behaving dishonestly is never really in play, or one reasons one’s way around ethical constraints, but one does not temporarily forget that an action is unethical such that a prompt could “make morality accessible right before it is most needed”. It’s kind of an infantilizing, insulting thing to think about people: that they might innocently fib about their car’s mileage, having somehow lost the capacity for moral reasoning, only to be reminded at the end of the form by the signature prompt that yes, lying is wrong, at which point the person engages in a panicked bout of post hoc rationalization before deciding, well, the deed is done. If only morality had been more accessible to me when I needed it most.
It seems strange, or it should: a team of adult human beings—experts in their fields—felt that this was a reasonable way to think about people’s inner lives, designed experiments to see how well their hypothesis performed, and then trumpeted the results as saying something significant about human behavior (sincerely, it would seem, in the case of at least some of the participating authors). We’ve all focused on the fraud, as if the exercise itself weren’t a little bizarre.
But of course, it isn’t unusual for a social science to build itself around absurd models of human decision making. It’s an awkward thing to bring up, not least because economists get so touchy when people raise questions about the field’s approach to things, but economics is built on conceptions of human behavior that are very obviously wrong. People aren’t perfectly rational in the way economists often assume. Economics grew itself a whole subfield as it sought to understand all the many ways in which we fail at rationality (which happens to be the part of economics to which Ariely’s work is adjacent). But more importantly, we’re not hard-nosed utility maximizers, selfishly grabbing as much as we possibly can in every context. And yet, the assumption of utility maximization is built into just about every economic model you can dig up.
There are many economists who recognize that these behavioral assumptions are not realistic descriptions of the way normal people think and act. There are some who work conscientiously to point this out to students and whose work takes account of the problems with these assumptions to the greatest extent consistent with getting papers published in economics journals. But there’s no escaping the fact that the assumption of self-interest is the motive power behind economic analysis. If you look at a textbook like the one produced by the Core team, which is about as good as introductory econ textbooks get, you see that the authors are careful to point out that self-interest is an assumption that’s being made rather than a fact on the ground, and you get a discussion about some of the shortcomings of economic models. But then you get a justification for the assumptions which draws on the bad arguments made by Milton Friedman, and you get the other 99% of the book, which makes use of those wrong assumptions.
Now look, there are reasons to make those assumptions. People often behave according to their own self-interest. But more importantly, simplifying assumptions allow economists to build tractable mathematical models and to use them in ways which can yield useful insights. Indeed, it would be totally fine if models making use of these simplifying assumptions were just one small part of the economist’s analytical toolkit. But they aren’t, and that’s a problem.
Is it though? Does it actually matter? I think so, for a few reasons. Heavy reliance on these models is an obstacle to the development of a clearer understanding of how people actually make decisions, and thus of how the world works. Have a look at a part of Friedman’s argument, quoted in the Core textbook:
Consider the problem of predicting the shots made by an expert billiard player. It seems not at all unreasonable that excellent predictions would be yielded by the hypothesis that the billiard player made his shots as if he knew the complicated mathematical formulas that would give the optimum directions of travel, could estimate accurately by eye the angles, etc., describing the location of the balls, could make lightning calculations from the formulas, and could then make the balls travel in the direction indicated by the formulas.
Our confidence in this hypothesis is not based on the belief that billiard players, even expert ones, can or do go through the process described. It derives rather from the belief that, unless in some way or other they were capable of reaching essentially the same result, they would not in fact be expert billiard players.
Ok, but that’s not how billiards players take their shots. If I were someone whose scholarly focus was understanding what happens in billiards, I think I’d care a lot about how players actually figure out what to do with their cues. That seems like a thing which might prove really useful in attempting to understand what happens when people play billiards and why. After all, it seems important to point out, billiards competitions are not just exhibitions in which calculating machines sink every shot with perfect accuracy. Everything that is interesting about them flows from the fact that humans are strange creatures who are both remarkably adept at a wide range of things and yet very much not mindless calculating machines. And you know, it’s just possible that the process of attempting to understand what’s actually happening when a billiards player plays could provide insight into an entire range of other questions.
People are not merely self-interested, and the fact that they are not merely self-interested is not trivial. It is history altering. The Core textbook teaches Robert Allen’s high-wage hypothesis in explaining the onset of the Industrial Revolution. And look, Allen’s is a really beautiful piece of explanatory economics, and one which has influenced my thinking on a range of subjects. It’s also attractive to economists because it provides a story about the beginnings of industrialization in Britain which works purely in terms of incentives and self-interest: in Britain, coal was cheap and plentiful but labor was relatively expensive, and so self-interested individuals looked for ways to use coal to save on labor costs.
But as neat a story as this is, it doesn’t explain the onset of modern economic growth. A slight difficulty is that several economic historians dispute the idea that Britain’s was a high-wage economy. But beyond that, Allen’s hypothesis takes as given any number of other characteristics of British society which made it a hospitable place to start an Industrial Revolution. He acknowledges that cultural factors may well have mattered, but dismisses them as necessary but not sufficient conditions for industrialization. What really mattered, he says, was the structure of wages and prices in the British economy, which was itself a consequence of Britain’s imperial and commercial history.
As fascinating as Allen’s argument is—and it is worth reading—this is an absurd way to talk about the origins of industrialization. It’s like responding to someone who has asked why a car moves with the answer that someone pressed the accelerator. It might be correct in some sense (although it might not; perhaps someone released the brake while the car was on a hill, or perhaps it was towed), but the answer provided is nowhere near complete and provides what is in some sense the least interesting and useful response to the question. Why was responding to economic incentives a valid thing to do in British society? Why did people respond in the way that they did: rather than launching a war of conquest and bringing home slaves to address a problem of scarce or dear labor, for example, if in fact labor was dear or scarce?
The people who engaged in the actions that began the Industrial Revolution were people, after all, not mindless automatons. They didn’t respond to whatever incentives they faced like a microorganism responds to stimuli. They thought and reasoned. They grappled with self-doubt and wondered what other members of society thought of their particular obsessions. They may well have considered whether there might be profit in a particular course of action, and their assessment of the question surely influenced their decisions. But there is profit in all sorts of things that we routinely opt not to do. There may be profit in hawking ivermectin, and yet many of us opt not to cash in on that opportunity. The difference between a world in which exploiting such possibilities is deemed socially acceptable and one in which it is not can be pretty significant.
Understand: this is among the biggest questions in social science: how it is that we came to escape the old, Malthusian world and find ourselves instead in this strange new one, in which average incomes seem to go up year after year. People’s beliefs and ideas are central to it. How much do economists fail to see when they close themselves off from such considerations? What leads us to believe that some new social revolution, rooted in changing ideas and values, might not loom ahead, which might bring the era of modern economic growth to an end—or unlock new and unimagined economic prosperity?
One could say that analysis based on the assumption of self-interest may not suffice to explain or predict epochal phenomena like the Industrial Revolution, but can be used to say useful things about market transactions within a stable society. That would be a pretty substantial concession, in terms of the explanatory power and prestige of the economics profession. But even this doesn’t work. Society isn’t stable: norms change, technologies change, climate changes. What is permissible in one era is not in another. Neither are there bright and persistent lines between the realm of the purely commercial and all the rest of society. If you want to know when it is reasonable to use analyses which rely on the simplifying assumption of self-interest, you need to know something about how these processes unfold, what forces shape the prevailing values in a particular corner of society and under what conditions they evolve, and why—this is the really big thing—people believe it is ok to behave in a certain way in some cases but not others.
Working to understand these things is, to be clear, way harder than dashing off a lit review, writing down a model, then downloading some publicly available data to estimate it. At the same time, there are plenty of economists out there who already do lots of qualitative work attempting to understand the ins and outs of the topic on which they’ve chosen to focus, but who don’t bother writing papers which make detailed, multi-faceted arguments about why something in society is the way it is, because that’s not how you build a successful career in economics.
What is really disturbing about the heavy reliance on the assumption of self-interest is that it encourages modes of thinking in which people lack agency. The rational utility maximizer is not a thoughtful person. He doesn’t weigh up the advantages to taking a course of action against possible ethical concerns or the consistency of that action with his own conception of himself and what he stands for. He’s a dead-eyed sociopath, whose actions are entirely determined by the material circumstance in which he finds himself. And because he lacks agency, he is there to be manipulated by the very clever people who can write down models explaining his behavior.
You thus wind up with people who are chest-thumping liberals, who loathe Marx and think Critical Race Theory is evil because of its emphasis on the role of structural features of society in shaping outcomes rather than individual reason and agency, signing on to this worldview in which the world works best when people mindlessly pursue their own self-interest. You get authoritarian capitalists who think democracy is bad because people want to place limits on the operation of the market. And you get the soft-touch authoritarians who think they’ve worked out the ways in which the automatons fail to behave as the models suggest they should, and who reckon they should be put in charge of creating “choice architectures” which merely “nudge” people to make better decisions. This, again, is the scandal of Ariely’s dodgy research as much as the data tomfoolery: the idea lurking behind the paper that people are really quite simple creatures whose behavior can be hacked by adjusting where you put the signature box on the form.
The funny thing is that Friedman understood that ideas matter! He believed in the power of persuasion to move people and convince them to change their values, and he succeeded in his efforts to validate self-interest as a virtue. And the lesson his profession seems to have taken from this is not that people are complex, or that prevailing values have big effects on economic activity and deserve intense study, but rather that, yep, it’s good to lean really heavily on models which assume that people are mindless and greedy. As Alanis would say: what a jagged little pill.
Hi Ryan:
Great piece, thanks. A few responses:
1. “authors of published papers are not always required to make available the data underlying their work”
Not just the data! They need to provide the actual analytic mechanism, the software, that they use for the calculations. A replicator cannot be expected to re-create it perfectly based on verbal explanations or even the algebraic formulas in their papers. Detailed implementation issues always arise, and replicators can look directly at how the originators dealt with them — the precise, coded derivations of different measures. Plus errors, of course, Reinhart/Rogoff being the obvious example. Gimme the spreadsheet. Or Stata code *and* the spreadsheets, as in Piketty & Co.’s DINAs, whatever.
2. This all cuts to the demand for “microfoundations.” In most cynical terms, the synonym for that is “post-facto armchair psychological/behavioral justifications for model assumptions about human reaction functions.” Which generally derive their rhetorical weight from the degree to which they seem “obvious.” Making a bit of a leap here, in practice where confused notions of individual vs collective "saving" rule, this means that assumptions which seem obvious to minds steeped in puritanical Calvinism tend to dominate economic theories and models. (Even Marx had a very heavy dose; Minsky even more so. And etc.) Vs models focusing on the observed, emergent behavior of different groups, classes, etc., whatever their microcauses might be.
So (entering the Office of Self-Aggrandizement here), I’d like to bruit the following model as one that completely eschews and refuses to do that post-facto rationalization and justification veiled as microfoundations.
http://www.paecon.net/PAEReview/issue95/Roth95.pdf
Even where there seems to be an ironclad, “obvious” explanation lying on the ground waiting to be picked up: “The bottom 20% turns over its wealth in annual spending six or seven times faster than the top 20% because duh, declining marginal utility.”
Just: that’s what the top/bottom 20% groups *do.*
It’s like modeling the fluid dynamics of water in a whirlpool, or passing through a venturi. Sure, understanding the H2O molecule interactions provides a deep and rich understanding of water’s viscosity. But for the fluid model you just measure the viscosity and Bob’s your uncle.
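In code, the “just measure the viscosity” approach is almost embarrassingly simple. A toy sketch (all numbers invented for illustration; only the six-to-seven-times turnover ratio echoes the text above): each group’s wealth and turnover rate are treated as measured parameters, and aggregate spending falls out with no behavioral story underneath.

```python
# Illustrative sketch of the "measure the viscosity" approach: each
# group's wealth and turnover rate are measured parameters (the numbers
# here are made up, not taken from the linked paper), and aggregate
# spending falls out with no microfoundational story underneath.
groups = {
    # group: (total wealth in dollars, annual spending / wealth)
    "bottom 20%": (1.0e12, 1.80),
    "top 20%":   (50.0e12, 0.27),
}

for name, (wealth, turnover) in groups.items():
    print(f"{name}: annual spending = ${wealth * turnover:.2e}")

ratio = groups["bottom 20%"][1] / groups["top 20%"][1]
print(f"turnover ratio (bottom/top): {ratio:.1f}x")  # the six-to-seven-times gap
```

No claim here about *why* the rates differ, declining marginal utility or otherwise; the rates are simply what the groups do.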
Fully cynical view: The whole microfoundations business was basically a very clever dodge to require puritanical Calvinism in all macroeconomic analysis. Blowing smoke and emitting chaff to distract from and discredit any models in which group norms, cooperation, emergent properties, etc. trump simplistic additive steely-eyed self interest.
Thanks for listening… /rant