On the Value of “Unrealistic” Research

Or, How to Read Minds with a Few Hundred Bucks and Some Dice

Peter Kriss
4 min read · Jun 3, 2016


People lie, people cheat, and people steal. But they are also courageous and selfless — in ways that are sometimes subtle and surprising.

Take whistleblowers for example. Daniel Ellsberg knew that the White House was lying to the public and to Congress about the true objective of the Vietnam War. He then leaked the Pentagon Papers, knowingly putting himself at significant personal risk.

But how often do people in such positions do the opposite and turn a blind eye (or worse)? The administrators at Penn State and Baylor who covered up indefensible crimes within their football programs are two prominent examples.

I don’t have a general theory of dishonesty or an explanation for why altruism exists, but I do want to tell you about one laboratory experiment and why I think it matters.

Paying for fairness

Assuming you’re not a neoclassical economist, you won’t be surprised to hear that people are willing to pay money to punish people who treat them unfairly. A bit more surprising are studies showing that even when the interaction is one-time only and anonymity is guaranteed, bystanders are often willing to pay their own money to punish a wrongdoer.

The question my colleagues and I wanted to answer was whether these are really different types of preferences. Do bystanders who pay a cost to punish unfairness really want to do it in the same way that victims themselves seem to want to?

We aimed to find out, but faced a significant challenge: we can't observe people's thoughts directly, so we needed to create a situation in which the underlying reason for acting leads to measurably different behavior.

Our solution was to let people hide behind a veil of randomness. Let me explain.

The lab experiment

We had people come into a computer lab to participate in a study in which the amount of money they earned would depend on their decisions. We randomly and anonymously assigned the subjects to groups, so they knew they were interacting with someone else in the room but would never know who.

We gave the first person (the divider) the chance to split a $10 payment between themselves and a second person (the receiver). In the event that they gave less than half — especially if they gave nothing at all — we wanted to see whether the receiver would pay money in order to decrease the earnings of the divider.

In a separate version of the experiment, the receiver was powerless, but a third player (the bystander) who observed the decision had the same punishment option to give up some of their money to punish the divider.

Now, the interesting part: we gave each receiver or bystander who chose to punish the divider a six-sided die in a cup and told them to roll it several times to make sure it was fair, then to roll it once more and enter the result into the computer. If the result was an even number, whatever punishment decision they had made would count. If it was an odd number, no punishment would be possible; it would be just as if they had decided not to punish, and they would pay nothing.

The key is that no one else would observe the die roll, so they could report whatever result they wanted, and the veil of randomness protected them from appearing not to care about unfairness. They could even fool themselves: if they didn't like the outcome, they could call it another practice roll and roll again.
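To see the logic of the design in miniature, here is a small Python sketch of how the veil works. It is not from the paper, and the all-or-nothing "resolute" and "reluctant" types are my own simplification for illustration. The point is that an honest reporter's even-roll rate hovers near 50%, someone who truly wants the punishment to count drifts above it, and someone punishing only out of obligation drifts below it.

```python
import random

def reported_even(motive, p_even=0.5):
    """One punisher's reported roll under the 'veil of randomness'.

    motive: 'honest'    -> report whatever comes up
            'resolute'  -> wants the punishment to count, so reports even
            'reluctant' -> punishes only out of obligation, so reports odd
    (Hypothetical all-or-nothing caricatures, just to show why the
    design can tell the motives apart.)
    """
    roll_is_even = random.random() < p_even
    if motive == 'resolute':
        return True            # claims an even roll so the punishment goes through
    if motive == 'reluctant':
        return False           # calls it "another practice roll" until it comes up odd
    return roll_is_even        # honest report

trials = 10_000
for motive in ('honest', 'resolute', 'reluctant'):
    rate = sum(reported_even(motive) for _ in range(trials)) / trials
    print(f"{motive:9s} -> {rate:.0%} even reported")
# honest hovers near 50%; resolute sits above it; reluctant sits below it
```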

What happened when they rolled the dice?

We know that if everyone is acting honestly, 50% of the die rolls should be even. What we found was that of the receivers who said they wanted to punish, 69% reported rolling an even number, while of the bystanders who said they wanted to punish, only 22% reported rolling an even number.
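For a rough sense of how far those reported rates fall from the honest 50% baseline, here is a quick binomial back-of-the-envelope check. The group sizes below are hypothetical placeholders (the paper reports the actual counts); the idea is simply to ask how likely such lopsided reports would be if everyone rolled and reported honestly.

```python
from math import comb

def tail_prob_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of that many (or more)
    even reports if everyone rolled and reported honestly."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def tail_prob_at_most(k, n, p=0.5):
    """P(X <= k): chance of that few (or fewer) even reports under honest reporting."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(0, k + 1))

# Hypothetical group sizes -- placeholders, not the study's actual samples.
n_receivers = 45
n_bystanders = 45
even_receivers = round(0.69 * n_receivers)    # ~69% reported even
even_bystanders = round(0.22 * n_bystanders)  # ~22% reported even

print(f"P(>= {even_receivers}/{n_receivers} even if honest): "
      f"{tail_prob_at_least(even_receivers, n_receivers):.3f}")
print(f"P(<= {even_bystanders}/{n_bystanders} even if honest): "
      f"{tail_prob_at_most(even_bystanders, n_bystanders):.3f}")
```

With groups of even a few dozen, reports this far from 50% would be very unlikely under honest rolling, which is roughly the kind of reasoning that points to misreporting in both directions.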

So what does this mean? Receivers over-reported even numbers, making sure their punishment would count, while bystanders under-reported them, letting chance take the punishment off their hands. We now believe that preferences for punishing unfair actions come in two flavors: resolute and reluctant. Resolute punishers really do want to punish an unfair act, even at a cost to themselves, while reluctant punishers do so only out of a sense of obligation.

One reaction to this might be that people aren't as moral as we thought, but I actually find it reassuring that higher-order preferences like duty and obligation drive behavior, even when they conflict with more basic desires.

In defense of unrealistic research

Beyond what it teaches us about human psychology, I've told you about this study for another reason: how unrealistic it is. Many laboratory studies, particularly in the social sciences, are criticized for being unrealistic, and that critique often misses the point.

No one has ever faced a real-world situation in which they could manipulate a die roll to avoid the opportunity to anonymously punish someone. But like Daniel Ellsberg or the Penn State administrators, many people have faced far more complex decisions that are, in one specific and important way, analogous.

The point of laboratory studies is not to replicate the real world as accurately as possible — it is to maintain such tight control over the environment that you can pinpoint causes and effects more precisely than is otherwise possible.

Research is often narrow and sometimes arcane (as members of Congress love to point out), but the pieces complement each other and build towards something greater, often in ways that can’t be predicted.

Research is imperfect and criticism is good, but it’s silly to judge Steven Spielberg by his acting skills.

The full study, Turning a Blind Eye, But Not the Other Cheek: On the Robustness of Costly Punishment, by Peter Kriss, Roberto Weber, and Erte Xiao, is forthcoming in the Journal of Economic Behavior & Organization. You can download a pre-publication draft of the paper here.

Lastly, if you enjoyed this post and want to know when I create something new, I’d be happy to keep you in the loop.
