We need less evidence to believe that things are getting worse than to believe they are getting better.

How much evidence do you need to decide that something fundamental has changed? This question is addressed in a paper in the February 2017 issue of the Journal of Personality and Social Psychology by Ed O’Brien and Nadav Klein. The authors present a series of findings suggesting that people need less evidence to decide that things are getting worse than they require to conclude that things are getting better.

Art Markman, Ph.D., is Annabel Irion Worsham Centennial Professor of Psychology and Marketing at the University of Texas at Austin. He got his Sc.B. in Cognitive Science from Brown and his Ph.D. in Psychology from the University of Illinois. He has published over 150 scholarly works on topics in higher-level thinking including the effects of motivation on learning and performance, analogical reasoning, categorization, decision making, and creativity. Art serves as the director of the program in the Human Dimensions of Organizations at the University of Texas.

Editor: Muhammad Talha

In an initial series of studies, participants were asked to imagine various streaks across several domains, such as sports, the economy, and health. Sometimes people were asked to imagine that things were going very well. They were then asked to envision that 10 more events happened and to say how many of those 10 would have to be bad in order for them to believe that there had been a lasting change for the worse. Other people were asked to imagine that things were going badly and then asked how many of the next 10 events would have to be good in order for them to believe that there had been a lasting change for the better.

When the starting point was good, people needed about five of the upcoming events to be bad in order to conclude that things were getting worse. When the starting point was bad, though, they needed about 6.5 good events on average to believe that things were getting better. The researchers found similar differences across a number of different formulations of the question, including modifying the duration of the changes and the size of the changes.
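That asymmetry can be sketched as a toy decision rule. This is my illustration, not a model from the paper; the function and constant names are mine, and the two thresholds are just the approximate participant averages reported above (5 bad events to infer decline, 6.5 good events to infer improvement, each out of 10).

```python
# Toy model of the asymmetric evidence thresholds (illustrative only).
# The two constants are the approximate means reported in the studies.
THRESHOLD_FOR_DECLINE = 5.0      # bad events (of the next 10) after a good streak
THRESHOLD_FOR_IMPROVEMENT = 6.5  # good events (of the next 10) after a bad streak

def judges_lasting_change(starting_point_good: bool, n_contrary_events: int) -> bool:
    """True if an average participant would infer a lasting change after
    seeing n_contrary_events (out of the next 10) that run against the
    current streak's direction."""
    threshold = (THRESHOLD_FOR_DECLINE if starting_point_good
                 else THRESHOLD_FOR_IMPROVEMENT)
    return n_contrary_events >= threshold

# Six contrary events are enough to infer a turn for the worse,
# but not enough to infer a turn for the better:
print(judges_lasting_change(starting_point_good=True, n_contrary_events=6))   # True
print(judges_lasting_change(starting_point_good=False, n_contrary_events=6))  # False
```

The point of the sketch is simply that the same amount of contrary evidence clears one threshold but not the other, which is the pattern the participants showed.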

Another study presented participants with a graph of an economic indicator. For some participants, the graph started high and then went lower. Some participants were told that the economic indicator was one in which high values indicated that the economy was healthy. Other participants were told that the indicator was one for which high values indicated that the economy was unhealthy. So the falling bars on the graph represented good things to some participants and bad things to others. 

The beauty of this study is that all participants were seeing exactly the same graph. When participants were asked whether the graph indicated a fundamental shift in the economy, they were more likely to see a small change as indicating a fundamental shift when it meant that things were getting worse rather than that things were getting better. 

So, why does this happen? 

The researchers ruled out lots of alternatives through studies that I won’t describe in detail here—for example, the effect does not seem to be due to people being more alarmed by decreases than increases, or by whether this is happening to themselves or to someone else.

The team suggests instead an explanation based on the physical concept of entropy. The basic idea is that maintaining order requires energy. A clean desk requires energy to tidy up, and after a while, many desks (like mine) return to a state of disorder. Similarly, many people believe that improving the state of the world requires energy—and, correspondingly, that the state of the world will get worse without the application of energy. For example, I play the saxophone. Because I continue to practice, I continue to get better as a player. If I stopped, my performance would get worse.

To test this idea, the researchers conducted a final experiment in which participants were told about a game that people could learn to play. In one version, the game was quite difficult and required real effort for people to get better. In the other version, the game tapped natural abilities that every human has and so simply playing it more would make them better over time. Then people evaluated sequences of performance, either improving or declining, and had to judge when a change in a player's performance reflected real underlying change.

Participants who were told that the game was difficult and required effort showed the same pattern as in all of the other studies: They required less negative evidence to judge that a player was really getting worse than positive evidence to judge that a player was really getting better. For the version in which the game tapped natural human abilities, though, the pattern reversed: Now participants actually required less positive evidence to decide that someone was getting better than negative evidence to conclude that someone was getting worse.

In general, of course, we live in a world in which things get better because effort has been put into them. As a result, this pattern is probably helpful—because we assume things are getting worse from just a few negative observations, we may intervene quickly to try to reverse declines. And, because we require more positive evidence to judge that things are getting better, we may continue to put in effort even after we see some positive results.