Why People Are Confused About What Experts Really Think

Thursday, February 18, 2016

GIVEN the complexities of the modern world, we all have to rely on expert opinion. Are G.M.O. foods safe? Is global warming real? Should children be vaccinated for measles? We don’t have the time or the training to adjudicate these questions ourselves. We defer to the professionals.

And to find out what the experts think, we typically rely on the news media. This creates a challenge for journalists: There are many issues on which a large majority of experts agree but a small number hold a dissenting view. Is it possible to give voice to experts on both sides — standard journalistic practice — without distorting the public’s perception of the level of disagreement?

This can be hard to do. Indeed, critics argue that journalists too often generate “false balance,” creating an impression of disagreement when there is, in fact, a high level of consensus. One solution, adopted by news organizations such as the BBC, is “weight of evidence” reporting, in which the presentation of conflicting views is supplemented by an indication of where the bulk of expert opinion lies.

But whether this is effective is a psychological question on which there has been little research. So recently, I conducted two experiments to find out; they are described in a forthcoming article in the Journal of Experimental Psychology: Applied. Both studies suggest that “weight of evidence” reporting is an imperfect remedy. It turns out that hearing from experts on both sides of an issue distorts our perception of consensus — even when we have all the information we need to correct that misperception.

In one study, all the participants were presented with a numerical summary, drawn from a panel of experts convened by the University of Chicago, of the range of expert opinion on certain economic issues. On some, a large majority of experts agreed; on others, there was more disagreement. For instance, a large majority agreed that a carbon tax would be an efficient means of reducing carbon-dioxide emissions (93 experts agreed, five were uncertain and only two disagreed), but there was more disagreement about whether increasing the minimum wage would make it harder for low-skilled workers to find employment (38 agreed, 27 were uncertain and 36 disagreed).
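
To make the contrast concrete, here is a minimal sketch, in Python, of how one might score consensus from counts like these. It is my own illustration, not the study’s method: the counts are the ones reported above, but the majority-share measure is an assumption introduced purely for exposition.

```python
# Hypothetical illustration only: a simple "majority share" score for
# expert-panel counts. The counts come from the article; the measure
# itself is an assumption, not the study's actual analysis.

def majority_share(agree: int, uncertain: int, disagree: int) -> float:
    """Fraction of the panel holding the most common definite view."""
    total = agree + uncertain + disagree
    return max(agree, disagree) / total

issues = {
    "carbon tax": (93, 5, 2),       # high consensus
    "minimum wage": (38, 27, 36),   # low consensus
}

for name, counts in issues.items():
    print(f"{name}: {majority_share(*counts):.0%} hold the majority view")
# carbon tax: 93% hold the majority view
# minimum wage: 38% hold the majority view
```

On this measure the two issues sit far apart (93 percent versus roughly 38 percent), which is exactly the distinction the participants were later asked to track.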

One group of participants, however, was presented not only with the numerical summary of expert opinion but also with an excerpted comment from one expert on either side of an issue. On the carbon tax issue, for example, these participants read a comment from one of the 93 experts who thought the tax would be effective, justifying that opinion, and a comparable comment from one of the two experts who disagreed.

Then, all the participants were asked to rate their perception of the extent to which the experts agreed with one another on each issue. Even though both groups had a precise count of the number of experts on each side, the participants who also read the comments from the opposing experts gave ratings that did not distinguish as sharply between the high-consensus and the low-consensus issues. In other words, exposure to the conflicting comments made it harder for participants to distinguish the issues on which most experts agreed (such as the carbon tax) from those on which there was substantial disagreement (such as the minimum wage).

This distorting influence affected not only the participants’ perception of the degree of consensus, but also their judgments of whether there was sufficient consensus to use it to guide public policy. (My other study, which used a numerical summary of the views of professional film critics on various movies, as well as comments from opposing experts, had similar findings.)

What explains this cognitive glitch? One possibility is that when we are presented with comments from experts on either side of an issue, we form a mental representation of the disagreement as one person on each side, which somehow contaminates our impression of the distribution of opinion in the larger population of experts. Another possibility is that we simply have difficulty discounting the weight of a plausible argument, even when we know it comes from an expert whose opinion is shared by only a small fraction of his or her peers. It’s also possible that the mere presence of conflict (in the form of contradictory expert comments) triggers a general sense of uncertainty in our minds, which in turn colors our perceptions of the accuracy of current expert understanding of an issue.

Whatever the cause, the implications are worrisome. Government action is guided in part by public opinion. Public opinion is guided in part by perceptions of what experts think. But public opinion may — and often does — deviate from expert opinion, not simply, it seems, because the public refuses to acknowledge the legitimacy of experts, but also because the public may not be able to tell where the majority of expert opinion lies.

Derek J. Koehler is a professor of psychology at the University of Waterloo.

A version of this op-ed appears in print on February 14, 2016, on page SR10 of the New York edition with the headline: Experts Are Hard to Understand.