Last time I promised a more detailed post about Hugo Mercier and Dan Sperber’s argumentative theory of reasoning, which I’ve found quite interesting. Looking back at the New York Times writeup after reading the paper, I’m amazed (though I shouldn’t be) at how completely they get it wrong. A lot of the informal commentary I’ve seen (including the article I mentioned in my last blog post) seems to follow the Times in thinking that the paper says there’s something “irrational” about the way humans argue, and that, as the Times puts it, our ability to reason evolved as a “weapon” for use in persuading people rather than as a “path to truth”. What the paper actually argues is that human reasoning is better suited to collective than individual problem solving, albeit only within a context that involves disagreement:
When one is alone or with people who hold similar views, one’s arguments will not be critically evaluated. This is when the confirmation bias is most likely to lead to poor outcomes. However, when reasoning is used in a more felicitous context – that is, in arguments among people who disagree but have a common interest in the truth – the confirmation bias contributes to an efficient form of division of cognitive labor. (65)
The conclusion is not that debate is some sort of battle in which each side tries to “defeat” the other in order to gain something by altering their beliefs. If this were so, why would anyone listen to anyone else’s arguments? The conclusion of the paper is that a vigorous debate is the best way we have of finding the truth.
Mercier wrote a response in the Times, so I won’t beat this dead horse any further. There is plenty to say about the claims that the paper actually does make. The authors offer not just an explanation of the confirmation bias (along with the related phenomenon of motivated reasoning) but also the suggestion that, by working together in a certain way, we can turn the bias into a virtue. One’s mind immediately turns to politics, although figuring out a way of encouraging this productive sort of debate on political issues is not an easy matter.
As I understand it, the type of debate that would make the confirmation bias a virtue would, in Mercier & Sperber’s dual-process model of reasoning, play out like this. A group of people would have a common problem that they all have an interest in solving. Each collaborator would form (through intuitive inference) an opinion about what the solution is, and then have two jobs (and only two) involving conscious reasoning: first, they are to come up with arguments in favor of their opinion, and second, they are to look for ways of falsifying the other opinions put forth. The people would also listen to the arguments that others make, evaluating them on an intuitive level and changing their opinions if the arguments prove persuasive. If one of the collaborators becomes convinced that their original opinion was wrong but is not convinced of an alternative, then they might intuitively come up with a new one, which would help in cases where none of the collaborators initially had the right solution. Assuming a whole truckload of conditions, everyone will eventually come to accept the right answer.
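The dynamic described above can be sketched as a toy simulation. To be clear, this is my own illustration, not a model from Mercier & Sperber’s paper: the `debate` function, the random “intuitive” guessing, and the assumption that a verified argument instantly persuades everyone are all simplifying assumptions of mine. The sketch only captures the bare structure: each collaborator argues for their own opinion, everyone tries to falsify everyone else’s, and discredited opinions are replaced by new intuitive guesses until a proposal survives the shared test.

```python
import random

def debate(verify, candidates, n_agents=5, max_rounds=100, seed=0):
    """Toy simulation of collective debate under a confirmation bias.

    Each agent holds one opinion and argues only for it; the group's
    critical work is done by agents falsifying each other's opinions
    with a shared test `verify`. Assumes (as in the experiments the
    paper cites) a well-defined problem, a known falsification method,
    and a common interest in the right answer.
    """
    rng = random.Random(seed)
    # Each collaborator forms an initial opinion by "intuitive inference"
    # (modeled crudely here as a random guess).
    opinions = [rng.choice(candidates) for _ in range(n_agents)]
    for round_no in range(1, max_rounds + 1):
        # Agents attack all opinions on the table with the shared test;
        # only opinions that withstand falsification survive.
        surviving = [op for op in set(opinions) if verify(op)]
        if surviving:
            # A verified argument persuades the whole group.
            return surviving[0], round_no
        # Every current opinion was falsified, so agents intuit
        # fresh guesses and the debate continues.
        opinions = [rng.choice(candidates) for _ in range(n_agents)]
    return None, max_rounds

# Example: the group looks for the number whose square is 49.
answer, rounds = debate(lambda x: x * x == 49, list(range(10)))
# With these toy inputs the group almost surely converges on 7
# within a few rounds.
```

The interesting structural point the sketch preserves is that no individual agent needs to be unbiased: each one only defends its own guess and attacks the others’, yet the group as a whole performs something like a distributed search with error correction.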
As far as I know, the exact conditions under which this sort of debate could work well have not been determined. In the experimental situations cited in section 3 of the paper, there was a clear definition of the problem that was understood by all participants, it was known that a single solution existed, there was a known method for falsifying prospective solutions, and all participants had an interest in finding the correct answer. Setting up a debate that meets all these criteria would be near-impossible for most political problems, but we might be able to make the confirmation bias productive under less strict conditions. A more difficult complication is that, while the participants in the experiments presumably have little interest, apart from vanity, in which particular solution turns out to be correct, the concepts involved in politics are tied into people’s economic and emotional lives in immensely complex ways. Even if everyone has an interest in reaching the best solution, other factors may motivate participants to act in ways that undermine the debate.
These are issues that I was already thinking about before this paper came along, and I’m not yet sure whether an argumentative theory of reasoning, which is primarily a diachronic theory about evolution, adds something significant to a political philosopher’s toolkit if it is correct. It does, however, contra the New York Times, give us a hopeful way of looking at the confirmation bias. Perhaps our faculty of reasoning isn’t buggy after all.