A blog about belief

The “interplay of chain stimulations”

In Mathematics, Philosophy on July 3, 2011 at 10:00 pm

In Word and Object, W.V.O. Quine describes the way in which the mind revises its web of beliefs as though the process occurred unconsciously:

[…]  Prediction is in effect the conjectural anticipation of further sensory evidence for a foregone conclusion.  When a prediction comes out wrong, what we have is a divergent and troublesome sensory stimulation that tends to inhibit that once foregone conclusion, and so to extinguish the sentence-to-sentence conditionings that led to the prediction.  Thus it is that theories wither when their predictions fail.

In an extreme case, the theory may consist in such firmly conditioned connections between two sentences that it withstands the failure of a prediction or two.  We find ourselves excusing the failure of prediction as a mistake in observation or a result of unexplained interference.  The tail thus comes, in an extremity, to wag the dog.

The sifting of evidence would seem from recent remarks to be a strangely passive affair, apart from the effort to intercept helpful stimuli: we just try to be as sensitively responsive as possible to the ensuing interplay of chain stimulations.  What conscious policy does one follow, then, when not simply passive toward this interanimation of sentences?  Consciously the quest seems to be for the simplest story.  Yet this supposed quality of simplicity is more easily sensed than described.  Perhaps our vaunted sense of simplicity, or of likeliest explanation, is in many cases just a feeling of conviction attaching to the blind resultant of the interplay of chain stimulations in their various strengths.  (§5)

Mercier & Sperber’s argumentative theory of reasoning could offer an answer to this conundrum.  To be sure, Quine is not suggesting an argumentative theory – by “story” he means theory, not argument.  But he can only hesitantly claim that the conscious part of cognition has the function of preserving the simplicity of theories.  Even this operation appears to occur at the level of intuition, and what purpose remains for conscious reasoning is unclear.  In the argumentative theory of reasoning, the “interplay of chain stimulations” by which contrary evidence tugs at our theoretical ideas would be a part of the intuitive track of cognition.  The function of conscious reasoning would be not to oversee this intuitive process, but to come up with good ways of verbalizing its results.  Conscious reasoning would not, in the normal course of things, involve changing the web of belief at all – instead its purpose would be to look for paths along the web that link the particular beliefs one anticipates having to defend to sentences that others might be willing to take as premises.

The argumentative theory claims to explain the confirmation bias by thus reconceiving the function of conscious reasoning, but Quine suggests (in the second paragraph I quoted) that a confirmation bias of sorts can occur in what I have assigned to the intuitive track of cognition as well.  Sometimes our theoretical ideas have become so ingrained that we “excuse” contrary observations.  As far as I can tell, Mercier & Sperber’s argumentative theory would not explain this sort of confirmation bias.

To the extent that it serves the preference for simplicity, an intuitive confirmation bias is not fundamentally irrational, because at least in certain situations selectively ignoring evidence in the name of simplicity can result in better predictions.  This has proven true experimentally in the field of machine learning.  Suppose that we have a plane on which each point is either red or green.  Given a finite number of observations about the colors of particular points, we wish to come up with a way of predicting the color of any point on the plane.  One way of doing this is to produce a function that divides the plane into two sections, red and green.  If we can draw a straight line that correctly divides all of our observations, red on one side and green on the other, then we have a very simple model that, assuming that the set of observations we used to derive it is representative and sufficiently large, is likely to work well.  However, if it is necessary to draw a very complex, squiggly line to correctly account for all of the observations (if we are required to use a learning machine with a high VC dimension), then it is often better to choose a simpler function even if it makes the wrong prediction for a few of our observed cases.  Overfitting can lead to the creation of models that deviate from the general pattern in order to account for what might actually be random noise in the observational data.

In the same way, if we attempted to account for every possible bit of contrary evidence in the revision of our mental theories, our ability to make useful predictions with them would be confounded.  We will always encounter deviations from what we expect, and at least some of these will be caused by factors for which we will never come across enough data to model correctly.  In such cases, we are better off allowing our too-simple theories to stand.
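Here is a minimal sketch of the overfitting point in Python (my own toy example, assuming NumPy and scikit-learn are available; I stand in the straight line with a logistic regression and the squiggly, memorize-everything learner with a one-nearest-neighbor classifier):

```python
# Toy red/green plane: the true boundary is the line x + y = 0, and a fraction
# of the training labels are flipped to simulate noisy observations.
# All specifics here are illustrative assumptions, not anyone's published setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def sample(n, noise=0.0):
    """Draw n points on the plane and color them by the true boundary."""
    X = rng.uniform(-1, 1, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # 0 = red, 1 = green
    flip = rng.random(n) < noise               # mislabel a fraction of points
    y[flip] = 1 - y[flip]
    return X, y

X_train, y_train = sample(50, noise=0.10)      # a small, noisy set of observations
X_test, y_test = sample(5000)                  # the general pattern we want to predict

straight = LogisticRegression().fit(X_train, y_train)                  # simple boundary
squiggly = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)   # fits every observation

print("straight line – train: %.2f  test: %.2f"
      % (straight.score(X_train, y_train), straight.score(X_test, y_test)))
print("squiggly line – train: %.2f  test: %.2f"
      % (squiggly.score(X_train, y_train), squiggly.score(X_test, y_test)))
```

The nearest-neighbor model classifies every observed point correctly, noise included, but it typically does worse on new points than the straight line that shrugs off a few misfitting observations.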

Argumentative theory of reasoning

In Science on June 22, 2011 at 9:00 am

Last time I promised a more detailed post about Hugo Mercier and Dan Sperber’s argumentative theory of reasoning, which I’ve found quite interesting.  Looking back at the New York Times writeup after reading the paper, I’m amazed (though I shouldn’t be) at how completely wrong they get it.  A lot of the informal commentary I’ve seen (including the article I mentioned in my last blog post) seems to follow the Times in thinking that the paper says there’s something “irrational” about the way humans argue, and that, as the Times puts it, our ability to reason evolved as a “weapon” for use in persuading people rather than as a “path to truth”.  What the paper actually argues is that human reasoning is better suited to collective than individual problem solving, albeit only within a context that involves disagreement:

When one is alone or with people who hold similar views, one’s arguments will not be critically evaluated. This is when the confirmation bias is most likely to lead to poor outcomes. However, when reasoning is used in a more felicitous context – that is, in arguments among people who disagree but have a common interest in the truth – the confirmation bias contributes to an efficient form of division of cognitive labor. (65)

The conclusion is not that debate is some sort of battle in which each side tries to “defeat” the other in order to gain something by altering their beliefs.  If this were so, why would anyone listen to anyone else’s arguments?  The conclusion of the paper is that a vigorous debate is the best way we have of finding the truth.

Mercier wrote a response in the Times, so I won’t beat this dead horse any further.  There is plenty to say about the claims that the paper actually does make.  The authors give not just an explanation for the confirmation bias (in addition to the phenomenon of motivated reasoning) but the suggestion that, by working together in a certain way, we can make the bias a virtue.  One’s mind immediately turns to politics, although figuring out a way of encouraging this productive sort of debate on political issues is not an easy matter.

As I understand it, the type of debate that would make the confirmation bias a virtue would, in Mercier & Sperber’s dual-process model of reasoning, play out like this.  A group of people would have a common problem that they all have an interest in solving.  Each collaborator would form (through intuitive inference) an opinion about what the solution is, and then have two jobs (and only two) involving conscious reasoning: first, they are to come up with arguments in favor of their opinion, and second, they are to look for ways of falsifying the other opinions put forth.  The people would also listen to the arguments that others make, evaluating them on an intuitive level and changing their opinions if the arguments prove persuasive.  If one of the collaborators becomes convinced that their original opinion was wrong but is not convinced of an alternative, then they might intuitively come up with a new one, which would help in cases where none of the collaborators initially had the right solution.  Assuming a whole truckload of conditions, everyone will eventually come to accept the right answer.
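To make the shape of this process concrete, here is a toy simulation (entirely my own caricature, not a model from the paper: the shared problem is reduced to guessing a hidden number, the known falsification method is an oracle I invented, and intuitive opinion formation is reduced to random guessing):

```python
# A toy caricature of the debate loop described above.  Every specific here
# (the hidden number, the oracle, random guessing) is an illustrative
# assumption, not part of Mercier & Sperber's account.
import random

random.seed(1)

SECRET = 42                                    # the single correct solution

def falsify(claim):
    """The shared, known method for testing a proposal (True means refuted)."""
    return claim != SECRET

def new_opinion():
    """Stands in for intuitively forming an opinion."""
    return random.randint(0, 99)

opinions = [new_opinion() for _ in range(5)]   # each collaborator starts with a guess

for round_number in range(1, 1001):
    # Stop once everyone holds the same, unrefuted opinion.
    if len(set(opinions)) == 1 and not falsify(opinions[0]):
        print(f"consensus on {opinions[0]} after {round_number} rounds")
        break
    for i, mine in enumerate(opinions):
        # Each collaborator looks for ways of falsifying the others' proposals...
        surviving = [o for j, o in enumerate(opinions) if j != i and not falsify(o)]
        # ...and gives up their own opinion if it is refuted, adopting a surviving
        # alternative if one has been argued for, or forming a fresh one otherwise.
        if falsify(mine):
            opinions[i] = surviving[0] if surviving else new_opinion()
```

Note how heavily even this caricature leans on a falsification test that every participant recognizes and accepts.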

As far as I know, the exact conditions under which this sort of debate could work well have not been determined.  In the experimental situations cited in section 3 of the paper, there was a clear definition of the problem that was understood by all participants, it was known that a single solution existed, there was a known method for falsifying prospective solutions, and all participants had an interest in finding the correct answer.  Setting up a debate that meets all these criteria would be near-impossible for most political problems, but we might be able to make the confirmation bias productive under less-strict conditions.  A more difficult complication is that, while the participants in the experiments presumably have little interest, apart from vanity, in what particular solution turns out to be correct, the concepts involved in politics are tied into people’s economic and emotional lives in immensely complex ways.  Even if everyone has an interest in reaching the best solution, there may be other factors that motivate the participants to act in ways counterproductive to the debate.

These are issues that I was already thinking about before this paper came along, and I’m not yet sure whether an argumentative theory of reasoning, which is primarily a diachronic theory about evolution, adds something significant to a political philosopher’s toolkit if it is correct.  It does, however, contra the New York Times, give us a hopeful way of looking at the confirmation bias.  Perhaps our faculty of reasoning isn’t buggy after all.