A blog about belief

Posts Tagged ‘reasoning’

Always say never

In Philosophy on July 8, 2011 at 2:28 pm

Years ago, I went to a writers’ group in St. Louis, and I commented that the way a story depicted a real-life volunteer organization seemed too much like an advertisement for it. Someone responded, “Let me see if I’ve got this right.  So you’re saying that people should never use the names of real organizations in their writing?” I got annoyed at this because I never said anything like “never.” This has happened to me a number of other times: I’ve made an argument that I meant to apply in a specific context, and to be just one factor among many to consider, and the other person has responded to it as if it were meant as an absolute.

Call this a failure on my part, and I wouldn’t object. But I think my problem is not that I express myself unclearly. I think it’s that I’ve failed to recognize a bias in how people process the statements made in debate. I’m not sure whether this bias is universal or just Midwestern stubbornness (I can’t think of any major instances since I left there), but in either case it’s better to try to understand and account for it than to complain. What seems to happen is that a person takes statements made with a fixed variable (Pa → Qa), and adds a universal quantifier to them (∀a (Pa → Qa)); then, rather than interrogating the logic by which these statements were reached, they go looking for counterexamples.
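The quantifier shift can be made concrete with a quick sketch. Everything here is invented for illustration: read P(a) as “story a names a real organization” and Q(a) as “story a reads like an advertisement,” with made-up data standing in for the stories.

```python
# Toy illustration of the quantifier shift described above.
# P(a): "story a names a real organization"; Q(a): "story a reads like an ad".
# The data below are stand-ins, not claims about any actual story.

stories = [
    {"names_real_org": True,  "reads_like_ad": True},   # the story under discussion
    {"names_real_org": True,  "reads_like_ad": False},  # refutes the universal version
    {"names_real_org": False, "reads_like_ad": False},
]

# The claim as intended: about one fixed story (Pa -> Qa).
a = stories[0]
claim_about_a = (not a["names_real_org"]) or a["reads_like_ad"]

# The claim as heard: universally quantified (forall a, Pa -> Qa).
universal_claim = all((not s["names_real_org"]) or s["reads_like_ad"] for s in stories)

# The listener's strategy: hunt for counterexamples to the universal version.
counterexamples = [s for s in stories if s["names_real_org"] and not s["reads_like_ad"]]

print(claim_about_a)         # True: holds for the fixed story
print(universal_claim)       # False: refuted by the second story
print(len(counterexamples))  # 1
```

The point the sketch makes: the fixed-variable claim and its universalized double have different truth conditions, so a counterexample refutes only the latter.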

Attack the argument, not the conclusion, philosophers say. But the type of discussion to which this (at this point, only conjectured) bias seems keyed is not a terrible one; it’s just more conducive to inductive than deductive reasoning. If we see the positive statements put forth as hypotheses, rather than conclusions, a debate based on the search for counterexamples to universally-quantified claims could be quite effective at reaching the truth.

What this implies is that the way people tend to debate is not optimized for a situation in which multiple perspectives co-exist. An attempt to respect other points of view by speaking only in contingencies would have quite the opposite effect if the listener winds up universalizing the claims one makes; but that universalization would also allow people to hold those claims to stronger, and more objective, epistemic tests. People can’t be expected to be objective in everything they say, but when a conversation is a debate it can reach gridlock or worse if the participants deal in nothing but particulars.  The truly productive tension is between the particular and the universal.


The “interplay of chain stimulations”

In Mathematics, Philosophy on July 3, 2011 at 10:00 pm

In Word and Object, W.V.O. Quine talks about the way in which the mind revises its web of beliefs as if this process occurs in an unconscious way:

[…]  Prediction is in effect the conjectural anticipation of further sensory evidence for a foregone conclusion.  When a prediction comes out wrong, what we have is a divergent and troublesome sensory stimulation that tends to inhibit that once foregone conclusion, and so to extinguish the sentence-to-sentence conditionings that led to the prediction.  Thus it is that theories wither when their predictions fail.

In an extreme case, the theory may consist in such firmly conditioned connections between two sentences that it withstands the failure of a prediction or two.  We find ourselves excusing the failure of prediction as a mistake in observation or a result of unexplained interference.  The tail thus comes, in an extremity, to wag the dog.

The sifting of evidence would seem from recent remarks to be a strangely passive affair, apart from the effort to intercept helpful stimuli: we just try to be as sensitively responsive as possible to the ensuing interplay of chain stimulations.  What conscious policy does one follow, then, when not simply passive toward this interanimation of sentences?  Consciously the quest seems to be for the simplest story.  Yet this supposed quality of simplicity is more easily sensed than described.  Perhaps our vaunted sense of simplicity, or of likeliest explanation, is in many cases just a feeling of conviction attaching to the blind resultant of the interplay of chain stimulations in their various strengths.  (§5)

Mercier & Sperber’s argumentative theory of reasoning could offer an answer to this conundrum.  To be sure, Quine is not suggesting an argumentative theory – by “story” he means theory, not argument.  But he is only able to hesitantly claim that the conscious part of cognition has the function of preserving the simplicity of theories.  Even this operation appears to occur at the level of intuition, and what purpose conscious reasoning has left is unclear.  In the argumentative theory of reasoning, the “interplay of chain stimulations” by which contrary evidence tugs at our theoretical ideas would be a part of the intuitive track of cognition.  The function of conscious reasoning would be not to oversee this intuitive process, but to come up with good ways of verbalizing its results.  Conscious reasoning would not, in the normal course of things, involve changing the web of belief at all – instead its purpose would be to look for paths along the web that link the particular beliefs one anticipates having to defend to sentences that others might be willing to take as premises.

The argumentative theory claims to explain the confirmation bias by thus reconceiving the function of conscious reasoning, but Quine suggests (in the second paragraph I quoted) that a confirmation bias of sorts can occur in what I have assigned to the intuitive track of cognition as well.  Sometimes our theoretical ideas have become so ingrained that we “excuse” contrary observations.  As far as I can tell, Mercier & Sperber’s argumentative theory would not explain this sort of confirmation bias.

To the extent that it serves the preference for simplicity, an intuitive confirmation bias is not fundamentally irrational, because at least in certain situations selectively ignoring evidence in the name of simplicity can result in better predictions.  This has proven true experimentally in the field of machine learning.

Suppose that we have a plane on which each point is either red or green.  Given a finite number of observations about the colors of particular points, we wish to come up with a way of predicting the color of any point on the plane.  One way of doing this is to produce a function that divides the plane into two sections, red and green.  If we can draw a straight line that correctly divides all of our observations, red on one side and green on the other, then we have a very simple model that, assuming that the set of observations we used to derive it is representative and sufficiently large, is likely to work well.  However, if it is necessary to draw a very complex, squiggly line to correctly account for all of the observations (if we are required to use a learning machine with a high VC dimension), then it is often better to choose a simpler function even if it makes the wrong prediction for a few of our observed cases.  Overfitting can lead to the creation of models that deviate from the general pattern in order to account for what might actually be random noise in the observational data.

In the same way, if we attempted to account for every possible bit of contrary evidence in the revision of our mental theories, our ability to make useful predictions with them would be confounded.  We will always encounter deviations from what we expect, and at least some of these will be caused by factors that we will never come across enough data to model correctly.  In such cases, we are better off allowing our too-simple theories to stand.
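The red/green example can be sketched in a few lines of code. This is a minimal toy version, with invented numbers: the true rule is a straight line, the training labels carry some noise, and a 1-nearest-neighbor model stands in for the “squiggly line” that accounts for every observation.

```python
import random

random.seed(0)

# True rule: a point is "red" iff x + y > 0.  Training labels carry 10% noise.
def true_color(x, y):
    return "red" if x + y > 0 else "green"

train = []
for _ in range(200):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    color = true_color(x, y)
    if random.random() < 0.1:  # a mislabeled observation (noise)
        color = "green" if color == "red" else "red"
    train.append((x, y, color))

# Simple model: the straight line x + y = 0, even though it gets some
# noisy training points wrong.
def simple_predict(x, y):
    return "red" if x + y > 0 else "green"

# Complex model: 1-nearest-neighbor, which fits every training point
# perfectly -- the "squiggly line" that also memorizes the noise.
def nn_predict(x, y):
    nearest = min(train, key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
    return nearest[2]

# Evaluate both on fresh, noise-free test points.
test = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(1000)]
simple_acc = sum(simple_predict(x, y) == true_color(x, y) for x, y in test) / len(test)
nn_acc = sum(nn_predict(x, y) == true_color(x, y) for x, y in test) / len(test)

print(f"simple: {simple_acc:.2f}, nearest-neighbor: {nn_acc:.2f}")
```

The simple line, despite misclassifying some of its own training data, predicts fresh points better than the model that accounted for every observation, which is the overfitting point in miniature.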

Coercion and motivated reasoning

In Philosophy, Science on June 24, 2011 at 7:08 pm

The argumentative theory of reasoning could shed light on a scenario that I hinted at in my first post about freedom.  Suppose that you are about to be presented with two options, 1 and 2, and that you have decided to choose option 1.  Person B, not knowing this, attempts to coerce you into choosing that option.  You change your decision to 2, even though it is the less attractive option, in order to in some way assert your independence from B.  The argumentative theory could offer the explanation that reasoning about this situation would lead you to choose 2 if you think it would be easier to justify that choice than it would be to respond to accusations that you were too-easily swayed by B.

This decision isn’t necessarily irrational.  It may be that your choosing option 1 after the attempt at coercion really would damage others’ opinions of you, or that it would encourage B to attempt further coercions in the future.  Presumably, the greater the difference in utility between 1 and 2 the less inclined you will be to change your decision, not least because other people will be less likely to accuse you of being manipulated if the choice you made is obviously the better one.
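One way to see why the decision isn’t necessarily irrational is with a toy decision rule, entirely invented for illustration: suppose the chooser maximizes option utility minus the expected social cost of justifying the choice, and that the justification cost of option 1 reflects the risk of looking easily swayed by B.

```python
# Invented toy model of the tradeoff described above: the chooser maximizes
# option utility minus the expected cost of justifying the choice.
# All numbers are made up for illustration.

def best_option(utility, justification_cost):
    return max(utility, key=lambda opt: utility[opt] - justification_cost[opt])

# Small utility gap: the cost of appearing easily swayed tips the choice to 2.
print(best_option({1: 5.0, 2: 4.5}, {1: 2.0, 2: 0.5}))  # 2

# Large utility gap: option 1 is so obviously better that accusations of
# being manipulated lose force, so the chooser sticks with 1.
print(best_option({1: 9.0, 2: 4.5}, {1: 2.0, 2: 0.5}))  # 1
```

The second call reproduces the observation above: the larger the utility gap, the less inclined one is to switch, because the justification cost no longer outweighs it.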

I don’t have experimental evidence to back up this example, though there may be some that I don’t know about.  If it proves true, it might give us some insight into what it is we desire when we desire to be independent.  It suggests a way of thinking about independence that avoids unclear talk about choice, if not one that would serve particularly well in a definition of freedom.

Argumentative theory of reasoning

In Science on June 22, 2011 at 9:00 am

Last time I promised a more detailed post about Hugo Mercier and Dan Sperber’s argumentative theory of reasoning, which I’ve found quite interesting.  Looking back at the New York Times writeup after reading the paper, I’m amazed (though I shouldn’t be) at how completely wrong they get it.  A lot of the informal commentary I’ve seen (including the article I mentioned in my last blog post) seems to follow the Times in thinking that the paper says there’s something “irrational” about the way humans argue, and that, as the Times puts it, our ability to reason evolved as a “weapon” for use in persuading people rather than as a “path to truth”.  What the paper actually argues is that human reasoning is better suited to collective than individual problem solving, albeit only within a context that involves disagreement:

When one is alone or with people who hold similar views, one’s arguments will not be critically evaluated. This is when the confirmation bias is most likely to lead to poor outcomes. However, when reasoning is used in a more felicitous context – that is, in arguments among people who disagree but have a common interest in the truth – the confirmation bias contributes to an efficient form of division of cognitive labor. (65)

The conclusion is not that debate is some sort of battle in which each side tries to “defeat” the other in order to gain something by altering their beliefs.  If this were so, why would anyone listen to anyone else’s arguments?  The conclusion of the paper is that a vigorous debate is the best way we have of finding the truth.

Mercier wrote a response in the Times, so I won’t beat this dead horse any further.  There is plenty to say about the claims that the paper actually does make.  The authors give not just an explanation of the confirmation bias (and of the related phenomenon of motivated reasoning) but also the suggestion that, by working together in a certain way, we can make the bias a virtue.  One’s mind immediately turns to politics, although figuring out a way of encouraging this productive sort of debate on political issues is not an easy matter.

As I understand it, the type of debate that would make the confirmation bias a virtue would, in Mercier & Sperber’s dual-process model of reasoning, play out like this.  A group of people would have a common problem that they all have an interest in solving.  Each collaborator would form (through intuitive inference) an opinion about what the solution is, and then have two jobs (and only two) involving conscious reasoning: first, they are to come up with arguments in favor of their opinion, and second, they are to look for ways of falsifying the other opinions put forth.  The people would also listen to the arguments that others make, evaluating them on an intuitive level and changing their opinions if the arguments prove persuasive.  If one of the collaborators becomes convinced that their original opinion was wrong but is not convinced of an alternative, then they might intuitively come up with a new one, which would help in cases where none of the collaborators initially had the right solution.  Assuming a whole truckload of conditions, everyone will eventually come to accept the right answer.
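The steps above can be rendered as a toy simulation. Everything in it is invented for illustration: the shared problem (find the one prime in a list) has a single solution and a known falsification method (checking primality), agents’ “intuitions” are random guesses, and refuted opinions are remembered by the whole group.

```python
import random

random.seed(1)

# Toy rendering of the debate procedure sketched above; the puzzle, the
# agents, and the update rule are all invented.  Checking primality plays
# the role of the "known method for falsifying prospective solutions."

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

candidates = [21, 25, 27, 29, 33, 35]                   # 29 is the only prime
agents = [random.choice(candidates) for _ in range(5)]  # initial intuitions

refuted = set()  # opinions falsified so far, shared by the whole group
for _ in range(10):
    # Each collaborator tries to falsify the opinions currently put forth.
    refuted |= {op for op in set(agents) if not is_prime(op)}
    if not refuted & set(agents):
        break  # every opinion still held withstands falsification
    # Collaborators whose opinion was refuted intuit a new one.
    remaining = [c for c in candidates if c not in refuted]
    agents = [op if op not in refuted else random.choice(remaining)
              for op in agents]

print(sorted(set(agents)))  # [29]: everyone settles on the right answer
```

Under these (truckload-of-conditions) assumptions the group always converges, even when no one guessed the right answer at the start, because refuted opinions drop out of the pool until only the unfalsifiable one remains.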

As far as I know, the exact conditions under which this sort of debate could work well have not been determined.  In the experimental situations cited in section 3 of the paper, there was a clear definition of the problem that was understood by all participants, it was known that a single solution existed, there was a known method for falsifying prospective solutions, and all participants had an interest in finding the correct answer.  Setting up a debate that meets all these criteria would be near-impossible for most political problems, but we might be able to make the confirmation bias productive under less-strict conditions.  A more difficult complication is that, while the participants in the experiments presumably have little interest, apart from vanity, in what particular solution turns out to be correct, the concepts involved in politics are tied into people’s economic and emotional lives in immensely complex ways.  Even if everyone has an interest in reaching the best solution, there may be other factors that motivate the participants to act counterproductively to debate.

These are issues that I was already thinking about before this paper came along, and I’m not yet sure whether an argumentative theory of reasoning, which is primarily a diachronic theory about evolution, adds something significant to a political philosopher’s toolkit if it is correct.  It does, however, contra the New York Times, give us a hopeful way of looking at the confirmation bias.  Perhaps our faculty of reasoning isn’t buggy after all.

Arguing about religion

In Politics on June 21, 2011 at 1:14 pm

You may have already heard about the Christian-right campaign against Ayn Rand (Brian Leiter on it), which aims to point out the incompatibility of Rand’s ideas with Christianity. Peter Laarman has an article in Religion Dispatches that makes a good point about something that Fred Clark at slacktivist has also written about recently, the relative importance of community and personal anecdote over such logical argumentation in the way that our beliefs change. I have to take exception, though, to Laarman’s invocation of Hugo Mercier and Dan Sperber’s much-discussed-but-apparently-little-read article “Why do humans reason?” (about which I’m hoping to have a detailed post soon):

What is the point here? The point is that there IS no point to endless argumentation. Hearts and minds don’t change that way. They change when we share our stories and when we become present in a different way to those whom we wish to influence. The further point is that hearts change before minds do. It rarely works the other way around.

And now some scientists believe that we don’t actually argue to arrive at clarity or truth; argumentation is a “social adaptation,” they argue: we are in debates to win, and we will readily use flawed arguments if we think they will sway the other side. Irrationality is not merely a “kink” in the process of truth seeking. […]

Of course we are in debates to win. What Mercier & Sperber are arguing is that the primary function of reasoning is to produce arguments that are convincing to others rather than to produce better beliefs for oneself. This conclusion would obviously be false if it were uncommon for people to be persuaded by arguments. It’s true that it’s not always logical validity that makes arguments persuasive (we’ve known that for millennia), but that’s not the issue here. The anti-Ayn Rand video is not going to fail for being too logical, because it’s not – it’s loaded with ad hominems of just the same sort that have been employed in scare-tactic campaigns for ages. I don’t think it’s likely that this campaign will cause hordes of people to change their beliefs individually, minds-before-hearts, but it could spark change if it becomes an object of discussion among churchgoers and in other community settings. Direct human relationships often do have more of an influence on our beliefs than arguments do, but arguments don’t exist in isolation from them.

Addendum: I was going to argue that the video is calling for people not to change their religious beliefs but to act on them, but being a holist I can’t say that adding a belief that Jesus is at odds with objectivism doesn’t change one’s belief in Jesus.