A blog about belief

Archive for the ‘Science’ Category

Speaking objectively

In Philosophy, Science on November 11, 2011 at 1:24 pm

The “plain style” of speech that has just about completely supplanted the “high” rhetoric of the past is in part a remnant of an early version of the scientific method that claimed to root knowledge in objective observation.  The ideal of dispassionate language has persisted even as this interpretation of scientific practice has given way to more sophisticated ones, and I’m not sure of the extent to which we ought to hold on to it.  One thing that I’ve learned from twentieth-century philosophers of science like Karl Popper and W.V. Quine, who have more in common with post-structuralism than anyone wants to admit, is that science works because observation is subjective, because there is no way of reporting what we see that cannot be disputed by someone who sees it differently.  Every act of perception is an act of interpretation in terms of a particular doctrine, and this is a good thing, because discovery occurs at the points at which our doctrines come into tension with what we perceive, at which, that is, there is something that we just can’t make fit with our present beliefs.  Objectivity is the process in which we actively seek this sort of tension.  It’s not the opposite of subjectivity; it’s what happens when subjectivity runs up against the world.

The thing is, this process only works if the doctrine in terms of which one interprets things is in a certain sense rigid.  If the associations that constitute one’s beliefs are flexible enough, then one can finagle any sensory data one comes across to fit them; I have called this, in another post, paranoia.  The purpose of the sort of formalized language that Quine takes such pains to develop is not to be transparent, but to be sufficiently rigid for science to work.  It is to make the relationships between different statements clear enough that when contradictions arise – when things don’t line up properly – the tension is manifest.  A language made rigid in this way looks quite similar to the “plain” modes of speech advocated by the early Royal Society empiricists, but the theoretical basis for it is very different from what led, for instance, Thomas Sprat to decry “figural” language as a cause of pointless contention.  There is a legitimate place for this formal sort of language, although not for the reasons that Sprat gives.  It facilitates both experimentation and productive debate.

But rigidity is only one of the prerequisites of objective discussion as I have defined it.  The other, the contingency and changeability of the systems of doctrine by which we make claims, would seem to be best served by a type of language that looks quite different from the traditional scientific plainness, and I’m not convinced that the particular formalization that Quine comes up with doesn’t falter in this regard.  Despite the theoretical motive to the contrary, Quinean language still looks like it’s meant to be transparent, if due to historical association alone.  An ideal sort of language for an objective discourse would be right up front about the fact that each individual statement in that discourse represents a particular, contingent, subjective viewpoint, while still providing as little give as possible when the discussion runs up against one of these viewpoints’ limitations.  This is doubly important in literary criticism, where it’s always important to stay aware of one’s own cultural position and of how it might shape one’s understanding of a text.


Debunking vs explaining

In Art, Philosophy, Science on October 23, 2011 at 5:20 pm

I’m often surprised how many smart people I encounter who think of science as some kind of beauty-destroying machine.  This is a particularly common reaction to the attempts of neuroscientists and cognitive psychologists to understand how people relate to art.  I appreciate the need for different ways of talking about things, particularly when it comes to things as slippery as aesthetic responses, but there’s an anti-intellectualism I can’t abide in the idea that coming up with an explanation for something diminishes it.  There is no reason why our more subjective ways of thinking can’t stand alongside scientific theories, although perhaps science might prod us to adjust them a bit so that they’re less likely to lead us to bad decisions.  If science finds a way to reduce spiritual feelings to the workings of our brain, that doesn’t invalidate them or drain them of their power.  If anything, it validates them by providing evidence for their consistency with the way of looking at the world that’s proven most practically successful for us.

In that last statement I have tacitly given science a privileged position over other forms of knowledge.  I will not recant on this point, but I stop short of implying that spirituality needs the validation of science.  That, I admit, gets us into worrisome territory.  I do suspect that some people who claim they’ve had a spiritual experience are kidding themselves, but there is a major ethical problem in calling someone else’s claim to have had such an experience bullshit, even if we’ve got MRI scans to back us up.  If someone says they’re experiencing a vision and our cognitive models of spirituality say otherwise, that means that either their claims are false or our models are flawed, and there’s no clear way to decide between the two.  The scientific standard of truth and those proper to spirituality need not overlap, and an attempt to use a scientific model to debunk something that was developed to sufficiently different ends can be backed up by nothing other than power.

That is, I suppose, the standard postmodern criticism of science.  But using theoretical models in the way I’ve described is not doing science.  Despite the popular image to the contrary, the driving purpose of science is not to debunk.  It is to explain.  Debunking only comes into play when the ideas in question contradict the best scientific models with respect to predictions that can be objectively tested; in that case and only in that case is it in science’s province to investigate who’s right.  Homeopathic medicine is something that’s ripe for refutation, and that’s good, because it causes objective harm.  Sudden irruptions of spiritual knowledge are generally not something that science could coherently debunk, and that’s great, because that sort of experience has resulted in some of the most astounding poetry humans have produced.

It’s because of this that I don’t think scientific investigation is anathema to art, even if we direct it towards art – even, further, if we direct it towards our apprehension of those spiritual sorts of truths that science can’t make heads or tails of.  If we approach the phenomenon of spiritual knowledge empirically, which I think we might as well, we cannot rightly treat that spiritual knowledge as a rival theory to science; instead, we must treat it as something to be explained.  The fact that it has to do with truth of another sort doesn’t mean that science can’t deal with it in that way, and the fact that science might be able to fully explain it doesn’t mean that it can’t still serve as an explanation in its own right.

God existing

In Philosophy, Science on July 30, 2011 at 7:39 pm

I’ve been following with some interest the attempts to create common understanding between Christians and atheists at the blog Unequally Yoked.  I’ve never had much interest in the endless battles between arrogant atheists and hapless and/or arrogant Christians that have been raging online since the Usenet days, but the Internet could conceivably be a venue for a more productive discussion.  I’m just not sure that Unequally Yoked is doing any better.  I take exception to its author’s characterization of the “atheist” position in this recent post:

Don’t forget that atheists and Christians don’t just disagree on which way the evidence points, they disagree on what kind of evidence should be counted. Atheists think that proof for the existence of God should look like the proof for the existence of life on the moon, or an invisible dragon in the garage. There should be observable, reproducible evidence that is convincing to pretty much any person with a grip on reality. Christians like Jen think that this kind of proposition is more like the squishy claims of “my mother loves me” or “people’s actions have moral weight” or “the physical world is not an illusion.” These claims are not provable in any formal system and they don’t lend themselves to definite empirical observation.

I call myself an atheist, but this doesn’t mean I’ve rejected the statement “There is a God” in the same way that I would reject the existence of an invisible dragon.  It would be a failure of understanding to suppose that the God of Augustine “exists” in the same sense as temporal things exist, and we would have to make a major leap to suppose that we can coherently talk about God in terms of observable evidence.  Though I am an empiricist in many ways, I don’t share the dismissive attitude towards this sort of “squishy” concept.  Any standard of truth we use, we use because it has some value to us, because it helps us to achieve or to better formulate our goals.  This is why we use the empirical definition of truth, and likely it’s why many religious people use definitions of truth based on spiritual feeling.  I don’t think that all definitions of truth are equally valuable, but I would never claim that only the empirical one has any value at all, or try to force other people into debate with me in terms of it.

I call myself an atheist because, quite simply, the concept of God does not occupy a central position in my view of the world.  I wouldn’t say that it doesn’t occupy any position in my view of the world, because I do possess a mental representation of the concept and find it useful now and then, in trying to understand the motives of the religious people around me and in reading literature written from a religious perspective.  When I say that I don’t believe in God, all I mean is that I am not inclined to act on the propositions that make up that particular cluster of concepts in my head.   That’s it – my beliefs about evidence and different types of truth are a different matter.

Believing in evolution

In Philosophy, Science on June 29, 2011 at 3:18 pm

Via Language Log: A Philadelphia Inquirer article questioning whether “Do you believe in evolution?” (asked of Miss USA contestants) is the right question to be asking.

“I have attempted, largely through spurring on from several colleagues . . . to never use the word belief in talks,” said Arizona State University physicist and writer Lawrence Krauss.

“One is asked: Does one believe in global warming, or evolution, and the temptation is to answer yes,” he said, “but it’s like saying you believe in gravity or general relativity.”

“Science is not like religion, in that it doesn’t merely tell a story … one that one can choose to believe or not.”

I agree that what science asks of us is not belief and that talk in those terms can therefore be misleading, but I don’t think this means that the concept is inapplicable to scientific claims. To believe something (a proposition, a story, etc.) is to be in a condition in which one is compelled to act on its implications, which can be, but doesn’t necessarily have to be, a consequence of holding the thing to be true. In many cases, I think that it’s good to believe (in this sense) the things that science tells us, in addition to knowing them in the way appropriate to scientific claims. Do I myself believe in the theory of evolution? I would say that I do, since I care about it and am willing to incur costs in order to defend it politically. It’s just that “it’s good to believe this” is a non-trivial statement, and one that’s outside of science’s domain.

As a result, it might indeed be best that science avoid referring to belief in its public face. This does not, however, mean that science education should have nothing to do with belief. Education is about the formation of persons, not just the transfer of information, and an important part of the schoolteacher’s job is to encourage students to develop an active interest in whatever subject they teach. This doesn’t mean that schools should try to force all students to believe in the value of science, but they do at least have the responsibility of making sure that the beliefs students end up with are based on correct information.

Coercion and motivated reasoning

In Philosophy, Science on June 24, 2011 at 7:08 pm

The argumentative theory of reasoning could shed light on a scenario that I hinted at in my first post about freedom.  Suppose that you are about to be presented with two options, 1 and 2, and that you have decided to choose option 1.  Person B, not knowing this, attempts to coerce you into choosing that option.  You change your decision to 2, even though it is the less attractive option, in order to assert your independence from B in some way.  The argumentative theory could offer the explanation that reasoning about this situation would lead you to choose 2 if you think it would be easier to justify that choice than it would be to respond to accusations that you were too easily swayed by B.

This decision isn’t necessarily irrational.  It may be that your choosing option 1 after the attempt at coercion really would damage others’ opinions of you, or that it would encourage B to attempt further coercion in the future.  Presumably, the greater the difference in utility between 1 and 2, the less inclined you will be to change your decision, not least because other people will be less likely to accuse you of being manipulated if the choice you made is obviously the better one.
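
To make that trade-off concrete, here is a minimal toy model in Python.  Everything in it is my own invention for illustration: the function names (chosen_option, anticipated_criticism), the coercion_pressure parameter, and all the numbers are assumed stand-ins, not anything drawn from the argumentative theory or its experiments.  The only point is that a justification cost which shrinks as the utility gap grows reproduces both behaviors described above.

    # A toy model of the coercion scenario above. All names and numbers are
    # illustrative assumptions, not anything taken from the literature.

    def anticipated_criticism(utility_gap: float, coercion_pressure: float) -> float:
        """Cost of defending the coerced choice: accusations of having been
        swayed are assumed to shrink as that choice becomes obviously better."""
        return max(0.0, coercion_pressure - utility_gap)

    def chosen_option(u1: float, u2: float, coercion_pressure: float) -> int:
        """Stick with the coerced but preferred option 1, or switch to option 2
        to assert independence, depending on which is cheaper to justify."""
        payoff_1 = u1 - anticipated_criticism(u1 - u2, coercion_pressure)
        payoff_2 = u2  # switching is assumed to carry no justification cost
        return 1 if payoff_1 >= payoff_2 else 2

    # Nearly equal options: the justification cost tips the choice to option 2.
    print(chosen_option(u1=1.0, u2=0.9, coercion_pressure=0.5))  # prints 2
    # A large utility gap makes option 1 easy to defend, so you keep it.
    print(chosen_option(u1=2.0, u2=0.5, coercion_pressure=0.5))  # prints 1

Nothing hangs on the particular functional form; any cost of self-justification that falls off as the better option becomes more obvious will produce the same pattern.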

I don’t have experimental evidence to back up this example, though there may be some that I don’t know about.  If the explanation holds up, it might give us some insight into what it is we desire when we desire to be independent.  It suggests a way of thinking about independence that avoids unclear talk about choice, though not one that would serve particularly well in a definition of freedom.

Argumentative theory of reasoning

In Science on June 22, 2011 at 9:00 am

Last time I promised a more detailed post about Hugo Mercier and Dan Sperber’s argumentative theory of reasoning, which I’ve found quite interesting.  Looking back at the New York Times writeup after reading the paper, I’m amazed (though I shouldn’t be) at how completely wrong they get it.  A lot of the informal commentary I’ve seen (including the article I mentioned in my last blog post) seems to follow the Times in thinking that the paper says there’s something “irrational” about the way humans argue, and that, as the Times puts it, our ability to reason evolved as a “weapon” for use in persuading people rather than as a “path to truth”.  What the paper actually argues is that human reasoning is better suited to collective than individual problem solving, albeit only within a context that involves disagreement:

When one is alone or with people who hold similar views, one’s arguments will not be critically evaluated. This is when the confirmation bias is most likely to lead to poor outcomes. However, when reasoning is used in a more felicitous context – that is, in arguments among people who disagree but have a common interest in the truth – the confirmation bias contributes to an efficient form of division of cognitive labor. (65)

The conclusion is not that debate is some sort of battle in which each side tries to “defeat” the other in order to gain something by altering their beliefs.  If this were so, why would anyone listen to anyone else’s arguments?  The conclusion of the paper is that a vigorous debate is the best way we have of finding the truth.

Mercier wrote a response in the Times, so I won’t beat this dead horse any further.  There is plenty to say about the claims that the paper actually does make.  The authors offer not just an explanation of the confirmation bias (and of the phenomenon of motivated reasoning) but also the suggestion that, by working together in a certain way, we can make the bias a virtue.  One’s mind immediately turns to politics, although figuring out how to encourage this productive sort of debate on political issues is not an easy matter.

As I understand it, the type of debate that would make the confirmation bias a virtue would, in Mercier & Sperber’s dual-process model of reasoning, play out like this.  A group of people would have a common problem that they all have an interest in solving.  Each collaborator would form (through intuitive inference) an opinion about what the solution is, and then have two jobs (and only two) involving conscious reasoning: first, they are to come up with arguments in favor of their opinion, and second, they are to look for ways of falsifying the other opinions put forth.  The people would also listen to the arguments that others make, evaluating them on an intuitive level and changing their opinions if the arguments prove persuasive.  If one of the collaborators becomes convinced that their original opinion was wrong but is not convinced of an alternative, then they might intuitively come up with a new one, which would help in cases where none of the collaborators initially had the right solution.  Assuming a whole truckload of conditions, everyone will eventually come to accept the right answer.
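
For concreteness, here is a small Python sketch of roughly that process.  It is my own toy rendering, not Mercier and Sperber’s model: the refute function stands in for a shared method of falsifying candidate solutions, random guesses stand in for intuitive inference, and the example problem builds in the strong conditions discussed in the next paragraph.

    import random

    def debate(candidates, refute, n_agents=5, max_rounds=100, seed=0):
        """Toy model: agents intuitively pick an answer, defend it, and try to
        refute the answers they disagree with; refuted answers are abandoned in
        favor of fresh intuitive guesses among the still-standing candidates."""
        rng = random.Random(seed)
        opinions = [rng.choice(candidates) for _ in range(n_agents)]
        ruled_out = set()
        for _ in range(max_rounds):
            if len(set(opinions)) == 1:
                # No disagreement left: either consensus on the right answer or
                # a shared view that nobody bothers to criticize (echo chamber).
                return opinions[0]
            # Division of cognitive labor: while there is disagreement, every
            # opinion on the table has at least one dissenter attacking it.
            ruled_out |= {op for op in set(opinions) if refute(op)}
            remaining = [c for c in candidates if c not in ruled_out]
            opinions = [op if op not in ruled_out else rng.choice(remaining)
                        for op in opinions]
        return None

    # Toy problem with a single known solution: which n in 1..20 satisfies n*n == 169?
    print(debate(list(range(1, 21)), refute=lambda n: n * n != 169))
    # Prints 13, unless the random initial guesses all coincide on a wrong answer,
    # in which case the echo-chamber branch returns it unexamined.

The point of the sketch is only that defending one’s own view while attacking rivals’ still converges on the right answer, provided there is genuine disagreement and a shared way of testing claims; remove either condition and the group can settle on a wrong answer or stall.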

As far as I know, the exact conditions under which this sort of debate could work well have not been determined.  In the experimental situations cited in section 3 of the paper, there was a clear definition of the problem that was understood by all participants, it was known that a single solution existed, there was a known method for falsifying prospective solutions, and all participants had an interest in finding the correct answer.  Setting up a debate that meets all these criteria would be near-impossible for most political problems, but we might be able to make the confirmation bias productive under less strict conditions.  A more difficult complication is that, while the participants in the experiments presumably have little interest, apart from vanity, in which particular solution turns out to be correct, the concepts involved in politics are tied into people’s economic and emotional lives in immensely complex ways.  Even if everyone has an interest in reaching the best solution, there may be other factors that motivate participants to act in ways counterproductive to the debate.

These are issues that I was already thinking about before this paper came along, and I’m not yet sure whether an argumentative theory of reasoning, which is primarily a diachronic theory about evolution, adds something significant to a political philosopher’s toolkit if it is correct.  It does, however, contra the New York Times, give us a hopeful way of looking at the confirmation bias.  Perhaps our faculty of reasoning isn’t buggy after all.