A blog about belief

Posts Tagged ‘evidence’

God existing

In Philosophy, Science on July 30, 2011 at 7:39 pm

I’ve been following with some interest the attempts to create common understanding between Christians and atheists at the blog Unequally Yoked.  I’ve never had much interest in the endless battles between arrogant atheists and hapless and/or arrogant Christians that have been raging online since the Usenet days, but the Internet could conceivably be a venue for a more productive discussion.  I’m just not sure that Unequally Yoked is doing any better.  I take exception to her characterization of the “atheist” position in this recent post:

Don’t forget that atheists and Christians don’t just disagree on which way the evidence points, they disagree on what kind of evidence should be counted. Atheists think that proof for the existence of God should look like the proof for the existence of life on the moon, or an invisible dragon in the garage. There should be observable, reproducible evidence that is convincing to pretty much any person with a grip on reality. Christians like Jen think that this kind of proposition is more like the squishy claims of “my mother loves me” or “people’s actions have moral weight” or “the physical world is not an illusion.” These claims are not provable in any formal system and they don’t lend themselves to definite empirical observation.

I call myself an atheist, but this doesn’t mean I’ve rejected the statement “There is a God” in the same way that I would reject the existence of an invisible dragon.  It would be a failure of understanding to suppose that the God of Augustine “exists” in the same sense as temporal things exist, and we would have to make a major leap to suppose that we can coherently talk about God in terms of observable evidence.  Though I am an empiricist in many ways, I don’t share the dismissive attitude towards this sort of “squishy” concept.  Any standard of truth we use, we use because it has some value to us, because it helps us to achieve or to better formulate our goals.  This is why we use the empirical definition of truth, and likely it’s why many religious people use definitions of truth based on spiritual feeling.  I don’t think that all definitions of truth are equally valuable, but I would never claim that only the empirical one has any value at all, or try to force other people into debate with me in terms of it.

I call myself an atheist because, quite simply, the concept of God does not occupy a central position in my view of the world.  I wouldn’t say that it doesn’t occupy any position in my view of the world, because I do possess a mental representation of the concept and find it useful now and then, in trying to understand the motives of the religious people around me and in reading literature written from a religious perspective.  When I say that I don’t believe in God, all I mean is that I am not inclined to act on the propositions that make up that particular cluster of concepts in my head.   That’s it – my beliefs about evidence and different types of truth are a different matter.


Paranoid

In Philosophy on July 14, 2011 at 11:01 pm

Philip K. Dick wrote that “the paranoid is totally rigid”, but the systems that paranoid people construct have an element of flexibility that is critical to their ability to endure.  The more rigidly universalizing a belief is – “all sheep are white” – the greater the chance that we will encounter evidence that would force us to revise it.  If we see a sheep that is not white, then the belief that all sheep are white will wither.  The types of beliefs that are central to paranoia, by contrast, are vague in the sense that there is no clear way of falsifying them, and this gives them a flexibility that enables one to adduce more or less any stimulation as positive evidence of their truth.

Things get more complicated, of course, once we start thinking about language.  We do not simply encounter a sheep that is not white; we encounter something that meets our definition of sheep that is not white.  One might say that we have two options when faced with the evidence – we can either revise our belief that all sheep are white, or else we can revise our definition of sheep.  If we accept, as I do, Quine’s thesis that there is no “fact of the matter” about the analyticity of sentences, then there is actually no clear difference between these two responses.  Is our belief that all sheep are white a part of our mental definition of “sheep,” or is it information we have gleaned from experience about a concept that is defined by other characteristics?  The answer is that this is not a well-formed question, at least empirically speaking.  Every belief we have about sheep contributes, if only in some small way, to what the word means to us; it is only that some of these beliefs are closer to the surface, where the empirical force can more easily affect them, than others.

Thought of in these terms, paranoia is a state in which the deep mental connections to a proposition at the center of one’s worldview are too few, or too weak, to keep its meaning from being easily altered by new evidence.  The difference between believing that “all sheep are white” and believing that “the toaster is out to get me” is that there is not adequate mental rigging fixing the meaning of “out to get me” in that context, while presumably there is for the word “sheep.”  Because of this lack of fixity, observational evidence no longer serves to improve the paranoid’s beliefs by forcing revision of them.  Instead, it pulls their existing beliefs this way and that.

The “interplay of chain stimulations”

In Mathematics, Philosophy on July 3, 2011 at 10:00 pm

In Word and Object, W.V.O. Quine talks about the way in which the mind revises its web of beliefs as if this process occurs in an unconscious way:

[…]  Prediction is in effect the conjectural anticipation of further sensory evidence for a foregone conclusion.  When a prediction comes out wrong, what we have is a divergent and troublesome sensory stimulation that tends to inhibit that once foregone conclusion, and so to extinguish the sentence-to-sentence conditionings that led to the prediction.  Thus it is that theories wither when their predictions fail.

In an extreme case, the theory may consist in such firmly conditioned connections between two sentences that it withstands the failure of a prediction or two.  We find ourselves excusing the failure of prediction as a mistake in observation or a result of unexplained interference.  The tail thus comes, in an extremity, to wag the dog.

The sifting of evidence would seem from recent remarks to be a strangely passive affair, apart from the effort to intercept helpful stimuli: we just try to be as sensitively responsive as possible to the ensuing interplay of chain stimulations.  What conscious policy does one follow, then, when not simply passive toward this interanimation of sentences?  Consciously the quest seems to be for the simplest story.  Yet this supposed quality of simplicity is more easily sensed than described.  Perhaps our vaunted sense of simplicity, or of likeliest explanation, is in many cases just a feeling of conviction attaching to the blind resultant of the interplay of chain stimulations in their various strengths.  (§5)

Mercier & Sperber’s argumentative theory of reasoning could offer an answer to this conundrum.  To be sure, Quine is not suggesting an argumentative theory – by “story” he means theory, not argument.  But he is only able to hesitantly claim that the conscious part of cognition has the function of preserving the simplicity of theories.  Even this operation appears to occur at the level of intuition, and what purpose conscious reasoning has left is unclear.  In the argumentative theory of reasoning, the “interplay of chain stimulations” by which contrary evidence tugs at our theoretical ideas would be a part of the intuitive track of cognition.  The function of conscious reasoning would be not to oversee this intuitive process, but to come up with good ways of verbalizing its results.  Conscious reasoning would not, in the normal course of things, involve changing the web of belief at all – instead its purpose would be to look for paths along the web that link the particular beliefs one anticipates having to defend to sentences that others might be willing to take as premises.

The argumentative theory claims to explain the confirmation bias by thus reconceiving the function of conscious reasoning, but Quine suggests (in the second paragraph I quoted) that a confirmation bias of sorts can occur in what I have assigned to the intuitive track of cognition as well.  Sometimes our theoretical ideas have become so ingrained that we “excuse” contrary observations.  As far as I can tell, Mercier & Sperber’s argumentative theory would not explain this sort of confirmation bias.

To the extent that it serves the preference for simplicity, an intuitive confirmation bias is not fundamentally irrational, because at least in certain situations selectively ignoring evidence in the name of simplicity can result in better predictions.  This has proven true experimentally in the field of machine learning.

Suppose that we have a plane on which each point is either red or green.  Given a finite number of observations about the colors of particular points, we wish to come up with a way of predicting the color of any point on the plane.  One way of doing this is to produce a function that divides the plane into two sections, red and green.  If we can draw a straight line that correctly divides all of our observations, red on one side and green on the other, then we have a very simple model that, assuming that the set of observations we used to derive it is representative and sufficiently large, is likely to work well.  However, if it is necessary to draw a very complex, squiggly line to correctly account for all of the observations (if we are required to use a learning machine with a high VC dimension), then it is often better to choose a simpler function even if it makes the wrong prediction for a few of our observed cases.  Overfitting can lead to the creation of models that deviate from the general pattern in order to account for what might actually be random noise in the observational data.  In the same way, if we attempted to account for every possible bit of contrary evidence in the revision of our mental theories, our ability to make useful predictions with them would be confounded.  We will always encounter deviations from what we expect, and at least some of these will be caused by factors we will never have enough data to model correctly.  In such cases, we are better off allowing our too-simple theories to stand.
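Here is a minimal sketch of the red-and-green-plane example in Python, my own illustration rather than anything from the literature mentioned above, using scikit-learn’s support vector classifiers.  A linear kernel stands in for the straight line and an RBF kernel with a large gamma for the squiggly one; the data, the 10% label noise, and all parameter values are assumptions chosen just for the demonstration.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# 200 noisy training observations and 1,000 clean test points on the plane.
X_train = rng.uniform(-1, 1, size=(200, 2))
X_test = rng.uniform(-1, 1, size=(1000, 2))

def true_color(X):
    # The underlying pattern: a straight line separates red (0) from green (1).
    return (X[:, 0] + X[:, 1] > 0).astype(int)

y_train = true_color(X_train)
y_test = true_color(X_test)

# Flip 10% of the training labels: noise that no model should try to account for.
flip = rng.random(len(y_train)) < 0.10
y_train[flip] = 1 - y_train[flip]

# Simple model: a straight-line decision boundary.  It misclassifies some
# training points (mostly the flipped ones), and that is fine.
simple = SVC(kernel="linear", C=1.0).fit(X_train, y_train)

# High-capacity model: an RBF kernel with a large gamma can draw a very
# squiggly boundary that accounts for nearly every training point, noise included.
squiggly = SVC(kernel="rbf", gamma=200.0, C=100.0).fit(X_train, y_train)

for name, model in [("straight line", simple), ("squiggly line", squiggly)]:
    print(f"{name}: train accuracy {model.score(X_train, y_train):.2f}, "
          f"test accuracy {model.score(X_test, y_test):.2f}")

Running this typically shows the squiggly model fitting the noisy training labels almost perfectly while predicting fresh points worse than the straight line does: the too-simple theory left standing is the better forecaster.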