A blog about belief

God existing

In Philosophy, Science on July 30, 2011 at 7:39 pm

I’ve been following with some interest the attempts to create common understanding between Christians and atheists at the blog Unequally Yoked.  I’ve never had much interest in the endless online battles between arrogant atheists and hapless and/or arrogant Christians that have been raging since the Usenet days, but the Internet could conceivably be a venue for a more productive discussion.  I’m just not sure that Unequally Yoked is doing any better.  I take exception to her characterization of the “atheist” position in this recent post:

Don’t forget that atheists and Christians don’t just disagree on which way the evidence points, they disagree on what kind of evidence should be counted. Atheists think that proof for the existence of God should look like the proof for the existence of life on the moon, or an invisible dragon in the garage. There should be observable, reproducible evidence that is convincing to pretty much any person with a grip on reality. Christians like Jen think that this kind of proposition is more like the squishy claims of “my mother loves me” or “people’s actions have moral weight” or “the physical world is not an illusion.” These claims are not provable in any formal system and they don’t lend themselves to definite empirical observation.

I call myself an atheist, but this doesn’t mean I’ve rejected the statement “There is a God” in the same way that I would reject the existence of an invisible dragon.  It would be a failure of understanding to suppose that the God of Augustine “exists” in the same sense in which temporal things exist, and we would have to make a major leap to suppose that we can coherently talk about God in terms of observable evidence.  Though I am an empiricist in many ways, I don’t share the dismissive attitude towards this sort of “squishy” concept.  Any standard of truth we use, we use because it has some value to us, because it helps us to achieve or to better formulate our goals.  This is why we use the empirical definition of truth, and likely it’s why many religious people use definitions of truth based on spiritual feeling.  I don’t think that all definitions of truth are equally valuable, but I would never claim that only the empirical one has any value at all, or try to force other people into debate with me in terms of it.

I call myself an atheist because, quite simply, the concept of God does not occupy a central position in my view of the world.  I wouldn’t say that it doesn’t occupy any position in my view of the world, because I do possess a mental representation of the concept and find it useful now and then, in trying to understand the motives of the religious people around me and in reading literature written from a religious perspective.  When I say that I don’t believe in God, all I mean is that I am not inclined to act on the propositions that make up that particular cluster of concepts in my head.   That’s it – my beliefs about evidence and different types of truth are a different matter.

Lot 49

In Literature, Music on July 16, 2011 at 1:24 pm

I guess I should be glad that my favorite author is being forced upon tens of thousands of undergraduates, but I’m worried that a lot of them are getting the wrong impression of Thomas Pynchon from The Crying of Lot 49. I can’t help but feel a fondness for the book, but I’m fond of it mainly insofar as it represents an interesting dead end from which Pynchon had the good sense to retreat.

If he is a postmodernist writer, Pynchon generally stays closer to the early postmodernists, who emphasized the “ethical turn” against systematization in response to the Holocaust, than to their followers who blathered on endlessly about “surfaces.” The Crying of Lot 49 is unique among his novels in that its critique of systems of meaning takes on an air of inevitability. Crying depicts a world in which it is actually impossible to believe in anything without falling into insanity. Absent are fearsome figures like Weissmann (from V. and Gravity’s Rainbow), who are threatening beyond anything in Crying because of the extent to which they’ve successfully actualized their murderous belief systems. Instead we are presented with an Umwelt so overloaded with evidence of underlying meanings that the protagonist must be in constant doubt about whether her beliefs are the right ones. The only way to believe something without being driven mad by this evidence is to withdraw from the world altogether and live in fantasy; Oedipa ends up alienated not because she’s afraid of the violence that has been committed in the name of grand purposes, but because she can’t get over the idea that there might be a grand purpose for her to be alienated from.

Armchair sociologists have mused that young people can no longer understand the appeal of Crying because the paranoid alienation of Oedipa Maas has become their normal way of looking at the world. To the extent that the present-day person does look at the world in an alienated way, it is only because of the constant repetition of statements like this. For all its supposed rejection of Modernism, postmodernism continues to define alienation in terms of a Modernist concept of the self based on authenticity. Because the epistemic standards of “authenticity” have always been unclear, asking someone whether their experience of life is authentic is more likely to start them worrying about whether it “really” is than to get you a truthful answer.

The fact is that modern life never was characterized by alienation for more than a small class of highly-educated Westerners who thought themselves into it. The problem with Crying is that it inhabits the alienated state too fully, and with too little counterweight, to be meaningful to someone who hasn’t already bought into that line of thought. As such, it implicates readers in the paranoia that it seems intended to criticize.

One positive thing Crying does do for me, though, is give me the chance to get a thrill of inclusion whenever I see scrawled on a bathroom wall or cut into a park bench a loop, triangle and trapezoid, thus:

[Image: the muted post horn]

The critical consensus is surely right that if we look for meaning in Pynchon’s works by trying to interpret them, all we get is: looking for meaning in texts is madness. But that’s not all there is to the novels. The purpose of art is not to say something about our world; it’s to become part of our world, and for at least the small group of people who constitute Pynchon’s core audience his work serves much the same purpose that Bob Dylan sets out for himself in “Tombstone Blues”:

Now I wish I could write you a melody so plain
That could hold you, dear lady, from going insane
That could ease you and cool you and cease the pain
Of your useless and pointless knowledge

Rather than looking for meaning in the “text” of Pynchon’s novels, then, why not think about what they do for their fans? It is, I think, quite a lot.

I consider myself a Pynchon fan before a Pynchon scholar. I discovered Pynchon’s work well before I knew about postmodernism or had any thought of studying literature seriously, and, for what it’s worth, though it had an enormous impact on me, the effect of the encounter was not to instill in me an “incredulity towards metanarratives.” I read Pynchon’s greatest novel, Gravity’s Rainbow, over the course of about a month when I was 19, having known nothing about it going in, and the three months that followed were defined by my obsession with what I had just (not always pleasurably) forced my way through. I emerged from this period more assured than I ever had been in my political ideology, and ready to take action from within it in a more structured and positive way than the futile jabs at authority figures that had constituted my political life up to that point. I don’t think my experience was an anomaly, and while I’m not going to claim that Pynchon is not a postmodern writer, I do think that looking at him in those terms ill equips the critic to explain what he did to me, and what I suspect constitutes the appeal of his writing for a lot of his most earnest admirers.

Paranoid

In Philosophy on July 14, 2011 at 11:01 pm

Philip K. Dick wrote that “the paranoid is totally rigid”, but there is an element of flexibility to the systems that paranoid people construct that is critical to those systems’ ability to endure.  The more rigidly universalizing a belief is – “all sheep are white” – the greater the possibility that we will encounter evidence forcing us to revise it.  If we see a sheep that is not white, then the belief that all sheep are white will wither.  The types of beliefs that are central to paranoia, by contrast, are vague in the sense that there is no clear way of falsifying them, and this gives them a flexibility that enables one to adduce more or less any stimulation as positive evidence of their truth.

Things get more complicated, of course, once we start thinking about language.  We do not simply encounter a sheep that is not white; we encounter something that meets our definition of sheep and that is not white.  One might say that we have two options when faced with this evidence – we can either revise our belief that all sheep are white, or else revise our definition of sheep.  If we accept, as I do, Quine’s thesis that there is no “fact of the matter” about the analyticity of sentences, then there is actually no clear difference between these two responses.  Is our belief that all sheep are white a part of our mental definition of “sheep,” or is it information we have gleaned from experience about a concept that is defined by other characteristics?  The answer is that this is not a well-formed question, at least empirically speaking.  Every belief we have about sheep contributes, if only in some small way, to what the word means to us; it is only that some of these beliefs are closer to the surface, where the force of experience can more easily affect them, than others.

Thought of in these terms, paranoia is a state in which the deep mental connections to a proposition at the center of one’s worldview are too few, or too weak, to keep its meaning from being easily altered by new evidence.  The difference between believing that “all sheep are white” and believing that “the toaster is out to get me” is that there is not adequate mental rigging fixing the meaning of “out to get me” in that context, while presumably there is for the word “sheep.”  Because of this lack of fixity, observational evidence no longer serves to improve the paranoid’s beliefs by forcing revision of them.  Instead, it pulls their existing beliefs this way and that.

Always say never

In Philosophy on July 8, 2011 at 2:28 pm

Years ago, I went to a writers’ group in St. Louis, and I commented that the way a story depicted a real-life volunteer organization seemed too much like an advertisement for it. Someone responded, “Let me see if I’ve got this right.  So you’re saying that people should never use the names of real organizations in their writing?” I got annoyed at this because I never said anything like “never.” This has happened to me a number of other times: I’ve made an argument that I meant to apply in a specific context, and to be just one factor among many to consider, and the other person has responded to it as if it were meant as an absolute.

Call this a failure on my part, and I wouldn’t object. But I think my problem is not that I express myself unclearly. I think it’s that I’ve failed to recognize a bias in how people process the statements made in debate. I’m not sure whether this bias is universal or just Midwestern stubbornness (I can’t think of any major instances since I left), but in either case it’s better to try to understand and account for it than to complain. What seems to happen is that a person takes a statement made about a particular case (Pa → Qa), adds a universal quantifier to it (∀x (Px → Qx)), and then, rather than interrogating the logic by which the statement was reached, goes looking for counterexamples.
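
To put the writers’-group anecdote in those terms (my own gloss, using the notation above): let a be the story under discussion, let P mean “names a real organization,” and let Q mean “reads like an advertisement.”

    Speaker asserts:    Pa → Qa          (a claim about this one story)
    Listener hears:     ∀x (Px → Qx)     (a claim about all stories)
    Listener responds:  exhibit some story b with Pb ∧ ¬Qb,
                        and treat the original claim as refuted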

Attack the argument, not the conclusion, philosophers say. But the type of discussion to which this (at this point, only conjectured) bias seems keyed is not a terrible one; it’s just more conducive to inductive than deductive reasoning. If we see the positive statements put forth as hypotheses, rather than conclusions, a debate based on the search for counterexamples to universally-quantified claims could be quite effective at reaching the truth.

What this implies is that the way people tend to debate is not optimized for a situation in which multiple perspectives co-exist. An attempt to respect other points of view by speaking only in contingencies would have quite the opposite effect if the listener winds up universalizing the claims one makes; but that universalization would also allow people to hold those claims to stronger, and more objective, epistemic tests. People can’t be expected to be objective in everything they say, but when a conversation is a debate it can come to a gridlock or worse if the participants deal in nothing but particulars.  The truly productive tension is between the particular and the universal.

The “interplay of chain stimulations”

In Mathematics, Philosophy on July 3, 2011 at 10:00 pm

In Word and Object, W.V.O. Quine talks about the way in which the mind revises its web of beliefs as if this process occurs in an unconscious way:

[…]  Prediction is in effect the conjectural anticipation of further sensory evidence for a foregone conclusion.  When a prediction comes out wrong, what we have is a divergent and troublesome sensory stimulation that tends to inhibit that once foregone conclusion, and so to extinguish the sentence-to-sentence conditionings that led to the prediction.  Thus it is that theories wither when their predictions fail.

In an extreme case, the theory may consist in such firmly conditioned connections between two sentences that it withstands the failure of a prediction or two.  We find ourselves excusing the failure of prediction as a mistake in observation or a result of unexplained interference.  The tail thus comes, in an extremity, to wag the dog.

The sifting of evidence would seem from recent remarks to be a strangely passive affair, apart from the effort to intercept helpful stimuli: we just try to be as sensitively responsive as possible to the ensuing interplay of chain stimulations.  What conscious policy does one follow, then, when not simply passive toward this interanimation of sentences?  Consciously the quest seems to be for the simplest story.  Yet this supposed quality of simplicity is more easily sensed than described.  Perhaps our vaunted sense of simplicity, or of likeliest explanation, is in many cases just a feeling of conviction attaching to the blind resultant of the interplay of chain stimulations in their various strengths.  (§5)

Mercier & Sperber’s argumentative theory of reasoning could offer an answer to this conundrum.  To be sure, Quine is not suggesting an argumentative theory – by “story” he means theory, not argument.  But he is only able to hesitantly claim that the conscious part of cognition has the function of preserving the simplicity of theories.  Even this operation appears to occur at the level of intuition, and what purpose conscious reasoning has left is unclear.  In the argumentative theory of reasoning, the “interplay of chain stimulations” by which contrary evidence tugs at our theoretical ideas would be a part of the intuitive track of cognition.  The function of conscious reasoning would be not to oversee this intuitive process, but to come up with good ways of verbalizing its results.  Conscious reasoning would not, in the normal course of things, involve changing the web of belief at all – instead its purpose would be to look for paths along the web that link the particular beliefs one anticipates having to defend to sentences that others might be willing to take as premises.

The argumentative theory claims to explain the confirmation bias by thus reconceiving the function of conscious reasoning, but Quine suggests (in the second paragraph I quoted) that a confirmation bias of sorts can occur in what I have assigned to the intuitive track of cognition as well.  Sometimes our theoretical ideas have become so ingrained that we “excuse” contrary observations.  As far as I can tell, Mercier & Sperber’s argumentative theory would not explain this sort of confirmation bias.

To the extent that it serves the preference for simplicity, an intuitive confirmation bias is not fundamentally irrational, because at least in certain situations selectively ignoring evidence in the name of simplicity can result in better predictions.  This has proven true experimentally in the field of machine learning.

Suppose that we have a plane on which each point is either red or green.  Given a finite number of observations of the colors of particular points, we wish to come up with a way of predicting the color of any point on the plane.  One way of doing this is to produce a function that divides the plane into two sections, red and green.  If we can draw a straight line that correctly divides all of our observations, red on one side and green on the other, then we have a very simple model that, assuming the set of observations we used to derive it is representative and sufficiently large, is likely to work well.  However, if it is necessary to draw a very complex, squiggly line to correctly account for all of the observations (if we are required to use a learning machine with a high VC dimension), then it is often better to choose a simpler function even if it makes the wrong prediction for a few of our observed cases.  Overfitting can lead to models that deviate from the general pattern in order to account for what might actually be random noise in the observational data.  In the same way, if we attempted to account for every possible bit of contrary evidence in the revision of our mental theories, our ability to make useful predictions with them would be confounded.  We will always encounter deviations from what we expect, and at least some of these will be caused by factors for which we will never gather enough data to model correctly.  In such cases, we are better off allowing our too-simple theories to stand.
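
Here is a minimal sketch of the red/green example (my own illustration, assuming Python with scikit-learn installed; nothing here comes from a source discussed in this post): fit both a straight-line classifier and a model flexible enough to account for every observed point, then compare the two on points held out from training.

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier

    # Two-color points on a plane; flip_y mislabels 15% of them,
    # standing in for the random noise discussed above.
    X, y = make_classification(n_samples=300, n_features=2, n_informative=2,
                               n_redundant=0, flip_y=0.15, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The straight line: a simple, low-capacity boundary.
    line = LogisticRegression().fit(X_train, y_train)

    # The squiggly line: an unpruned decision tree that can carve out a
    # region for every observation (high capacity, in the spirit of a
    # high VC dimension).
    squiggle = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

    for name, model in [("line", line), ("squiggle", squiggle)]:
        print(name, "train:", round(model.score(X_train, y_train), 2),
              "test:", round(model.score(X_test, y_test), 2))

The tree will typically fit the training points almost perfectly and still do worse than the line on the held-out points: the flipped labels it “explains” are noise, and the simpler model predicts better by ignoring them.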

Making judgments

In Philosophy, Politics on July 1, 2011 at 1:18 pm

(This post has its roots in the discussion of a paper presented by Ronni Sadovsky at the 2011 Strategies of Critique conference.)

An attitude can be prejudiced even if the beliefs it is based on are true.  Although it may be the case that blue-collar workers in the U.S. don’t tend to be well-informed about French literature, it would seem prejudiced to assume that the people on the maintenance staff of your building probably don’t know much about Marcel Proust and, for instance, to condescendingly explain to one of them who Proust is without bothering to find out whether they already know.  What makes this action prejudiced is not an epistemic deficiency – you may well be justified in believing that the person you’re talking to is probably unfamiliar with Proust – but an unfairness that crops up at some point on the path from knowledge to action.  Is information about correlations between groups of people and traits simply inadmissible in how we choose our actions?

It isn’t so simple.  To continue with the example, imagine that you approach a group of maintenance workers and ask, without prompting, whether any of them has read À la recherche du temps perdu.  Instead of coming off as charitable, this could, in certain circumstances, give them the impression that you are lording your superior education over them.  Assuming that it is plausible, this serves as an example of a case where it’s actually necessary to make a judgment based on correlations between traits and groups to maintain a respectful attitude – namely, the judgment that the topic of Proust should be broached in a different way among the maintenance staff than it should be among, say, the attendees of a French Literature colloquium.

Why is it acceptable to make a judgment based on group-trait correlations in the second case and not in the first?  One way of answering this would be to claim that in the second case the judgment one has to make is not about blue-collar workers but about French Literature colloquium attendees.  The approach you should take with the maintenance staff is the same one you should take with anyone unless you have a particular reason to think they know about Proust.  The question, then, is why it is not an example of prejudice to assume that French Literature colloquium attendees are probably familiar with Proust.  Two answers immediately come to mind, but I don’t think that either of them is right.  One would be that it is not an example of prejudice because the assumption being made is of a positive valence.  This would imply, though, that assuming that all young black males are knowledgeable about hip-hop is acceptable (because knowing a lot about hip-hop is, generally speaking, a good thing), while it is clearly still an instance of prejudice.  Another answer would be that the difference is in relevance.  One could claim that familiarity with Proust is relevant to the identity of the colloquium group in a way that it is not to the maintenance staff group, and that this is the difference.  But relevance is a slippery concept, and it could lead to question-begging – one could claim that knowledge of hip-hop is relevant to the identity of young black males.  This way of thinking leads us nowhere good.

I think that any satisfactory solution to this problem is going to have to account for the fact that categories of class and race are, in an ethically significant way, different from categories of profession.  I wouldn’t say that the difference is that one can choose one’s profession – choice is another slippery concept – but there is a sense in which one’s race and class are necessarily prior to one’s membership in other categories that is significant, and this is, perhaps, why acting on knowledge about them in certain ways is ethically wrong.

As another data point, take this article from The Onion.