Posts Tagged ‘modeling’

The Ontic Web

In Computer Science, Culture on August 7, 2012 at 8:04 pm

Recently I’ve been reading about RDF, which is an attempt by the World Wide Web Consortium to create a standard way of representing information about “resources,” which is the word that they use for things.  I’m no fan of XML—a relative of RDF that provides a way to store every type of information in the same horrible HTML-like syntax—and RDF certainly shares its tendency to complicate people’s jobs.  But although the broadness of RDF’s goals all but guarantees its unwieldiness, I’m beginning to think that there is a need for a computer-processable way of writing “ontologies” beyond the interoperability concerns that motivated the RDF project.
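To give a feel for what this looks like in practice: RDF reduces everything to statements of the form subject-predicate-object, called triples.  Here is a minimal sketch using Python’s rdflib library (assuming it is installed); the example.org namespace and the names in it are placeholders I have made up, not anything standard:

# A minimal sketch of RDF triples using Python's rdflib library.
# The example.org namespace and the names used are made up.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/")  # hypothetical vocabulary

g = Graph()
g.bind("ex", EX)

# Each statement is a (subject, predicate, object) triple about a "resource".
g.add((EX.alice, RDF.type, EX.Person))
g.add((EX.alice, EX.name, Literal("Alice")))

print(g.serialize(format="turtle"))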

It’s almost too easy to do the postmodernist critique of totalizing schemes with systems like RDF.  The example used in the primer for the OWL 2 Web Ontology Language, a commonly-used extension to RDF, is a system for describing family relationships.  Using OWL’s vocabulary for talking about the types of relationships that can hold between things, they define what it is to be a parent, a sibling, and so forth, in statements like this:

EquivalentClasses( :Person :Human )

The authors claim that they do not

intend this example to be representative of the sorts of domains OWL should be used for, or as a canonical example of good modeling with OWL, or a correct representation of the rather complex, shifting, and culturally dependent domain of families. Instead, we intend it to be a rather simple exhibition of various features of OWL.

Sure enough, we get to the zinger a few sections in.

Frequently, the information that two individuals are interconnected by a certain property allows to draw further conclusions about the individuals themselves. In particular, one might infer class memberships.  For instance, the statement that B is the wife of A obviously implies that B is a woman while A is a man.

Even when they’re only used as examples, categorization schemes tend to turn into power plays.  Think how a person who just married her girlfriend would feel reading that.

But information modeling isn’t all retrograde.  There’s an admirable example in Sam Hughes’s very funny essay about how database engineers will have to adapt to gay marriage.  And there is more to RDF than what I would Heideggerianly call ontics—the description of categories and subcategories of things.

One type of program that people have developed for RDF is the inference engine, which attempts to mimic human reasoning by drawing conclusions from the knowledge represented in files.  Whether or not they will lead to a serious AI, people have put these tools to use straightaway for a quite different purpose, that of checking the consistency of their work while putting ontologies together.  This is a different application of the technology from that of defining standard vocabularies to enable different software systems to work together, which is where RDF has found the most adoption (and which is admittedly very important).  It has less to do with the finished product (the ontology file) than with what we learn in the process of writing it, and with the input that the computer is able to give to the writer as revision proceeds.
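To make that concrete, here is a toy sketch, in Python, of the kind of inference the primer describes: a forward-chaining pass that derives class memberships from a property assertion, plus a consistency check against declared disjoint classes.  The hasWife rule and the Man/Woman vocabulary echo the primer’s example, but everything else is my own simplification; real reasoners like HermiT or Pellet are far more general.

# Toy forward-chaining inference over (subject, predicate, object) triples.
# A simplified illustration, not how a real OWL reasoner is built.

triples = {
    ("A", "hasWife", "B"),
}

# Rules echoing the primer's example: hasWife has domain Man, range Woman.
DOMAIN = {"hasWife": "Man"}
RANGE = {"hasWife": "Woman"}

# Classes declared mutually exclusive (in OWL, DisjointClasses).
DISJOINT = {frozenset({"Man", "Woman"})}

def infer(triples):
    """Apply domain/range rules until no new triples appear."""
    derived = set(triples)
    changed = True
    while changed:
        changed = False
        for s, p, o in list(derived):
            new = set()
            if p in DOMAIN:
                new.add((s, "type", DOMAIN[p]))
            if p in RANGE:
                new.add((o, "type", RANGE[p]))
            if not new <= derived:
                derived |= new
                changed = True
    return derived

def consistent(triples):
    """Flag any individual asserted to belong to two disjoint classes."""
    classes = {}
    for s, p, o in triples:
        if p == "type":
            classes.setdefault(s, set()).add(o)
    return all(
        not any(pair <= members for pair in DISJOINT)
        for members in classes.values()
    )

closure = infer(triples)
print(closure)               # B is typed Woman, A is typed Man
print(consistent(closure))   # True; add ("B", "type", "Man") and it flips

Run on the single statement that A hasWife B, this derives that A is a Man and B is a Woman; add a contradicting type assertion and the consistency check fails, which is exactly the sort of feedback that matters while an ontology is still being drafted.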

Debunking vs explaining

In Art, Philosophy, Science on October 23, 2011 at 5:20 pm

I’m often surprised how many smart people I encounter who think of science as some kind of beauty-destroying machine.  This is a particularly common reaction to the attempts of neuroscientists and cognitive psychologists to understand how people relate to art.  I appreciate the need for different ways of talking about things, particularly when it comes to things as slippery as aesthetic responses, but there’s an anti-intellectualism I can’t abide in the idea that coming up with an explanation for something diminishes it.  There is no reason why our more subjective ways of thinking can’t stand alongside scientific theories, although perhaps science might prod us to adjust them a bit so that they’re less likely to lead us to bad decisions.  If science finds a way to reduce spiritual feelings to the workings of our brain, that doesn’t invalidate them or drain them of their power.  If anything, it validates them by providing evidence for their consistency with the way of looking at the world that’s proven most practically successful for us.

In that last statement I have tacitly given science a privileged position over other forms of knowledge.  I will not recant on this point, but I stop short of claiming that spirituality needs the validation of science.  That, I admit, gets us into worrisome territory.  I do suspect that some people who claim they’ve had a spiritual experience are kidding themselves, but there is a major ethical problem in calling someone else’s claim to have had such an experience bullshit, even if we’ve got MRI scans to back us up.  If someone says they’re experiencing a vision and our cognitive models of spirituality say otherwise, that means that either their claims are false or our models are flawed, and there’s no clear way to decide this.  The standards of truth proper to science and those proper to spirituality need not overlap, and an attempt to use a scientific model to debunk something that was developed to sufficiently different ends can be backed up by nothing other than power.

That is, I suppose, the standard postmodern criticism of science.  But using theoretical models in the way I’ve described is not doing science.  Despite the popular image to the contrary, the driving purpose of science is not to debunk.  It is to explain.  Debunking only comes into play when the ideas in question contradict the best scientific models with respect to predictions that can be objectively tested; in that case and only in that case is it in science’s province to investigate who’s right.  Homeopathic medicine is something that’s ripe for refutation, and that’s good, because it causes objective harm.  Sudden irruptions of spiritual knowledge are generally not something that science could coherently debunk, and that’s great, because that sort of experience has resulted in some of the most astounding poetry humans have produced.

It’s because of this that I don’t think scientific investigation is anathema to art, even if we direct it towards art – even, further, if we direct it towards our apprehension of those spiritual sorts of truths that science can’t make heads or tails of.  If we approach the phenomenon of spiritual knowledge empirically, which I think we might as well, we cannot rightly treat that spiritual knowledge as a rival theory to science; instead, we must treat it as something to be explained.  The fact that it has to do with truth of another sort doesn’t mean that science can’t deal with it in that way, and the fact that science might be able to fully explain it doesn’t mean that it can’t still serve as an explanation in its own right.

The “interplay of chain stimulations”

In Mathematics, Philosophy on July 3, 2011 at 10:00 pm

In Word and Object, W.V.O. Quine talks about the way in which the mind revises its web of beliefs as if this process occurred unconsciously:

[…]  Prediction is in effect the conjectural anticipation of further sensory evidence for a foregone conclusion.  When a prediction comes out wrong, what we have is a divergent and troublesome sensory stimulation that tends to inhibit that once foregone conclusion, and so to extinguish the sentence-to-sentence conditionings that led to the prediction.  Thus it is that theories wither when their predictions fail.

In an extreme case, the theory may consist in such firmly conditioned connections between two sentences that it withstands the failure of a prediction or two.  We find ourselves excusing the failure of prediction as a mistake in observation or a result of unexplained interference.  The tail thus comes, in an extremity, to wag the dog.

The sifting of evidence would seem from recent remarks to be a strangely passive affair, apart from the effort to intercept helpful stimuli: we just try to be as sensitively responsive as possible to the ensuing interplay of chain stimulations.  What conscious policy does one follow, then, when not simply passive toward this interanimation of sentences?  Consciously the quest seems to be for the simplest story.  Yet this supposed quality of simplicity is more easily sensed than described.  Perhaps our vaunted sense of simplicity, or of likeliest explanation, is in many cases just a feeling of conviction attaching to the blind resultant of the interplay of chain stimulations in their various strengths.  (§5)

Mercier & Sperber’s argumentative theory of reasoning could offer an answer to this conundrum.  To be sure, Quine is not suggesting an argumentative theory – by “story” he means theory, not argument.  But he is only able to hesitantly claim that the conscious part of cognition has the function of preserving the simplicity of theories.  Even this operation appears to occur at the level of intuition, and what purpose conscious reasoning has left is unclear.  In the argumentative theory of reasoning, the “interplay of chain stimulations” by which contrary evidence tugs at our theoretical ideas would be a part of the intuitive track of cognition.  The function of conscious reasoning would be not to oversee this intuitive process, but to come up with good ways of verbalizing its results.  Conscious reasoning would not, in the normal course of things, involve changing the web of belief at all – instead its purpose would be to look for paths along the web that link the particular beliefs one anticipates having to defend to sentences that others might be willing to take as premises.

The argumentative theory claims to explain the confirmation bias by thus reconceiving the function of conscious reasoning, but Quine suggests (in the second paragraph I quoted) that a confirmation bias of sorts can occur in what I have assigned to the intuitive track of cognition as well.  Sometimes our theoretical ideas have become so ingrained that we “excuse” contrary observations.  As far as I can tell, Mercier & Sperber’s argumentative theory would not explain this sort of confirmation bias.

To the extent that it serves the preference for simplicity, an intuitive confirmation bias is not fundamentally irrational, because at least in certain situations selectively ignoring evidence in the name of simplicity can result in better predictions.  This has proven true experimentally in the field of machine learning.  Suppose that we have a plane on which each point is either red or green.  Given a finite number of observations about the colors of particular points, we wish to come up with a way of predicting the color of any point on the plane.  One way of doing this is to produce a function that divides the plane into two sections, red and green.  If we can draw a straight line that correctly divides all of our observations, red on one side and green on the other, then we have a very simple model that, assuming that the set of observations we used to derive it is representative and sufficiently large, is likely to work well.  However, if it is necessary to draw a very complex, squiggly line to correctly account for all of the observations (if we are required to use a learning machine with a high VC dimension), then it is often better to choose a simpler function even if it makes the wrong prediction for a few of our observed cases.  Overfitting can lead to the creation of models that deviate from the general pattern in order to account for what might actually be random noise in the observational data.  In the same way, if we attempted to account for every possible bit of contrary evidence in the revision of our mental theories, our ability to make useful predictions with them would be confounded.  We will always encounter deviations from what we expect, and at least some of these will be caused by factors that we will never have enough data to model correctly.  In such cases, we are better off allowing our too-simple theories to stand.
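For what it’s worth, the red-and-green experiment is easy to approximate with scikit-learn (assuming it is available).  The data set and the two models below are stand-ins I have chosen for illustration, not anything canonical: a straight-line classifier versus an unconstrained decision tree on noisily labeled points.

# A rough sketch of the simplicity-vs-overfitting point using scikit-learn
# (assumed installed).  Points on a plane get one of two labels, with some
# noise standing in for factors we never have enough data to model.
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Noisy two-class ("red"/"green") data on the plane.
X, y = make_moons(n_samples=300, noise=0.35, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "straight line": a simple linear decision boundary.
line = LogisticRegression().fit(X_train, y_train)

# The "squiggly line": an unconstrained tree that can carve the plane
# finely enough to account for every observed point.
squiggle = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

for name, model in [("line", line), ("squiggle", squiggle)]:
    print(name,
          "train:", round(model.score(X_train, y_train), 2),
          "test:", round(model.score(X_test, y_test), 2))

The exact scores vary with the random seed, but the typical pattern is the one described above: the squiggly model accounts perfectly for the observations it was trained on and predicts new points worse than the straight line does.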