In Culture on June 19, 2012 at 1:18 pm
I just discovered (a few months late) the mesmerizing Tumblr “What Should We Call Me.” Each post on the blog pairs a generalized description of a situation with an animated GIF, most often showing a facial expression or bodily motion. This almost perfectly exhibits something that Lisa Zunshine talks about in her paper “Theory of Mind and Fictions of Embodied Transparency.” “Embodied transparency,” as Zunshine defines it, is a class of fictional tropes in which a character involuntarily reacts in a way that reveals their emotions to others, whether through a facial expression or through a bodily movement. The GIFs on “What Should We Call Me” are the converse of this type of trope: conventionalized representations that follow in its wake once people realize that the reactions, once assumed to be involuntary, can be replicated (in this case literally) and subverted.
Zunshine writes that there is an “arms race” between artists who wish to convince the audience that reactions are genuinely involuntary and others who wish to turn them into tropes that can be performed. This causes mental states to appear to “retreat” from the possibility of transparent expression, as more and more tropes are proven subject to deceptive performance (78). But I wonder if there isn’t a sense in which the conventionalization of emotional expression can bring people closer to other people’s minds. I’ve long been a defender of artfulness over ideas of authenticity that exclude it, and conventional bodily reactions—or GIFs that get passed around in lieu of them—provide a way of expressing emotions whose intentionality can be clearly seen by all. They don’t transparently convey emotions, to be sure, but another level of expression can take place, one that appeals to the audience’s ability to understand the underlying intention: an understanding that is not itself communicated through the act, but that is a precondition for all acts of communication.
In Computer Science, Literature, Mathematics on June 18, 2012 at 7:37 pm
I’m working on a software project (more soon) that involves a notation that is interpreted by computers. As a way of specifying the language formally, I’m trying out parsing expression grammars, a relatively new alternative to the methods that have traditionally been used to define the syntax of programming languages, such as context-free grammars. I’ve been reading the original paper in which Bryan Ford introduces PEGs, and something struck me about the way in which it builds up to the mathematical definition of the idea. The paper begins with an “informal” explanation that starts with an example of a PEG written in ASCII text, such as you would use as the input to a program:
```
# Hierarchical syntax
Grammar    <- Spacing Definition+ EndOfFile
Definition <- Identifier LEFTARROW Expression
Expression <- Sequence (SLASH Sequence)*
Sequence   <- Prefix*
Prefix     <- (AND / NOT)? Suffix
Suffix     <- Primary (QUESTION / STAR / PLUS)?
Primary    <- Identifier !LEFTARROW
            / OPEN Expression CLOSE
            / Literal / Class / DOT
```
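To give a sense of how such a notation is interpreted (a hypothetical sketch of my own, not code from Ford’s paper), each PEG rule can be read directly as a parsing function. The two features that distinguish PEGs from context-free grammars show up plainly: `/` is *prioritized* choice, committing to the first alternative that succeeds, and `*` matches greedily without backtracking:

```python
# Minimal PEG-style parser combinators (illustrative sketch).
# Each parser takes (text, pos) and returns the new position, or None on failure.

def literal(s):
    def parse(text, pos):
        return pos + len(s) if text.startswith(s, pos) else None
    return parse

def choice(*parsers):
    # PEG "/" is prioritized choice: try alternatives in order,
    # commit to the first one that succeeds.
    def parse(text, pos):
        for p in parsers:
            result = p(text, pos)
            if result is not None:
                return result
        return None
    return parse

def sequence(*parsers):
    def parse(text, pos):
        for p in parsers:
            pos = p(text, pos)
            if pos is None:
                return None
        return pos
    return parse

def star(parser):
    # PEG "*" consumes greedily and never gives input back.
    def parse(text, pos):
        while True:
            result = parser(text, pos)
            if result is None:
                return pos
            pos = result
    return parse

# A toy rule, AB <- 'a'* 'b':
ab = sequence(star(literal("a")), literal("b"))
print(ab("aaab", 0))  # 4: the whole input matched
print(ab("aaa", 0))   # None: 'a'* consumed everything, leaving no 'b'
```

The second call is the interesting one: a regular expression `a*b` would backtrack and still fail here, but for the PEG operator the failure follows from greediness alone, with no backtracking ever attempted.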
Although the paper explains what this grammar means in a very prosaic way, placing it in historical context and comparing PEGs’ practical implications with those of other types of grammar, this bit of ASCII text intuitively seems like the most formal thing in the paper. The mathematical definition of the construct is set off from the text of the article much less than the ASCII example, which appears in a fixed-width font and is embedded as “Figure 1.” The definition begins:
Definition: A parsing expression grammar (PEG) is a 4-tuple G=(VN, VT, R, eS), where VN is a finite set of nonterminal symbols, VT is a finite set of terminal symbols, R is a finite set of rules, eS is a parsing expression termed the start expression, and VN ∩ VT = ∅. Each rule r ∈ R is a pair (A, e), which we write A ← e, where A ∈ VN and e is a parsing expression. For any nonterminal A, there is exactly one e such that A ← e ∈ R. R is therefore a function from nonterminals to expressions, and we write R(A) to denote the unique expression e such that A ← e ∈ R.
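The 4-tuple itself transcribes into code quite directly (a hypothetical sketch; the symbol names and the tuple encoding of expressions are my own). The definition’s closing observation, that R “is therefore a function from nonterminals to expressions,” becomes literal: since each nonterminal has exactly one rule, R can simply be a dictionary:

```python
# Hypothetical transcription of G = (VN, VT, R, eS) for a toy grammar.

VN = {"Sum", "Digit"}          # finite set of nonterminal symbols
VT = {"0", "1", "+"}           # finite set of terminal symbols
assert VN.isdisjoint(VT)       # the definition requires VN ∩ VT = ∅

# Parsing expressions encoded as nested tuples; each rule A <- e
# is a key-value pair, so R is literally a function on nonterminals.
R = {
    "Sum":   ("choice", ("seq", "Digit", "+", "Sum"), "Digit"),
    "Digit": ("choice", "0", "1"),
}
eS = "Sum"                     # the start expression

def rule(A):
    # R(A): the unique expression e such that A <- e ∈ R
    return R[A]

print(rule("Digit"))  # ('choice', '0', '1')
```

The “exactly one e” clause is what makes the dictionary encoding faithful; in a context-free grammar, where a nonterminal may have several productions, R would have to map each nonterminal to a *set* of expressions instead.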
One reason why the informal explanation is informal in comparison with this is that it describes the syntax of PEGs using a PEG, making it a circular definition. But two things jump out at me.
- The mathematical notations in the formal definition are interpolated into a paragraph of written English, while the informal definition describes the syntax of the system in a way that a computer could understand.
- It would be much harder to see what the formal definition is doing without reading the informal one first. If the paper had started talking about 4-tuples right off the bat, it would be unclear in what sense the objects it defines could be considered “rules” and “parsing expressions.” There is something, a sort of mathematical anamnesis, that the reader takes away from the circular definition at the beginning of the paper that makes it possible to see the meaning of the more rigorous math that follows, in a sense of the word “meaning” that is not yet clear to me.