Sunday, November 11, 2012

I before E: Lewis, Take 2


Once upon a time, it was suggested that a theory of meaning might be an algorithm that connects syntactic structures with mental representations. Niceties aside, Katz & Fodor and then Katz & Postal (KFP) took the representations to be expressions of a mental language: meanings are expressions of Markerese; and a good theory of meaning for language L connects the syntactic structures of L with their meanings (a.k.a. Markerese translations). I'll return to some reasons for not adopting this view. But first, I want to express puzzlement about why David Lewis' response was widely embraced.

In "General Semantics," Lewis famously said that
we can know the Markerese translation of an English sentence without knowing the first thing about the meaning of the English sentence; namely, the conditions under which it would be true. Semantics with no truth conditions is no semantics.
But is the first thing about a sentence's meaning "the conditions under which it would be true"? If 'first' means 'most theoretically important aspect of', then the premise is that KFP were just flat wrong about what is most important. That makes for a simple argument. But what justifies this premise, given that it isn't even obvious that English sentences have truth conditions? Consider (1).
(1) The sky is blue.
Is it obvious that the first thing about the meaning of (1) is the conditions under which it would be true? What are these conditions? Is it crazy to suggest that instead of worrying about what skies are, and what it is (or would be) for them to be blue, we should start by asking how (1) gets mapped onto a mental representation, which may or may not turn out to be isomorphic with (1)?

There follows the authoritative remark that "Semantics with no truth conditions is no semantics." But this remark is truistic or question-begging, depending on what is meant by 'semantics'. For Lewis (and Chomsky), it seems to be truistic: a semantics is the sort of thing that Tarski provided for the first-order predicate calculus. I'll happily adopt this classical construal of 'semantics'. But then it's an empirical question whether each Human Language (e.g., my version of English) has a semantics, especially if such languages are I-languages that generate sentences like (2).
(2)  The second sentence is false.
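
To fix ideas: a semantics in Tarski's sense is a recursive satisfaction definition. Below is a minimal sketch for a toy first-order fragment, written in Haskell; the datatypes, the fragment, and the way models are coded are illustrative assumptions, not anything in Lewis or Tarski.

    import Data.Maybe (fromJust)

    type Entity = Int

    data Term    = Var String | Const String
    data Formula = Pred String [Term]
                 | Not Formula
                 | And Formula Formula
                 | ForAll String Formula

    -- A model: a domain plus interpretations for constants and predicates.
    data Model = Model { domain :: [Entity]
                       , consts :: [(String, Entity)]
                       , preds  :: [(String, [[Entity]])] }

    type Assignment = [(String, Entity)]

    evalTerm :: Model -> Assignment -> Term -> Entity
    evalTerm _ g (Var x)   = fromJust (lookup x g)
    evalTerm m _ (Const c) = fromJust (lookup c (consts m))

    -- Tarski-style satisfaction: one clause per way of building a formula.
    satisfies :: Model -> Assignment -> Formula -> Bool
    satisfies m g (Pred p ts)  = map (evalTerm m g) ts `elem` fromJust (lookup p (preds m))
    satisfies m g (Not f)      = not (satisfies m g f)
    satisfies m g (And f1 f2)  = satisfies m g f1 && satisfies m g f2
    satisfies m g (ForAll x f) = all (\d -> satisfies m ((x, d) : g) f) (domain m)

The trouble that (2) gestures at is familiar: no such definition can consistently be extended to a language that contains its own truth predicate. So if I-languages generate liar-like sentences, it is a substantive question whether they have a semantics in this sense.
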
Alternatively, one can say that a language has a semantics if its expressions are meaningful in some systematic way. But then one can’t insist that a semantics be truth-conditional. And after Dummett's discussion in “What is a Theory of Meaning?”, one might wonder what role truth conditions play in theories of meaning, especially if such theories are (or should be) offered as theories of understanding. Faced with a theory that merely specifies truth/reference/satisfaction conditions, perhaps relative to worlds, one might respond as follows:
we can know the conditions under which an English sentence would be true without knowing the first thing about its meaning; namely, how the sentence is understood. Meaning without understanding is not meaning.

One can dig in and insist that we need a semantics (in Lewis' sense) for English—or at least for the Markerese into which English gets translated—because a real theory of understanding will connect words to the things we use words to talk about. I have doubts about this. Consider science fiction, mathematics, and sentences like (1). But if you dig in here, it's important that you not trade in Lewis’ worlds (which he posited as the domain for an honest-to-god semantics for English) for abstract model-elements that ordinary speakers don't actually talk about.

Correlatively, while it can be illuminating to study "logical" vocabulary by combining model-theoretic techniques with a Montague-style algorithm that connects "surface structures" with sentences of Church's Lambda Calculus, one wants to know how "truth-in-a-model" is related to truth, and how the models are related to the domains for classical semantic theories. Ernie Lepore pressed this kind of point in “What Model-Theoretic Semantics Cannot Do”; see also John Etchemendy’s work. Tweaking the Lewis quote above, one might say that
we can know the True-in-M conditions of an English sentence without knowing the first thing about the meaning of the English sentence; namely, the conditions under which it would be true. Semantics with no truth conditions is no semantics.
Let me stress that I don't endorse any such quick rejection of model-theoretic semantics. In my view, the "truth first" business is undermotivated. One can describe possible languages that have a classical semantics, and ask if any of them are "ours" in any interesting non-stipulated sense. But I thought it was a "bold conjecture" that each Human Language has a classical semantics, and that the conjecture was one (bold) hypothesis about the source of a far more obvious fact: expressions of a Human Language have meanings that are systematically related.  
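
To make the True-in-M point concrete: a Montague-style fragment computes a truth value relative to whatever model one stipulates, and nothing in the algorithm itself settles which model, if any, yields truth. Here is a minimal sketch, with the two toy models and the crude rendering of (1) as illustrative assumptions.

    type E = Int                       -- entities, arbitrarily coded

    data Mod = Mod { dom :: [E], sky :: E -> Bool, blue :: E -> Bool }

    -- 'some' as a generalized quantifier, relativized to a model.
    someGQ :: Mod -> (E -> Bool) -> (E -> Bool) -> Bool
    someGQ m p q = any (\x -> p x && q x) (dom m)

    -- Reading (1), crudely, as "some sky is blue", in two stipulated models:
    m1, m2 :: Mod
    m1 = Mod [1, 2] (== 1) (== 1)      -- entity 1 is a sky and is blue
    m2 = Mod [1, 2] (== 1) (== 2)      -- nothing here is both a sky and blue

    trueInM1, trueInM2 :: Bool
    trueInM1 = someGQ m1 (sky m1) (blue m1)   -- True
    trueInM2 = someGQ m2 (sky m2) (blue m2)   -- False

The algorithm delivers True-in-m1 and False-in-m2 with equal facility; which value, if either, is the truth value of (1) is a further question.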

I see no reason to rule out KFP-style (or Montague-style) theories of meaning just because such theories don't count as officially semantical. And I don't see the point of achieving official semanticalness if that achievement amounts to stipulating that biconditionals like (3) are true,
(3) ‘The sky is blue' is true iff the sky is blue.
even if we can't independently specify the alleged truth conditions for sentences like (1). If pairing (1) with a Markerese translation M isn't good enough, absent an independent specification of the conditions under which M would be true, how can it be enough to pair (1) with itself? And if it's enough to algorithmically pair (1) with a specification of a truth condition that uses (as opposed to merely mentioning) the words 'sky' and 'blue', why isn't it enough to do the same sort of thing for M? Can we just stipulate that (1), but not M, is understood well enough to use its parts in a theoretical metalanguage intended to let us specify truth conditions?

This isn't to say that KFP were right. I don’t think that Human (I-)Languages pair articulations with intrinsically meaningless syntactic structures that are in turn paired with meanings. (Set aside the weirdness of talking about translation here.) If a syntactic structure S can be paired with more than one meaning, as KFP suggested, that raises questions about the sources of constraints on ambiguity. One also has to ask why biology bothers with Chomsky-style syntactic structures, instead of pairing articulations directly with expressions of Markerese, perhaps subject to a few word-order conventions. My own bet is that Human I-Languages generate expressions that have their own meanings, qua "instructions" for how to assemble mental representations. But I see this as a friendly adjustment of KFP's basic picture. 

In any case, my aim is not to rebury KFP’s proposal, but to note that the funeral was premature. There's a possible world in which someone quickly responds to Lewis by saying that (i) Human Languages are generative procedures that connect articulations with mental representations of a certain sort, subject to constraints of Universal Grammar, (ii) the relations between meaning and truth are far more complicated than suggested by the Davidson-Montague-Lewis conjecture, and so a Human Language does not have a classical semantics, but (iii) there is much to be said about how meanings are systematically related, and how meaning is related to understanding. Maybe that world is even nearby; see Chomsky's 1977 collection, Essays on Form and Interpretation.

5 comments:

  1. My conjecture is that Montague/Lewis won because their approach was easier to use for constructing systems that did produce some entailments and not others. Plus all that interesting stuff about upward and downward entailing quantifiers, etc. Barbara Partee says in her 'memoirs of a formal semanticist' that she was completely turned off by the massive ineptitude of the early Katz-Fodor proposals, and although it got better later, the account of entailment etc. was never very good, and mixed up with notions such as analyticity that were never defined in a manner that was intelligible to many people (including me).

    Jackendoff made a few attempts to define inference rules over his representations, but they never came to very much either (the last one was proposed in 1983, iirc); more people must have tried, but not much of a concrete empirical nature seems to have come of it, as far as defining entailment etc. over full utterances is concerned.

    For lexical semantics it appears to be the opposite: the representationalists have done a lot, the model theorists virtually nothing; 'meaning postulates' were presented as an alternative to lexical decomposition, but they seem to have functioned as little more than an excuse not to think about lexical semantics.

  2. Setting aside the issue of what counts as winning, and what counts as easy, I suspect that you're basically right about part of the sociology. Katz-Fodor, Davidson, and Katz-Postal didn't say anything substantial about overt quantifiers, or about how one might get beyond Tarskian treatments of first-order quantification; whereas that was, of course, a point of focus for Montague. Though to be honest, I've never really understood what model-theoretic treatments of 'every/some/no/most' actually *explain*--or what gets explained if one then invokes type-shifting to deal with 'brown cow' and 'sneezed loudly'. More generally, I've never really understood what gets explained by providing an algorithm (Davidsonian, Montagovian, or other) that pairs sentences of a natural language with sentences of a metalanguage that isn't posited as something like a language of thought. I know that mapping "surface structures" to expressions of the lambda calculus (or the right sides of Tarski-style T-sentences) is supposed to be way better than mapping to expressions of Markerese. But I think this is more like a premise than a conclusion. The big issue, I suspect, is whether theories of meaning are fundamentally theories of understanding (as Dummett suggested) or theories of how expressions are related to the things we use expressions to talk about.
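
    For concreteness, the treatments in question come to something like the following sketch (in Haskell, with the finite domain and the lexicon stipulated for illustration):

    -- Determiner denotations coded as relations between sets (here, as
    -- characteristic functions over a small stipulated domain).
    domain :: [String]
    domain = ["c1", "c2", "c3"]

    every, some, no, most :: (String -> Bool) -> (String -> Bool) -> Bool
    every p q = all (\x -> not (p x) || q x) domain
    some  p q = any (\x -> p x && q x) domain
    no    p q = not (some p q)
    most  p q = count (\x -> p x && q x) > count (\x -> p x && not (q x))
      where count f = length (filter f domain)

    -- One type-shift for 'brown cow': intersect the two predicates
    -- rather than applying one to the other.
    brownCow :: String -> Bool
    brownCow x = brown x && cow x
      where brown = (`elem` ["c1", "c2"])
            cow   = (`elem` ["c2", "c3"])

    Such clauses certainly describe the entailment patterns; whether they explain anything more is the question at issue.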

  3. I think the point is just to attain descriptive adequacy for 'logico-semantic properties and relations' (a term used in Larson & Segal's 1995 textbook, with acknowledgement to Katz 1972 for setting these up in a systematic way as a target for analysis).

    So in basic Montague grammar you get inferences such as:

    Every boy loves every girl
    John is a boy
    Mary is a girl
    -----------------
    John loves Mary

    And at a somewhat more advanced level, things like:

    The boys carried a piano down the stairs
    John is one of the boys
    *--------------------- (does not entail)
    John carried a piano down the stairs

    But

    The boys each carried a piano down the stairs
    John is one of the boys
    --------------------- (does entail)
    John carried a piano down the stairs

    This is quite obviously Keenan and Faltz's program, for example.
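
    And as a concreteness check, the first pattern can be verified in a small stipulated model. This is only a sketch (the domain, the lexicon, and the stipulated 'loves' relation are illustrative assumptions, and truth in one model is of course not a proof of entailment):

    domain :: [String]
    domain = ["john", "mary", "bill"]

    boy, girl :: String -> Bool
    boy  = (`elem` ["john", "bill"])
    girl = (`elem` ["mary"])

    loves :: String -> String -> Bool
    loves x y = boy x && girl y       -- stipulated so that premise 1 holds

    every :: (String -> Bool) -> (String -> Bool) -> Bool
    every p q = all (\x -> not (p x) || q x) domain

    -- "Every boy loves every girl", "John is a boy", "Mary is a girl"
    premises :: Bool
    premises = every boy (\x -> every girl (loves x)) && boy "john" && girl "mary"

    -- "John loves Mary": True in any model where the premises are
    conclusion :: Bool
    conclusion = loves "john" "mary"

    The collective/distributive contrast with 'each' needs more machinery (plural individuals or events, on standard approaches), which is part of what the more advanced fragments add.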

  4. Larson and Segal offer their semantics as a psychological hypothesis. The right-hand sides of their T-sentences do reflect their proposal about the thoughts (mental sentences) corresponding to sentences of the object language. So yes, they are in the Katz-Fodor tradition. And good for them. But that kind of psychological commitment was not part of Montague grammar. And if one just describes certain relations among sentences as relations of valid inference--e.g., by pairing 'every' with a certain function--I'm not sure what that explains, unless one is offering the substantive hypothesis that competent speakers represent that function as the semantic value of 'every'.

  5. The point is not to pair 'every' with a function; that's just the implementation technique. The point is to say something about the entailments of sentences containing 'every', which, prior to Montague's work, had simply not been achieved in a generative way for mathematical models of NLs.

    Some people, of course, want to read more significance into the implementation technique than I am claiming here, but that's a different collection of issues.
