Once upon a time, it was suggested that a theory of meaning might be an algorithm that connects syntactic structures with mental representations. Niceties aside, Katz & Fodor and then Katz & Postal (KFP) took the representations to be expressions of a mental language: meanings are expressions of Markerese; and a good theory of meaning for language L connects the syntactic structures of L with their meanings (a.k.a. Markerese translations). I'll return to some reasons for not adopting this view. But first, I want to express puzzlement about why David Lewis' response was widely embraced.
In "General Semantics," Lewis famously said that
we can know the Markerese translation of an English sentence without knowing the first thing about the meaning of the English sentence; namely, the conditions under which it would be true. Semantics with no truth conditions is no semantics.
But is the first thing about a sentence's meaning "the conditions under which it would be true"? If 'first' means 'most theoretically important aspect of', then the premise is that KFP were just flat wrong about what is most important. That makes for a simple argument. But what justifies this premise? It isn't even obvious that English sentences have truth conditions. Consider (1).
(1) The sky is blue.
Is it obvious that the first thing about the meaning of (1) is the conditions under which it would be true? What are these conditions? Is it crazy to suggest that instead of worrying about what skies are, and what it is (or would be) for them to be blue, we should start by asking how (1) gets mapped onto a mental representation, which may or may not turn out to be isomorphic with (1)?
There follows the authoritative remark that "Semantics with no truth conditions is no semantics." But this remark is truistic or question-begging, depending on what is meant by 'semantics'. For Lewis (and Chomsky), it seems to be truistic: a semantics is the sort of thing that Tarski provided for the first-order predicate calculus. I'll happily adopt this classical construal of 'semantics'. But then it's an empirical question whether each Human Language (e.g., my version of English) has a semantics, especially if such languages are I-languages that generate sentences like (2).
(2) The second sentence is false.
Alternatively, one can say that a language has a semantics if its expressions are meaningful in some systematic way. But then one can’t insist that a semantics be truth conditional. And after Dummett's discussion in “What is a Theory of Meaning?”, one might wonder what role truth conditions play in theories of meaning, especially if such theories are (or should be) offered as theories of understanding. Faced with a theory that merely specifies truth/reference/satisfaction conditions, perhaps relative to worlds, one might respond as follows:
we can know the conditions under which an English sentence would be true without knowing the first thing about its meaning; namely, how the sentence is understood. Meaning without understanding is not meaning.
One can dig in and insist that we need a semantics (in Lewis' sense) for English—or at least for the Markerese into which English gets translated—because a real theory of understanding will connect words to the things we use words to talk about. I have doubts about this. Consider science fiction, mathematics, and sentences like (1). But if you dig in here, it's important that you not trade in Lewis’ worlds (which he posited as the domain for an honest-to-god semantics for English) for abstract model-elements that ordinary speakers don't actually talk about.
Correlatively, while it can be illuminating to study "logical" vocabulary by combining model-theoretic techniques with a Montague-style algorithm that connects "surface structures" with sentences of Church's Lambda Calculus, one wants to know how "truth-in-a-model" is related to truth, and how the models are related to the domains for classical semantic theories. Ernie Lepore pressed this kind of point in “What Model-Theoretic Semantics Cannot Do”; see also John Etchemendy’s work. Tweaking the Lewis quote above, one might say that
we can know the True-in-M conditions of an English sentence without knowing the first thing about the meaning of the English sentence; namely, the conditions under which it would be true. Semantics with no truth conditions is no semantics.
Let me stress that I don't endorse any such quick rejection of model-theoretic semantics. In my view, the "truth first" business is undermotivated. One can describe possible languages that have a classical semantics, and ask if any of them are "ours" in any interesting non-stipulated sense. But I thought it was a "bold conjecture" that each Human Language has a classical semantics, and that the conjecture was one (bold) hypothesis about the source of a far more obvious fact: expressions of a Human Language have meanings that are systematically related.
I see no reason to rule out KFP-style (or Montague-style) theories of meaning just because such theories don't count as officially semantical. And I don't see the point of achieving official semanticalness if that achievement amounts to stipulating that biconditionals like (3) are true,
(3) ‘The sky is blue' is true iff the sky is blue.
even if we can't independently specify the alleged truth conditions for sentences like (1). If pairing (1) with a Markerese translation M isn't good enough, absent an independent specification of the conditions under which M would be true, how can it be enough to pair (1) with itself? And if it's enough to algorithmically pair (1) with a specification of a truth condition that uses (as opposed to merely mentioning) the words 'sky' and 'blue', why isn't it enough to do the same sort of thing for M? Can we just stipulate that (1), but not M, is understood well enough to use its parts in a theoretical metalanguage intended to let us specify truth conditions?
This isn't to say that KFP were right. I don’t think that Human (I-)Languages pair articulations with intrinsically meaningless syntactic structures that are in turn paired with meanings. (Set aside the weirdness of talking about translation here.) If a syntactic structure S can be paired with more than one meaning, as KFP suggested, that raises questions about the sources of constraints on ambiguity. One also has to ask why biology bothers with Chomsky-style syntactic structures, instead of pairing articulations directly with expressions of Markerese, perhaps subject to a few word-order conventions. My own bet is that Human I-Languages generate expressions that have their own meanings, qua "instructions" for how to assemble mental representations. But I see this as a friendly adjustment of KFP's basic picture.
In any case, my aim is not to rebury KFP’s proposal, but to note that the funeral was premature. There's a possible world in which someone quickly responds to Lewis by saying that (i) Human Languages are generative procedures that connect articulations with mental representations of a certain sort, subject to constraints of Universal Grammar, (ii) the relations between meaning and truth are far more complicated than suggested by the Davidson-Montague-Lewis conjecture, and so a Human Language does not have a classical semantics, but (iii) there is much to be said about how meanings are systematically related, and how meaning is related to understanding. Maybe that world is even nearby; see Chomsky's 1977 collection, Essays on Form and Interpretation.