Last week, I located Chomsky’s I-language/E-language distinction in
the context of Church’s distinction between intensions (procedures) and
extensions (sets of ordered pairs). In passing, I mentioned three claims from
“Language and Languages” that deserve more attention.
(1) “…a language does not uniquely determine the grammar that generates it.”
(2) “I know of no promising way to make objective sense of the assertion that a grammar Γ is used by a population P, whereas another grammar Γ', which generates the same language as Γ, is not.”
(3) “I think it makes sense to say that languages might be used by populations even if there were no internally represented grammars.”
Lewis, whom I greatly admire, begins his paper in a way that might seem uncontroversial.
“What is a language? Something which assigns meanings to certain strings of types of sounds or of marks. It could therefore be a function, a set of ordered pairs of strings and meanings.”
But let’s controvert. Why think that in acquiring a spoken language,
one acquires something that assigns
meanings to strings? Couldn’t one
acquire—i.e., come to implement—a procedure that generates meaningful and pronounceable expressions without assigning anything to anything? (Can’t a
machine generate coins that depict people and buildings without assigning
buildings to people?) One can stipulate
that I-languages assign meanings to strings. But then at
least some “meaning assigners” are procedures rather than functions in
extension.
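To make the contrast concrete, here is a minimal sketch in Python, with invented entries; nothing here is Lewis' or Chomsky's formalism, just an illustration of a function in extension versus a generative procedure.

```python
from itertools import islice

# A "language" as a function in extension: a set of string-meaning pairs.
# (The entries are invented placeholders, not a serious lexicon.)
EXTENSION = {
    ("snow is white", "SNOW-IS-WHITE"),
    ("grass is green", "GRASS-IS-GREEN"),
}

# A "language" as a generative procedure: it yields pronounceable
# expressions one by one, without assigning anything to anything.
def generate_expressions():
    n = 0
    while True:
        yield "very " * n + "tall"
        n += 1

print(list(islice(generate_expressions(), 3)))
# ['tall', 'very tall', 'very very tall']
```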
Moreover, even if all languages are meaning assigners
in some nontrivial sense, why think that Human Languages—languages that human
children naturally acquire—could be sets of ordered pairs? There is a thin
sense of ‘could’ in which the object beneath my fingers could be a badger (cleverly disguised by an evil demon) rather than
a computer. Perhaps it follows that my computer could be a badger. But such thin
modal claims don’t help much if you want to know what a computer is. So maybe Lewis' modal claim just indicates the hypothesis
that Human Languages are sets of a certain sort. Note, however, that the
alleged sets would be quirky.
Recalling an earlier post, (4) is at least roughly synonymous with (5), but not with (6).
(4) Was the guest who fed waffles fed the parking meter?
(5) The guest who fed waffles was fed the parking meter?
(6) The guest who was fed waffles fed the parking meter?
Let <S4, M4> be the string-meaning pair corresponding to (4), and likewise for (5) and (6). Then on Lewis’ view, the elements of English include <S4, M5> but not <S4, M6>. But why is English not a slightly different set that also includes <S4, M6>? One can stipulate that English is the set it is. But then the question is why humans acquire sets like English, as opposed to more inclusive sets. And the answer will be that human children naturally acquire certain generative procedures that allow for homophony only in constrained ways. Similar remarks apply, famously, to ‘easy to please’ and ‘eager to please’. Moreover, if English is a set, does it include <S7, M8> or not?
(7) The child seems sleeping.
(8) The child seems to be sleeping.
Such examples suggest that sets of string-meaning
pairs are at best derivative, and that the explanatory action lies with
generative procedures; see Aspects of the
Theory of Syntax. But one can hypothesize that English is a set that may be
specified by a grammar Γ, a
distinct grammar Γ', and various procedures that kids
implement. So perhaps (1) just makes it explicit that Lewis used
‘language’ in a technical/extensional sense, and in (1), ‘the’ should be ‘any’.
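Claim (1) is easy to illustrate with a toy case (my example, not Lewis'): two procedures that implement recognizably different grammars yet generate exactly the same set of strings.

```python
# Two toy grammars for the same finite language {a, ab, abb, ...},
# bounded for illustration. Different procedures, same extension.

def grammar_gamma(bound=3):
    # Γ: S -> 'a' B; B -> '' | 'b' B  (build each suffix independently)
    return {"a" + "b" * n for n in range(bound + 1)}

def grammar_gamma_prime(bound=3):
    # Γ': S -> A; A -> 'a' | A 'b'  (grow one string left-recursively)
    strings, current = set(), "a"
    for _ in range(bound + 1):
        strings.add(current)
        current += "b"
    return strings

# The generated language does not determine which grammar generated it:
assert grammar_gamma() == grammar_gamma_prime()
```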
Still, (2) and (3) remain puzzling. If there
really are Lewis Languages, why is it so hard
to make sense of the idea that speakers specify them in certain ways, and easier to make sense of speakers using Lewis
Languages without specifying them procedurally? Did Lewis think that it is senseless
to say that a certain machine employs procedure
(9) as opposed to (10),
(9) F(x) = |x − 1|
(10) F(x) = +√(x² − 2x + 1)
or that it is easier to make sense of the corresponding set being “used by a population” without being specified in any way? I doubt it.
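For comparison, a minimal Python rendering of (9) and (10), my illustration: two procedures that compute the same function, though the shared input-output pairs do not settle which procedure a given machine employs. (With floating-point arithmetic the two can differ by rounding; the identity x² − 2x + 1 = (x − 1)² guarantees agreement in principle.)

```python
import math

def f9(x):
    # Procedure (9): subtract 1, then take the absolute value.
    return abs(x - 1)

def f10(x):
    # Procedure (10): take the positive square root of x^2 - 2x + 1.
    return math.sqrt(x**2 - 2*x + 1)

# Same function in extension, different procedures:
for x in range(-5, 6):
    assert math.isclose(f9(x), f10(x))
```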
While talk of meaning can bring out residual behaviorism and/or verificationism (see Quine, or Kripke’s 1982 book on rule-following), I think it’s more important to highlight Lewis’ slide from talk of meaning to talk of truth and semantics.
“What could a meaning of a sentence be? Something which, when combined with factual information about the world...yields a truth value. It could therefore be a function from worlds to truth-values—or more simply, a set of worlds.”
But why not say instead that sentence meanings could be mental representations of a certain sort? If you think Human Language sentences are true or false, relative
to contexts, it’s surely relevant that mental representations are (unlike sets)
good candidates for being true or false relative to contexts.
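Here is a crude way to see the contrast, under my own toy assumptions rather than Lewis' formalism: if a sentence meaning is just a set of worlds, then truth at a world is bare set membership, and the set itself is not the kind of thing that represents anything.

```python
# Toy model: worlds are labeled by the facts that obtain at them.
# (Fact labels are invented for illustration.)
WORLDS = {
    "w1": {"snow_is_white", "grass_is_green"},
    "w2": {"snow_is_white"},
    "w3": {"grass_is_green"},
}

# A Lewis-style meaning for 'snow is white': the set of worlds
# at which it is true.
SNOW_IS_WHITE = {w for w, facts in WORLDS.items() if "snow_is_white" in facts}

def true_at(meaning, world):
    # Truth at a world is just membership in the set.
    return world in meaning

assert true_at(SNOW_IS_WHITE, "w1")
assert not true_at(SNOW_IS_WHITE, "w3")
```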
Lewis was, of course, exploring a version of the Davidson-Montague Conjecture (DMC) that each Human
Language has a classical semantics. As a first pass, let’s say that a language has a classical semantics
just in case its expressions are related to entities—e.g., numbers, things
covered by good theoretical generalizations, functions and/or mereological sums
defined in terms of such things—in a way that can be recursively specified in
terms of truth, reference/denotation, or Tarski-style satisfaction conditions. DMC
raises many questions about specific constructions. But it also raises the
“meta-question” of how a natural language could ever have a semantics.
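To fix ideas about what “recursively specified” amounts to, here is a toy classical semantics in Python, with made-up denotations, not anything DMC itself requires: denotations for names and predicates, plus a truth definition that computes the truth value of a complex sentence from the values of its parts.

```python
# Toy classical semantics: names denote individuals, predicates denote
# sets, and truth for complex sentences is defined recursively.
DENOTATION = {
    "Rex": "rex",            # a name denotes an individual
    "barks": {"rex"},        # a predicate denotes a set of individuals
    "sleeps": {"fido"},
}

def true_in_model(sentence):
    op = sentence[0]
    if op == "pred":         # ('pred', 'barks', 'Rex'): membership
        _, pred, name = sentence
        return DENOTATION[name] in DENOTATION[pred]
    if op == "not":          # ('not', S): truth computed from the part
        return not true_in_model(sentence[1])
    if op == "and":          # ('and', S1, S2): truth from both parts
        return true_in_model(sentence[1]) and true_in_model(sentence[2])
    raise ValueError(f"unknown operator: {op}")

assert true_in_model(("and", ("pred", "barks", "Rex"),
                      ("not", ("pred", "sleeps", "Rex"))))
```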
One possible answer is that Human
Languages connect pronunciations with generable representations of suitable entities. But this I-language perspective
raises the question of what work the entities do in theories of meaning. And
one needn’t be a behaviorist to wonder if ordinary speakers generate the
representations required by theories of
truth. Lewis held that a Human Language has its semantics by virtue of being used in accord with conventions of truthfulness and trust, “sustained by an interest in communication”, where typically these conventions are not represented by ordinary speakers. Given extensionally equivalent clusters of conventions, there
may be no fact of the matter about which one governs the relevant linguistic
behavior. So Lewis was led to an extensional conception of Human Languages. He could
offer convention-based accounts of both languages and language (i.e.,
linguistic behavior). But one can play on the count/mass polysemy differently,
and say that language is the use of an I-language. So instead of embracing (2)
and (3) as consequences of Lewis’ metasemantics, one might view them as
reductios of his extensional conventionalism; see Chomsky, Reflections on Language.
Lewis recognized the possibility
of taking Human Languages to be what he called grammars. But he quickly converted
this alternative proposal into a methodological question: “Why not begin by saying what it is for a grammar Γ to be used by a population P?” To which he had an answer: strings are paired with sets of worlds via conventions that do not
plausibly determine a unique grammar; better to start by saying
what it is for a Lewis Language to
be used by a population. But why begin by saying what it is for anything to be
used by anyone? Why not start by saying that Human Languages are procedures, partly described by linguists’ grammars, that generate pronounceable, meaningful expressions? What might a sentence meaning
be? Something which, when it interfaces with human conceptual systems, yields (modulo
complications) a truth-evaluable thought. It could therefore be an instruction
to build a thought.
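A closing sketch of that alternative, entirely my own gloss and not a worked-out proposal: a meaning as a sequence of instructions that, executed against conceptual systems, assembles a structured thought.

```python
# A crude gloss on "meaning as instruction to build a thought":
# fetch/combine instructions executed over a store of concepts.
# (Concept labels and the instruction format are invented.)
CONCEPTS = {"GUEST": "GUEST", "WAFFLES": "WAFFLES", "FEED": "FEED"}

def execute(instructions):
    stack = []
    for op, *args in instructions:
        if op == "fetch":            # fetch a concept by address
            stack.append(CONCEPTS[args[0]])
        elif op == "combine":        # compose the top two constituents
            right, left = stack.pop(), stack.pop()
            stack.append((left, right))
    return stack.pop()               # the assembled thought

MEANING = [("fetch", "FEED"), ("fetch", "WAFFLES"), ("combine",),
           ("fetch", "GUEST"), ("combine",)]
thought = execute(MEANING)           # (('FEED', 'WAFFLES'), 'GUEST')
```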