Thursday, January 30, 2014
What real evolution is like
From Bob Berwick: a link to an article indicating just how complicated evolution is. Just-so stories not only fail to deliver the goods, they create an impression at odds with what we now know to be a very complex set of interactions whose fine structure we appear not to have much of a handle on. Yet another reason to be cautious when we tell tall evolutionary tales and criticize proposals on "but how did it evolve?" grounds.
Science automated at last
Funny post from Andrew Gelman (here). Science without scientists. I've always thought that the main problem with word processing programs is that they require you to type in whole thoughts. Why not just give them the gist and let the program fill in the missing steps? We are now one step closer. And we eliminate human error and malfeasance. Be the first on your block to get one!
Falsification
The topic is getting some play in the blogosphere due in part to the post on this matter by Sean Carroll (the physicist at Caltech, not the evo-devo biologist) on the Edge question of the year (see here and find his contribution on pages 202-3). At any rate, there has been some interesting discussion from people in other areas that you might be interested in looking at (here, here, and here). The discussions give subtle takes on Popper's falsificationist dicta, all recognizing that the (at least common) understanding of Popper is way off. Pigliucci notes the problems that the Duhem-Quine thesis raises, i.e. the observation that theories are never tested directly but always rely on many auxiliary assumptions that can serve to insulate a thesis from direct empirical refutation. The discussion is interesting and may serve to temper simple-minded resort to the falsificationist gambit.
One thing, however, that seems not to have been mentioned is that quite often we can find evidence for claims that are not falsifiable: for example, the germ theory of disease postulated germs as agents of disease transmission. It's not clear that existential statements are falsifiable, viz. the absence of attested black swans does not imply that black swans don't exist. We make many such existential claims and then go out and look for instances, e.g. germs, black holes, Higgs particles, locality conditions, etc. In other words, we often actively look for confirmatory evidence for our claims rather than looking to refute them. In fact, I might go further: in the early exploratory phase, finding evidence for one's views is actually more important than finding evidence that could refute them. Good ideas start out life as very fragile. There are always tons of apparent evidence that "refutes" them. Indeed, the more daring the proposal, the more likely it appears to be false (and perhaps the more likely that it is false). So, what does one do? Look for places where it works! And this is a smart thing to do. It is always fair to ask someone why their proposal matters. One way that it can matter is that it does something interesting, i.e. solves a puzzle, explains recalcitrant data, predicts a new kind of object, etc. Evidence in favor of a proposal is what allows one to make demands on a rational listener. So, falsification as a strategy is useful, but only after a proposal has gained admittance as a live possibility, and admission to this august group is paid for in verifications.
At any rate, the discussion is pretty good and there are links to other matters that might matter. It's always nice to see how others flounder with these large methodological concerns. Enjoy.
Monday, January 27, 2014
Being Edgy
For light entertainment, I have just read answers to the Edge question of the year: “What scientific idea is ready for retirement?” Edge.org (here) is the fashionable online “salon” (think French Belle Époque/early 20th century) where the illuminati, literati, cognoscenti and scientific elite take on the big issues of the day, impresarioed by John Brockman, the academic world’s favorite Rumpelstiltskin; a spinner of dry academic research into popular science gold (and by ‘gold’ I mean $$$$). At any rate, reading the page-lengthish comments has been quite entertaining and I recommend the pieces to you as a way to unwind after a hard day toiling over admission and job files. The answers weigh in at 213 pages of print. Here are a few papers that got me going.
Not surprisingly, two of those that raised my blood pressure were written about language. One is by Benjamin Bergen (BB) (20-1). He is a cog sci prof at UCSD and his proposal for an idea worth retiring is “Universal Grammar.” When I first read this I was really pissed. But I confess that after reading what he took UG to be, I could understand why BB wants it retired.
BB understands UG as claiming two things: (i) that there are “core commonalities across languages” and (ii) that such exist as a matter of “genetic endowment.” He reports that “field linguists” have discovered that languages “are much more diverse than originally thought” (just who needed this discovery is a bit mystifying: not even a rabid Chomskyan like me has ever doubted that the surface diversity among languages is rather extensive). In particular, not all languages have “nouns and verbs” and not all “embed propositions in others.” In other words, it seems that field linguists (you can see the long shadow of Mr D. Everett here, more anon) have been busy demonstrating something that has been common knowledge for a long, long time (and is still the common view): that the surface linguistic forms we find across natural languages are very diverse and that this diversity of surface forms indicates that there are few surface-manifest universals out there. Oddly, BB is happy to concede that “perhaps the most general computational principles are part of our innate language-specific human endowment” but that “this won’t reveal much about how language develops in children.”
There is lots to quibble about here: (i) most importantly, that this Greenbergian gloss on “universal grammar” is not how people like me (and, more importantly, Chomsky) understand UG, (ii) that BB seems not to have read any of the work on Greenberg-style universals common in the current literature (think Cinque hierarchy), (iii) that if UG is correct then this changes the learning problem, (iv) that “inferring the meaning of words” exploits emerging syntactic knowledge that itself piggybacks on the innate computational principles of UG (e.g. Gleitman), etc. However, putting all of this to one side, I have nothing against giving up BB’s Greenbergian conception of Universal Grammar.
Indeed, I would go further. We should also give up the idea of language as a proper object of inquiry because it is almost certainly not a natural kind. Generative linguists of the Chomsky stripe should make clear that strictly speaking there is no such thing as English, French, Inuit, etc., and so it is not surprising that these things have no common properties. BB’s objections are to claims that my team doesn’t make; I (we) don’t suppose that languages universally have certain properties, only that I-languages do. And these properties involve precisely those features that BB seems happy to concede are species-specific and biologically given. For my money, I am happy to throw BB’s notion of universals on the scientific trash heap and would add ‘language’ to the pyre.
As mentioned, BB is clearly channeling Daniel L. Everett (DE) in his comment. DE speaks for himself here (203-205). He wants to dump the idea that “human behavior is guided by highly specific innate knowledge.”[1] You might think from this opening line that the target is once again going to be domain-specific principles of UG. But you would be wrong! It seems that what’s got DE riled this time is the very idea of innate characteristics. DE finds any idea of a non-environmental input to development or learning to be illicit. So not only does DE appear to object to domain-specific natively given mechanisms, he seems to object to any mental or neural structure at all that is not the result of environmental input. Wow!
I confess that I found it impossible to make sense of any of this. I can think of no model of development or of learning/acquisition that does not rely on some given biases, however modest, that are required to explain the developmental trajectory. The argument is not over whether such biases are required, but over what they look like; hence the discussion concerning domain specificity. But that’s not DE’s position. He wants to dump the distinction between environment and “innate predispositions or instincts” because “we currently have no way of distinguishing” them. Really? No way? Not even in a particular domain of inquiry?
What are the arguments DE musters? There are three. All piss poor.
First, DE notes that environmental influence is pervasive: “there is never a period in the development of an individual…when they are not being affected by their environment.” Hence, DE concludes, we cannot currently distinguish what is environmental from what is innately given. Hmm. The problem is complicated, hence unsolvable? The claim that development arises as the joint contribution of input plus an initial state, and that this means we need to know something about the initial state, does not imply that it is easy to decipher what the architecture of the initial state(s) is. DE and I disagree about what the initial state for grammar development is, my UG being very different from his. But no innate principles/biases, no learning/development. So, if you want to understand the latter you need to truck in the former, no matter how hard it is to tease them apart.[2]
Second, it seems that one cannot give an adequate “definition” of ‘innate.’ Every definition has “been shown to be inadequate.” Of course, every definition of everything has been shown to be inadequate. There are no interesting definitions of anything, including ‘bachelor.’ However, there are proposals that are serviceable in different domains and that inquiry aims to refine. That’s what science does. For what I do in syntax, ‘innate’ denotes the given biases/structures required to map environmentally provided PLD into a G. I have no idea whether these given biases are coded in the genes, are epigenetic, or are handed over to each child by his/her guardian angel. This is not to deny that these other questions are interesting and worth investigating. However, for what I do, this is what I mean by innate. Indeed, I suspect that this is what it more or less always means: what needs to be given so that adventitious input can be generalized in the attested ways. Data do not generalize themselves. The principles of generalization must come from somewhere. We call the place they come from the native or instinctual. And though it is an interesting question how such native information is delivered to the infant, delivered somehow it must be, for without it development/learning/acquisition is impossible.
Third, DE asserts that one cannot propose that some character is innate without “some evolutionary account of how it might have gotten there.” If this is the case, then most of biology and physics might as well stop right now. This view is just nuts! It’s a version of the old showstopper: you don’t know anything until you know everything, which, if true, means that we might as well stop doing anything at all. Let’s for the sake of argument assume that knowing the evolutionary history of a trait is necessary for understanding how a thing works (btw, I don’t believe this: we can know a lot about how something (e.g. wings, bee dances) works without knowing much about how it developed). Even were this the case, it’s simply false that one cannot know about the mechanics of a system without knowing anything at all about how it arose. We know a whole lot about gravity and still don’t know how it “arose.” But this position is not only false in practice, it is methodologically sterile, as it endorses the all-or-nothing view of inquiry, and this, I suspect, is why DE proposes it. What DE really wants (surprise, surprise) is to end Chomsky-style work in linguistics. He reaches for any argument to stop it. The fact that what he says verges on the methodologically incoherent matters little. This is war, and as in love, for DE, it seems, all things are fair. Read this piece and weep.
As an antidote to DE (and Gopnik) it is worth reading Oliver Scott Curry’s contribution (38-9) on associationism.[3] He writes that associationism is “hollow: a misleading redescription of the very phenomenon that is in need of explanation.” Right on! Curry makes the obvious, yet correct, point that absent a given mechanism that allows one to divide input into the relevant and irrelevant there is no way to use input. Using input requires “prior theory.” A modest point, but given how hard it is to wean people from their empiricist predilections, always a useful one to make.
There are other entries that will infuriate, but I will leave their debunking as an exercise for the reader. For the interested, take a look at N.J. Enfield’s contribution (47-8) heroically defending the view that there is more to “language” than competence.
I should add that the immediately linguistically relevant articles are a small subset of the Edge pieces. Maybe it’s a sign that what current linguists do is not highly prized: there is not a single piece in the lot by anyone I would consider to be doing serious linguistics. It’s a clear sign that what we do is no longer considered relevant to wider intellectual concerns, at least by the “Edgy.” This was not always so. Chomskyan linguistics, after all, was once the leading edge (sic!) of the “cognitive revolution.” We really need to do something about this. Maybe I will post on this later. Any suggestions for raising our profile would be welcome.
This said, there are lots of interesting papers in the collection: on the use of stats (75-77 and 176-7), minds and brains (208-9), mysterianism (7-8), big data (24-5 and 176-7), replication (189-90), the scientific method (147-8), science funding (118-9), science vs technology (211-12), unification (88-90), simplicity (168-9, 180-1), elegance (93-4), falsifiability (202-3), and the current (very animated and heated) fight over high theory in physics (every article by the many physicists), among others. The entries are short, and often provocative and entertaining. So, if you are looking for bathroom reading, I cannot recommend this highly enough.
[1]
Note that DE’s explanandum is “behavior.” But this is the wrong target for explanation. Steve Pinker’s very nice piece (190-192) puts it very well, so let me quote:
More than half a century after the cognitive revolution, people still ask whether a behavior is genetically or environmentally determined. Yet neither genes nor the environment can control the muscles directly. The cause of behavior is the brain. While it is sensible to ask how emotions, motives or learning mechanisms have been influenced by the genes, it makes no sense to ask this of behavior itself.
[2]
Alison Gopnik (172-3) has a similarly confusing Edge comment. She too seems to think that the fact that there is a lot of interaction between environmental input and initial state endowments implies that the whole notion of an initial state is misconceived. IMO, her “argument” is little better than DE’s.
[3]
See Andy Clark’s piece on I/O models (147) as well.
Sunday, January 26, 2014
Three kinds of syntactic research
Robert Chametzky has some useful things to say about the absence of theoretical work within syntax, a point I labored to make clear in an earlier post (here). He distinguishes metatheoretical, theoretical and analytic work, the last being by far the predominant type of research. All three are valuable, but, as a matter of fact, the third is what predominates in syntax and is what is generally, inaptly, called “theoretical.” Here is Rob’s tripartite typology, pp. xvii-xix from his sadly under-read book A Theory of Phrase Markers and the Extended Base, available here. I transcribe, with some indicated omissions, the relevant two pages immediately below.
There are three sorts of work that can generally be distinguished in empirical inquiry. One is metatheoretical, a second is theoretical and the third is analytic. As is often the case, the boundaries are not sharp and so the types shade off into one another, but the distinctions are real enough for the core cases. I take up each in turn.
Metatheoretical work is theory of theory, and divides into two sorts: general and (domain) specific. General metatheoretical work is concerned with developing and investigating adequacy conditions for any theory in any domain. So, for example, it is generally agreed that theories should be (1) consistent and coherent, both internally and with other well-established theories; (2) explicit; and (3) simple…Specific metatheoretical work is concerned with adequacy conditions for theory in a particular domain. So, for example, in linguistics we have Chomsky’s (1964, 1965) familiar distinctions between observational, descriptive and explanatory adequacy. Whether such work is “philosophy” or, in this case, “linguistics” seems to me a pointless question.
Theoretical work is concerned with developing and investigating primitives, derived concepts and architecture within a particular domain of inquiry. This work will also deploy and test concepts developed in metatheoretical work against the results of actual theory construction in a domain, allowing for both evaluation of the domain theory and sharpening of the metatheoretical concepts. Note this well: deployment of metatheoretical concepts is not metatheoretical work; it is theoretical work.
Analytic work is concerned with investigating the (phenomena of the) domain in question. It deploys and tests concepts and architecture developed in theoretical work, allowing for both understanding of the domain and sharpening of the theoretical concepts. Note this well: deployment of theoretical concepts is not theoretical work; it is analytic work. Analytic work is what overwhelmingly most linguists do overwhelmingly most of the time…
Linguists tend to confuse analytic work with theoretical work and theoretical work with metatheoretical work…
Linguists typically distinguish not among the three types of work described above, but rather between “theoretical” and “descriptive” work, where both of these are better understood as analytic work, with, respectively, more or less reliance on or reference to a specific framework and its concepts and architecture…This distinction between “theoretical” and “descriptive” is not only ill-conceived but also, for reasons I do not fully understand, invidious. The tripartite distinction discussed above involves no evaluative component for or ranking of the three sorts of work other…
A Review of Tecumseh Fitch's book
Those of you who don't follow the Talking Brains (TB) blog should. Greg Hickok and David Poeppel started it to discuss neuro issues. It started off really strong but went quiet for a while as Greg and David went after bigger game. Greg has been working on a boffo critique of the mirror neuron fad (which should be coming out in book form soon) and David has been living the high life in NYC (where I just had supper with him and together we solved all the problems in neuro-ling, but we are keeping the answers to ourselves so as not to contribute to the depressed job market in linguistics). At any rate, TB sprang to life again recently with an interesting guest review (by William Matchin) of Fitch's new book (one third of Hauser, Chomsky and Fitch) on the evolution of language (here). The book seems quite useful, as much for what it does not address as for what it does. So take a look. It's interesting stuff.
Two More Things about Minimalist Grammars
My intro post on Minimalist grammars garnered a lot more attention than I expected. People asked some damn fine questions, some of which I could not fully address in a single comment. So let's take a closer look at two technical issues before we move on to derivation trees next week (I swear, Norbert, we'll get to them soon).
Wednesday, January 22, 2014
Linguistics and the scientific method
The Scientific Method (SM) is coming in for some hard knocks lately. For example, the NYT has a recent piece rehearsing the replicability problems that “meta-scientists” like John Ioannidis have uncovered (see here). It seems that a lot (maybe most) of the published “findings” served up in our best journals reflect our own biases and ambitions more than they reveal nature’s secrets. Moreover, George Johnson, the NYT science writer (and he is a pretty good popularizer of some neat stuff (see here)), speculates that this may be unavoidable, at least given the costs of big science (hint: it’s really expensive) and the reward structure for adding a new brick to our growing scientific edifice (new surprising results are rewarded, disclosures that the emperor is naked are not). Not surprisingly, there are new calls to fix this in various ways: making your reported data available online when publishing (that this was not done as a rule surprised me, but it seems that in many areas (e.g. psych, econ) one doesn’t as a rule make one’s raw results, computational methods and procedures available. Why? Well, it’s labor intensive to develop these data sets, and why share them with just anyone given that you can use them again and again? It’s a kind of free-rider problem), declaring the point of your experiment ahead of time so that you cannot statistically cherry-pick the results while trolling for significance, getting more journals to publish failed/successful replications, getting the hiring/promotion/tenure process to value replication work, having labs replicate one another’s “results” as part of their normal operation, etc. In other words, all proposals to fix an otherwise well-designed system. These reforms assume that the scientific method is both laudable and workable and that what we need are ways to tweak the system so that it better reflects our collective ideals.
However, there are more subversive rumblings out there. First, there is the observation that maybe humans are unfit for SM. It seems that scientists are human. They easily delude themselves into thinking that their work matters, or as Johnson puts it, “the more passionate scientists are about their work, the more susceptible they are to bias.”
Second, it is often really hard to apply SM’s tough love, for scientists often cannot make explicit what one needs to do to get the damn experiment to work. Tacit knowledge is rife in experimental work and is one of the main reasons why one hires a postdoc from a lab whose work you admire: so that s/he can teach you how to run the relevant experiments, how to get just the right “feel” for what needs doing.
Third, being a scientific spoiler, the one who rats out all that beautiful work out there, is not nearly as attractive a career prospect as being an original thinker developing new ideas. I doubt that there will ever be a Nobel Prize for non-replication. And, for just this reason, I suspect that money to support the vast amount of replication work that seems to be needed will not be forthcoming.
Fourth, a passion for replication may carry its own darker consequences. Johnson notes that there is “a fear that perfectly good results will be thrown out.” See here for some intelligent discussion of the issue.
None of these points, even if accepted as completely accurate, argues against trying to do something. They simply argue that SM may not be something that scientists can manage well and so the institutions that regulate science must do it for them. The problem, of course, is that these institutions are themselves run by scientists, generally the most successful (i.e. the ones that produce the new and interesting results), and so the problem one finds at the individual level may thereby percolate to the institutional one as well. Think of bankers regulating the SEC or the Fed. Not always a recipe for success.
There is a more radical critique of SM that is also finding a voice, one that questions the implicit conception of science that SM embodies. Here’s a comment on the SciAm blog that I found particularly stimulating. I’m not sure if it is correct, but it is a useful antidote to what is fast becoming the conventional wisdom. The point the author (Jared Horvath) makes is that replication has never been the gold standard for, at least, our most important and influential research. Or as Horvath puts it: “In actuality, unreliable research and irreproducible data have been the status quo since the inception of modern science. Far from being ruinous, this unique feature of research is integral to the evolution of science.” Horvath illustrates this point by noting some of the more illustrious failures. They include Galileo, Millikan, and Dalton, to name three. His conclusion? Maybe replicability is not that big a deal, and our current worries stem from a misperception of what’s really important in the forward march of science. Here’s Horvath:
Many are taught that science moves forward in discreet, cumulative steps; that truth builds upon truth as the tapestry of the universe slowly unfolds. Under this ideal, when scientific intentions (hypotheses) fail to manifest, scientists must tinker until their work is replicable everywhere at anytime. In other words, results that aren’t valid are useless.
In reality, science progresses in subtle degrees, half-truths and chance. An article that is 100 percent valid has never been published. While direct replication may be a myth, there may be information or bits of data that are useful among the noise. It is these bits of data that allow science to evolve. In order for utility to emerge, we must be okay with publishing imperfect and potentially fruitless data. If scientists were to maintain the ideal, the small percentage of useful data would never emerge; we’d all be waiting to achieve perfection before reporting our work.
I have quite a lot of sympathy for this view of the world, as many of you might have guessed. In fact, I suspect that what drives research forward is due as much to the emergence of good ideas as to the emergence of solid data. The picture that SM provides doesn’t reflect the unbelievable messiness of real research, as Horvath notes. It also, I believe, misrepresents the role of data in scientific discourse. It is there to clean up the ideas.
The goal of scientific research is to uncover the underlying powers and mechanisms. We do this by proposing some of these “invisible” powers/mechanisms and asking what sorts of more easily accessible things would result were these indeed true. In other words, we look for evidence, and this comes in many forms: the mechanisms predict that the paper will turn blue if I put it in tea, the mechanisms predict that Sunday will be my birthday, etc. Some of the utility of these postulated mechanisms is that they “make sense” of things (e.g. Darwin’s theory of Natural Selection), or they resolve paradoxes (e.g. special relativity). At any rate, these are as important as (in my view maybe more important than) replicable results. Data matters, but ideas matter more. And (and this is the important part) the ideas that matter are not just summations of the data!
Let me put this another way: if you believe that theories are just compact ways of representing the data, i.e. that they are efficient ways of representing the “facts,” then bad facts must lead to bad theories, for these will inherit the blemishes of their inputs (viz. gigo: garbage in, garbage out). On this view, theories live and die by whether they accurately encode the facts in compact form, the latter being both epistemologically and ontologically prior to the former. If, however, you believe that theories are descriptions of mechanisms, then when there is a conflict between them the source of the problem might just as well be with the facts as with the theories. Theoretical mechanisms, not being just summary restatements of the facts, have an integrity independent of the facts used to investigate them. Thus, when a conflict arises, one is faced with a serious problem: is it the data that are misleading or the theory? And, sadly, there is no recipe or algorithm or procedure for resolving this conflict. Thought and judgment are required, not computation, not “method,” at least if interpreted as some low-level mechanical procedure.
As you might have guessed, IMO the problem with the standard interpretation of SM is that it endorses a pretty unsophisticated empiricist conception of science. It takes the central scientific problem to be the careful accumulation of accurate data points. Absent this, science grinds to a halt. I doubt it. Science can falter for many reasons, but the principal one is the absence of insight. Absent reasonable theory, data is of dubious value. Given reasonable theory, even “bad” data can be and has been useful.
An aside: It strikes me as interesting that a lot of the discussion over replication has taken medicine, psychology and neuroscience as the problematic subject areas. In these areas good understanding of the underlying mechanisms is pretty thin. If this is correct, the main problem here is not that the data is “dirty” but that the theory is negligible. In other words, we are largely ignorant of what’s going on. It’s useful to contrast this with what happens in physics when a purportedly recalcitrant data point emerges: it is vetted against serious theory and made to justify itself (see here). Dirty data is a serious problem when we know nothing. It is far less of a problem once theory begins to reveal the underlying mechanisms.
What’s the upshot of all of this for linguistics? To a certain degree, we have been able to
finesse the replicability worries because the cost of experimentation is so low:
just ask a native speaker or two. Moreover, as lots of linguistics is based on
data from languages that practitioners speak natively, checking the “facts”
can be done quickly, easily and reliably (as Jon Sprouse and Diego Almeida
have demonstrated in various papers). Of course, this safeguard does not
extend as easily to field work on less commonly studied languages. Here we often cannot check the facts by
consulting ourselves, and this makes this sort of data less secure. Indeed,
this is one of the reasons that the Piraha “debate” has been so fractious.
Putting aside its putative relevance for UG (none!), the data itself has been
very contentious, and rightly so (see here).
So where are we linguists? I think pretty well off. We have
a decent-ish description of underlying generative mechanisms, i.e. the
structure of FL/UG (yes there are gaps and problems, but IMO it’s pretty damn
good!) and we have ready access to lots of useful, reliable and usable data. As
current work in the sciences goes, we are pretty lucky.
Monday, January 20, 2014
Minimalist Grammars: The Very Basics
Oh boy, it's been a while... where did we leave off? Right, I got my panties in a twist over the fact that a lot of computational work does not get the attention it deserves. Unfortunately the post came across as a lot whinier and more accusatory than intended, so let's quickly recapitulate what I was trying to say.
Certain computational approaches (coincidentally those that I find most interesting) have a hard time reaching a more mainstream linguistic audience. Not because the world is a mean place where nobody likes math, but because 1) most results in this tradition are too far removed from concrete empirical phenomena to immediately pique the interest of the average linguist, and 2) there are very few intros for those linguists that are interested in formal work. This is clearly something we computational guys have to fix asap, and I left you with the promise that I would do my part by introducing you to some of the recent computational work that I find particularly interesting on a linguistic level.1
I've got several topics lined up --- the role of derivations, the relation between syntax and phonology, island constraints, the advantages of a formal parsing model --- but all of those assume some basic familiarity with Minimalist grammars. So here's a very brief intro to MGs, which I'll link to in my future posts as a helpful reference for you guys. And just to make things a little bit more interesting, I've also thrown in some technical observations about the power of Move...
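As a taste of what feature-driven derivations look like, here is a minimal Python sketch of MG Merge: lexical items carry ordered feature stacks, and Merge checks a selector feature =f on one item against a matching category feature f on another. The lexicon, the feature names, and the head-first linearization below are my own simplifications for illustration, not the full formalism (Move, licensors, and proper linearization are omitted).

```python
# A toy sketch of MG Merge. Lexical items are (string, feature-stack)
# pairs. Merge checks a selector "=f" against a category "f" and pops
# both features; the head precedes its complement (a simplification).

def merge(selector, selectee):
    """Combine two items if the selector's first feature matches."""
    s_str, s_feats = selector
    t_str, t_feats = selectee
    assert s_feats[0] == "=" + t_feats[0], "feature mismatch"
    return (s_str + " " + t_str, s_feats[1:])

# A tiny invented lexicon.
the    = ("the",    ["=n", "d"])
dog    = ("dog",    ["n"])
barked = ("barked", ["=d", "v"])

dp = merge(the, dog)     # "the" selects the noun, yielding a d-phrase
vp = merge(barked, dp)   # the verb selects the d-phrase
print(vp)                # → ('barked the dog', ['v'])
```

The point of the sketch is just that derivations are driven entirely by checking the topmost features of each item's stack; everything else in an MG (Move included) works by the same feature-checking logic.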
Friday, January 17, 2014
Quantum brains
I probably got carried away with (entangled in?) the reports that birds use quantum entanglement to divine where they are. Jeff W sent me this link to a terrific talk by Aaronson at NIPS where he does a pretty good job suggesting that any enthusiasm for quantum computing in brains is likely misplaced. If you want to watch it, I suggest opening it in Firefox. My initial attempts to use Safari were very frustrating.
A query for my computational colleagues
There appears to be a consensus that natural languages are mildly context sensitive (MCS). Indeed, as I understand matters, this is taken to be an important fact about them and one that deserves some explanation. Here's my question: does this mean that there is a consensus that Kobele's thesis is incorrect? As I recall, it argued that NLs do not display constant growth, given the presence of copy operations. Again, my recollection is that this is discussed extensively in the last chapter of Greg's thesis. I was also led to believe that MCS languages must display constant growth, something incompatible with the sorts of copy mechanisms Greg identified (I cannot remember the language and I don't have the thesis to hand, but I am sure that you all know it). So, was Greg wrong? If so, how? And please use little words, for this bear of little brain would love to know the state of play. Thx.
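To make the constant-growth worry concrete, here is a toy Python illustration (my own, not Kobele's actual construction): if a grammar can copy the *whole* string built so far, and do so repeatedly, the string lengths it generates can be 1, 2, 4, 8, ..., 2^n. A language containing only such lengths cannot have the constant-growth property, since the gap between consecutive lengths grows without bound, so no fixed constant c can bridge them.

```python
# Iterated total reduplication: each step doubles the string.
# A language whose string lengths are exactly the powers of 2
# fails constant growth: no constant bounds the length gaps.

def iterated_copy(seed, n):
    s = seed
    for _ in range(n):
        s = s + s            # one unbounded copy step
    return s

lengths = [len(iterated_copy("a", n)) for n in range(8)]
print(lengths)               # → [1, 2, 4, 8, 16, 32, 64, 128]

gaps = [b - a for a, b in zip(lengths, lengths[1:])]
print(gaps)                  # → [1, 2, 4, 8, 16, 32, 64] — unbounded
```

This is only meant to show why *unbounded iterated* copying is incompatible with constant growth; a single bounded copy (as in the copy language {ww}) is perfectly compatible with MCS formalisms like MCFGs.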
Wednesday, January 15, 2014
Birds are amazing
In a post a while ago I mentioned that biologists were starting to think that birds engaged in quantum computing. To do this requires being able to stabilize entangled quantum states for a reasonable amount of time, something that we (actually, they) cannot do very well in a lab. However, it seems that birds really can do this in natural conditions for longer than our best lab rats can manage it (see here). Those tiny dinosaurs seem to have many interesting tricks up their beaks. Once one can stabilize entanglements, how long before we discover that biological systems compute quantumly quite regularly? Not long, I surmise (again). Does anyone have any idea what this might mean for language or cognition? If so, I invite you to submit a post that I would be delighted to run.