
Thursday, June 20, 2013

Berwick Post: Formal Wear: Part I

I am just the blog tech here (something I find amusing given Bob's and my relative skill sets):


Formal Wear: Part I

While Fearless Leader (aka Norbert) was away from his computer, calculating how corks bounce off clay, he asked me to weigh in about this blog's recent round-robin in the comments section among Alex Clark, David Pesetsky, and Norbert on the value of formalization in linguistics. It should come as no surprise that, like Norbert, I side with David here. But as Norbert says, this same debate has popped up so often that it surely qualifies for a bit of historical perspective – some scientific success stories and some cautionary notes – echoing his point that "when done well formalization allows us to clarify the import of our basic concepts. It can lay bare what the conceptual dependencies between our basic concepts are." While the comments flying back and forth seem to have largely focused on just a narrow gang of the "usual suspects" – weak generative capacity and those pesky Swiss German and West Flemish speakers, and now even Georgians – below I'll describe several somewhat different but successful formalization blasts from the past. Getting old has one advantage: I can still actually remember these as if they were yesterday, far better than I can remember where I put my glasses. And if there's one thing these examples highlight, it's another familiar hobby-horse: if you think that the world of linguistic formalization is exhausted by weak generative capacity, mild context-sensitivity, and West Flemish, then look again – there's more in heaven and earth than is dreamt of in your philosophy. As for the claim that formalization ought to be de rigueur for generative grammar, "just like" every other scientific field, as far as I can make out, it's precisely the reverse: every other scientific field is "just like" generative grammar.

So, why formalization? Alex runs through several very plausible reasons, from "being precise" to "fully mathematically precise," before settling on a candidate that even I could vote for – we'll come to what that is in a sec. But at first, Alex seems to equate 'being precise' with 'mathematical formalization,' all in the service of testability: "if your theory is precise, then you can use mathematical or computational methods to produce predictions." Well sure, sometimes. But in truth the implication here really doesn't work in either direction. On the one hand, we have theories that pull out all the stops when it comes to mathematics, but still don't quite reach the brass ring of testable predictions – string theory being the current poster child. Before that, quantum physics preceded its formalization by Heisenberg, Dirac, and von Neumann by decades – and as Mario Livio remarks in his recent Brilliant Blunders, "More than 20% of Einstein's original papers contain mistakes of some sort…[but often] the final result is still correct. This is often the hallmark of great theorists: they are guided by intuition more than by formalism." And then there are famous statements of this sort that somehow have slipped by the "all must be formalized" censor, this one from Newton's Principia: "Nam tempus, spatium, locum et motum ut omnibus notissima non definio" ("I do not define time, space, place and motion, as being well known to all"). On the other hand, we have scientific theories of extraordinary precision, rivaling anything in physics, that aren't ever mathematically formalized, and probably never will be, yet serve up a bumper crop of empirical tests and Nobel prizes year after year – modern biology and molecular biology standing front and center.[1] It's perhaps worth remembering that what's generally considered one of the greatest theories in all modern science, evolution by natural selection, was conceived by history's most famous medical school dropout, who later confessed in his autobiography that he "could not get past the very first steps in algebra" and "should, moreover, never have succeeded with…mathematics." If you believe that molecular biologists lie awake at night fretting over whether the cell's intricate biomachinery choreographing the dance from DNA to protein must be cast as some set of axioms or equations – well, guess again. Biologists are a pragmatic lot. They lie awake at night thinking about how to get their experiments to work. Actually, in this respect biology is even worse off than linguistics: there are no laws[2] comparable to wh-island effects, let alone something sterling like F=ma. But wait, it gets worse (or better). As Francis Crick's protégé Sydney Brenner will sing you loud and long, current molecular biology suffers from a severe lack of theories altogether, let alone formalized ones. No matter; the gene jockeys just plough straight through all the formalism talk and still count on that regular airline flight to Stockholm in December.
In my opinion, Alex advances the winning candidate near the end of his volleys with David, when he seemingly softens his mathematical stance and writes: "When I said formalisation in the last sentence I meant a proper mathematical model, rather than just providing a little bit of technical detail here and there. But no more than is used in any other branch of science. I also don't think that one necessarily needs to write a grammar for the whole language; but rather a mathematically precise model for a simplified or idealised system; but only for a part of the grammar. That is how more or less all science works, and I don't think linguistics should be any different" [emphasis in the original].

Exactly! At long last we've arrived at a prescription very dear to my own heart: models. Models indeed loom large in science – the very heart and soul of the Galilean method, where the Master once proclaimed that he did not really understand anything in the physical world, like balls rolling down inclined planes, unless he was able "to build a machine that could reproduce its behavior," and of course, more famously, in Il Saggiatore, that "La filosofia è scritta in questo grandissimo libro… Egli è scritto in lingua matematica" ("Philosophy is written in this grand book… It is written in the language of mathematics"). But there's one final wrinkle: many perfectly fine scientific models aren't mathematical equations – they can be anything from Feynman diagrams, to scaled-down versions of jet planes in wind tunnels, to computer simulations of 'agents' in SimCity and computer programs generally, to the storyboard sequences of graphical 'cartoons' so popular with modern (molecular) biologists. The key requirements seem to be that a good model must meet the dual demands of highlighting what's important, including relevant consequences, while suppressing irrelevant detail, and be sufficiently precise that someone else can use it to duplicate experiments or even carry out new ones – which is to say, precise enough to figure out whether to stick with one's current theory or ditch it and build a better one. And that covers the waterfront.

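To make 'model as computer program' concrete, take the Master's own example. Below is a minimal sketch in Python, my own toy and idealized to the point of caricature (a point mass, no friction, constant gravity), of the inclined plane as just such a 'machine'. It highlights the one regularity that matters here, Galileo's times-squared law, suppresses everything else, and is precise enough that anyone can rerun the 'experiment':

    import math

    G = 9.81  # gravitational acceleration, m/s^2

    def distance(theta_deg, t):
        """Distance traveled from rest after t seconds on a frictionless
        plane inclined at theta degrees: d = (1/2) * g * sin(theta) * t^2."""
        a = G * math.sin(math.radians(theta_deg))  # acceleration along the plane
        return 0.5 * a * t ** 2

    # Galileo's times-squared law: distances at t = 1, 2, 3, 4 stand in the
    # ratio 1 : 4 : 9 : 16, whatever the angle of the incline.
    for t in (1, 2, 3, 4):
        print(t, round(distance(30.0, t) / distance(30.0, 1)))

The physics is freshman fare; the point is that even this little toy meets the dual demands above, and anyone who doubts it can run the program and check.
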
We'll see how far in Part II.




[1] I know of only a handful of well-intentioned, but failed, efforts to 'formalize' biology: one, the famous program by Nicholas Rashevsky at the University of Chicago in the 1930s; another, the much less well-known attempt by Eörs Szathmáry in the 1980s to develop a computer-programming formalization for biology grounded in the lambda calculus.
[2] Alas for Norbert's obscure object of desire, there are actually good reasons to believe that we're never going to find laws in biology like F=ma in physics, or like what Feynman describes in The Character of Physical Law, but taking up this point here would draw us a bit away from the main line of discussion. Ask me about it.

3 comments:

  1. Various formalizations of biology were indeed proposed in the 1800s, and they were simply ridiculous: neither physics nor chemistry -- nor even biology itself, for that matter -- was well developed at the time. For a review of this curious chapter in the history of biology, see the last book ("This is Biology") by Ernst Mayr, the last grand old person of the biological sciences.

  2. Biology is such a broad field that it is hard to generalise, but the parts of it that I know anything about -- for instance the molecular biology that you mention -- are perfectly formal, and indeed mathematical. Maths doesn't mean algebra. So for me, a Feynman diagram, or the structure diagram of a molecule, or a parse tree are all mathematical. It's just that the formalisation of things like chemistry doesn't need anything more than 3d geometry and the notion of an undirected graph, which are pretheoretically clear. So when a molecular biologist writes down ACGTCCG or whatever, there is no vagueness there. It's a string, and each of the symbols corresponds to a molecule, which has a defined structure, and so on. We could have a terminological argument, but I think we agree on all of the substantive issues.
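
    To make that concrete: a DNA sequence is a string over a four-letter alphabet, and Watson-Crick complementation is just a function on such strings. Here is a minimal sketch in Python (a toy of my own devising, not anyone's published model):

        COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

        def reverse_complement(seq):
            # Watson-Crick pairing as a perfectly precise function on strings
            # over the alphabet {A, C, G, T}; anything else is an error.
            assert set(seq) <= set(COMPLEMENT), "not a DNA string"
            return "".join(COMPLEMENT[base] for base in reversed(seq))

        print(reverse_complement("ACGTCCG"))  # prints CGGACGT

    There is nothing vague to argue about there; whether a given linguistic proposal is pinned down to the same degree is exactly the question.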

    So much for the agreement: let me focus on where we disagree, otherwise this will be boring:

    You say "The key requirements seem to be that a good model must meet the dual demands of highlighting what’s important, including relevant consequences, while suppressing irrelevant detail, and sufficiently precise that someone else can use it to duplicate experiments or even carry out new ones." And that seems perfectly reasonable, but as I said to David P, it has something of the air of a recipe that says "cook until done". How it is applied in practice is important.
    So I tend to err on the side of being completely formal. If you are *completely* precise, then you know you will be *sufficiently* precise.


    If there is a disagreement here it is about whether specific linguistic proposals are sufficiently precise or not, not about formalisation in biology or the principles of the thing, and so we should get down to brass tacks. And there certainly are some linguistics papers which are completely precise. But there are also some which to me are not.

    So take your paper in Cognitive Science in 2011: "Poverty of the Stimulus Revisited," by Berwick, Pietroski, Yankama, and Chomsky.

    You critique three models, all of which are mathematically precise: for all three, because they are precise, you can point out some problems they have. That is all as it should be. (For any third parties reading: I picked this paper as I am a co-author of one of the models critiqued, and so I read the paper carefully.)

    But you also put forward, in Section 3 ("An optimal general framework"), your own proposal, based on a sort of set-theoretic version of merge. Now from my perspective that is inadequately formalized: it lacks a lot of detail, and as a result I can't critique it. I can't see whether it makes the right predictions, because it doesn't make any predictions at all. And it doesn't contain pointers to publicly available material that would explain the details. (What Sandiway Fong has published on this seems unconnected to your proposal; you don't cite Stabler.) So I don't want to put words in your mouth, but it seems like here we might have a disagreement.
    Do you think that model (Section 3) is "sufficiently precise that someone else can use it to duplicate experiments or even carry out new ones"?

  3. This comment has been removed by a blog administrator.
