Formal Wear: Part I
While Fearless Leader (aka Norbert) was away from his computer, calculating how corks bounce off clay, he asked me to weigh in about this blog’s recent round-robin in the comments section among Alex Clark, David Pesetsky, and Norbert on the value of formalization in linguistics. It should come as no surprise that, like Norbert, I side with David here. But as Norbert says, this same debate has popped up so often that it surely qualifies for a bit of historical perspective – some scientific success stories and some cautionary notes – echoing Norbert that “when done well formalization allows us to clarify the import of our basic concepts. It can lay bare what the conceptual dependencies between our basic concepts are.” While the comments flying back and forth seem to have largely focused on just a narrow gang of the “usual suspects” – weak generative capacity and those pesky Swiss German and West Flemish speakers, and now even Georgians – below I’ll describe several somewhat different but successful formalization blasts from the past. Getting old has one advantage: I can still actually remember these as if they were yesterday, far better than I can remember where I put my glasses. And if there’s one thing these examples highlight, it’s another familiar hobby-horse: if you think that the world of linguistic formalization is exhausted by weak generative capacity, mild context-sensitivity, and West Flemish, then look again – there are more things in heaven and earth than are dreamt of in your philosophy. As for the claim that formalization ought to be de rigueur for generative grammar, “just like” every other scientific field, as far as I can make out it’s precisely the reverse: every other scientific field is “just like” generative grammar.
So, why formalization? Alex runs through several very plausible reasons, from “being precise” to “fully mathematically precise” before settling on a candidate that even I could vote for – we’ll come to what that is in a sec. But at first, Alex seems to equate ‘being precise’ with ‘mathematical formalization,’ all in the service of testability: “if your theory is precise, then you can use mathematical or computational methods to produce predictions.” Well sure, sometimes. But in truth the implication here really doesn’t work in either direction. On the one hand, we have theories that pull out all the stops when it comes to mathematics, but still don’t quite reach the brass ring of testable predictions – string theory being the current poster child. Before that, quantum physics ran decades ahead of its formalization by Heisenberg, Dirac, and von Neumann – and as Mario Livio remarks in his recent Brilliant Blunders, “More than 20% of Einstein’s original papers contain mistakes of some sort…[but often] the final result is still correct. This is often the hallmark of great theorists: they are guided by intuition more than by formalism.” And then there are famous statements of this sort, that somehow have slipped by the “all must be formalized” censor: “Nam tempus, spatium, locum et motum ut omnibus notissima non definio.” (“I do not define time, space, place and motion, as being well known to all.”) On the other hand, we have scientific theories of extraordinary precision, rivaling anything in physics, that aren’t ever mathematically formalized, and probably never will be, yet serve up a bumper crop of empirical tests and Nobel prizes year after year – modern biology and molecular biology standing front and center.
It’s perhaps worth remembering that what’s generally considered one of the greatest theories in all modern science, evolution by natural selection, was conceived by history’s most famous medical school dropout who later confessed in his autobiography that he “could not get past the very first steps in algebra” and “should, moreover, never have succeeded with…mathematics.” If you believe that molecular biologists lie awake at night fretting over whether the cell’s intricate biomachinery choreographing the dance from DNA to protein must be cast as some set of axioms or equations – well, guess again. Biologists are a pragmatic lot. They lie awake at night thinking about how to get their experiments to work. Actually, in this respect biology is even worse off than linguistics: there are no laws comparable to wh-island effects, let alone something sterling like F=ma. But wait, it gets worse (or better). As Francis Crick’s protégé Sydney Brenner will sing you loud and long, current molecular biology suffers from a severe lack of theories altogether, let alone formalized ones. No matter; the gene jockeys just plough straight through all the formalism talk and still count on that regular airline flight to Stockholm in December.
In my opinion, Alex advances the winning candidate near the end of his volleys with David when he seemingly softens his mathematical stance and writes: “When I said formalisation in the last sentence I meant a proper mathematical model, rather than just providing a little bit of technical detail here and there. But no more than is used in any other branch of science. I also don’t think that one necessarily needs to write a grammar for the whole language; but rather a mathematically precise model for a simplified or idealised system; but only for a part of the grammar. That is how more or less all science works, and I don't think linguistics should be any different” [emphasis in the original].
Exactly! At long last we’ve arrived at a prescription very dear to my own heart: models. Models indeed loom large in science – the very heart and soul of the Galilean method, where the Master once proclaimed that he did not really understand anything in the physical world, like balls rolling down inclined planes, unless he was able “to build a machine that could reproduce its behavior,” and of course, more famously, in Il Saggiatore that “La filosofia è scritta in questo grandissimo libro… Egli è scritto in lingua matematica” (“Philosophy is written in this grand book… It is written in the language of mathematics”). But there’s one final wrinkle: many perfectly fine scientific models aren’t mathematical equations – they can be anything from Feynman diagrams, to scaled-down versions of jet planes in wind tunnels, to computer simulations of ‘agents’ in SimCity and to computer programs generally, to the storyboard sequences of graphical ‘cartoons’ so popular with modern (molecular) biologists. To qualify as a good model, one must meet the dual demands of highlighting what’s important, including relevant consequences, while suppressing irrelevant detail, with sufficient precision that somebody else can use the model to replicate experiments or carry out new ones – which is to say, figure out whether to stick with one’s current theory or ditch it to build a better one. And that covers the waterfront. We’ll see how far in Part II.
I know of only a handful of well-intentioned, but failed, efforts to ‘formalize’ biology: one the famous program by Nicholas Rashevsky at the University of Chicago in the 1930s, the other the much less well-known attempt by Eörs Szathmáry in the 1980s to develop a computer-programming formalization for biology grounded in the lambda calculus.
Alas for Norbert’s obscure object of desire, there are actually good reasons to believe that we’re never going to find laws in biology like F=ma in physics, or like the laws Feynman describes in The Character of Physical Law, but taking up this point here would draw us a bit away from the main line of discussion. Ask me about it.