Friday, January 13, 2017

No time but...

It's file reading season, so my time is limited right now. That and the hangover from the holidays have made me slower. Nonetheless, I thought that I would post this very short piece that I read on computational models in chemistry (thx to Bill Idsardi for sending it my way). The report is interesting to linguists, IMO, for several reasons.

First, it shows how a mature science deals with complexity. It appears to be virtually impossible to do a fully rigorous theoretical computation, given the complexity of the problem, "but for the simplest atoms." Consequently, chemists have developed approximation techniques (algorithms/functions) to figure out how electrons arrange themselves in "bulk materials." These techniques are an artful combination of theory and empirical parameter estimation, and they are important precisely because one cannot compute exact results for these kinds of complex cases. This is so even though the relevant theory is quite well known.

I suspect that if and when linguists understand more and more about the relevant computations involved in language, we will face a similar problem. Even were we to know everything about the underlying competence and the relevant mental/brain machinery that uses this knowledge, I would expect the interaction effects among the many components involved to be very complex. This will lead to apparent empirical failure. But, and this is the important point, this is to be expected even if the theory is completely correct. This is well understood in the real sciences. My impression is that this bit of wisdom is still considered way out there in the mental sciences.

Second, the discussed paper warns against thinking that rich empirics can substitute for our lack of understanding. What the reported paper does is test algorithms by seeing how they perform in the simple cases where exact solutions can be computed. How well do the techniques we apply in sophisticated cases work in the simple ones? The question: how well "different...algorithms approximated the relatively exact solutions."

The discovery was surprising: after a certain point, more sophisticated algorithms started doing worse at estimating the geometry of electrons (which is what one needs in order to figure out a material's chemical properties). More interesting still, the problem was most acute for "algorithms based on empirical data." Here's the relevant quote:

Rather than calculating everything based on physical principles, algorithms can replace some of the calculations with values or simple functions based on measurements of real systems (an approach called parameterization). The reliance on this approach, however, seems to do bad things to the electron density values it produces. "Functionals constructed with little or no empiricism," the authors write, "tend to produce more accurate electron densities than highly empirical ones."

It seems that when "we have no idea what the function is," throwing data at the problem can make things worse. This should not be surprising. Data cannot substitute for theoretical insight. Sadly, this trivial observation is worth mentioning given the spirit of the age.
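To make the worry concrete, here is a toy sketch of my own (not the paper's method; the function, noise level, and numbers are all illustrative assumptions): tune a one-parameter model with the right theoretical shape and a many-parameter empirical polynomial on the same noisy measurements, then ask how well each recovers the underlying curve rather than just the data points it was tuned on.

    # Toy illustration only (my own sketch, not the paper's method):
    # tune two models on the same noisy "measurements" and then ask
    # how well each recovers the underlying curve.
    import numpy as np

    rng = np.random.default_rng(0)

    def true_curve(x):
        # Stand-in for the quantity we actually care about (the "density").
        return np.exp(-2.0 * x)

    # A handful of noisy measurements; both models are tuned on these alone.
    x_train = np.linspace(0.0, 2.0, 12)
    y_train = true_curve(x_train) + rng.normal(0.0, 0.02, x_train.size)

    # Model A: one physically motivated parameter, exp(-a*x),
    # chosen by a crude grid search over a.
    a_grid = np.linspace(0.5, 4.0, 3501)
    a_best = a_grid[np.argmin([np.mean((np.exp(-a * x_train) - y_train) ** 2)
                               for a in a_grid])]

    # Model B: a ten-parameter polynomial -- little theory, many knobs.
    poly = np.polyfit(x_train, y_train, deg=9)

    # Compare both against the underlying curve, not just the tuning data.
    x_dense = np.linspace(0.0, 2.0, 400)
    def rms_vs_truth(y_hat):
        return np.sqrt(np.mean((y_hat - true_curve(x_dense)) ** 2))

    print("theory-shaped fit :", rms_vs_truth(np.exp(-a_best * x_dense)))
    print("empirical polyfit :", rms_vs_truth(np.polyval(poly, x_dense)))

In runs like this, the heavily parameterized fit typically hugs the measurements it was tuned on while tracking the underlying curve less faithfully, a small-scale analogue of highly empirical functionals producing worse electron densities.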

Here is one possible implication for theoretical work in linguistics: we often believe that a theory is best tested by seeing how it generalizes beyond the simple cases that motivate it. But in testing a theory in a complex case (where we know less), we necessarily make assumptions based less on theory and more on the empirical details of the case at hand. This is not a bad thing to do. But it carries its own risks, as this case illustrates. The problem with complex cases is that they are likely to provoke interaction effects. To domesticate these effects we make useful ad hoc assumptions. But doing this makes the fundamental principles more opaque in the particular circumstance. Not always, but often.
