This recent piece (by James Somers) on Douglas Hofstadter (DH) offers a brief review of AI from its heady days (when the aim was to understand human intelligence) to its lucrative days (when the goal shifted to cashing out big time). I have not personally been a big fan of DH's early stuff, for I thought (and wrote here) that early AI, the one with cognitive ambitions, had problems identifying the right problems for analysis and massively oversold what it could do. However, in retrospect, I am sorry that it faded from the scene, for though there was a lot of hype, the ambitions were commendable and scientifically interesting. Indeed, lots of good work came out of this tradition: Marr and Ullman were members of the AI lab at MIT, as were Marcus and Berwick. At any rate, Somers gives a short history of the decline of this tradition.
The big drop in prestige occurred, Somers notes, in the early 1980s. By then "AI…started to…mutate…into a subfield of software engineering, driven by applications…[the] mainstream had embraced a new imperative: to make machines perform in any way possible, with little regard for psychological plausibility" (p. 3). The turn from cognition was ensconced in the conviction that "AI started working when it ditched humans as a model, because it ditched them" (p. 4). Machine translation became the poster child for how AI should be conducted. Somers gives a fascinating thumbnail sketch of the early system (called 'Candide' and developed by IBM) whose claim to fame was that it found a way to "avoid grappling with the brain's complexity" when it came to translation. The secret sauce, according to Somers? Machine learning! This process deliberately avoids worrying about anything like the structures that languages deploy or the competence that humans must have to deploy them. It builds on the discovery that something that "almost doesn't work: a machine…that randomly spits out French words for English words" can be tweaked "using millions of pairs of sentences…[to] gradually calibrate your machine, to the point where you'll be able to enter a sentence whose translation you don't know and get a reasonable result…[all without] ever need[ing] to know why the knobs should be twisted this way or that" (pp. 10-11).
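For the technically curious, here is a minimal sketch of the knob-twisting Somers describes. It is not IBM's actual Candide system, just a toy version of the classic IBM Model 1 word-alignment scheme from that same statistical MT tradition, trained by expectation-maximization on a three-sentence corpus; the corpus, variable names, and iteration count are my illustrative assumptions.

```python
# Toy IBM Model 1 sketch: start with a machine that "almost doesn't
# work" (uniform English->French translation probabilities, i.e. it
# spits out French words at random) and let sentence pairs twist the
# knobs. No grammar or linguistic structure is consulted anywhere.
from collections import defaultdict

# Hypothetical miniature parallel corpus (English, French).
corpus = [
    ("the house", "la maison"),
    ("the book", "le livre"),
    ("a book", "un livre"),
]

eng_vocab = {w for e, _ in corpus for w in e.split()}
fr_vocab = {w for _, f in corpus for w in f.split()}

# The "knobs": t[f][e] = P(French word f | English word e), uniform at start.
t = {f: {e: 1.0 / len(fr_vocab) for e in eng_vocab} for f in fr_vocab}

for _ in range(20):  # each EM pass recalibrates the knobs from the data
    count = defaultdict(lambda: defaultdict(float))
    total = defaultdict(float)
    for e_sent, f_sent in corpus:
        e_words, f_words = e_sent.split(), f_sent.split()
        for f in f_words:
            norm = sum(t[f][e] for e in e_words)
            for e in e_words:
                frac = t[f][e] / norm  # expected fractional alignment count
                count[f][e] += frac
                total[e] += frac
    for f in fr_vocab:
        for e in eng_vocab:
            t[f][e] = count[f][e] / total[e] if total[e] else 0.0

# Co-occurrence statistics alone have now pushed, e.g., P(maison | house)
# toward 1 -- calibration without ever knowing *why* the knobs moved.
for f in sorted(fr_vocab):
    best = max(t[f], key=t[f].get)
    print(f"{best} -> {f}  (p={t[f][best]:.2f})")
```

Run it and livre pairs with "book" and maison with "house" purely from co-occurrence counts, which is exactly the point: scale the corpus from three sentences to millions and the knobs settle into serviceable translations with no appeal to structure or competence.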
For this all to work requires "data, data, data" (as Norvig is quoted as saying). Take "…simple machine learning algorithms" plus 10 billion training examples "[and] it all starts to work. Data trumps everything," as Josh Estelle at Google is quoted as noting (p. 11).
According to Somers, these machine-learning techniques are valued precisely because they allow serviceable applications to be built by abstracting away from the hard problems of human cognition and neuro-computation. Moreover, the practitioners of the art know this. These techniques are not taken to be theories of thinking or cognition. And, if this is so, there is little reason to criticize the approach: engineering is a worthy endeavor, and if we can make life easier for ourselves in this way, who could object? What is odd is that these same techniques are now often recommended for their potential insight into human cognition. In other words, a technique that was adopted precisely because it could abstract from cognitive details is now being heralded as a way of gaining insight into how minds and brains function. However, the techniques described here will seem insightful only if you take minds/brains to gain their structure largely via environmental contact. Thinking, from this perspective, is just "data, data, data" plus the simple systems that process it.
As you may have guessed, I very much doubt that this will get us anywhere. Empiricism is the problem, not the solution. Interestingly, if Somers is right, AI's pioneers, the people who moved the field away from its initial goals and deliberately steered it in a more lucrative engineering direction, knew this very well. It seems that it has taken a few generations to lose this insight.