Here are some papers on the current hype over AI. They are moderately skeptical about the current state of the art; not so much about whether there are tech breakthroughs to be had (all agree that these are forthcoming), but about the implications of this. Let me say a word or two about this.
There is a lot of PR concerning how Big Data will revolutionize our conceptions of how the mind works and how science should be conducted. Big Data is Empiricism on steroids. It is made possible by hardware breakthroughs in memory and speed of computation. We can do more of what we have always done, faster, and this can make a difference. I doubt that this tells us much about human cognition. Or, more accurately, what it does tell us is likely wrong.

Big Data is often coupled with Deep Learning, and linguists have every reason to believe that Deep Learning is an incorrect model of human cognition. Why? Because it is a modern version of the old discovery procedure: level 1 generalizations are generalized again at level 2, level 2 generalizations are generalized again at level 3, and so on. As a model of cognition, this tells us that higher levels are just generalizations over lower ones (e.g. from phonemes we get morphemes, from morphemes we get phrase structure, from phrase structure we get...). GG started from the demonstration that this is an incorrect understanding of linguistic organization. Levels exist, but they are in no sense reducible to the ones lower down. Indeed, whether it even makes sense to speak of 'higher' and 'lower' is quite dubious. The levels interact but don't reduce, and any theory of learning that supposes otherwise is wrong if this is right (and there is no reason to think that it is not right; at least no argument has been presented against level independence). So Deep Learning and Big Data are, IMO, dead theories walking. We will see this very soon.
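To make the discovery-procedure picture concrete, here is a toy sketch (my own illustration, not anything from the papers linked): each "level" is built purely by merging the most frequent adjacent pair of units at the level below, in the style of byte-pair merging. Every higher unit is literally nothing but a generalization over lower-level distributions, which is exactly the architecture being criticized above.

```python
from collections import Counter

def most_frequent_pair(seq):
    """Return the most common adjacent pair of units, or None if none exists."""
    pairs = Counter(zip(seq, seq[1:]))
    if not pairs:
        return None
    return pairs.most_common(1)[0][0]

def merge_pair(seq, pair):
    """Replace every occurrence of `pair` with a single fused unit."""
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            out.append(seq[i] + seq[i + 1])  # the fused higher-level unit
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out

def build_levels(text, n_levels=3):
    """Each level is just a re-generalization over the level below it."""
    levels = [list(text)]  # level 0: raw characters
    for _ in range(n_levels):
        pair = most_frequent_pair(levels[-1])
        if pair is None:
            break
        levels.append(merge_pair(levels[-1], pair))
    return levels

levels = build_levels("the cat sat on the mat", n_levels=3)
for i, lvl in enumerate(levels):
    print(i, lvl)
```

Running this shows larger and larger fused units emerging bottom-up. On the view the post rejects, morphemes, words, and phrases would all be units of this kind; the GG point is that linguistic levels are not related to one another in this reductive way.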
The interview with Gary Marcus (here) discusses these issues and notes that, historically, what we have here is more of the same: claims of the kind that have in the past proven to be wildly oversold. He thinks (and I agree) that we are getting another snow job this time around too. The interview rambles somewhat (bad editing), but there is lots in it to provoke thought.
A second paper on a similar theme is here in Aeon. The fact that it is in Aeon should not be immediately held against it. True, there is reason to be suspicious given the track record, but the paper was not bad, IMO. It argues that there is no "coming of the machines."
Here is a third piece on programming and advice about how not to do it (the Masaya). It interestingly argues for a Marrian conception of programming: understand the computational problem before you write code. Seems like reasonable advice.
Last point: I mentioned above that Big Data is influencing not only how we conceive of Minds and Brains but also how we should do science. The idea seems to be that with enough data, the search for basic causal architecture becomes quaint and unnecessary. We can just vacuum up the data and the generalizations of scientific utility will pop out. On this view, theories (and not only minds) are just compendia of data generalizations, and given that we can now construct these compendia more efficiently and accurately by analyzing more and more data, theory construction becomes a quaint pastime. The only real problem with Empiricism was that we did not gather enough data fast enough. And now Big Data can fix this. The dream of understanding without thinking is finally here. Oy vey!