Here is an amusing addendum to the last post. The NYT ran an article (here) discussing the edge that China will have over the rest of the world in AI research. What’s the edge? Cheap labor! Now you might find this odd; after all, AI is supposed to be that which makes labor superfluous (you know, the machines are coming and they are going to take all of the jobs). So why should cheap labor give the Chinese such an advantage? Easy. Without it you cannot hand-annotate and hand-curate all the data that is getting sucked up. And without that, there is no intelligence, artificial or otherwise. Here is what cheap labor gets you:
Inside, Hou Xiameng runs a company that helps artificial intelligence make sense of the world. Two dozen young people go through photos and videos, labeling just about everything they see. That’s a car. That’s a traffic light. That’s bread, that’s milk, that’s chocolate. That’s what it looks like when a person walks.
As a very perceptive labeler put it (odd that this observation has not been made by those computer scientists pushing Deep Learning and Big Data; I guess that nothing dampens critical awareness more completely than the ka-ching of the cash register):
“I used to think the machines are geniuses,” Ms. Hou, 24, said. “Now I know we’re the reason for their genius.”
Right now, this is the state of the art. All discussion of moving to unsupervised, uncurated learning is, at this time, idle talk. The money is in labeled data that uses the same old methods we long ago understood would not be useful for understanding either human or animal cognition. What makes humans and animals different from machines is what they come to the learning problem with: lots and lots of pre-packaged innate knowledge. Once we have a handle on what this is, we can begin to ask how it works and how to put it into machines. This is the hard problem. Sadly, much of AI seems to ignore it.
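To make the article's point concrete, here is a toy sketch in Python of what supervised learning on hand-labeled data amounts to; the feature vectors and labels below are invented for illustration and stand in for the annotated images described above, not for any particular company's system.

import math

# Hypothetical hand-labeled data: (feature vector, label) pairs, standing in
# for the photos that annotators tag as "car", "traffic light", and so on.
labeled_examples = [
    ((0.9, 0.1), "car"),
    ((0.8, 0.2), "car"),
    ((0.1, 0.9), "traffic light"),
    ((0.2, 0.8), "traffic light"),
]

def classify(features):
    """1-nearest-neighbor: return the label of the closest annotated example."""
    nearest = min(labeled_examples, key=lambda ex: math.dist(ex[0], features))
    return nearest[1]

print(classify((0.85, 0.15)))  # -> "car", only because annotators labeled nearby points "car"

Take away labeled_examples and classify has nothing left to say; that is the precise sense in which the labelers are "the reason for their genius."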
ReplyDelete"All discussion of moving to unsupervised, uncurated learning is, at this time, idle talk.":
It's certainly not a solved problem, but it's certainly not idle talk either. This ignores one of the highest-profile successes in NLP from the last two years: methods like ELMo and BERT that make it possible to train a model for a sentence understanding task using _mostly_ unlabeled running text (see the sketch below).
Trading an NYT story for an NYT story: https://www.nytimes.com/2018/11/18/technology/artificial-intelligence-language.html
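To unpack the point about ELMo and BERT: the "labels" in this style of pretraining are manufactured from the raw text itself (hide a word and ask the model to predict it), so the bulk of training requires no human annotation. Here is a minimal sketch of that masked-word objective in action, assuming the Hugging Face transformers package is installed and the bert-base-uncased weights can be downloaded; it illustrates the idea rather than reproducing the commenter's own setup.

from transformers import pipeline

# BERT was pretrained to fill in masked-out words using nothing but unlabeled running text.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model ranks candidate words for the blank from what it learned during pretraining.
for prediction in fill_mask("The delivery truck stopped at the [MASK] light."):
    print(prediction["token_str"], round(prediction["score"], 3))

A small hand-labeled set is still needed to fine-tune such a model for a specific sentence-understanding task, which is the force of the commenter's "mostly."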
Agreed. Even before the recent neural-net-ML resurgence, this has been a goal and active topic of research in neuroscience (computational and otherwise) for decades.
Here is a nice short (non-NYT) review:
http://www.gatsby.ucl.ac.uk/~dayan/papers/dun99b.pdf
One thing I've noticed just recently (that I'm sure won't come as a surprise to people in the field) is that current work in classical AI paradigms often isn't marketed as AI. For example, there's a large and pretty sophisticated literature on automated planning. This literature is still dominated by classical techniques. (Encode the planning problem using a logic and then prove theorems; encode it as a graph and search for paths between nodes; or do a mix of both.) Back in the day, this would have been seen as cutting-edge AI research. Now that it's not trendy, it's just "automated planning".
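To illustrate the "encode it as a graph and search for paths between nodes" style the comment describes, here is a minimal sketch; the toy domain (a robot fetching a key) is invented for the example and stands in for the much richer state encodings the planning literature actually uses.

from collections import deque

# A state is (robot_location, has_key); actions are edges between states.
ACTIONS = {
    ("hall", False): [("go-kitchen", ("kitchen", False))],
    ("kitchen", False): [("pick-up-key", ("kitchen", True)),
                         ("go-hall", ("hall", False))],
    ("kitchen", True): [("go-hall", ("hall", True))],
}

def plan(start, goal_test):
    """Breadth-first search over the state graph; returns the shortest action sequence."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions_so_far = frontier.popleft()
        if goal_test(state):
            return actions_so_far
        for action, next_state in ACTIONS.get(state, []):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, actions_so_far + [action]))
    return None  # no plan reaches the goal

# Get the key and bring it back to the hall.
print(plan(("hall", False), lambda s: s == ("hall", True)))
# -> ['go-kitchen', 'pick-up-key', 'go-hall']

Real planners layer logical action schemas (e.g., PDDL), heuristics, and far better search on top of this skeleton, but the skeleton is the classical one.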