Colin Phillips and Chris Dyer sent me the following links that you might find interesting. Colin's (here) is a discussion of deep learning and its relation to language work from Michael Jordan (he's a real big shot in these areas; see here). I previously posted a link to comments that Jordan made on a Reddit Q&A (here) that include discussion about deep learning. This one specifically targets his thoughts on language.
Two interesting features: first, Jordan believes that NLP is a possible sweet spot where scientific and technological work might converge. If correct (and I personally agree with Jordan here), linguists should cultivate this. It would be very good for the field were it to turn out that anyone who did computational work on language had to have a serious background in syntax, phonology, semantics, pragmatics, etc. Imagine a world in which formal linguistics was to computational work on NL what physics is to engineering. It would certainly raise the profile of the discipline and thereby, hopefully, encourage more fundamental work on core linguistic questions.
Second, Jordan is supportive of the kinds of things that would further the project: linking NLP work to the kinds of representations that linguists delight in. It goes without saying that I agree with him here. He thinks that this is where the next intellectual/technological opportunity lies. We should resoundingly second this thought.
Chris sends a link to a discussion of the work that just won the Nobel Prize (here). It's really cool, and it nicely describes how to combine cognition with neuroscience for the benefit of each. The system the Mosers describe is, in their words, "a likely implementation of a universal brain metric for space" (p. 72). Kant would be delighted.