Comments on Faculty of Language: "More reading for the curious"

Anonymous — 2015-02-25, 12:18
I don't know if the deep learning papers imply that deep neural networks as a class of models could never mimic human performance; I think what the papers show is that the particular architectures they tested have this issue. I'm not sure that there's any function that can't be computed by a "deep neural network," but I could be missing something. At any rate, these papers are very useful: this seems like the kind of work that needs to be done if we eventually want to make those models more psychologically plausible.

And a question: are there any existing models that achieve more human-like performance, or that don't have blind spots of this sort? Unfortunately, I know next to nothing about computer vision and couldn't find relevant references in the papers. There's a (plausible) speculation at the end of the Nguyen et al. paper that generative models may not suffer from this problem, but I don't see why generative models could not be implemented as "deep neural networks."

Norbert — 2015-02-25, 12:24
I did not intend to imply that DL models could not be fixed. Rather, I was applauding the concern for negative data that these papers were investigating. If these models are to be interpreted psychologically, this kind of data seems very pertinent. As for DL networks as a class, I am in no position to know. The authors did note that these were two different, very standard DL systems and that both seemed foolable in more or less the same way. They also noted possible ways of getting around this, maybe. This is all good; that's what they should be doing. Good for them.

As to your second question, I doubt it, but I don't know. As I noted, these are the state-of-the-art systems that they are playing with. But I am no expert either.

Utpal — 2015-02-25, 12:56
Did you mean this by John Collins? (There is no link in your text.) https://www.academia.edu/11051565/Naturalism_without_Metaphysics

Norbert — 2015-02-25, 12:59
I did indeed. Thx. I fixed the link. A long day.
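As an aside on the "fooling" result the thread is discussing: the core phenomenon is that a tiny, targeted change to an input can push a classifier to a confidently different answer. A minimal sketch of the idea, using a plain linear model as a stand-in for a deep network (the weights, input, and epsilon below are made up for illustration, not from the papers):

```python
import numpy as np

# Hypothetical setup: fixed "trained" weights and an arbitrary input.
rng = np.random.default_rng(0)
w = rng.normal(size=100)   # stand-in for a trained model's weights
x = rng.normal(size=100)   # an input the model scores

def confidence(v):
    """Sigmoid confidence that input v belongs to class 1."""
    return 1.0 / (1.0 + np.exp(-(w @ v)))

# For a linear model, the gradient of the score w.r.t. the input is just w,
# so stepping along sign(w) raises the class-1 score fastest. This mirrors
# the fast-gradient-sign style of adversarial perturbation.
eps = 0.25                     # each coordinate moves by at most 0.25
x_adv = x + eps * np.sign(w)   # small per-coordinate change

print(confidence(x))       # original confidence
print(confidence(x_adv))   # strictly higher, despite the tiny perturbation
```

The point of the sketch is only that the perturbation is bounded coordinate-wise yet shifts the score systematically; in the papers, the analogous perturbations are nearly invisible to humans but flip deep networks to confident wrong labels.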