Comments on Faculty of Language: "Linguistic just so stories, plus ca changes..."

stelfer cronos (2014-09-13 05:02):

I would like to have little chatbots, all running the same computer code (implemented in less than 1000 lines of Prolog, or 10 lines of English), which would cooperate by talking to each other. They would all have access to a large database of text such as is provided by Google.

Richard Mullin

Anonymous (2014-05-12 19:06):

@Alex: I am sorry, I must be missing something you want me to do or say. I have repeatedly said I AGREE with you that some of the work presented at Evolang was disappointing. How many times do I need to repeat it before you accept that I share some of your concerns? No one has said that modeling provides all the answers. We are commenting on a post that celebrates the Hauser et al. paper - so if you continue to bring up the issue [vs. mentioning it once and, after it has been acknowledged, moving on to something else], what is your point, if not support of Hauser et al. regarding modeling [I am aware you do not support them on other issues]?

I am personally not convinced by all the work done by Simon Kirby, BUT I think he is way too smart to have learned nothing from the lessons of the work you mention. And notice that the problem of language evolution is not the same as the problem of language acquisition, or the problem of modelling full-scale human language.
At one point our distant ancestors had no language; now we have a wide variety of massively complex languages. Our closest primate cousins have at very best some 'toy language' - even though they have had as much time to evolve away from our LCA as we have. So why is there this huge difference? Modeling what MIGHT have happened at the very beginning of the transition from no language to human language seems one way to tackle the problem. By itself it probably won't succeed, but it is not the only thing language evolutionists do. To quote what Mark says below [my apologies for inserting caps]: "[the work] is interdisciplinary, uses multiple methods, TRIES to interface with broader issues and TRIES to contribute to a cumulative science of language." Trying to do X is not the same as having succeeded at X. So maybe you can clarify why you keep coming back to the 'has not succeeded yet' part?

Alex Clark (2014-05-12 14:54):

@Christina: Weren't we just talking about false dichotomies?

I think there is a methodological lesson in the successes and failures of 80s-style connectionism/PDP: what works well on toy examples may not scale to the full complexity of natural language.

Anonymous (2014-05-12 12:00):

@Max: my apologies; I was under the impression you were interested in a serious discussion of my review - my mistake.

Apparently you just search for reasons to generate conflict. How else could one explain your reply to my sixth point [where IN THE REVIEW of SoL did I sling any mud at Chomsky?]
with reference to a comment I made on this blog about something Norbert had said, some two years after SoL was published - truly astonishing! I suggest you find yourself another victim; I have no interest in continuing this conversation.

Maxim Baru (2014-05-12 11:38):
@christina:

riiiight.... but the criticism isn't only that these probabilistic models fail; it's that there is a principled reason these models can't be the whole, or even most, of the story. The question isn't: how does the mind pick amongst a pre-specified set of hypotheses (for, let's say, a grammar) given some data? It's: where do the hypotheses come from?

These models have nothing really to say about that, other than appealing to a structured environment - which we have known to be a problem since at least David Hume.

And this all goes without saying that even if these models succeeded in modelling language comprehension in some manner (notice that spontaneous language production, appropriate to circumstance but not determined by context, is not even on the agenda), there would be no justification for concluding that that is how humans produce and comprehend language. The same goes for every other kind of thinking: the way a computer plays chess tells us next to zero about how humans play chess. This is elementary stuff, really. Failure to take it seriously is why the evolang stuff isn't really taken seriously outside the tabloids.

Maxim Baru (2014-05-12 11:23):
@Christina:

Responses (to your numbered points):

First: that's a spurious dismissal if ever I heard one. Plenty of neuroscientists take what you would call a generativist view of the mind seriously - take, say, the folks Andrea Moro has been working with, or Gallistel, or David Poeppel, or books about neuroscience like 'Principles of Neural Science', 4th ed., by Eric Kandel, James Schwartz, and Thomas Jessell. Also, if you'd like to take a crack at refuting Fodor's arguments, I'd be very amused to see you have a go. Please, by all means.

Third: there's no argument here, only an assertion of opinion. You asked questions about people's motivations, and I responded; that's all. Seeing as your work was all speculation about motivation, as opposed to published correspondence with McGilvray & Chomsky, I'm not sure why speculation is out of bounds.

Fourth: that seems to me an evasive response. We aren't arguing about whether Chomsky is modest in general; we were talking about one q/a set, which you are so fond of dissecting. Modesty is pretty subjective anyhow, I suppose. Being dismissive and being modest aren't mutually exclusive by any definition I know of: one can be perfectly modest and still dismiss geocentrism, for example.

Fifth: ideology and science have always been snug bedfellows. Speculation about a person's preferred scientific world-view and their politics has been fair ground since about the seventeenth century. Not that such speculations ought to convince anyone in and of themselves, but they certainly raise eyebrows.

Sixth: ah... how about down below, where you compare Norbert to the East German authorities during the communist era, for example?

Your entire argument is based on discussing personalities. Any reader can review your comments on this thread and notice that. And you make no attempt to separate discussion of matters of fact from discussion of personalities.
I'm sure Robert Boyle must be spinning in his grave...

Anonymous (2014-05-12 10:44):

@Alex: maybe it is because I actually attended Evolang X that I am getting the feeling you're beating a dead horse. The issue that many of the models are based on hugely oversimplified "languages" has been raised at every talk by a modeller that I attended. It has been duly noted. But no one claims to have all the answers, to know all the steps involved in language evolution. Now, what is the best way to move on? Listen to Norbert and put these questions on ice for 150 years, or try to take some more baby steps towards the still very distant goal?

Regardless of how you answer my question: if the simplicity of the models disappoints you, how much more dissatisfied must you be with the Chomskyan models of the biological language faculty? Do you think that, because those models are disappointing, we should take a 150-year hiatus from generative grammar work?

Alex Clark (2014-05-12 09:31):

Going back to the evolang thing: a good example of where I agree with the critique is the paper "Linguistic structure is an evolutionary trade-off between simplicity and expressivity", from CogSci 2013, by Kenny Smith and others.
(Available at http://mindmodeling.org/cogsci2013/papers/0256/paper0256.pdf.)

This is a really good paper, I think (so I hope the authors don't mind me critiquing it a bit): there are some well-thought-out simulations, and it is looking at a central problem, BUT the languages it is looking at are really simple (a two-letter alphabet, and all strings are of length 2).

Of course I take the point that it is appropriate to start with a really simple system, that this keeps the computational costs down, and so on. But I still feel dissatisfied by just how oversimplified the "language" is.

Asad Sayeed (2014-05-12 09:15):

@Alex: Sorry. I thought it would go somewhere more interesting. *shrug*

Anonymous (2014-05-12 08:42):

@Alex: thanks but no thanks - I have seen quite enough already. But I am eagerly awaiting Norbert's report of the strong crossover results that Postal is ignorant of. THAT should be worth our undivided attention.

Alex Clark (2014-05-12 07:50):

Can we not have this discussion thread-jacked onto Platonism in linguistics again? Please?
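To make concrete just how small the signal space in the Smith et al. discussion above is (a two-letter alphabet, all strings of length 2), here is a rough sketch. It is my own toy construction, not the paper's actual model: the meaning space, and the expressivity and simplicity measures, are illustrative assumptions.

```python
# Toy sketch (not the paper's model): with alphabet {a, b} and strings of
# length 2, a "language" is just a mapping from 4 meanings to 4 signals.
from collections import Counter
from itertools import product

ALPHABET = "ab"
SIGNALS = ["".join(p) for p in product(ALPHABET, repeat=2)]  # aa, ab, ba, bb
MEANINGS = list(product(["m1", "m2"], ["f1", "f2"]))         # 2x2 meaning space

def expressivity(lang):
    """Fraction of meanings that get a signal shared with no other meaning."""
    counts = Counter(lang.values())
    return sum(1 for m in lang if counts[lang[m]] == 1) / len(lang)

def simplicity(lang):
    """Crude simplicity proxy: fewer distinct signals means simpler."""
    return 1 - (len(set(lang.values())) - 1) / (len(SIGNALS) - 1)

# A fully expressive language: every meaning gets its own signal.
expressive = {m: s for m, s in zip(MEANINGS, SIGNALS)}
# A degenerate language: one signal for everything - maximally simple.
degenerate = {m: SIGNALS[0] for m in MEANINGS}

print(expressivity(expressive), simplicity(expressive))  # 1.0 0.0
print(expressivity(degenerate), simplicity(degenerate))  # 0.0 1.0
```

Even the maximally different "languages" here live in a space of only four possible signals, which is what makes the oversimplification worry concrete.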
Asad Sayeed (2014-05-12 07:45):

I don't know how you got that implication from what I wrote. If and when someone builds a strong AI that can produce human language and has the characteristics of a human mind, it would ultimately have been built by a biological object and connected to the history and genesis of the biological objects that produce language, and the explanation of its capacity would be intrinsically connected to the explanation of the capacities of biological entities. When it is self-replicating, it will presumably become some kind of chapter of the evolang story.

The parrot produces language in a "broad" sense, but does it produce and acquire language in the manner that a human child produces and acquires language? Would it have done so if there hadn't been a human being to teach it language?

I'm not entirely sure where you're going with this. Are you attempting to convince me that language exists independently of biological objects? Would it have existed if no humans had existed? Perhaps it might, if, say, squid evolved it. How then? Would it look like the language that humans produce? Why would it?

It's all very esoteric to me. And very science-fictional. May I interest you in a sci-fi reference?

Anonymous (2014-05-12 06:55):

"Biological objects are the only entities that produce language, and they only produce it after a period of acquisition (whatever that period may be)."

May I assume from this that you reject strong AI?
In other words, if a hypothetical computer/robot produces something indistinguishable from English [i.e., passes the Turing Test], it would not count as language because it is not produced by a biological object? On the other hand, what is produced by a well-trained parrot counts as language because the parrot is a biological object?

Asad Sayeed (2014-05-12 04:42):
Far simpler.

Children learn (acquire) language and, from that acquisition, produce language. Biological objects are the only entities that produce language, and they only produce it after a period of acquisition (whatever that period may be). Children learn *about* geography, but they do not produce continents. They might produce *language* about continents, however.

This seems to me a relatively elementary distinction. So in a sense, yes: I find it counterintuitive for any X to be learnable (acquirable) by children if X is not generated by a biological object. I do not, however, find it counterintuitive for Xs to be learned *about* without being generated by a biological object. I am happy to say that there are many things not generated by a biological object that people learn *about* - using language, visual aids, direct experience, etc.

Anonymous (2014-05-12 02:07):
@Asad Sayeed:

Before getting into any detail, I would like to make sure I correctly understand what you assert here:

"I find the ideas "not generated by a biological object" and "yet learnable by children" to be as counterintuitive a claim as any that I have heard a "mainstream generative grammarian" make."

Are you saying you find it counterintuitive for any X to be learnable by children if X is not generated by a biological object? In other words, do you believe children can only learn about things that have been generated by biological objects? If so, how do you explain children acquiring knowledge about geography, astronomy, quantum physics - just to name a few? Mountains and rivers, planets and galaxies, etc. are not generated by a biological object, yet we seem quite capable of acquiring knowledge about them...

mark (2014-05-12 01:36):

Norbert writes (I'm leaving out the all-caps shouting):

"Generative Grammar (and Postal IS a generative grammarian, indeed the discoverer of crossover phenomena) has made non-trivial discoveries about how grammars are structured. [...] So, when asked about a result, there are tons to choose from, and we should make the other side confront these."

I find the talk about "we" and the "other side" sociologically interesting, but ultimately unappealing. As Alex Clark notes, there are a couple of different senses in which one might define the sides here; that's a first problem. Also, for me as someone from Europe, it feels a bit, how shall I put it, old-fashioned to think in terms of "sides".
I think younger generations of scientists are far more heterodox, and more attuned to the necessary plurality of the language sciences, than you may realise, or than may be apparent from some of the regular commenters on this blog. This goes both ways: new generations are aware of the important advances of past decades, but also of the affordances of new data, methods, and theories for pushing linguistics forward.

This is why I quite like Charles Yang's and David Adger's recent work (just as I like Morten Christiansen's and Simon Kirby's work): it is interdisciplinary, uses multiple methods, tries to interface with broader issues, and tries to contribute to a cumulative science of language. I don't think all the talk of "sides", and the worries about people not acknowledging the value of certain discoveries, contributes to that (hell, even Evans & Levinson acknowledge that broadly generativist approaches have made important contributions to our understanding of how grammars are structured).

For the same reason, I think the Hauser et al. review ultimately misses the mark: it notes that, under a very limited conception of language, the evolution of one narrowly defined property may still be mysterious. That's fine (who could be against isolating specific properties for detailed investigation?), but the slippage from language-as-discrete-infinity to language-in-all-its-aspects is jarring and misleading. "The mystery of the evolution of discrete infinity" might have been a more accurate title - although clearly that would draw a smaller readership.

Anonymous (2014-05-12 01:25):

@Alex: You are absolutely right that there are all kinds of ambiguities in the term 'generative grammar'.
As far as Postal is concerned: his current work does not fall under any of the three definitions you offer. And if he says he is not a generative grammarian, I see no reason to doubt it.

It is not surprising that Norbert refuses to accept this. Norbert has basically unlimited trust in Chomsky, and Chomsky told me in an e-mail exchange that "Edge-based Clausal Syntax", which he said he had not even read, must be a work of generative grammar because Postal is a generative grammarian. IMHO this claim is an indication of the virtually religious dogmatism of a man who cannot accept that people are capable of changing their minds about the best way of doing linguistic research [in Postal's case, moving from something covered roughly by your first definition to a non-generative framework].

One more thing: Chomskyans like to create the impression that anyone who rejects generativism has to accept a constructivist account [of, say, someone like Tomasello]. This is of course not the case. Non-generativists come in many stripes, and if Norbert wants to claim that none of them can account for the phenomena he wrongly claims Chomskyans can account for, he has to do a lot better than demolishing a strawman or two...

Asad Sayeed (2014-05-12 01:23):

@Christina: You are right to suggest that I wouldn't have time to cover the history of these arguments in that degree of detail.
The students will, in any case, have exposure to other approaches from almost the entire remaining department for the rest of their stay here.

As a side note: I find the ideas "not generated by a biological object" and "yet learnable by children" to be as counterintuitive a claim as any that I have heard a "mainstream generative grammarian" make.

This discussion has, however, inspired me to assign this Hauser et al. paper for a student presentation (they're required to make presentations under my supervision) - to a student who is an evolang enthusiast and no shrinking violet, with the instruction to make the discussion "fun".

On your last point: I actually *do* corpus and lab psycholinguistic studies of linguistic phenomena from a functional perspective (in the sense of working memory/attention/UID and other such concerns), as well as applications, and I have some familiarity with the literature you're talking about... and I would say that we are far from a position in which functional accounts can supplant structural accounts - particularly of the full range of island phenomena - because they are answering different questions. The starting problem (and communications disconnect, to put it differently from my previous posts) is deciding what it would even mean for a functional account to supplant a structural account, what it means to "explain", and so on.

Asad Sayeed (2014-05-12 01:11):

For the purposes of the discussion, I think we're talking about 2 and 3, to the extent that 2 and 3 overlap. At least, that was my impression. Someone in categories 2 and 3 might also use 1.
IMO they probably should, but that's another story.

Alex Clark (2014-05-12 00:38):

An observation: I think the generativist/non-generativist distinction is being used in several different senses here. Is a generativist someone who

1. uses formal generative grammars,
2. thinks linguistic theories are ultimately about cognitive abilities, or
3. works in Mainstream Generative Grammar?

Anonymous (2014-05-11 20:49):

@David: Thank you; yes, please keep me posted on that paper.

@Asad Sayeed: Next time you co-teach [or solo-teach] anything on GG, you may want to make sure you advise your students that what Norbert says above is utter nonsense on multiple levels. And while you're at it, dispel several of the urban myths regarding Paul Postal that people like Norbert like to perpetuate. Specifically, make sure your students learn that:

1. Postal worked at MIT from 1961 to 1965 and greatly helped to establish the early version of Chomskyan TG. He left MIT because he wanted to, not because Chomsky forced him to [in fact, at the time Chomsky wanted him to stay]. So, pace widespread urban legend, Postal's leaving MIT is no reason for any animosity between him and Chomsky.

2. Postal lent his support to the generative semantics 'movement' of the late 1960s and early 1970s, which opposed certain aspects of the Chomskyan approach, because he was convinced at the time that this was the more promising approach [assign "The Best Theory" to your students].
You may wish to include some discussion of Chomsky's reaction: calling Postal's theory the 'worst theory' [not merely worse than his own, but THE worst]. One fairly accurate account of this history is given in Huck & Goldsmith's "Ideology and Linguistic Theory: Noam Chomsky and the Deep Structure Debates"; make sure you assign the interview with Postal.

3. Postal has never been a generative grammarian, if 'generative grammar' refers to I-language [i.e., a part of human biology]. He is a Platonist about natural language and rejects the idea that languages are biological objects or generated by biological objects. He believes, of course, that humans can learn languages, but holds that it is the task of psychologists, not linguists, to figure out how kids acquire knowledge of language.

4. His own work led Postal to reject generative accounts of grammar and to develop his own framework. You may wish to assign Postal's 2011 book "Edge-based Clausal Syntax: A Study of (mostly) English Object Structure"; make sure the students read the introduction regarding Barrel A grammars.

5. At this point it may go without saying, but just to be sure: emphasize that anyone who claims, as Norbert did, that Postal IS a generative grammarian is either completely ignorant of Postal's recent [i.e., post-1974!] work or deeply immersed in wishful thinking that denies anything threatening one's ideology.
Now, in case you do not have the time to teach history, you need to ensure your students learn at least that the strong crossover phenomena Norbert refers to have no explanation in what he calls "generative" accounts, and that the current most general explanations of island phenomena, taking a wide range of corpus, lab psycholinguistic, and introspective evidence into account, point away from a structural account.

Anonymous (2014-05-11 20:44):

This comment has been removed by the author.

Asad Sayeed (2014-05-11 15:29):

Fair enough. :) As you might recall, way back when (the when being about 2008 or 2009) I was complaining that the online world had been left to, mmm, the sort of people who think that finding an example by Googling summarily defeats a * in a linguist's illustration of a phenomenon.

But I've since realized (through my change of working environment) that another problem is that generativists often don't speak the kind of language that even open-minded non-generativists and scientists in other fields recognize as "science", or mathematics. The idea that language is a field of inquiry that might need a sui generis working vocabulary and methodology is not going to occur to people outside the field, especially if other fields of psychology use superficially similar tools. Why would it?

You might sort of see it in the fact that CB felt it necessary to explain, with a helpful example, that people don't literally make infinite embeddings, since this piece of information was evidently missing from the discussion.
Therefore (am I misunderstanding the logic?) discrete infinity is not a scientifically important phenomenon, at the very least.

Let's just say that she's not the only person I've met in real life who has felt the need to explain that in these sorts of discussions. Since I spend a lot of time in machine-learning circles, I often see the temptation to say that, because it isn't REALLY infinite, a good representation of human language as a mental object can just rely on a relatively crude system that learns strings up to a particular length.

I just co-taught a software project course on building a system that generates language probabilistically for a *limited* domain of use, and even realistic *short* sentences are FREAKISHLY difficult to produce without the aid of an explicit grammar (which is what most AI generation systems use). The result doesn't exactly shock me: a system doesn't have to go to literal infinity to take on the effective characteristics of being infinite. It can get "bad" very quickly. Everyone in computer science knows that, but *connecting* it to the reality of language is the kind of basic-level explaining that linguists need to do.

As I mentioned to you privately, now that I've had this post-Maryland experience, I finally sort of think I've figured out how to present syntactic theory to people who aren't linguist-linguists and won't be, but will be psycho- or computational linguists, maybe. Well, at least I'm running a sort of experiment in trying to teach the "mentality" to young comp-ling researchers before they develop certain mental habits, heh.

Norbert (2014-05-11 14:13):

There are and always will be climate science deniers in every domain of inquiry.
If you/we refuse to defend what we have found, and to explain why it is important, then why should we be surprised that others think it's not? Generative Grammar (and Postal IS a generative grammarian - indeed, the discoverer of crossover phenomena) has made non-trivial discoveries about how grammars are structured. NONE OF THESE PROPERTIES HAS BEEN EXPLAINED IN ANY OTHER WAY!!! So, when asked about a result, there are tons to choose from, and we should make the other side confront these.

davidadger (2014-05-11 11:01):

I guess that paper was on my mind, and it rather directly speaks to the issue that linguistic knowledge is not stored as a set of statistical surface properties. But Christina, you might also be interested in my 'Syntax for Cognitive Sciences', on lingbuzz at http://ling.auf.net/lingbuzz/001990. It's essentially a summary of syntactic research over the last 50 years, so it's very superficial (and the journal wants me to cut 30% of it, so it will get more so!), but it does try to say why generative syntax is important for thinking about human cognition more generally. I'll have the revised version soon, so let me know if you want to see it.
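On Asad Sayeed's point further up about generating even short sentences: a minimal sketch of why an explicit grammar helps. The toy grammar below is a hypothetical illustration of my own (not the course system he describes); sampling from it guarantees well-formed output in a way that a flat, length-bounded string model does not.

```python
# A tiny CFG and a random sampler over it. Every expansion of "S" is
# grammatical by construction, no matter which productions are chosen.
import random

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["sees"], ["chases"]],
}

def generate(symbol="S", rng=random):
    """Expand a symbol by recursively choosing productions at random."""
    if symbol not in GRAMMAR:          # terminal word
        return [symbol]
    production = rng.choice(GRAMMAR[symbol])
    return [word for part in production for word in generate(part, rng)]

print(" ".join(generate()))  # e.g. "the dog chases the cat"
```

By construction every sample is a well-formed sentence; a model that merely bounds string length offers no such guarantee, which is one concrete way to connect the "effectively infinite" point above to practice.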