Comments on Faculty of Language: "Revolutions in science; a comment on Gelman"

Norbert (2018-08-28, https://www.blogger.com/profile/15701059232144474269):

Hear, hear. I think the idea that experiments should be infallible is a residue of you know what (psst, Eism!). The idea is that whereas theory is airy-fairy and speculative, so we expect it to always be suspect and baaaad, experiments, being just careful looking at the facts, will, when not polluted by theoretical preconceptions, laziness, or sloppiness, always be solid and true. This idea has proven to be, ahem, overstated. So we find that experiments also run into trouble sometimes. Does this indicate anything "wrong"? Hard to tell. It would be useful if the conclusion from all of this were that we need to be skeptical of everything and understand how hard getting things right is. But instead we get virtue signaling and reinforcement of an ideal that is really quite misleading.

Bill Idsardi (2018-08-27, https://www.blogger.com/profile/10570926308058368183):

Coincidentally today (or not):

https://www.nature.com/articles/s41562-018-0399-z

https://www.nature.com/articles/d41586-018-06075-z

https://www.theatlantic.com/science/archive/2018/08/scientists-can-collectively-sense-which-psychology-studies-are-weak/568630/

From the original article (first link), "Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015":

"We find a significant effect in the same direction as the original study for 13 (62%) studies, and the effect size of the replications is on average about 50% of the original effect size. Replicability varies between 12 (57%) and 14 (67%) studies for complementary replicability indicators. Consistent with these results, the estimated true-positive rate is 67% in a Bayesian analysis. The relative effect size of true positives is estimated to be 71%, suggesting that both false positives and inflated effect sizes of true positives contribute to imperfect reproducibility."

Given that published studies pick from the right-hand tail, as it were, I think that the inflation is to be expected. As far as I'm concerned, this seems like a pretty good track record (two out of three ain't bad, at least according to Jim Steinman), but I'm sure it doesn't look that way to people who want some kind of money-back guarantee from science.

Bill Idsardi (2018-08-27, https://www.blogger.com/profile/10570926308058368183):

Here's the link to the Gelman blog post:

https://andrewgelman.com/2017/06/29/lets-stop-talking-published-research-findings-true-false-2/

The section that Norbert quoted is in the comments to the post, not in the main post itself.