Monday, January 4, 2016

An experiment in open access publishing

Colin Phillips has a very interesting discussion of his three-year experiment with an open access journal that he, Matt Wagers, and Claudia Felser edited in the Frontiers series (here). The online journal (here) has been quite successful, and Colin does something very important and timely: he reflects on what went well, what went less well, and WHY. In other words, he brings first-hand empirical experience to bear on a topic that we have discussed at FoL. The whole discussion is well worth reading, and I strongly recommend it for those interested in the topic.

A key point that Colin makes is that cost is not the only virtue by which to assess a journal's contribution to inquiry; readership, time from submission to publication, and Impact Factor all matter as well. Moreover, he notes that, interestingly, all of these factors can be managed better or worse depending on what seem to be easily implementable procedures.

One that seems particularly noteworthy is the apparent fact that most of the submissions are accepted. This is not the whole story, for there is a pre-submission vetting process that ensures that most of the submissions are of a kind to be accepted. However, the fact remains that a large proportion of the papers submitted get into print. The main reason seems to be that "[a]rticles are judged only for soundness, not for impact, i.e. if your study is sound but has minimal novelty or importance it can still be accepted." This makes the reviewing process less contentious and the goal posts easier to identify, and hence cuts down on reviewing gamesmanship (though that is not how Colin puts it).

Let me make one observation and again encourage you to read the whole post, because it is very good. One of the downsides of the current review process, IMO, is that it penalizes originality. How so? Well, original work is by its nature contentious and less well-formed than less original work. This is especially so when it comes to theory, where making things clear is very hard, and it is harder still to accommodate all the myriad problems that novelty will face. One way of finessing this problem is to publish most everything. This is what the above journal does (well, depending on how 'soundness' is evaluated: what makes a paper sound?). Another way is to go the way that the old Cognition did: place your trust in editors (Mehler and Bever) with good taste and let them exercise it. As I have mentioned before, some of the greatest journals were run in this way (Keynes ran a journal, as did Planck, and both were outstanding). Now, I am not acquainted with Ms Felser, but I do know Colin and Matt quite well, and I know them to have excellent taste in questions. So, I suspect that one reason for the success of their journal is this X factor. We need more of this. Not to replace the mainline journals, but to allow idiosyncrasy to sometimes get a hearing. We need editors who evaluate papers in terms of the novelty and importance of the ideas. This is not quite impact (at least in the terms that Colin notes are relevant to this currently important metric), but it is a big deal, IMO. It can take time for a new idea to gain a foothold, but when it does, well, you can finish this sentence as well as I can.

So, look at the post. It's really thought-provoking. I'd be interested to see comments on this over at Colin's blog.
