[jox] Cutting the Knot
- From: Mathieu ONeil <mathieu.oneil anu.edu.au>
- Date: Sat, 20 Feb 2010 08:43:12 +1100
Well, it's taken a bit of a while, but I finally got around to attempting to
square the circle / find the North West Passage / cut the Gordian
knot; in short: to address the issue which caused so much heat last
December, our peer review system. I have come up with a solution
which I hope will satisfy most. Let me say from the start that it
involves some kind of post-publication public assessment of
submissions – I think there is no getting around that. Anyone who
is completely against this can decide whether they want to continue with
the project or not, but hear me out first, y'all.
1-summary of issue
4-decision: review discussion system
5-time-frame for decisions
7-CSPP peer review process: main stages
1-summary of issue
This is by no means everything that was said; I tried to get to the heart
of the matter. Apologies if I forgot anyone.
The central issue we wrestled with was whether to follow the
recommendation laid out in Whitworth and Friedman's paper to
accept significantly more papers, and to rate these according to
different categories. There was a nice summary by Andreas Wittel, who
said it was a question of whether to have “pre-selection”
(through accept / reject) or “post-selection” (through rankings).
Brian Whitworth logically argued for ratings:
"A journal that can't be bothered rating its submissions doesn't deserve
to succeed. Equally one that selects the best and leaves the rest is
elitist. There is no easy way between these options, so we suggested
both highly selective reviewing and completely open publishing. The
multi-grade system lets anyone publish but all needn't be rated equal
- though there can be multiple criteria. I guess this goes against
the politically correct idea that we are all equal, but actually some
of us run faster, others cook better and a few of us can actually do
mathematics - so really "equality" is a myth. The real
equality is of opportunity not ability, which is why this approach
lets everyone in who wants to come in."
However, a number of people (Athina, Biella, Andreas, Felix) expressed
reservations about ratings because of their perceived overly
hierarchical, discriminatory or “school-like” nature; in particular,
some did not like numerical rankings.
Many points in favour of ratings were also made by StefanMn, who
proposed a choice whereby authors submitting a proposal could
indicate whether they want a binary model (publish or reject) or a
multi-dimensional rating system.
There are two main problems with his proposal. First, I think we should be
as consistent as possible in what we present to the world. It would
be weird to have some papers with an appreciation and others without.
Second, if we want to open up the journal selection process and provide rewards to those who do
normally invisible work (i.e. reviewers), in line with Toni Prug's
proposal for a community peer review system (through a list where proposals are
vetted and reviews are released), then by definition we are rejecting
the publish / don't publish model: vetting and orientation occur
upstream, even before an actual full submission. If we agree to work
with authors by reviewing and discussing their proposal, it would be
hard to come to the end of that process and then reject. So we will
necessarily publish more than a traditional journal, including papers
that are not written in perfect English (for example), or that are
problematic in some other way.
Felix Stalder said that “we should publish only papers that we agree are fit for publication”, but also that
“‘fit for publication’ is not based on a single reason.
There may be articles which we consider great in many dimensions but
they lack some certain feature. Lack of this feature normally would
make them unacceptable, but if we can express this lack by a rating
then the credibility of our journal is maintained *and* the article […]”
This is the essential point, and, upon reflection, I agree with it. And
yet: the fact remains that people are uncomfortable with ratings. I
think it all depends on how we do them. In order to protect the
reputation of the journal, we need to alert readers that we are aware
of flaws, but that _we decided to publish anyway_. Hence the need to
“qualify” or “signal” (rather than “rate”) published
submissions. Another thing: I think it is important to distinguish
between appreciations of what a submission clearly “is”
(activist, academic, authored by a native English speaker, etc.) and
appreciations of what a submission “is worth” (original,
rigorous, etc.): the first kind of appreciation may be described as more
“objective” than the second.
So we could have either a simplified choice of rating (outstanding,
excellent, fair) or an even simpler choice (yes, no) as in: this
paper is a grand grassroots testimony (Activist: yes) but the English
is not perfect (Native English: no) or: this paper is
academic-research oriented (Academic: yes) but it is not based on
empirical evidence (Empirical: no); or: this is a utopian fantasy
about how a peer-produced society would produce gastronomical
delights (Theoretical, Activist: yes); etc.
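To make the distinction concrete, the two kinds of appreciation could be sketched as a simple data structure. This is purely an illustrative Python sketch: the category names are lifted from the examples above, and the yes/no encoding and the `appreciate` helper are my own assumptions, not a fixed schema.

```python
# Hypothetical sketch of the two kinds of "appreciation" discussed above.
# The category names come from the examples in the text; nothing here is final.

# "Objective" categories describe what a submission clearly *is* (yes/no).
OBJECTIVE = {"Activist", "Academic", "Native English", "Empirical", "Theoretical"}

def appreciate(objective, subjective=None):
    """Bundle yes/no 'is' signals with optional 'is worth' judgements."""
    unknown = set(objective) - OBJECTIVE
    if unknown:
        raise ValueError(f"unknown objective category: {unknown}")
    return {"is": dict(objective), "is_worth": dict(subjective or {})}

# Example: a grassroots testimony whose English is not perfect.
paper = appreciate({"Activist": True, "Native English": False})
```

The point of separating the two dictionaries is that the “is” signals could be displayed matter-of-factly, while anything in “is worth” would be clearly marked as a reviewer's personal opinion.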
A-For “objective” categories we could have:
B-For “subjective” categories we could have:
we could ask reviewers to indicate whether they want to express:
about the submission.
[Thus indicating clearly that this is a personal opinion].
In all cases these “appreciations” (whether Activism, Originality,
Congratulations) would have to be clearly linked to the reviews which
would be released when the submission is published.
Finally, the editor should have the right to intervene and add an appreciation
if he or she feels that reviewers have been unduly harsh or […]
So, this is the first point to decide: what categories do we have?
4-decision: review discussion system
The other point to be decided concerns the process of discussion of
submissions: should these be held on a restricted mailing list (to be
clear: not the one we are using now, which is open, but one that
would be reserved for reviewers and authors) or on a protected part of
the website?
StefanMn argued for the website option:
“Well, in general I'm a big fan of mailing lists. But in this case I think a web based system would be more useful. I'd suggest to offer potential authors a place where they can propose an article in the way outlined above and reviewers can help the author to write a great article. I think a web page is more useful because it gives every stakeholder a clear structure where the subject is *one* proposed article.”
[See: lost the ref, sorry]
It's true that having one (restricted) mailing list where all submissions
are discussed could be messy (though less so if people do not
interfere with the titles of emails, thereby breaking threads). And it
might be easier to create files that can be used later on in the
website when publishing, I don't know. At the same time I see some
problems with setting up discrete pages for articles: a) authors and
reviewers might in fact benefit from reading discussions of other
articles; b) not sure about this, but there might be complications with
access rights – who can access what article page, etc.?; c) finally,
the advantage of the list is that you are kept abreast of discussions
as they go along, whether you seek the information or not –
otherwise many people (myself included) might not go to the website
very often; with a list, you have no choice, you get the message.
This is a strong advantage, in my view.
So it would be great to hear people's opinion on this second issue.
5-time-frame for decisions
Ideally we should be able to formally announce the journal and call for
papers at the conference which Athina has kindly organised as our
launch party – VIRT3C in Hull, in a month – woo-hoo!
[What do you mean, it wasn't organised for that?! Oh well, my mistake. ;-)]
In any case decisions should be finalised a couple of weeks before the
conference – in order to give ourselves time to set up anything
that needs to be set up (particularly if we go with the website
discussion option). So I think a
two-week period should be enough – today is Friday 19 February
2010 – and I am invoking Maintainer Mojo to request that final
decisions be reached by Saturday 6 March 2010. No further debate
after that will be accepted as relevant. Tough love, people!
Whitworth B and Friedman R (2009a) "Reinventing academic publishing
online. Part I: Rigor, relevance and practice", First Monday,
Volume 14, Number 8, 3 August 2009.
Whitworth B and Friedman R (2009b) "Reinventing academic publishing
online. Part II: A socio-technical vision", First Monday,
Volume 14, Number 9, 7 September 2009.
Prug T, "Open-process Academic Publishing".
7-CSPP peer review process: main stages
Prospective authors submit a proposal to the list.
All list members vet this proposal during a reasonable period of time (1-2 weeks?): is it appropriate for the journal, are arguments or references missing?
Authors write their submission.
Authors submit to the journal.
The editor posts the submission to a password-protected part of the website [mailing list?] and alerts the main journal list that he or she has done so.
The editor suggests two expert reviewers (volunteers welcome).
The two expert reviewers read and evaluate the submission during a reasonable period of time (3 weeks?). Reviewers are encouraged to coordinate their reviews.
Reviewers post their reviews and recommendations to a password-protected part of site [mailing list?] and alert the list that they have done so.
The list discusses this during a reasonable period of time (1-2 weeks?).
If consensus emerges during this time: publish, revise and resubmit (to two other reviewers, for example), or reject.
If consensus does not emerge: the decision moves to a formal vote of the Governance Board: publish, revise and resubmit
(to two other reviewers, for example) or reject.
The submission and the review process are published.
Readers can comment and rate.
Authors can respond in the comments section [and add links in the text to relevant comments and responses – no updating of the text, though].
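The stages above could also be sketched as a simple ordered pipeline. The following Python sketch is only illustrative: the stage names and durations paraphrase the list above, and the `next_stage` helper is my own invention for showing the ordering, not part of the proposal.

```python
# Illustrative sketch of the proposed review pipeline. Stage names and the
# suggested durations paraphrase the stages listed above; all are assumptions.

STAGES = [
    ("proposal submitted to the list", None),
    ("proposal vetted by all list members", "1-2 weeks"),
    ("submission written and sent to the journal", None),
    ("expert review by two reviewers", "3 weeks"),
    ("list discussion of the reviews", "1-2 weeks"),
    ("decision: publish / revise-and-resubmit / reject", None),
    ("publication with reviews, comments and ratings", None),
]

def next_stage(current):
    """Return the name of the stage that follows `current`, or None at the end."""
    names = [name for name, _ in STAGES]
    i = names.index(current)
    return names[i + 1] if i + 1 < len(names) else None
```

One design point the pipeline makes visible: the accept/reject decision sits downstream of two community stages (vetting and list discussion), which is exactly why outright rejection at the end becomes unlikely.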