A “Consumer Reports” for Academic Journals? (and wouldn’t a Wiki work?)

In today’s Chronicle of Higher Education, Robert Deaner, associate professor of psychology at Grand Valley State University, makes a suggestion that seems so obviously right that you can’t believe it hasn’t already been proposed, let alone realized.

Academics in every field will know the annoyance and agony of submitting to journals that take six months or more to respond to submissions and, even then, offer nothing specific as justification for rejection. Other journals don’t provide reader reports at all, and you have to read the fine print ahead of time to know this. Still others, by contrast, respond promptly and provide reader reports for virtually all submissions, whether accepted, rejected, or recommended for revision and resubmission.

But how are we (and, more importantly, our students and junior colleagues) supposed to know how particular journals operate? Wouldn’t it be great to have some kind of clearinghouse of information on and reviews of all the journals in your field, with both specific data (on turnaround time, acceptance rate, etc.) and anecdotal observations (“The Cotton Mather Review is pretty slow this year. Any thoughts on why?” “Editorial turnover is what I’ve heard”)?

Here is Deaner’s proposal, outlined in a piece called “It’s Time for Journals to Be Author-Reviewed”:

I suggest the development of a crowdsourced, “author reviewed” journal-evaluation Web site. The idea is that authors from various disciplines would share their experiences with particular journals, both negative and positive. There would be quantitative information such as time until receiving notice of being reviewed, time until receiving first review, total time from initial submission until final publication, and, of course, acceptance or rejection. And there would also be opportunities for rating or commenting on key issues, like the fairness and constructiveness of editors and reviewers and the efficiency of the journal’s production staff.

As reviews accumulated, it would be possible to make better decisions about where we would and would not submit our work. Authors would be able, if they chose, to eliminate journals with exceptionally high or low acceptance rates. They could forgo journals with slow turnarounds or predominantly negative editor or reviewer ratings. Ideally, this Web site would allow journal searches by many other criteria too, including subject area(s), impact factor, publication fees, open-access options, database indexing, publisher, review process (e.g., blind or not), etc. This platform would also allow our colleagues, including librarians and administrators, to better evaluate where we are publishing and what journals we most need access to.

Now, you can see the obvious downsides here: under-compensated editors resigning after a bit of on-line snark, the difficulty of maintaining anonymity in the case of a smaller journal, the potential for abuse, etc. And the proposal risks promoting a kind of Consumer Reports-like stance toward the means of academic publication. Scholarly journals aren’t vacuum cleaners, after all; they’re nonprofit entities run in large part by fellow academics relying on the labor of their colleagues to review submissions and make sound decisions (there are lots of exceptions to this rule, but you know what I mean). Deaner anticipates this objection; as he puts it, “If you are trying to decide where to go for the best tacos in town, you have Urbanspoon and TripAdvisor to provide hundreds of ratings, many with rich descriptions. But if you want to find the best journal for your manuscript, you may have virtually no information. And, although I like tacos as much as anyone, I hope we agree the journal decision is far more important.”

Yet academia is already doing this kind of thing on a seasonal basis. Just look at the Academic Jobs Wiki, a series of linked pages that report on positions in every field under the sun, with crowdsourced data on interview requests, flyback invitations, even writing-sample requests, as well as lots of anecdotes about departments’ treatment of applicants. Deaner could start such a Wiki tomorrow if he wanted to, beginning with journals in his own field and expanding outward, with folks in other disciplines adding fields, subfields, categories, and individual journals as the site expanded. (There’s already an Academic Journals WikiProject that could easily be hijacked to this purpose.)

University presses could also come in for review. How many of us have a colleague who’s submitted a book proposal to an editor, only to hear back nine months later that the proposal got lost in the shuffle, or was never seriously considered? This is the kind of information that would be invaluable to scholars in many fields, helping guide their decisions about where to submit book proposals as they navigate the ever-changing world of scholarly publishing. In a related vein, such a resource would give those promoting new forms of academic publication (open access, print-on-demand, and so on; in my own field, punctum books comes to mind) a place to advocate for submissions from scholars contemplating conventional publication of a book that might be better suited to, say, a born-digital platform from a non-university press.

An interesting and timely proposal, in any case.
