Wednesday, December 24, 2008

Stupid Human Tricks in Peer Review

I'm starting to feel like peer review is under attack after reading the second Slashdot article in as many days (1 and 2) about embarrassing failures in peer review of scientific journal/conference articles. The Sokal affair is a pretty well publicized hoax along these lines. The perpetrator set out to prove that articles on post-modern cultural studies were indistinguishable from gobbledygook, and, to the surprise of no one outside the field, he was right!

These two recent incidents suggest that peer review is in need of an overhaul (or at least a minor upgrade). The second story is about a paper generated by a context-free grammar being accepted to a conference, and therein lies a clue to a possible solution.
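If you've never seen one of these generators, here is a minimal sketch in Python of the basic trick. The grammar and vocabulary below are toys I made up for illustration; the real generator in that story is far more elaborate, but the principle is the same: pick productions at random until you're left with nothing but words.

import random

# Toy context-free grammar. Nonterminals map to lists of possible
# productions; anything not in the dictionary is treated as a literal word.
GRAMMAR = {
    "SENTENCE": [["NP", "VP"]],
    "NP": [["the", "ADJ", "NOUN"], ["our", "NOUN"]],
    "VP": [["VERB", "NP"], ["VERB", "that", "NP", "VP"]],
    "ADJ": [["stochastic"], ["metamorphic"], ["amphibious"]],
    "NOUN": [["framework"], ["methodology"], ["partition table"]],
    "VERB": [["refines"], ["obviates"], ["synthesizes"]],
}

def expand(symbol):
    """Recursively expand a symbol until only terminal words remain."""
    if symbol not in GRAMMAR:
        return [symbol]
    production = random.choice(GRAMMAR[symbol])
    words = []
    for part in production:
        words.extend(expand(part))
    return words

if __name__ == "__main__":
    for _ in range(3):
        print(" ".join(expand("SENTENCE")) + ".")

Run it a few times and you get sentences like "our methodology obviates the stochastic framework." Grammatical, vaguely technical, and utterly content-free.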

We need some way to score the "goodness" of a reviewer's work. It's intractable to focus on creating an algorithm for finding "good" original research papers; that basically amounts to implementing artificial intelligence beyond the current state of the art.

What about focusing on "badness"? What if a certain percentage of the articles sent to a reviewer were generated by a context-free grammar built from the archives of the relevant journals? These are known "bad" articles. If the reviewer passes these papers on to publication or presentation, then his reputation/score as a reviewer drops. It would need to be a double-blind process: neither the reviewer nor the institution sending articles for review should know which are real papers and which are generated.
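To make the bookkeeping concrete, here's a rough sketch of what the editorial side might track. Every number and name below (the decoy rate, the penalty, the class and function names) is a placeholder assumption of mine, not a description of any existing system:

import random

DECOY_RATE = 0.1   # fraction of a reviewer's queue that is generated noise
PENALTY = 25       # reputation lost for recommending a known-bogus paper
REWARD = 1         # small credit for correctly rejecting one

class Reviewer:
    def __init__(self, name):
        self.name = name
        self.score = 100   # starting reputation

def build_queue(real_papers, decoy_generator):
    """Mix tagged decoys in with real submissions and shuffle, so the
    reviewer can't tell which is which."""
    n_decoys = max(1, int(len(real_papers) * DECOY_RATE / (1 - DECOY_RATE)))
    queue = [(p, False) for p in real_papers]
    queue += [(decoy_generator(), True) for _ in range(n_decoys)]
    random.shuffle(queue)
    return queue

def record_decision(reviewer, is_decoy, accepted):
    """Adjust reputation only when the paper was a known decoy."""
    if is_decoy and accepted:
        reviewer.score -= PENALTY
    elif is_decoy and not accepted:
        reviewer.score += REWARD

if __name__ == "__main__":
    reviewer = Reviewer("Dr. Placeholder")
    queue = build_queue(["real paper A", "real paper B"],
                        lambda: "grammar-generated gibberish")
    for paper, is_decoy in queue:
        accepted = random.random() < 0.5   # stand-in for the reviewer's judgement
        record_decision(reviewer, is_decoy, accepted)
    print(reviewer.name, reviewer.score)

Note that decisions on real papers don't touch the score at all; the only thing being measured is whether the reviewer catches the planted nonsense.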

This would be a simple way to check whether reviewers were paying attention, though it would increase the workload on a group of folks who are not reimbursed for their time reviewing articles. It would probably improve the results of peer review, though, especially if the fact that someone passed along a bogus paper were publicized in the journal itself, right after that page in the front that lists all of the editors and whatnot.
