In Peer Review, We Need More Natural Intelligence

The human beings who review articles for scientific journals sometimes reject good articles for stupid reasons. A computer can automatically reject an article for stupid reasons much more quickly and efficiently than a human being can. Thus, artificial intelligence could amplify the natural stupidity that often plagues the review process.

Rather than spending less time on peer review by letting a machine do their thinking for them, editors should devote more time and more thought to the process. In particular, editors should critically review the reviewers’ reviews and take authors’ rebuttals seriously.

I have seen the peer review process from both sides: I have worked for peer-reviewed journals, and I have written articles that were published in peer-reviewed journals. Shortly before the American Psychiatric Association published the fifth edition of its Diagnostic and Statistical Manual of Mental Disorders (DSM-5), I wrote an article arguing that conversion disorder and somatization disorder should never have been included in DSM-III. These diagnoses violate the guiding principle of DSM-III: they are etiologic diagnoses, which means that they represent the physician’s conclusion about the cause of the patient’s problem, rather than a classification of the pattern of the patient’s symptoms and signs. The reviewer for one journal scoffed at the very idea that conversion disorder is an etiologic diagnosis. He or she then cited an article that would supposedly clear up my confusion. Yet that article stated, in its very first paragraph, that conversion disorder is an etiologic diagnosis. I pointed out that glaring flaw in the reviewer’s reasoning to the editor, but the editor was unfazed. The reviewer disliked my article, so the editor had no choice but to reject it. Fortunately, there were other journals.

After my article was eventually published in Medical Hypotheses, I sent a copy to Allen Frances, who had chaired the task force that produced DSM-IV. He then blogged about the article on the Psychology Today and Huffington Post websites, arguing that what I was saying was true and important. In other words, the editor of the original journal had relied on a nonsensical review and thus missed the opportunity to publish an important article.

A friend of mine who does research in the social sciences had a similarly frustrating experience. She had done initial research to develop and validate a model, and she then did further research that applied the model. However, one of her later articles was rejected by a journal because the reviewer thought that she misunderstood the model she was applying. The reviewer recommended that my friend read the earlier published literature on the model. Because the review was “blinded,” the reviewer was unwittingly telling her to read her own work. The editor stuck with the reviewer’s recommendation to reject her article, even though the reviewer’s opinion was clearly stupid. Fortunately, another journal published my friend’s article.

Not all work that is rejected for stupid reasons is good enough for publication. However, articles that say something important but unorthodox, unexpected, or politically or commercially inconvenient are particularly likely to be rejected for stupid reasons. One cause of this problem is the natural fallibility of human beings (which, of course, is the reason we have peer review to begin with). Another is the tendency of educational institutions to promote memorization of facts and doctrines rather than to help people cultivate skills in critical reasoning. As a result, many highly credentialed people know a great deal about the conventional wisdom of their field but are poorly equipped to evaluate groundbreaking advances, even in that field. (This is the underlying cause of the phenomenon that Thomas S. Kuhn described in his book The Structure of Scientific Revolutions.) Worse yet, people with poor reasoning skills tend to be overconfident, because they have no way of knowing that their reasoning skills are poor (a phenomenon called the Dunning-Kruger effect). Until their reasoning skills improve, they are immune to reason. One manifestation of this problem is that many doctors ignore the evidence from observational research because of a narrow-minded misunderstanding of how to practice evidence-based medicine.

Editors who slavishly follow the advice of reviewers with poor skills in critical reasoning will miss the opportunity to publish the very articles that are likely to make the most important contributions to science, and that might even boost their journal’s impact factor. Relying on artificial intelligence in the peer review process could amplify this problem: the computer will follow algorithms based on conventional wisdom, rather than being able to have that “aha! moment” that is so thrilling to real scientists.

In my book Not Trivial: How Studying the Traditional Liberal Arts Can Set You Free, I explain the value of the classical liberal arts — including the trivium of grammar, logic, and rhetoric — in helping people develop skills in critical reasoning.
