A Call for Transparent Academic Publication Process

Reza Vaezi
8 min read · Jun 12, 2020
The intended vs. The published

According to Galletta et al. (2020), the bar for academic publishing has continuously been rising over the last few decades. While this sounds positive and implies higher-quality scientific publications, it can also jeopardize the integrity of scholarly work and hence undermine the truthfulness and reliability of published results and conclusions. To meet the demands of this rising bar, some scholars and reviewers may adopt practices that are, at the very least, ethically questionable if not outright unethical and wrong. As Galletta et al. note, a study with eight hypotheses where four are unsupported may look stronger if all or some of those unsupported hypotheses are dropped. This provides a strong incentive for authors to drop unsupported hypotheses from the research and to change their initial model and theoretical framework to better fit the results of the analysis. Sometimes even the review panel may request that unsupported hypotheses be dropped to make the work appear of higher quality or to make more room for other articles within an issue. Researchers may also find that a different modeling technique or data analysis method than the one initially proposed gives them better results and supports more of their conclusions. Ronald H. Coase, a Nobel laureate, once said that if you torture the data enough, it will confess to anything you want. As a result, some scholars may “torture” their data to get the desired results that make their papers more appealing to high-quality academic journals and panels of reviewers. There are other practices that desperate researchers facing tenure or graduation requirements may employ. People under pressure to publish may turn to data fabrication, which can take different forms, from answering their own surveys multiple times to outright simulating data to fit their proposed hypothetical model. The higher the need for publication in top-quality journals, the greater the chances that some researchers and review panels will resort to any one or a combination of these ethically questionable practices.

For many scholars and academicians, the goal is to publish as much as possible in the best journals they can. In fact, “publish or perish” is the dominant motto among scholars trying to build an academic career. As a result, many scholars have either been advised or have learned through painful experience not to argue with reviewers beyond what is considered reasonable, especially if the reviewers seem to have a strong stance on a subject. If one wants to publish, they had better listen to reviewers and try their best to meet reviewers’ demands instead of poking holes in reviewers’ comments. The pressure to listen and comply, to bend and break, grows as rounds of reviews stack on top of each other, mainly because authors can see the goalpost within reach and are wary of more rounds of revisions or even a late-cycle rejection if they do not comply. As time passes, reviewers gain more power over the authors, and in some cases they may demand citations of works that are not directly relevant to the work under review but, in one way or another, can benefit a reviewer.

As a result, much scholarly work that goes through a rigorous peer-review process comes out partly or completely different from what it originally was. In most cases, for the better (e.g., more rigor), but not always. In some cases, the changes might be so drastic that a third party not involved in the authorship and review process may not be able to easily establish a meaningful connection between the first manuscript and what is published. The original idea might very well get lost or change drastically during this process. The published paper may no longer be a good representation of the authors’ ideas and hypotheses and their effort in finding the truth. It becomes an amalgamation of reviewers’ and authors’ ideas; the result of the implicit and explicit give-and-take that took place during the review cycle.

On the other hand, reviewing a paper can also be a taxing job with few explicit rewards. Reviewing activity is considered a service to the community of scholars. However, in most cases, the academic career is structured in a way that does not reward service as much as it rewards publications and teaching. For many scholars, service comprises only 10 to 20 percent of their job requirements, and that includes service to the department, college, university, and the community. Thus, service to the community, which manifests only partly in reviewing papers, counts the least toward career advancement. For many scholars, reviewing papers is a means of staying relevant and current in their areas of interest while building a network for future collaborations and career advancement through that network. Because the value of service to the community is often not reflected in the formal evaluation processes maintained by universities and research institutions, scholars must rely more on the intrinsic value they get from reviewing papers to stay actively engaged in it.

The lack of tangible external rewards, and the diminishing power of internal motivations as scholars establish themselves and their networks over time, contribute to two other problems. One is the overly lengthy review cycle, and the other is the reviewer shortage. Information Systems journals, which I take as representative of academic journals in the humanities and liberal arts, typically have a very lengthy review cycle. Some reserve up to six months for each round of review and revision activity. This means that a manuscript can take six months or more to receive a review and decision, if not desk rejected. The authors then have another six months to respond to the reviewers’ comments, and the reviewers have six months to consider the revisions and request a new revision or accept the paper. Thus, the path from submission of a manuscript to publication can take anywhere from one to several years. My personal experience has varied between one and three years. It is understandable why authors may need up to six months to implement revisions, as some revisions might be major and require gathering new data or implementing novel methodologies. However, one might wonder why it can take up to six months for some reviewers to review a manuscript or its revision. I suspect the reason lies in the lack of external motivation. Scholars tend to postpone review activities in favor of doing what counts toward their career advancement (research and teaching). They get to the reviews when there are no other high-priority tasks, and who can blame them?

Furthermore, it is no secret among academicians that many peer-reviewed publication outlets, especially those that are not ranked highly, struggle to find enough reviewers for their incoming papers. This sometimes results in papers being assigned to reviewers who are not very knowledgeable about the manuscript’s subject and methodology, which may eventually lead to low-quality reviews and questionable publications.

I believe it is time to consider a different publication, review, and reward system, one that addresses the apparent problems of the current, widely accepted system discussed earlier in this post. In short, I suggest a transparent review system in which the reviews and reviewers are part of the final publication and receive credit for their invaluable contributions.

The peer-review process is an integral and invaluable part of knowledge creation and the advancement of science. It acts as a system of checks and balances to ensure the publication of high-quality scientific papers. It strives, among other things, to detect possible errors, improve the quality of methods and findings, and ensure the soundness of conclusions. A good review should be publicly recognized and rewarded for its contribution to the advancement of knowledge. Within the current system, good reviews are neither recognized (at least publicly) nor rewarded appropriately. Within the proposed system, the reviews would be published along with the final article while maintaining reviewer/author anonymity (the double-blind method) until the time of publication. It is hard for me to believe that no one thought about a transparent review system in the past. This approach simply would not have been feasible when most journals faced page limitations in each of their issues. Considering those print limitations (the number of pages that could be printed in each issue), the existing system would have been the most efficient one for meeting the needs of knowledge creation in that era. But we are in a new era, and print limitations are no longer a relevant or restricting factor, as most papers are now primarily published online. Thus, it is not only possible to move to a transparent system; such a move would also have a few advantages for our community and for the advancement of science and education in general.

First, keeping the review process double-blind until publication preserves the benefits of that method in minimizing bias and preventing explicit power plays, favor trading, and backchanneling. Also, when reviewers know that their comments and reviews will become public at the end of the process, they may take more care in constructing their comments and expressing their demands. This can especially reduce irrelevant citation requests, comments aimed at dropping unsupported hypotheses, and recommendations for unjustified methods.

Second, the record of these reviews can be used in formal career-advancement evaluations and counted toward research contributions. As discussed, reviews play an essential part in the advancement of science. In essence, reviewing manuscripts for publication is a research contribution, not a service contribution. Being recognized as a contributor to a publication can mitigate the problems associated with reviewing being a taxing job with little to no tangible external reward. It can also potentially speed up the publication and innovation cycle.

Third, publicly available reviews and comments can serve as invaluable educational material in the training of new scientists and doctoral students. Doctoral students can learn to do good research, to distinguish quality research from the rest, and to critique and review research by observing the process from initial submission to publication.

Fourth, in line with the third point but on a much broader scale, this can serve the public interest by educating people about the strengths and weaknesses of scientific methods. Today, the majority of people accept almost anything that is reported as an outcome of scientific research. We as scholars have a better grasp of the limitations of findings and may treat them with a grain of salt, but most people (e.g., science reporters) accept them as reported. If the media and the public had access to the reviewer comments on a paper, those reviews could be reported along with the results, educating the general population about the limitations of scientific discoveries and implicitly fostering analytical thinking.

References:

Galletta, D., Hu, H., & Moody, G. (2020). Internal AISWorld communication. Received through the AISWorld ListServer on 4/18/2020.

The image is taken from a Facebook group named “Reviewer 2 must be stopped”. Image URL: HERE.

Notes:

You can find the Spanish translation of this article by Dora Luz González Bañales HERE; a Spanish podcast of the article by Dora is also available HERE.

Another (slightly edited) version of this post has also appeared on the Coles College of Business Faculty Blogs.


Reza Vaezi

Associate Professor of Information Systems; Interested in Philosophy & Theology; Researching Human Behavior; Teaching Business Analytics & Emergent Technologies