September 2025

Journals Highlights

Celebrating Peer Review Week
September 15–19

September is the month when researchers, editors, publishers, librarians, and institutions all over the world take note of the challenges and evolution of the important practice of peer review. The theme for 2025 is “Rethinking Peer Review in the AI Era.” ASPET’s five Editors-in-Chief, along with our Ethics Editor, weigh in on their thoughts regarding AI and how it is affecting peer review.

In general, what are your thoughts regarding AI? Have you used AI, and in what ways?

Xinxin Ding: We need to embrace it. I am using AI every time I do a Google search, as AI is already embedded in the search engine. But, beyond that, I have not used AI for writing or peer review.

Lynette Daws: AI is a powerful tool that facilitates many of our day-to-day operations. Specific to my role as Editor of Pharmacological Reviews, I use AI to help track down recently published reviews in a specific research area. Our journal regularly receives unsolicited proposals and reviews, and a key to the journal's success is avoiding publication on a subject that has recently been reviewed by our journal or others. AI combined with PubMed provides an excellent means to survey that landscape, which helps guide my decision to accept or reject unsolicited submissions. AI is also very helpful in identifying potential reviewers for the manuscripts we receive, since it provides a list of names along with a brief biography. That said, with all the benefits AI brings, we must be mindful of its misuse. It should be embraced for the vast array of tasks it can facilitate and questions it can answer, with the caveat that it cannot replace human judgment.

Beverley Greenwood-Van Meerveld: Professionally, I’ve used AI tools to assist with editing and generating summaries or suggestions for improving written content. I think that AI holds tremendous potential. However, I also recognize the importance of using it ethically and critically. It is also important that we understand its limitations, potential biases, and the need for human oversight. I think that AI is transforming many aspects of our daily lives, from how we work and communicate to how we analyze data and make decisions. While AI is a powerful tool, I view it as a supplement—not a replacement—for human judgment, creativity, and expertise.

John Schuetz: I think AI is a potentially valuable investigative tool, with caveats: asking open-ended questions without instruction or guidance to an AI system is a recipe for collecting erroneous information. Now when I use it to acquire information, I explicitly ask the system to use only vetted, peer-reviewed data sets and sources. This is not foolproof, as mistakes can still creep in. I have used it to acquire broad strokes of information in areas with which I’m less familiar. I also think it can be useful for routine, straightforward proofing requests like grammar, jargon, and verifying checklists.

John Tesmer: Like the handheld calculator, it is another tool that society is going to have to learn to incorporate into its life without losing the ability to think critically. I now use AI to suggest my vacation itineraries. It sure saves money on travel agents! But I still decide what I want to do when I get to a destination.

Mike Jarvis: AI, including large language models (LLMs) such as ChatGPT, is essentially an information aggregation technology based on pattern recognition or other prespecified algorithms. While the sophistication and capabilities of LLMs have increased in recent years, their utility is necessarily based on the collection and organization of existing information. LLMs can certainly be used productively as information-gathering tools, but I rarely use them, as they are generally not “fit for purpose” in the critical analysis, generation, and interpretation of scientific research.

How do you believe AI is affecting, or will affect, peer review?

Xinxin Ding: AI is impacting peer review on multiple fronts. There is an increasing number of manuscripts about AI applications in research. This requires editors to identify reviewers with expertise on the topic, which can be challenging, and there is a need to recruit new board members with such expertise. AI is also increasingly used in writing, at least in polishing English grammar or expression, though not all authors disclose it. Reviewers and staff need to be able to detect AI usage and be on the lookout for possible AI-related misrepresentation, for example in citations and in the validity of statements.

Lynette Daws: I believe that AI can facilitate peer review by aiding the reviewer in getting a quick sense of recent findings and existing literature in the field. Likewise, it can facilitate detection of plagiarism. However, AI is just a tool, which cannot and should not replace human judgment. AI can report incorrect information, and it can introduce bias. It is therefore imperative that, although peer reviewers might use AI to facilitate the review process, they must use their depth of knowledge to carefully evaluate the merits and deficiencies of a manuscript in making their ultimate recommendation.

Beverley Greenwood-Van Meerveld: I think AI has the potential to improve the peer review process. AI can serve as a supportive tool by helping to streamline administrative tasks, detect plagiarism or image manipulation, check reference accuracy, and even flag methodological inconsistencies. These efficiencies could reduce reviewer burden and speed up the review process without compromising quality. However, AI can’t replace the critical judgment, subject matter expertise, or nuanced feedback that human reviewers provide. Peer review involves evaluating originality, scientific rigor, contextual relevance, and ethical considerations, all of which require deep understanding, experience, and often field-specific insight that AI simply cannot replicate. AI must be used transparently and ethically to allow reviewers to focus more of their energy on high-level critique and scholarly evaluation rather than administrative tasks.

John Schuetz: It might be useful in screening for scope and fit, in detecting duplication or image manipulation, and perhaps in identifying broad experimental flaws, the latter with explicit guidelines. But I think using AI to adjudicate manuscripts during peer review is a mistake; I am not sure AI could reliably judge novelty. There may be some appropriate level of use, but I have not identified it yet.

John Tesmer: I think it is acceptable so long as the text does not constitute plagiarism and is accurate. After all, the main point is the novelty of the reported data and the rigor and reproducibility of the work. I think it might be more of a problem if reviewers use AI to write their critiques, which could imply they are not being thorough in their analysis of the paper (i.e., less critical thinking). On the other hand, AI may enable us to detect fraud more easily in submitted manuscripts, or even cases where results have been reported previously but not acknowledged by the authors (whether maliciously or not). Overall, I see it as more of a win than a loss. It is not something that is going to go away now, regardless.

Mike Jarvis: AI technologies can assist authors and peer reviewers in organizing information. However, AI technologies cannot effectively address the key aspects of peer review. Assessments of the appropriateness and rigor of a study’s scientific hypothesis, experimental design, use of inferential statistical analysis, and validity of the authors’ conclusions and interpretations are all essential components of the quality feedback that peer review should provide.