
AI Likely Driving Surge in Letters to the Editor
Researchers and scientific journals can add a new possibility to a growing list of artificial intelligence–generated horrors: letters to the editor.
Two days after researchers published a paper on the efficacy of ivermectin as a treatment for malaria in the New England Journal of Medicine this summer, the journal received a letter to the editor from another researcher criticizing the paper’s findings. That’s standard practice in the world of scientific publishing, where journals publish letters to the editor to keep advancing debate even after the peer-review process is complete.
The letter sent to the NEJM didn’t raise any suspicions with the journal’s editors, who passed it along to the authors for a response.
“It seemed well written at first, but then there were these strange comments, and they referred to other papers that seemed to refute our work,” Matthew Rudd, co-author of the paper and associate professor of mathematics at the University of the South in Tennessee, told Inside Higher Ed. “But those papers were written by [my co-author], and they do not refute our work.”

It was suspicious enough for Rudd and his co-author, Carlos Chaccour, a researcher at the University of Navarra, to investigate the identity of the letter writer.
“It turns out it was two authors, and one of them had published zero letters ever in his life. Then, suddenly this year, he has published—not submitted—84,” Chaccour told Inside Higher Ed. “That’s crazy. In my whole career, I have published 84 papers and probably two or three letters to the editor.”
While there are some prolific letter writers, most scientists publish only a small number of letters to the editor because writing a substantive one requires niche subject matter expertise. But when Rudd and Chaccour sifted through all of the letters the suspicious letter writer had published since 2023, they found that the writer had published letters to the editor across 58 different scientific topics. An AI-detection tool also indicated a high likelihood that the author had produced the letters using generative AI. (OpenAI's ChatGPT came out in late 2022, followed by a slate of similar AI tools.)

“That was the starting point,” Rudd said. “We wondered how often this sort of thing happened.”
After the case study, Rudd and Chaccour analyzed more than 730,000 letters recorded in PubMed between 2005 and September 2025. They identified an unexplained uptick in prolific new letter writers starting in 2023, with the number of authors who published 10 or more letters in their debut year rising 376 percent after the introduction of large language models capable of producing letters in seconds.
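In concrete terms, the debut-year metric behind that 376 percent figure can be computed from little more than author-year pairs. Below is a minimal Python sketch of that count, assuming the letters have already been exported from PubMed as (author ID, publication year) records; the data format and function name are illustrative, not the authors' actual pipeline.

```python
from collections import defaultdict

def prolific_debuts(records, threshold=10):
    """Count, per year, the authors whose first year of letter
    writing already includes `threshold` or more letters."""
    # Tally letters per author per year.
    per_author = defaultdict(lambda: defaultdict(int))
    for author_id, year in records:
        per_author[author_id][year] += 1

    # An author is a "prolific debut" if their earliest year
    # of activity meets the threshold on its own.
    debuts = defaultdict(int)
    for years in per_author.values():
        first = min(years)
        if years[first] >= threshold:
            debuts[first] += 1
    return dict(debuts)

# Toy data: one author debuting in 2023 with 12 letters,
# two others who don't qualify.
records = [("A1", 2023)] * 12 + [("A2", 2023), ("A3", 2019), ("A3", 2023)]
print(prolific_debuts(records))  # -> {2023: 1}
```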
Since publishing their preliminary findings on a preprint server earlier this month, they've heard from numerous other journal editors who have identified similarly suspicious letters. And the problem is not confined to any particular region or country.
Carlos Chaccour, Gonzalo Arrondo, Tommaso Cancellario, et al., "Robot pen pals: a multidisciplinary analysis of recent trends in scientific correspondence," preprint (version 1), Research Square, Nov. 3, 2025.
“They’re coming from places where the pressure to publish or perish is the strongest,” Chaccour said. “This is a global phenomenon. No place is free of this.”
Inside Higher Ed interviewed Rudd and Chaccour about their recent findings and their implications for science.
This interview has been edited for length and clarity.
Q: Paper mills have been producing fraudulent research papers for years, which is now even easier with the help of generative AI. But why would a researcher want to use AI to generate letters to the editor? What is the potential benefit?
Rudd: It’s much easier to get a letter published in a journal like the New England Journal of Medicine than a paper, and some institutions will count them the same. So, if you can get a letter to the editor published in The Lancet or the New England Journal of Medicine or [another well-regarded journal], then that’s a great thing to have on your CV.
This is a consequence, or artifact, of the publish-or-perish system, which gets more intense every year. Researchers have to publish, and letters to the editor are low-hanging fruit if they go about it a certain way.
Chaccour: It may be sloppy—normally you would like to know what the CV line is—but some people stop reading after they see that a scholar has published in The Lancet. So, letters to the editor can pad a CV. And as long as perish is the alternative, people will take whatever is on the other side. Do you want pear juice or perish? Pear juice. Do you want rotten tomatoes or perish? Rotten tomatoes, no doubt.
Q: Your research analyzed all letters to the editor recorded in PubMed between 2005 and September 2025. It showed a spike in the mean number of letters published per author per year since 2023, coinciding with the rise of generative AI. Have there been any similar spikes in letter writing in the past, and how do they differ from the most recent spike?
Chaccour: There were also jumps in 2013 and 2020. The 2013 spike occurred because that's when PubMed changed its indexing protocols: when an author replied to an article, that reply was itself indexed as a letter, so that one is explained.

Rudd: Twenty-twenty was the pandemic, so there was a flurry of activity related to what was going on, how to fight it and what was happening in various places. That was a very contentious time. People wrote lots of letters, which are faster than papers, because there was an urgency.
Both the 2013 spike and the pandemic spike self-corrected, but the 2023 spike, specifically among prolific letter writers, remains unexplained. At least as of now, that self-correction does not seem to have happened.
This unusual phenomenon, this rare event of people publishing 10 or more letters [in their first year in the letter-writing market], is now far less rare. It's a statistically very significant difference. While this is not proof of AI use, it's strong evidence of something, and the most likely explanation is AI use.

Q: You ran the letters from your case study through an AI-detection tool, which indicated a high likelihood that they were AI-generated. Do you plan to evaluate all of the letters published since 2023 for evidence of AI generation?
Rudd: We can now use Pangram (the AI-detection tool) at scale. One of the natural things to do next is to go through and check as many letters as possible and extend this case study more broadly. But even Pangram is not proof, because it can have false negatives and false positives. However, it’s when the evidence becomes overwhelming that you can home in on one answer.
Q: How are so many suspicious letters getting past journal editors?
Rudd: Editors are really swamped with what they have to read and get through. And if a letter seems credible and is well written—and it’s just a letter to the editor, not a paper—it makes it easier for journals to let some of them through. Everyone has only so much time in the day and so much attention to give to things [and journals are getting more submissions than ever].
For example, the letter [that started our project] got through the editors at the New England Journal of Medicine because it seemed legit before we flagged it as suspicious.
Q: What are potential consequences for science and the information ecosystem if journals keep publishing more and more letters that are likely AI-generated and riddled with misinformation?
Rudd: It becomes part of the training data for large language models, so the models are going to regurgitate more of that stuff in other places later on. It makes it much harder to discern what's true in this age where truth has become a fluid thing.
Chaccour: It's also stuff like this that validates the political narrative that the public can't trust scientists. People are losing trust in science, perhaps in part because some scientists are misbehaving. While it's not the whole body of scientists misbehaving, the ones who do taint the perception of the whole.
Q: How can the scientific community address the conditions that are incentivizing some researchers to submit AI-generated letters to the editor?
Rudd: There have been a growing number of examples of people whose careers have been derailed when fraud has been uncovered. It may be that more of that has to happen for people to stop doing this so much.
Chaccour: Or maybe we should start putting more value on mentorship and teaching quality, such as how many of the people you teach go into science and what students think of you, and not just whether you publish something in Nature or Science.
Rudd: But then the question becomes how do you do that? How do you change a whole system?