If you need an introduction for a scientific article, you can have the chatbot and text generator ChatGPT write it for you. ChatGPT can even come up with its own research question based on a collection of data and write a scientific-sounding article around it.
How can publishers of scientific journals still tell which articles were written by people and which by a computer? That question has been pressing since ChatGPT launched last year.
Early this year, Nature and Science stated in their guidelines that authors may not list ChatGPT as a co-author. Nature allowed from the start that researchers use ChatGPT as a writing aid, as long as this is indicated. Science had initially banned any use of ChatGPT, but this week its guidelines too now state that ChatGPT may be used, provided the purpose is indicated.
Doing it secretly
Jelle Zuidema, associate professor of artificial intelligence at the University of Amsterdam, understands these rules: “In this way, journals hope to prevent researchers from doing it secretly, and use can be better monitored.”
Zuidema suspects that ChatGPT is already being used extensively in science to polish articles or write short pieces of text. “I haven’t done it myself yet, but it is understandable that researchers do, because there is pressure to publish and you often spend a large part of your time writing up your results.”
It becomes a problem when ChatGPT is used not only to polish an article, but to write entire articles from start to finish.
Despite the peer review process, in which other researchers check an article before it is published, articles that were clearly written by ChatGPT sometimes slip through. Several articles have been found in which the text suddenly contains telltale phrases such as ‘Regenerate response’ or ‘As an AI language model, I…’, clearly indicating that the text was generated by AI.
To prevent articles written by ChatGPT from escaping attention, publishers are looking for a smart algorithm that recognizes ChatGPT’s style. Last week, Nature wrote about a new model, developed by chemists, that can recognize with higher accuracy than existing detectors whether a chemistry article was written by ChatGPT. It is still a fairly simple model, and Zuidema expects it will be less accurate in practice. “People will always try to find ways to trick detectors, for example by telling ChatGPT to use a certain writing style.”
According to Zuidema, a better long-term idea is to require that ChatGPT build a watermark into its texts that detectors can recognize, for example in the form of a pattern of synonyms in the text.
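The synonym-watermark idea can be sketched with a toy example. Everything here, from the word list to the keying scheme, is a hypothetical illustration, not the method Zuidema or any AI vendor actually uses: a generator consistently picks whichever member of a synonym pair a secret key prefers, and a detector measures how often a text follows those keyed choices.

```python
import hashlib

# Hypothetical toy watermark: a secret key deterministically prefers one
# member of each synonym pair; a detector checks how often a text
# follows those keyed preferences. Illustrative only.

SYNONYMS = [("big", "large"), ("show", "demonstrate"), ("use", "utilize"),
            ("start", "begin"), ("help", "assist")]
KEY = b"secret-key"  # made-up key for the sketch

def preferred(pair):
    # A keyed hash decides which member of the pair the watermark prefers.
    h = hashlib.sha256(KEY + pair[0].encode()).digest()
    return pair[h[0] % 2]

def watermark(words):
    # Generator side: replace every synonym with the key-preferred variant.
    lookup = {w: preferred(p) for p in SYNONYMS for w in p}
    return [lookup.get(w, w) for w in words]

def match_rate(words):
    # Detector side: fraction of synonym occurrences that follow the key.
    lookup = {w: p for p in SYNONYMS for w in p}
    hits = total = 0
    for w in words:
        if w in lookup:
            total += 1
            hits += (w == preferred(lookup[w]))
    return hits / total if total else 0.0

text = "we use a big model to show results and help users".split()
marked = watermark(text)
print(match_rate(marked))  # watermarked text follows the key every time: 1.0
```

A watermarked text scores 1.0 by construction, while ordinary text should hover around 0.5, since authors have no reason to follow the key; in a real scheme the preference would also depend on context, so the pattern survives partial edits and cannot be stripped by a single global find-and-replace.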