Authorship and ChatGPT

Fulton, Janet S. PhD, RN, ACNS-BC, ANEF, FCNS, FAAN

Clinical Nurse Specialist 37(3):109-110, May/June 2023. | DOI: 10.1097/NUR.0000000000000750

The chatbot ChatGPT was released in November 2022 by OpenAI, an American artificial intelligence (AI) research laboratory. AI is technology that enables computers to learn to make decisions.1 Chatbots are a form of AI that deliver responses to requests for information using automated rules, machine learning, and natural language processing. Chatbots allow humans to interact with digital devices as if they were communicating with a real person.2 The “GPT” of ChatGPT is short for generative pre-trained transformer, an AI model that uses deep learning and natural language processing technology to teach computers to make human-like decisions in response to a query.3 ChatGPT emphasizes back-and-forth dialogue.1 Chatbot use is emerging in business as customer service support, such as in online shopping, where a pop-up box offers help with product information, and in healthcare, where chatbots are being used to answer routine health-related questions.2
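The “automated rules” layer mentioned above can be illustrated with a deliberately minimal sketch: a responder that matches a question against simple patterns and returns a canned answer. The clinic-themed rules below are invented for illustration only; systems like ChatGPT layer machine learning and natural language processing far beyond pattern matching like this.

```python
# Toy illustration of a rule-based chatbot: match a user's message
# against simple patterns and return a canned reply. This is NOT how
# ChatGPT works internally; it shows only the "automated rules" idea.
import re

# Hypothetical rules for a clinic help chatbot (illustrative only).
RULES = [
    (re.compile(r"\b(hours|open)\b", re.I),
     "Our clinic is open 8 a.m. to 5 p.m., Monday through Friday."),
    (re.compile(r"\b(refill|prescription)\b", re.I),
     "Prescription refills can be requested through the patient portal."),
]

def reply(message: str) -> str:
    """Return the first matching canned answer, or a fallback."""
    for pattern, answer in RULES:
        if pattern.search(message):
            return answer
    return "I'm not sure -- let me connect you with a staff member."

print(reply("What are your hours?"))
```

Everything a rule-based bot can say is pre-written by its developers; the contrast with a generative model, which composes new word sequences from patterns learned in training data, is exactly why questions of authorship and accountability arise.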

Chatbots like ChatGPT use libraries of existing text, organized in new ways, to answer questions and produce a draft paper or manuscript based on materials adapted according to training algorithms.2 ChatGPT can create something new in that existing text is rearranged in a new way; however, chatbots lack conscious thought and therefore can only repeat or rearrange existing content. No new thought goes into the response.2 Because ChatGPT cannot crawl the web, it relies on information learned before 2021, making some responses outdated.3 ChatGPT can “lie” in that it can produce output that would be considered untrustworthy and even deceitful. Of course, lying is an intentional act, whereas ChatGPT is merely putting words together that may falsely represent a situation or finding.4 ChatGPT can also be amusing, as Thorp5 discovered when he asked ChatGPT to rewrite the classic American play Death of a Salesman, substituting Princess Elsa from the animated movie Frozen for Willy Loman as the main character. ChatGPT reportedly is designed to avoid racist, sexist, and offensive outputs.3 Most important for users to know, ChatGPT does not provide references for the statements it makes; the authenticity and validity of an answer depend on the user verifying the information. Flanagin et al6 noted that ChatGPT is not ready to be used as a source of trusted information; transparency and human accountability are required.

Like ChatGPT, the machine learning system DALL-E 2 was released in 2022 to create images and art from a description submitted to it as natural language text.7 This tool has raised concerns similar to those raised by ChatGPT. Images generated using DALL-E 2 come with a signature indicating the image’s provenance, although unfortunately the signature can be removed.8 DALL-E 2’s creators claim it has limited ability to generate violent, hateful, or adult images.5

Technology tools like ChatGPT and DALL-E 2 can assist authors in preparing manuscripts. With increased use of the technology, publication guidelines are needed to ensure scientific integrity. The World Association of Medical Editors (WAME),8 the Committee on Publication Ethics (COPE),9 and the Journal of the American Medical Association (JAMA)6 offer recommendations for editors and authors. A summary of the recommendations follows.


Chatbots, as non-humans, cannot meet the requirements for authorship because non-humans can neither understand the role of authors nor take responsibility for the final manuscript. Chatbots cannot meet International Committee of Medical Journal Editors (ICMJE)10 authorship criteria, particularly providing approval of the final version of the manuscript and being accountable for all aspects of the work, including ensuring its accuracy and integrity. A chatbot has no legal standing, cannot understand a conflict-of-interest statement, cannot hold copyright, and has no affiliation independent of its creators. An author submitting a manuscript must ensure that all named authors meet the authorship criteria, which clearly means that chatbots should not be included as authors.


Authors who use a chatbot to help write a manuscript should declare this fact. The purpose and/or rationale for using the chatbot should be included in the methods (or similar) section of the manuscript. Include a description of the content that was created and the name of the model or tool, its version and extension numbers, and its manufacturer, similar to reporting the statistical packages used for data analysis. Reporting the use of chatbots is consistent with ICMJE guidelines for non-author contributors and writing assistance.10


Authors are responsible for the accuracy of the content in the manuscript and for the absence of plagiarism. The specific query submitted to the chatbot should be reported. All sources must be appropriately attributed, including all material produced by a chatbot. Authors should find, review, and include all relevant perspectives on a topic because chatbots may be designed to omit sources with viewpoints contrary to those expressed in the query.

Transformative, disruptive technologies like ChatGPT and DALL-E 2 will continue to evolve, offering both promise and peril. Not too long ago, the personal calculator was heralded as a miracle of computational ease, especially by students in statistics courses, and as a curse by teachers worried about declining math skills. The personal calculator has since found a place in our everyday lives, and so it may be for ChatGPT and other chatbots. Yet for both the calculator and the chatbot, the end product remains the ethical responsibility of the user. The old adage “numbers don’t lie, but you can lie with numbers” didn’t change with the use of a calculator. And so it will be with chatbots. Words and numbers are part of the human endeavor, and their usefulness and truthfulness, for better or worse, rest solely with the human users.


1. Newman J. ChatGPT? Stable Diffusion? Generative AI jargon, explained. Fast Company. December 26, 2022. Accessed March 13, 2023.
2. What is a chatbot? Oracle Cloud Infrastructure. Accessed March 13, 2023.
3. Roose K. The brilliance and weirdness of ChatGPT. New York Times. December 5, 2022. Accessed March 13, 2023.
4. Davis P. Did ChatGPT just lie to me? The Scholarly Kitchen. January 13, 2023. Accessed March 13, 2023.
5. Thorp HH. ChatGPT is fun, but not an author. Science. 2023;379(6630):313. doi:10.1126/science.adg7879.
6. Flanagin A, Bibbins-Domingo K, Berkwits M, Christiansen SL. Nonhuman “authors” and implications for the integrity of scientific publication and medical knowledge. JAMA. 2023;329(8):637–639. doi:10.1001/jama.2023.1344.
7. DALL-E 2. OpenAI. Accessed March 13, 2023.
8. World Association of Medical Editors. Chatbots, ChatGPT, and Scholarly Manuscripts: WAME Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications. Accessed March 13, 2023.
9. Committee on Publication Ethics. Authorship and AI Tools. Accessed March 13, 2023.
10. International Committee of Medical Journal Editors. Defining the role of authors and contributors. Accessed March 13, 2023.
Copyright © 2023 Wolters Kluwer Health, Inc. All rights reserved.