
ChatGPT for academic writing: A game changer or a disruptive tool?

Bhatia, Pradeep

Journal of Anaesthesiology Clinical Pharmacology 39(1):p 1-2, Jan–Mar 2023. | DOI: 10.4103/joacp.joacp_84_23

ChatGPT (Chat Generative Pre-trained Transformer), launched recently by OpenAI, is a complex machine learning model that can carry out natural language generation (NLG) tasks with high accuracy. It generates human-like text based on the inputs provided to it. The initial model of ChatGPT was trained on a huge amount of data from the internet using supervised learning. Thereafter, a reward model was created from human feedback on the model's outputs, and the model was further fine-tuned using reinforcement learning against that reward model.

ChatGPT can provide answers to questions, write fiction and non-fiction content from prompts, summarize a given text, and generate computer codes. It remembers what the user said earlier in the conversation, and can answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.

ChatGPT produces very good-quality text and is a potentially attractive tool for writers because of its capacity to produce, in an instant, logical, coherent-sounding text. However, it sometimes writes plausible-sounding but incorrect or nonsensical answers. It may occasionally produce harmful instructions or biased content, and it has limited knowledge of the world and of events after 2021.[1,2]

Alternatives to ChatGPT include Bard, an upcoming artificial intelligence chatbot from Google.

Researchers have found that ChatGPT can pass parts of the US medical licensing exam,[3] raising questions about whether the AI chatbot could one day help write the exam or help students prepare for it. Interestingly, when the researchers decided to publish their results, ChatGPT wrote the abstract and results sections with minimal prompting and largely cosmetic adjustments from the human co-authors. The bot also contributed large parts of the introduction and methods sections.

Artificial intelligence can assist in several aspects of scientific research reporting. It can process and analyze large amounts of data, help researchers perform comprehensive literature reviews by searching and summarizing relevant papers and articles, help optimize the design of experiments, and assist in the writing of reports. ChatGPT may help researchers, students, and educators generate ideas,[4] and even write essays of reasonable quality on a particular topic.[5]

While ChatGPT can assist in many aspects of scientific research, there are several challenges and limitations with using AI for scientific research reports, including lack of transparency, quality control, ethical concerns, and amplification of existing biases in data and algorithms. It is still important to have human experts involved in the process to provide context, insights, and verification of results.

ChatGPT may also produce fraudulent literature. Fake scientific research reports generated by AI are fabricated and misleading, and can be used for malicious purposes, such as manipulating scientific discourse or advancing personal agendas. The use of AI to generate fake scientific research reports is concerning because it can undermine credibility and trust in scientific research and lead to incorrect or harmful decisions based on false information. Additionally, peer review may not distinguish ChatGPT-generated abstracts from those written by authors,[6] as they may be designed to mimic the style and format of genuine reports.

A group led by Catherine Gao at Northwestern University in Chicago used ChatGPT to generate artificial research paper abstracts to test whether scientists could spot them.[6] The ChatGPT-generated abstracts went undetected by the plagiarism checker: the median originality score was 100%, indicating that no plagiarism was detected. An AI-output detector spotted 66% of the generated abstracts. Human reviewers fared little better: they correctly identified only 68% of the generated abstracts and 86% of the genuine abstracts.

Several AI content detector tools are available to check whether a text was written by AI chatbots.

Chatbots are not legal entities and do not have legal personality; one cannot sue a chatbot in court or punish it in any way. ChatGPT's creator, OpenAI, accepts no responsibility for any text produced using its product.

Fake scientific reports and data fabrication are clearly unethical practices. However, it is foreseeable that ChatGPT (and similar tools) will increasingly be used by authors to assist with text generation. Journals have already published papers in which chatbots such as ChatGPT are listed as co-authors.

The World Association of Medical Editors (WAME) has given the following recommendations on ChatGPT and Chatbots in relation to scholarly publications.[7]

  1. Chatbots cannot be authors.
  2. Authors should be transparent when chatbots are used and provide information about how they were used.
  3. Authors are responsible for the work performed by a chatbot in their paper (including the accuracy of what is presented and the absence of plagiarism) and for acknowledging all sources (including for material produced by the chatbot).
  4. Editors need appropriate tools to help them detect content generated or altered by AI, and these tools must be available regardless of their ability to pay.

JOACP proposes to follow the same policy, although, at present, tools to detect AI-generated text have not yet been implemented.

In the end, a short poem by ChatGPT:

ChatGPT’s writing may be great,

But ethics it must contemplate,

Bias and misinformation it should negate,

Science’s integrity to uphold, never abate.


1. Available from:
2. ChatGPT: Optimizing Language Models for Dialogue. Available from:
3. Ault A. AI Bot ChatGPT Passes US Medical Licensing Exams Without Cramming – Unlike Students. Medscape; January 26, 2023.
4. Roose K. Don't Ban ChatGPT in Schools. Teach With It. The New York Times; January 12, 2023. Available from: /01/12/technology/chatgpt-schools-teachers.html. Last accessed on 2023 Feb 25.
5. Hern A. AI Bot ChatGPT Stuns Academics with Essay-Writing Skills and Usability. The Guardian; December 4, 2022. Available from: Last accessed on 2023 Feb 25.
6. Else H. Abstracts written by ChatGPT fool scientists. Nature 2023;613:423.
7. Zielinski C, Winker M, Aggarwal R, Ferris L, Heinemann M, Florencio Lapeña J, et al. Chatbots, ChatGPT, and Scholarly Manuscripts: WAME Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications. WAME; January 20, 2023. Available from:
Copyright: © 2023 Journal of Anaesthesiology Clinical Pharmacology