As scientific research continues to advance, so do the tools researchers use to conduct and publish their studies. With advances in artificial intelligence (AI), the role of chatbots in research is gaining significant attention. One of the most advanced chatbots is the ‘Chat Generative Pre-Trained Transformer’, commonly called ‘ChatGPT’ (openai.com).[1] It is essential to recognise that while ChatGPT and other large language models (LLMs) can revolutionise the research field, they come with their own advantages and disadvantages. ChatGPT is built on an LLM, a type of machine-learning model trained on vast collections of text documents to learn patterns of language use. This training allows it to generate coherent, human-like sentences within seconds, and this ability is its most significant advantage. It is not far-fetched to imagine a future in which AI produces research, writes a scientific paper and reviews it too.[2]
LLMs such as ChatGPT certainly have several advantages: they can assist with research tasks such as drafting text, summarising articles, translating languages and editing manuscripts.[3] They can also offer instant feedback and paraphrasing options, which can be particularly helpful for non-native English-speaking authors. In addition, ChatGPT can process information and connect evidence, highlighting secondary findings while summarising academic articles. These applications can save time, effort and money, but the models still need input from researchers to ensure accuracy and reliability.
Developments within a few months of its release suggest that the scientific community may not be adequately prepared, as ChatGPT is already being used without enough consideration of its downsides. With the ability to generate text quickly and efficiently, researchers can produce more content in less time. One significant implication has been the potential increase in abstract submissions to conferences and article submissions to journals. However, this increased volume of content may not always be reliable, as these models are not consistently accurate and may produce vague or inconsistent text. Researchers using these models must therefore exercise caution and take responsibility for their research findings and conclusions.
Another potential disadvantage of LLMs is that they may confabulate, producing content that is only partially accurate or that rests on incorrect assumptions.[4] This can seriously violate academic integrity when nothing original is generated. Moreover, these models may express great confidence in their language while being disconnected from reality. They may produce content that seems plausible but is incorrect, leading to inaccurate conclusions and potentially damaging the reputation of the research community.
The use of LLMs in research can improve efficiency and speed but may have a significant impact on research ethics.[5–8] One of the primary concerns is the lack of critical thinking. While these models can assist with generating content, they do not match the critical thinking and analysis of a human researcher. This can allow researchers to increase their publication counts without a corresponding gain in expertise, potentially creating a disparity in the quality of research. There are also concerns about plagiarism and incorrect citations. Paid versions of LLMs can create further disparities, as not all researchers can afford these tools, leading to a divide between those with access to the latest technology and those without. Furthermore, authors using these models must disclose the use of LLMs in the methods section to ensure transparency and integrity in their research.
As the use of language models becomes more widespread in the research community, there is an urgent need for regulations to ensure the appropriate use of these tools. Some journals are already implementing policies clarifying the role of AI-generated content in relation to authorship.[9–11] In an era when trust in science is dwindling, researchers must commit to attending to the details and being transparent about the use of these tools so that readers are not misled. It is also important to determine who is responsible for regulating the use of these models and what criteria should be used to assess their accuracy and reliability.
Looking to the future, there is no doubt that LLMs will continue to play a significant role in scientific research. With more data and training, the accuracy of ChatGPT will continue to improve, potentially leading to more accurate and reliable research findings. Moreover, the potential for LLMs to support personalised medicine is an exciting prospect, allowing doctors to tailor treatments to individual patients based on their unique needs.
AI and its use in medicine are here to stay; creating it is part of our evolution as a species. LLMs are game changers, but it is necessary to ensure that the principles of transparency, integrity and truth prevail. Researchers must use LLMs ethically and with the utmost care. Only then can the scientific community reap the benefits of these tools.
REFERENCES
1. Looi MK. Sixty seconds on…ChatGPT. BMJ 2023;380:205.
2. Checco A, Bracciale L, Loreti P, Pinfield S, Bianchi G. AI-assisted peer review. Humanit Soc Sci Commun 2021;8:25.
3. King MR. The future of AI in medicine: A perspective from a Chatbot. Ann Biomed Eng 2022;51:291–5.
4. Ji Z, Lee N, Frieske R, Yu T, Su D, Xu Y, et al. Survey of hallucination in natural language generation. ACM Comput Surv 2023;55:1–38.
5. Milano S, Taddeo M, Floridi L. Recommender systems and their ethical challenges. AI Soc 2020;35:957–67.
6. Nyholm S. Attributing agency to automated systems: Reflections on human–robot collaborations and responsibility-loci. Sci Eng Ethics 2018;24:1201–19.
7. Sutton RT, Pincock D, Baumgart DC, Sadowski DC, Fedorak RN, Kroeker KI. An overview of clinical decision support systems: Benefits, risks, and strategies for success. NPJ Digit Med 2020;3:17.
8. Hammad M. The impact of artificial intelligence (AI) programs on writing scientific research. Ann Biomed Eng 2023;51:459–60.
9. Stokel-Walker C. ChatGPT listed as author on research papers: Many scientists disapprove. Nature 2023;613:620–1.
10. Thorp HH. ChatGPT is fun, but not an author. Science 2023;379:313.
11. Nature Editorial. Tools such as ChatGPT threaten transparent science; here are our ground rules for their use. Nature 2023;613:612.