Reply to Letter to the Editor: Paging Dr. ChatGPT: safety, accuracy and readability of ChatGPT in ENT emergencies
Letter to the Editor

Stephanie Soon, Brendan Perry

Department of Otolaryngology, Head and Neck Surgery, Sunshine Coast University Hospital, Queensland, Australia

Correspondence to: Stephanie Soon, MBChB. Department of Otolaryngology, Head and Neck Surgery, Sunshine Coast University Hospital, 6 Doherty Street, Birtinya, Queensland 4575, Australia. Email: stephanie.soon@health.qld.gov.au.

Response to: Lennon A. Discussing the concerns raised by the use of chatGPT in ENT emergencies. Aust J Otolaryngol 2025. doi: 10.21037/ajo-25-29.


Received: 12 May 2025; Accepted: 11 June 2025; Published online: 25 August 2025.

doi: 10.21037/ajo-25-36


We thank Lennon for their thoughtful engagement with our work and for contributing to the discussion on the use of ChatGPT in clinical otorhinolaryngology (1).

We acknowledge the concerns raised and recognise that using ChatGPT as a source of patient information presents multiple challenges. As Lennon rightly pointed out, patients often lack the medical expertise to critically appraise the clinical information ChatGPT provides (1). Our study found that the reading level of ChatGPT's answers exceeded the literacy of the average Australian, increasing the risk of misinterpretation (2). We also agree that the variability in ChatGPT responses noted in our study limits its potential as a reliable tool for patient education. Legal liability also remains a grey area: since artificial intelligence (AI) is not human, it cannot be held legally responsible in the same way a person can (3). Accordingly, OpenAI includes disclaimers for its generated content, explicitly stating that the burden of any error rests on the user. We also appreciate the concerns raised about how ChatGPT use could exacerbate health inequalities, particularly among minority groups who are underrepresented in training datasets. These are all critical points that require careful attention before the widespread deployment of AI technologies in healthcare.

Despite these concerns, it is undeniable that AI is here to stay. Since the release of ChatGPT in November 2022, the uptake of large language models (LLMs) has been rapid: ChatGPT reached 100 million users within 2 months of release (4), and many other LLMs have since followed, such as Microsoft Copilot, Google BERT and Meta AI. With or without clinician approval, patients are using AI tools, and this trend is likely to continue. A 2024 Australian study estimated that 9.9% of Australian adults (about 1.9 million people) had asked ChatGPT for health-related information, with higher usage among groups facing barriers to health care access, such as patients with English as a second language (5).

The aim of our study was therefore to evaluate the safety and accuracy of the information ChatGPT provides, given its popularity. AI tools should not replace clinical judgment. However, as AI becomes increasingly integrated into health information-seeking behaviours, understanding its strengths and limitations is crucial.

Finally, as seen in other specialties such as radiology (6), AI is used to assist, not replace, clinical decision making. Similarly, we believe that LLMs such as ChatGPT may serve in otorhinolaryngology as a supplementary tool rather than a substitute for clinical expertise, including roles in medical education and in providing clinicians with quick access to information. We thank the authors again for their considered comments, and we hope that this exchange continues to foster critical discussion about the responsible and safe integration of AI into healthcare practice.


Acknowledgments

None.


Footnote

Provenance and Peer Review: This article was commissioned by the editorial office, Australian Journal of Otolaryngology. The article did not undergo external peer review.

Funding: None.

Conflicts of Interest: Both authors have completed the ICMJE uniform disclosure form (available at https://www.theajo.com/article/view/10.21037/ajo-25-36/coif). The authors have no conflicts of interest to declare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Lennon A. Discussing the concerns raised by the use of chatGPT in ENT emergencies. Aust J Otolaryngol 2025. doi: 10.21037/ajo-25-29. [Crossref]
  2. Soon S, Perry B. Paging Dr. ChatGPT: safety, accuracy and readability of ChatGPT in ENT emergencies. Aust J Otolaryngol 2025;8:8.
  3. Wang C, Liu S, Yang H, et al. Ethical Considerations of Using ChatGPT in Health Care. J Med Internet Res 2023;25:e48009. [Crossref] [PubMed]
  4. Eysenbach G. The Role of ChatGPT, Generative Language Models, and Artificial Intelligence in Medical Education: A Conversation With ChatGPT and a Call for Papers. JMIR Med Educ 2023;9:e46885. [Crossref] [PubMed]
  5. Ayre J, Cvejic E, McCaffery KJ. Use of ChatGPT to obtain health information in Australia, 2024: insights from a nationally representative survey. Med J Aust 2025;222:210-2. [Crossref] [PubMed]
  6. van Leeuwen KG, de Rooij M, Schalekamp S, et al. Clinical use of artificial intelligence products for radiology in the Netherlands between 2020 and 2022. Eur Radiol 2024;34:348-54. [Crossref] [PubMed]
Cite this article as: Soon S, Perry B. Reply to Letter to the Editor: Paging Dr. ChatGPT: safety, accuracy and readability of ChatGPT in ENT emergencies. Aust J Otolaryngol 2025;8:35.