Letter to the Editor

Discussing the concerns raised by the use of ChatGPT in ENT emergencies

Austen Lennon

School of Medicine, Cardiff University, Cardiff, UK

Correspondence to: Austen Lennon. School of Medicine, Cardiff University, Heath Park, Cardiff CF14 4XN, UK. Email: austen.lennon@doctors.org.uk.

Comment on: Soon S, Perry B. Paging Dr. ChatGPT: safety, accuracy and readability of ChatGPT in ENT emergencies. Aust J Otolaryngol 2025;8:8.


Received: 05 April 2025; Accepted: 11 June 2025; Published online: 22 August 2025.

doi: 10.21037/ajo-25-29


I congratulate Soon and Perry on their contribution to the increasingly important task of understanding the reliability of ChatGPT as its use grows year on year. Having read the article “Paging Dr. ChatGPT: safety, accuracy and readability of ChatGPT in ENT emergencies” (1), I feel there are concerns that require further consideration.

I appreciated the limitations the authors discussed, including:

  • Differences in the interpretation of information between qualified doctors and patients.
  • The distinction between using ChatGPT’s free versus paid versions.
  • The expected discrepancies between the queries submitted in the study and those posed by patients.

Whilst the article suggests that ChatGPT can provide reasonably reliable answers for conditions with well-defined treatment steps, the reliability of answers decreased for “less common or more nuanced scenarios” (1). This raises a concern: although doctors may be able to discern between common and complex conditions, patients lack the expertise to do so. This creates a risk of patients submitting queries about conditions for which ChatGPT is ill-suited, which could lead to misinformation and potential harm. This issue also raises questions of liability. If ChatGPT is considered reliable for specific situations, who would be held responsible if a patient unknowingly seeks information for an unsafe scenario?

Additionally, the article mentions some variability in ChatGPT’s responses. While this is understandable given the nature of the program (which is not a medical device), a key question remains: if we begin to advocate for ChatGPT’s use to provide information to patients, how can we ensure trust in its answers while also accounting for the possibility of differing responses? How can patients determine whether to trust or question the information provided?

The authors also discuss the limitations related to accessibility, particularly that the paid service offers more readable information, which could disadvantage individuals with lower literacy. This may, however, overlook a more pressing ethical concern. As studies such as this one demonstrate increasing reliability, the medical profession may gradually incorporate ChatGPT into patient care, which could exacerbate disparities in access to medical care. Individuals from lower socioeconomic backgrounds with less access to qualified professionals may increasingly rely on artificial intelligence (AI)-based services (2), which could have long-term health implications that are not yet fully understood.

Humans are complex, and symptoms vary between individuals. There are often significant disparities between the symptoms typically described in the literature and those experienced by minority ethnic groups. Given that AI is trained on existing data, which is often not inclusive or representative of minority communities, has consideration been given to how to mitigate the amplification of health inequalities (3)?

The authors did a compelling job of assessing the reliability of ChatGPT in ENT emergencies; however, there remains a need to explore the safeguards required to deploy this technology safely.


Acknowledgments

None.


Footnote

Provenance and Peer Review: This article was a standard submission to the journal. The article did not undergo external peer review.

Funding: None.

Conflicts of Interest: The author has completed the ICMJE uniform disclosure form (available at https://www.theajo.com/article/view/10.21037/ajo-25-29/coif). The author has no conflicts of interest to declare.

Ethical Statement: The author is accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Soon S, Perry B. Paging Dr. ChatGPT: safety, accuracy and readability of ChatGPT in ENT emergencies. Aust J Otolaryngol 2025;8:8.
  2. Pugh A. The Rich Can Afford Personal Care. The Rest Will Have to Make Do With AI. 2024 [cited 2025 Mar 28]. Available online: https://www.wired.com/story/wealth-inequality-personal-service-access-artificial-intelligence/?utm_source=chatgpt.com
  3. Paik KE, Hicklen R, Kaggwa F, et al. Digital Determinants of Health: Health data poverty amplifies existing health disparities—A scoping review. PLOS Digital Health 2023;2:e0000313. [Crossref] [PubMed]
Cite this article as: Lennon A. Discussing the concerns raised by the use of ChatGPT in ENT emergencies. Aust J Otolaryngol 2025;8:34.
