
Research claims ChatGPT more empathetic than actual medics

Clinicians preferred the chatbot's answers to the physicians' in 78.6% of 585 scenarios

By Web Desk
May 03, 2023
This picture, taken on January 23, 2023 in Toulouse, southwestern France, shows screens displaying the logos of OpenAI and ChatGPT. — AFP

A recent study suggests that the AI chatbot ChatGPT may be just as sound as doctors, if not more helpful, at responding to patients' medical questions.

A group of researchers from the University of California, San Diego; Johns Hopkins; and other institutions put 195 medical questions to OpenAI's ChatGPT and compared both the quality and the empathy of the chatbot's responses with answers given by actual doctors on Reddit.

A panel of healthcare experts, including professionals working in internal medicine, paediatrics, oncology, and infectious disease, scored both the bot's and the doctors' responses on a five-point scale, evaluating the "quality of information" and the "compassion or bedside manner" provided.

In the study, clinicians preferred the chatbot's answer to the physician's in 78.6% of the 585 scenarios. The chatbot's replies were rated 3.6 times higher for quality and 9.8 times higher for empathy than the doctors'.

Medics keep it short — while ChatGPT answers in detail

A major reason ChatGPT won out in the study is that the bot's replies were longer and more personable than the doctors' brief, time-pressed answers.

For example, when asked whether it is possible to go blind after getting bleach in your eye, ChatGPT responded, "I'm sorry to hear that you got bleach splashed in your eye," and offered four more sentences of explanation, with clear instructions on how to rinse it out.

The doctor simply said, "Sounds like you will be fine," and briefly instructed the patient to "flush the eye" or call poison control.

ChatGPT isn't capable of diagnosing on its own.

However, readers should not let ChatGPT's performance in this study mislead them. It's still not a physician.