ChatGPT can answer patient questions 'more accurately and empathically than doctors'
Study suggests AI assistants such as ChatGPT may revolutionise medicine and improve patient care
A study has found that the artificial intelligence chatbot ChatGPT can outperform doctors in providing high-quality, empathetic answers to patients' questions.
Researchers at the University of California San Diego published the findings in JAMA Internal Medicine.
They compared written responses from doctors and ChatGPT to real-world health questions. A panel of licensed healthcare professionals preferred ChatGPT’s responses 79 per cent of the time and rated its responses as higher quality and more empathetic.
Dr John W Ayers from the Qualcomm Institute within the University of California San Diego, who led the study, said: “The opportunities for improving healthcare with AI are massive.
“AI-augmented care is the future of medicine.”
While the study shows the potential for AI assistants to be integrated into health systems to improve doctors' responses to patient questions, the researchers emphasised that AI assistants such as ChatGPT are not intended to replace doctors.
Instead, they believe that doctors working together with technologies like ChatGPT may revolutionise medicine.
To obtain a large and diverse sample of healthcare questions and doctors' answers that do not contain identifiable personal information, the team turned to the social media platform Reddit, where millions of patients publicly post medical questions, to which doctors respond.
The subreddit r/AskDocs has about 452,000 members who post medical questions, with verified healthcare professionals submitting answers.
While some may wonder if question-answer exchanges on social media are a fair test, the researchers noted that the exchanges were reflective of their clinical experience.
The team randomly sampled 195 exchanges from AskDocs where a verified doctor responded to a public question.
The team provided the original question to ChatGPT and asked it to author a response.
A panel of three licensed healthcare professionals assessed each question and the corresponding responses and were blinded to whether the response originated from a doctor or ChatGPT.
The evaluators preferred ChatGPT's responses to the doctors' responses 79 per cent of the time.
ChatGPT responses were also rated significantly higher in quality than doctors' responses, and were more empathetic.
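The blinded comparison described above — hide each response's author, ask evaluators to choose, tally the preference rate — can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual code; the function name, the toy exchanges, and the length-based evaluator are all hypothetical.

```python
import random

def blinded_preference_rate(exchanges, evaluators):
    """Tally how often evaluators prefer the AI response when the
    author of each response is hidden (a simplified sketch of a
    blinded pairwise comparison)."""
    preferences = 0
    total = 0
    for exchange in exchanges:
        # Shuffle the pair so evaluators cannot infer authorship
        # from which response appears first.
        pair = [("doctor", exchange["doctor"]), ("ai", exchange["ai"])]
        random.shuffle(pair)
        for evaluate in evaluators:
            # Each evaluator returns 0 or 1: the index of the
            # response it prefers, seeing only the text.
            chosen_label = pair[evaluate(pair[0][1], pair[1][1])][0]
            if chosen_label == "ai":
                preferences += 1
            total += 1
    return preferences / total

# Toy data and a toy evaluator that always prefers the longer,
# more detailed response (purely for demonstration).
exchanges = [
    {"doctor": "Take ibuprofen.",
     "ai": "Ibuprofen can help; take it with food and see a "
           "doctor if the pain persists beyond a few days."},
    {"doctor": "Rest and hydrate.",
     "ai": "Rest, drink plenty of fluids, and monitor your "
           "temperature over the next 48 hours."},
]
prefers_longer = lambda a, b: 0 if len(a) >= len(b) else 1
rate = blinded_preference_rate(exchanges, [prefers_longer] * 3)
```

With three identical evaluators over two toy exchanges, `rate` is the fraction of the six individual judgments that favoured the AI response, mirroring how the study's 79 per cent figure aggregates panel judgments.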
Dr Aaron Goodman, an associate clinical professor at UC San Diego School of Medicine and study co-author, said: “ChatGPT is a prescription I’d like to give to my inbox.
“The tool will transform the way I support my patients.”
While the study shows promise for AI assistants in healthcare, the researchers emphasised that integrating AI assistants into healthcare messaging should be evaluated in a randomised controlled trial, to judge how their use affects outcomes for both doctors and patients.