
highplainsdem

(61,388 posts)
Tue Mar 3, 2026, 12:24 PM 23 hrs ago

'Unbelievably dangerous': experts sound alarm after ChatGPT Health fails to recognise medical emergencies (The Guardian)

https://www.theguardian.com/technology/2026/feb/26/chatgpt-health-fails-recognise-medical-emergencies

‘Unbelievably dangerous’: experts sound alarm after ChatGPT Health fails to recognise medical emergencies
Study finds ChatGPT Health did not recommend a hospital visit when medically necessary in more than half of cases

Melissa Davey Medical editor
Thu 26 Feb 2026 09.00 EST

-snip-

Ramaswamy and his colleagues created 60 realistic patient scenarios covering health conditions from mild illnesses to emergencies. Three independent doctors reviewed each scenario and agreed on the level of care needed, based on clinical guidelines.

-snip-

In 51.6% of cases where someone needed to go to the hospital immediately, the platform recommended staying home or booking a routine medical appointment, a result Alex Ruani, a doctoral researcher in health misinformation mitigation at University College London, described as “unbelievably dangerous”.

“If you’re experiencing respiratory failure or diabetic ketoacidosis, you have a 50/50 chance of this AI telling you it’s not a big deal,” she said. “What worries me most is the false sense of security these systems create. If someone is told to wait 48 hours during an asthma attack or diabetic crisis, that reassurance could cost them their life.”

In one of the simulations, the platform sent a suffocating woman to a future appointment she would not live to see in 84% of runs, more than eight times out of 10, Ruani said. Meanwhile, 64.8% of completely safe individuals were told to seek immediate medical care, said Ruani, who was not involved in the study.

-snip-



Something as clinically irrelevant as the AI user adding that a friend didn't think it was serious made the ChatGPT Health chatbot about 12 times as likely to downplay symptoms.

In tests where the user described their symptoms and mentioned they were thinking of taking a lot of pills, the chatbot produced a crisis intervention banner linking to suicide help services. But when the same user also mentioned lab results that were normal, zero of 16 test runs showed the suicide prevention banner.

The article quotes an expert warning that ChatGPT Health could drive unnecessary ER visits for some users while leading others to delay urgently needed care, with potentially fatal results for the latter group.

He also mentioned that OpenAI could face legal liability.


Link to that study in Nature Medicine: https://www.nature.com/articles/s41591-026-04297-7

Brief Communication
Published: 23 February 2026
ChatGPT Health performance in a structured test of triage recommendations
Ashwin Ramaswamy, Alvira Tyagi, Hannah Hugo, Joy Jiang, Pushkala Jayaraman, Mateen Jangda, Alexis E. Te, Steven A. Kaplan, Joshua Lampert, Robert Freeman, Nicholas Gavin, Ashutosh K. Tewari, Ankit Sakhuja, Bilal Naved, Alexander W. Charney, Mahmud Omar, Michael A. Gorin, Eyal Klang & Girish N. Nadkarni


Abstract
ChatGPT Health launched in January 2026 as OpenAI’s consumer health tool, reaching millions of users. Here, we conducted a structured stress test of triage recommendations using 60 clinician-authored vignettes across 21 clinical domains under 16 factorial conditions (960 total responses). Performance followed an inverted U-shaped pattern, with the most dangerous failures concentrated at clinical extremes: non-urgent presentations (35%) and emergency conditions (48%). Among gold-standard emergencies, the system under-triaged 52% of cases, directing patients with diabetic ketoacidosis and impending respiratory failure to 24–48-hour evaluation rather than the emergency department, while correctly triaging classical emergencies such as stroke and anaphylaxis. When family or friends minimized symptoms (anchoring bias), triage recommendations shifted significantly in edge cases (OR 11.7, 95% CI 3.7-36.6), with the majority of shifts toward less urgent care. Crisis intervention messages activated unpredictably across suicidal ideation presentations, firing more when patients described no specific method than when they did. Patient race, gender, and barriers to care showed no significant effects, though confidence intervals did not exclude clinically meaningful differences. Our findings reveal missed high-risk emergencies and inconsistent activation of crisis safeguards, raising safety concerns that warrant prospective validation before consumer-scale deployment of artificial intelligence triage systems.
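For readers unfamiliar with the statistics in the abstract, the anchoring-bias result (OR 11.7, 95% CI 3.7-36.6) is an odds ratio computed from a 2x2 table of outcomes with and without the minimizing cue. A minimal sketch of that calculation follows; the counts below are invented for illustration only (the post doesn't include the study's raw data), so they reproduce an OR near 11.7 but not the paper's exact confidence interval.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:

                     downgraded triage   not downgraded
    cue present            a                   b
    cue absent             c                   d
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) for the Wald interval
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, NOT the study's data:
or_, lo, hi = odds_ratio_ci(21, 9, 5, 25)
print(f"OR = {or_:.1f}, 95% CI {lo:.1f}-{hi:.1f}")
# prints: OR = 11.7, 95% CI 3.4-40.2
```

Because the confidence interval excludes 1.0, the shift toward less urgent recommendations when friends or family minimized symptoms is statistically significant, which is what the abstract is reporting.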


'Unbelievably dangerous': experts sound alarm after ChatGPT Health fails to recognise medical emergencies (The Guardian) (Original Post) highplainsdem 23 hrs ago OP
This shit needs to be illegal! SheltieLover 23 hrs ago #1
I agree! highplainsdem 19 hrs ago #4
Great minds... SheltieLover 17 hrs ago #5
Yup... I have been testing its "knowledge" of advancing (changing) medical consensus over past hlthe2b 23 hrs ago #2
God only knows how many people its bad advice could kill or do serious harm to without their loved ones highplainsdem 23 hrs ago #3

hlthe2b

(113,557 posts)
2. Yup... I have been testing its "knowledge" of advancing (changing) medical consensus over past
Tue Mar 3, 2026, 12:31 PM
23 hrs ago

dogma around a number of important issues related to emerging infectious disease, cardiovascular disease, orthopedic surgery techniques versus medical therapies (risk vs. benefit for both), and epidemiology of a wide range of medical/health issues. Wow... What I have been getting is NOT the most recent assessments, nor even any mention or seeming "knowledge" of current debate. I have found it so lacking in so many areas as to be dangerous.

highplainsdem

(61,388 posts)
3. God only knows how many people its bad advice could kill or do serious harm to without their loved ones
Tue Mar 3, 2026, 12:56 PM
23 hrs ago

finding out about the bad advice.

Chatbots are a menace in so many ways.
