General Discussion
Money-losing OpenAI wants to make money from healthcare - wants the WORLD'S medical data, AND federal support
And don't worry about chatbots giving bad medical advice. OpenAI is offering to "solve" what it considers the problem with US healthcare - lack of AI.
And btw, I would not trust OpenAI's own reported numbers on how many healthcare questions ChatGPT gets. They've been known to lie, and they're desperate right now. See this story in LBN: https://www.democraticunderground.com/10143593477
From Gizmodo:
More Than 40 Million People Use ChatGPT Daily for Healthcare Advice, OpenAI Claims
OpenAI wants ChatGPT to be a bigger player in healthcare. First, they need regulators to play ball.
By Ece Yildirim
Published January 5, 2026
ChatGPT users around the world send billions of messages every week asking the chatbot for healthcare advice, OpenAI shared in a new report on Monday. Roughly 200 million of ChatGPT's more than 800 million regular users submit a prompt about healthcare every single week, and more than 40 million do so every single day.
-snip-
Many people think Health AI is a promising field with a lot of potential to ease the burden on medical workers. But it's also contentious, because AI is prone to mistakes. While a hallucinated response can be an annoying hurdle in many other areas of use, in healthcare, it could have the potential to be a life-or-death matter.
These AI-driven risks are not confined to the world of hypotheticals. According to a report from August 2025, a 60-year-old with no past psychiatric or medical history was hospitalized due to bromide poisoning after following ChatGPT's recommendation to take the supplement. As the tech stands today, no one should use a chatbot to self-diagnose or treat a medical condition, full stop.
-snip-
OpenAI's latest report seems to be their own attempt at putting a comment on the public record. The company pairs its findings with sample policy concepts, like asking for full access to the world's medical data and a clearer regulatory pathway to make AI-infused medical devices for consumer use.
-snip-
From the British tech site The Register:
https://www.theregister.com/2026/01/05/chatgpt_playing_doctor_openai/
ChatGPT is playing doctor for a lot of US residents, and OpenAI smells money
One man's failing healthcare system is another man's opportunity
Brandon Vigliarolo
Mon 5 Jan 2026 // 21:00 UTC
-snip-
None of those data points actually get to the point of how often ChatGPT could be wrong in critical healthcare situations, however.
What does that matter to OpenAI, though, when there's potentially heaps of money to be made on expanding in the medical industry? The report seems to conclude that its increasingly large role in the US healthcare industry, again, isn't an indictment of a failing system as much as it is the inevitable march of technological progress, and included several "policy concepts" that it said are a preview of a full AI-in-healthcare policy blueprint it intends to publish in the near future.
Leading the recommendations, naturally, is a call for opening and securely connecting publicly funded medical data so OpenAI's AI can "learn from decades of research at once."
OpenAI is also calling for new infrastructure to be built out that incorporates AI into medical wet labs, support for helping healthcare professionals transition into being directly supported by AI, new frameworks from the US Food and Drug Administration to open a path to consumer AI medical devices, and clarified medical device regulation to "encourage AI services that support doctors."
14 replies
Posted by highplainsdem, Jan 5