General Discussion
Money-losing OpenAI wants to make money from healthcare - wants the WORLD'S medical data, AND federal support
And don't worry about chatbots giving bad medical advice. OpenAI is offering to "solve" what it considers the problem with US healthcare - lack of AI.
And btw, I would not trust OpenAI's own reported numbers on how many healthcare questions ChatGPT gets. They've been known to lie, and they're desperate right now. See this story in LBN: https://www.democraticunderground.com/10143593477
From Gizmodo:
OpenAI wants ChatGPT to be a bigger player in healthcare. First, they need regulators to play ball.
By Ece Yildirim
Published January 5, 2026
ChatGPT users around the world send billions of messages every week asking the chatbot for healthcare advice, OpenAI shared in a new report on Monday. Roughly 200 million of ChatGPT's more than 800 million regular users submit a prompt about healthcare every single week, and more than 40 million do so every single day.
-snip-
Many people think Health AI is a promising field with a lot of potential to ease the burden on medical workers. But it's also contentious, because AI is prone to mistakes. While a hallucinated response can be an annoying hurdle in many other areas of use, in healthcare it can be a life-or-death matter.
These AI-driven risks are not confined to the world of hypotheticals. According to a report from August 2025, a 60-year-old with no past psychiatric or medical history was hospitalized due to bromide poisoning after following ChatGPT's recommendation to take the supplement. As the tech stands today, no one should use a chatbot to self-diagnose or treat a medical condition, full stop.
-snip-
OpenAI's latest report seems to be its own attempt at putting a comment on the public record. The company pairs its findings with sample policy concepts, like asking for full access to the world's medical data and a clearer regulatory pathway to make AI-infused medical devices for consumer use.
-snip-
From British tech paper The Register:
https://www.theregister.com/2026/01/05/chatgpt_playing_doctor_openai/
One man's failing healthcare system is another man's opportunity
Brandon Vigliarolo
Mon 5 Jan 2026 // 21:00 UTC
-snip-
None of those data points actually get to the point of how often ChatGPT could be wrong in critical healthcare situations, however.
What does that matter to OpenAI, though, when there's potentially heaps of money to be made on expanding in the medical industry? The report seems to conclude that its increasingly large role in the US healthcare industry, again, isn't an indictment of a failing system as much as it is the inevitable march of technological progress, and included several "policy concepts" that it said are a preview of a full AI-in-healthcare policy blueprint it intends to publish in the near future.
Leading the recommendations, naturally, is a call for opening and securely connecting publicly funded medical data so OpenAI's AI can "learn from decades of research at once."
OpenAI is also calling for new infrastructure to be built out that incorporates AI into medical wet labs, support for helping healthcare professionals transition into being directly supported by AI, new frameworks from the US Food and Drug Administration to open a path to consumer AI medical devices, and clarified medical device regulation to "encourage AI services that support doctors."
JustAnotherGen
(37,587 posts)
To rugged individualism? Boot straps?
If they need Federal Tax dollars to survive? Then it's not a viable business.
BigMin28
(1,822 posts)
what those tax dollars did for Eloon.
highplainsdem
(60,076 posts)
for all the copyrighted intellectual property they stole to train their AI models. Both AI bros and the venture capitalists backing them have admitted they won't have a viable business if they have to pay for what they already stole. So they've been trying to get governments to change their laws on intellectual property.
And now Altman wants his company to be given the entire world's medical data...
purr-rat beauty
(979 posts)
It's not the wisest idea
Ms. Toad
(38,311 posts)
I'm sure the numbers are accurate.
I'm in a lot of communities which are very heavy consumers of medical services. At least once a day, in one of these communities, I see a suggestion to post the results of an MRI, or a pathology report, or something similar into ChatGPT for an interpretation.
No matter how often I explain that ChatGPT is designed to be conversational, not factual, they insist it is the same as doing a Google search.
For anyone who thinks it is the same - it's not. In a Google search, you get links to actual websites and can verify the reliability of the website as part of evaluating the information. Generally, those web pages link you to additional pages (whose reliability can be verified, etc.) In ChatGPT you get a word dump - with no clues as to where the information came from, or whether it was just made up out of thin air.
highplainsdem
(60,076 posts)
medical results, too. Very dangerous.
God, I hope Trump won't think turning over a lot of healthcare to OpenAI will be a way to pretend he's offering good healthcare.
Ms. Toad
(38,311 posts)
And while I've got a few medical communities where it is likely harmless - the bulk of these people have life-threatening illnesses - sarcoma, primary sclerosing cholangitis, and ulcerative colitis/Crohn's.
highplainsdem
(60,076 posts)
chatbots making mistakes change their minds about how much they can trust their answers?
Ms. Toad
(38,311 posts)
Doesn't seem to make a difference.
highplainsdem
(60,076 posts)
saw just this morning.
Ms. Toad
(38,311 posts)
to determine if it is accurate or not. It's just narrative with no references - or if it provides references, they may be entirely made up. So you are starting from ground zero - and have to fact check everything.
Unfortunately, anyone inclined to do that isn't likely inclined to take the AI shortcut anyway.