
highplainsdem

(60,076 posts)
Mon Jan 5, 2026, 08:29 PM

Money-losing OpenAI wants to make money from healthcare - wants the WORLD'S medical data, AND federal support

And don't worry about chatbots giving bad medical advice. OpenAI is offering to "solve" what it considers the problem with US healthcare - lack of AI.

And btw, I would not trust OpenAI's own reported numbers on how many healthcare questions ChatGPT gets. They've been known to lie, and they're desperate right now. See this story in LBN: https://www.democraticunderground.com/10143593477

From Gizmodo:

More Than 40 Million People Use ChatGPT Daily for Healthcare Advice, OpenAI Claims
OpenAI wants ChatGPT to be a bigger player in healthcare. First, they need regulators to play ball.

By Ece Yildirim
Published January 5, 2026

ChatGPT users around the world send billions of messages every week asking the chatbot for healthcare advice, OpenAI shared in a new report on Monday. Roughly 200 million of ChatGPT’s more than 800 million regular users submit a prompt about healthcare every single week, and more than 40 million do so every single day.

-snip-

Many people think Health AI is a promising field with a lot of potential to ease the burden on medical workers. But it’s also contentious, because AI is prone to mistakes. While a hallucinated response can be an annoying hurdle in many other areas of use, in healthcare, it could have the potential to be a life-or-death matter.

These AI-driven risks are not confined to the world of hypotheticals. According to a report from August 2025, a 60-year-old with no past psychiatric or medical history was hospitalized due to bromide poisoning after following ChatGPT’s recommendation to take the supplement. As the tech stands today, no one should use a chatbot to self-diagnose or treat a medical condition, full stop.

-snip-

OpenAI’s latest report seems to be their own attempt at putting a comment on the public record. The company pairs its findings with sample policy concepts, like asking for full access to the world’s medical data and a clearer regulatory pathway to make AI-infused medical devices for consumer use.

-snip-



From British tech paper The Register:

https://www.theregister.com/2026/01/05/chatgpt_playing_doctor_openai/

ChatGPT is playing doctor for a lot of US residents, and OpenAI smells money
One man's failing healthcare system is another man's opportunity

Brandon Vigliarolo
Mon 5 Jan 2026 // 21:00 UTC

-snip-

None of those data points actually get to the point of how often ChatGPT could be wrong in critical healthcare situations, however.

What does that matter to OpenAI, though, when there's potentially heaps of money to be made on expanding in the medical industry? The report seems to conclude that its increasingly large role in the US healthcare industry, again, isn't an indictment of a failing system as much as it is the inevitable march of technological progress, and included several "policy concepts" that it said are a preview of a full AI-in-healthcare policy blueprint it intends to publish in the near future.

Leading the recommendations, naturally, is a call for opening and securely connecting publicly funded medical data so OpenAI's AI can "learn from decades of research at once."

OpenAI is also calling for new infrastructure to be built out that incorporates AI into medical wet labs, support for helping healthcare professionals transition into being directly supported by AI, new frameworks from the US Food and Drug Administration to open a path to consumer AI medical devices, and clarified medical device regulation to "encourage … AI services that support doctors."

JustAnotherGen

(37,587 posts)
1. Whatever happened
Mon Jan 5, 2026, 08:36 PM

To rugged individualism? Boot straps?

If they need Federal Tax dollars to survive? Then it's not a viable business.

highplainsdem

(60,076 posts)
7. They not only want federal tax dollars, they want governments to ensure they'll never have to pay
Mon Jan 5, 2026, 11:47 PM

for all the copyrighted intellectual property they stole to train their AI models. Both AI bros and the venture capitalists backing them have admitted they won't have a viable business if they have to pay for what they already stole. So they've been trying to get governments to change their laws on intellectual property.

And now Altman wants his company to be given the entire world's medical data...

Ms. Toad

(38,311 posts)
4. From the number of people I've talked to using ChatGPT this way -
Mon Jan 5, 2026, 10:34 PM

I'm sure the numbers are accurate.

I'm in a lot of communities which are very heavy consumers of medical services. At least once a day, in one of these communities, I see a suggestion to post the results of an MRI, or a pathology report, or something similar into ChatGPT for an interpretation.

No matter how often I explain that ChatGPT is designed to be conversational, not factual, they insist it is the same as doing a Google search.

For anyone who thinks it is the same - it's not. In a Google search, you get links to actual websites and can verify the reliability of each website as part of evaluating the information. Generally, those web pages link you to additional pages (whose reliability can be verified in turn, etc.). In ChatGPT you get a word dump - with no clues as to where it got the information, or whether it just made it up out of thin air.

highplainsdem

(60,076 posts)
5. It's really scary that they're doing that. Musk has encouraged Twitter users to use Grok to interpret
Mon Jan 5, 2026, 11:30 PM

medical results, too. Very dangerous.

God, I hope Trump won't think turning over a lot of healthcare to OpenAI will be a way to pretend he's offering good healthcare.

Ms. Toad

(38,311 posts)
6. I know -
Mon Jan 5, 2026, 11:39 PM

And while I've got a few medical communities where it is likely harmless - the bulk of these people have life-threatening illnesses - sarcoma, primary sclerosing cholangitis, and ulcerative colitis/Crohn's.

highplainsdem

(60,076 posts)
8. Do these people realize at all that chatbots can hallucinate? Would showing them video of
Tue Jan 6, 2026, 12:03 AM

chatbots making mistakes change their minds about how much they can trust their answers?

highplainsdem

(60,076 posts)
13. Even though they aren't listening yet, I still want to point out some info in reply 11, from tweets I
Tue Jan 6, 2026, 03:21 PM

saw just this morning.

Ms. Toad

(38,311 posts)
14. The biggest point I try to get across is that running it through AI removes all clues you might follow
Tue Jan 6, 2026, 04:21 PM

to determine whether it is accurate. It's just narrative with no references - or if it provides references, they may be entirely made up. So you are starting from ground zero and have to fact-check everything.

Unfortunately, anyone inclined to do that isn't likely inclined to take the AI shortcut anyway.
