General Discussion
Grok turning antisemitic yesterday was disgusting. It also showed that generative AI shouldn't be trusted, period.
None of the chatbots a lot of people (even some DUers, unfortunately) are using are truly intelligent, or even truly aware of what they're churning out. They aren't thinking. They have no beliefs. They aren't your friends.
They're machines owned and operated by individuals and businesses whose values may sometimes appear to agree with yours, but who can't be depended on. OpenAI CEO Sam Altman, for instance, was once widely viewed as dependably liberal. Silicon Valley was once viewed as liberal. The tech bros lining up behind Trump should have made it clear the peddlers of generative AI are not on our side.
I've seen people here suggest that chatbots are better, or at least more convenient, than using search. No, they're not. Turning to them for answers not only leaves you dumbed down and vulnerable to being misled (people who turn to chatbots tend NOT to check their answers), but also deprives the websites those chatbots are trained on - usually without permission - of the traffic they need to continue to exist. AI summaries in search are destroying the internet as well as hurting AI users' ability to reason. The goal of those AI summaries or overviews is to stop the search right there - that's what's most profitable for the company offering that "convenient" AI tool.
Just as it's most profitable for them if people not only use their AI tools but become dependent on them. And those companies don't care what wreckage they leave in their wake, in terms of individual humans, economies and natural environments harmed.
Musk is the most extreme example of the tech bros behind generative AI not being on the side of liberals, or even humanity in general.
But it's been clear for some time that generative AI is inherently unethical, starting with the theft of the world's intellectual property, and continuing through all their fights against regulation and their lack of concern over the many harms resulting already from genAI. The AI companies all KNOW, too, that the tools they're peddling are flawed, and that artificial intelligence is a misnomer. But that hasn't stopped them from pushing people to use their tools and trying to make everyone dependent on them.
Stop falling for it.
CentralMass
(16,854 posts)
highplainsdem
(59,977 posts)
CaptainTruth
(8,046 posts)
hlthe2b
(112,794 posts)
clear scientific question -- like absorption times for a given medication, contraindications, or whether there is a recent study on a drug's efficacy against a new indication, for instance. But it does not know what to ignore -- the absolute BS theories that are out there -- with more "lay" questions, as would be the case with vaccine risks or recommendations. I really worry about how we will fight that.
highplainsdem
(59,977 posts)
I've seen countless screenshots of chatbots giving one wrong answer after another, typically apologizing after having given a wrong answer and assuring the user the new answer is correct, even though it isn't.
The way to fight that? Don't use chatbots. And tell others why they shouldn't use them.
Otherwise, we're heading very quickly for an idiocracy that's easily exploited by authoritarians.
cab67
(3,627 posts)
I once entered a set of longitude-latitude coordinates for a location in northern Kenya. It was a very simple query - where are these coordinates located?
The first answer put it in Mali, so I repeated the search. In fact, I repeated it about 20 times. It gave me 15 different results scattered between Senegal and the Gulf of Oman, and none of them was in Kenya.
Many of my colleagues have entered questions related to their own research, and the results sometimes don't pass a straight-face test.
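For what it's worth, the coordinate question above has a deterministic answer that a few lines of ordinary code can approximate. A minimal sketch in Python, using a rough bounding box for Kenya (the box values are my own approximations, for illustration only; a real lookup would use proper country polygons via reverse geocoding):

```python
# Rough sketch: test whether a latitude/longitude pair falls inside an
# approximate bounding box for Kenya. The box values below are rough
# approximations for illustration; real reverse geocoding uses country
# polygons, not rectangles.
KENYA_BBOX = {"lat_min": -4.7, "lat_max": 5.5, "lon_min": 33.9, "lon_max": 41.9}

def in_kenya_bbox(lat, lon):
    """Deterministic check: same input, same answer, every time."""
    return (KENYA_BBOX["lat_min"] <= lat <= KENYA_BBOX["lat_max"]
            and KENYA_BBOX["lon_min"] <= lon <= KENYA_BBOX["lon_max"])

print(in_kenya_bbox(3.1, 35.6))   # True: a point in northern Kenya
print(in_kenya_bbox(17.6, -4.0))  # False: a point in Mali
```

Crude as it is, it never scatters its answers between Senegal and the Gulf of Oman on a retry, which is the contrast the anecdote is drawing.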
hlthe2b
(112,794 posts)
a specific medical/scientific question that I know can be answered by a fairly recent study in JAMA, NEJM, or a pharmaceutical database, some AI is doing a better job retrieving the correct answer than it clearly is doing for less technical questions -- those for which there may be a lot of extraneous crap info posted somewhere.
I beg to differ that my answer was a universal absolutist one.
AI is at a stage where I don't trust it for anything. But, I do like to see what level of utility/accuracy it may have achieved in certain settings.
cab67
(3,627 posts)
Still - a better option is scholar.google.com. One can filter it by date to make sure the most recent studies come up.
My opinion, anyway.
hlthe2b
(112,794 posts)
PERIOD. You make it seem as though my simple observation was somehow a promotion of its use, though I clearly stated I was NOT. Anything but!
I can assure you I have decades of experience with the medical and scientific literature--both reading and publishing myself. I use the many decades-established search engines that are validated and capable of accurate, even international searches.
As to "Google Scholar," well, to each his/her own. As a reviewer of peer-reviewed published literature in the past for a leading journal, I find its accuracy mixed. As a curiosity, I entered "reputation of Google Scholar" in Google's OWN AI just now! Here is what it gave me back:
AI Overview
Google Scholar is generally considered a valuable tool for academic research, offering broad access to scholarly literature. However, its reputation is mixed, with some limitations regarding accuracy, consistency, and the lack of transparency in its search algorithms.
So, maybe use some other established, academic, noncommercial search tools too? Just saying...
allegorical oracle
(6,168 posts)
near-tragic when I see the AI material was word-for-word duplicated, with no credit or attribution, from the detailed sites I've visited.
Read a long piece examining Grok this a.m. It's scary as shit. Garbage in, garbage out.
highplainsdem
(59,977 posts)
to steal traffic as well as information from sites created by people who did the real work. They represent the ongoing theft of intellectual property, but even with continual scraping of the internet, they still often get things wrong. (Once upon a time, a few years ago, the most common excuse from AI companies for their chatbots' mistakes was that they didn't have recent or current access to the internet.)
CentralMass
(16,854 posts)Rape him.
https://share.newsbreak.com/dypmpr9p
LymphocyteLover
(9,347 posts)
highplainsdem
(59,977 posts)
Fla Dem
(27,421 posts)
Tesla billionaire Elon Musk vowed in June to retrain his AI chatbot Grok after getting frustrated with how it was answering user questions. When the artificial intelligence company xAI he co-founded released a new version of the AI chatbot over the weekend, Musk said it has been improved significantly and that users would notice a difference when they asked questions.
Users definitely noticed, as on July 8 Grok began repeatedly praising Adolf Hitler, using antisemitic phrases and attacking users with traditionally Jewish surnames. Users quickly shared screenshots of now-deleted responses, including Grok's conclusion that the Nazi leader would be the best choice to deal with "anti-white hate."
more.....
https://www.palmbeachpost.com/story/news/2025/07/09/grok-ai-elon-musk-xai-hitler-mechahitler-updates/84502468007/
ihaveaquestion
(4,398 posts)
"AI is just fancy autocomplete with a search engine behind it"
I can't remember who said it, but I'll try to find it and update this.
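The quote is loose but not far off in spirit: underneath, these systems predict likely next tokens. A toy sketch of bare-bones autocomplete in Python (my own illustration of the next-token idea; real LLMs use neural networks trained on vast corpora, not simple word counts):

```python
# Toy sketch of "fancy autocomplete": suggest the word that most often
# follows the current one in a training text. Illustration only; real
# LLMs learn statistical patterns at vastly greater scale, but the
# objective -- predict the next token -- is the same in kind.
from collections import Counter, defaultdict

def train(text):
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def complete(follows, word):
    """Return the most frequent successor of `word`, or None if unseen."""
    succ = follows.get(word)
    if not succ:
        return None
    return succ.most_common(1)[0][0]

model = train("the cat sat on the mat the cat ran")
print(complete(model, "the"))  # prints "cat": it follows "the" most often
```

No beliefs, no understanding, just counting and pattern-matching, which is the point the quote is making.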
LymphocyteLover
(9,347 posts)
allegorical oracle
(6,168 posts)
Norrrm
(3,986 posts)
sabbat hunter
(7,092 posts)
is a feature of that AI program, not a bug.
aka-chmeee
(1,225 posts)
Hugin
(37,420 posts)
obamanut2012
(29,166 posts)
As terrible as that is, it speedran straight into mocking, gloating awfulness about killing Jews.
SouthBayDem
(33,134 posts)
to bots are good at facts?
Tetrachloride
(9,379 posts)
purple_haze
(401 posts)
And returns results that are infinitely better
get the red out
This message was self-deleted by its author.
Initech
(107,410 posts)
What could possibly go wrong?
highplainsdem
(59,977 posts)
legislation.