
PurgedVoter

(2,685 posts)
14. I can probably give you the reason for the hallucinations.
Wed Jul 9, 2025, 10:18 PM

These processes run on lossy logic. To do their calculations they use systems built around graphics processors. Graphics processors are strange: they draw triangles easily, while squares slow them down a lot. They work with numbers that are often normalized between 0 and 1 and rounded in ways that are hard to explain. They do massively parallel calculations quickly, a power that is the basis of our new and magical computer age. It is also a flawed power, based on dropping a lot of data and just moving on to the next calculation. All of this is low level and built into the chips. It lets them do things that are new and amazing, but that new and amazing power comes with a potential cost: accuracy.

Graphics processors do amazing work, but it is lossy work. In other words, if the system estimates that you will never see something, it doesn't draw it; it drops that data in order to speed the system up. When you are doing calculations without any grasp of meaning, and AI is hard put to grasp meaning, then dropping data that seems to have no meaning means that, across millions of calculations, small errors can accumulate into artifacts of "knowledge" that do not exist in your sources of knowledge. This can quickly compound into "hallucinations."
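
Here is a minimal sketch of the kind of error build-up described above, in Python with NumPy. It is my own illustration, not anything from the post, and real AI systems use various tricks to limit this, so treat it only as a picture of the mechanism:

import numpy as np

# Add 0.01 ten thousand times. In full 64-bit precision the answer is
# essentially 100. In 16-bit "half" precision, each addition gets rounded,
# and once the running total is large enough the small increment rounds
# away to nothing, so the sum stalls far short of 100.
full_precision = np.float64(0.0)
half_precision = np.float16(0.0)
for _ in range(10_000):
    full_precision += np.float64(0.01)
    half_precision = np.float16(half_precision + np.float16(0.01))

print(full_precision)   # very close to 100.0
print(half_precision)   # stalls far short of 100 with ordinary round-to-nearest

No single step here is dramatically wrong; the damage comes from many small roundings steering the result somewhere the inputs never pointed.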

While a few glitched dots in a fast-moving game have little effect on the amazing images produced, images made with less and less basic input, and lower levels of basic logic, can degrade quickly. When the AI is working with meaningful text, sadly, the lack of any real grounding lets small rounding errors turn into insane creations with no basis. AI can do great work, but it has no internal understanding with which to rule out insane results.

If you ask for an image of a "lady sitting with crossed legs," the odds are quite high that you will get legs that don't connect, or legs that join at one knee with an extra leg thrown in under that double knee. This shows that the AI graphics system has no real comprehension of structure or physics. It draws pictures and makes assumptions. You might get a functional and beautiful image, but one leg or three legs will be almost as common as two. Take out the crossed legs and you will get much better results, but when you ask a question, you probably don't have a clue what the equivalent of crossed legs is for a text-generating AI.

If you use this as a comparison for how text AI works, you will find your answer. AI can give you great answers, which you need to double-check just in case. Because AI is organized a bit differently than we are, it can bring out things you might not have seen. It can be very useful. It is also liable to fail dramatically, for the same reasons that images of hands and faces glitch so easily. AI does not exist in the same sort of environment that we do, and meaning for it is not the same as meaning for us.

There is another issue that could cause a lot of AI problems. As AI gets more common, AI will base more of its decisions on what previous AI came up with. If it uses the same sort of logic, the flaws that made sense to a previous AI are likely to be taken as good data. Call it confirmation bias. Confirmation bias messes up human logic all the time; I expect it will end up as a very big issue for AI calculations as well.
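
A toy sketch of that feedback loop, again in Python with NumPy and again purely my own illustration (the numbers and the crude Gaussian "model" are made up for the example): each generation of the model is fit only to the previous generation's output, and with no fresh human-made data the picture it carries forward tends to drift and narrow.

import numpy as np

rng = np.random.default_rng(seed=1)
data = rng.normal(loc=0.0, scale=1.0, size=50)    # generation 0: "real" data

for generation in range(1, 21):
    mean, spread = data.mean(), data.std()        # fit a crude model to what we have
    data = rng.normal(mean, spread, size=50)      # next generation learns only from model output
    print(generation, round(float(mean), 3), round(float(spread), 3))

Run it and the estimated center wanders while the estimated spread usually shrinks over the generations, even though every individual step looks perfectly reasonable. That is the confirmation-bias loop in miniature.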
