In reply to the discussion: A.I. Is Getting More Powerful, but Its Hallucinations Are Getting Worse

PurgedVoter (2,685 posts)
These processes are running on lossy logic. To do the calculations, they use systems based on graphics processors. Graphics processors are strange: they draw triangles easily, while squares slow them down considerably. They work with numbers normalized between 0 and 1, rounding in ways that are hard to explain. They do massively parallel calculations quickly, a power that is the basis of our new and seemingly magical computer age. It is also a flawed power, built on dropping a lot of data and simply moving on to the next calculation. All of this is low-level and baked into the chips. It lets them do things that are new and amazing, but that new and amazing comes at a potential cost: accuracy.
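To make the rounding point concrete, here is a minimal Python sketch (illustrative values only, nothing from a real GPU pipeline) of how 16-bit floats, a precision commonly used for AI math on GPUs, simply round small contributions away:

```python
import numpy as np

# 16-bit floats have so few bits that small additions can vanish outright.
a = np.float16(1.0)
b = np.float16(0.0001)
print(a + b)                           # prints 1.0 -- the 0.0001 was rounded away

# The gap between representable values grows with magnitude:
print(np.spacing(np.float16(1.0)))     # ~0.000977
print(np.spacing(np.float16(1000.0)))  # ~0.5 -- fine detail at this scale is gone
```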
Graphics processors do amazing work, but it is lossy work. In other words, if the system estimates that you won't see something, it doesn't draw it; it drops that data to speed things up. When you are doing calculations without a grasp of meaning, and AI is going to be hard-pressed to grasp meaning, then dropping data that seems meaningless means that, across millions of calculations, errors can accumulate and produce artifacts of "knowledge" that do not exist in your sources. This can quickly compound into "hallucinations."
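Here is a minimal sketch of that accumulation (illustrative only; numpy stands in for the GPU's arithmetic): add 0.1 ten million times at 32-bit precision, and each individual rounding error is tiny, but the running total drifts visibly away from the exact answer:

```python
import numpy as np

# Accumulate 0.1 ten million times in 32-bit floats, rounding at every step.
n = 10_000_000
vals = np.full(n, 0.1, dtype=np.float32)

running = np.cumsum(vals)   # sequential accumulation in float32
print(running[-1])          # visibly off from the true total
print(n * 0.1)              # 1000000.0, the exact answer for comparison
```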
While a few glitched dots in a fast-moving game have little effect on the amazing images produced, when images are generated from less and less basic input, and lower levels of basic logic, they can degrade quickly. When the AI is working with meaningful text, sadly, the lack of a real grounding can let small rounding errors turn into insane creations with no basis at all. AI can do great work, but it has no internal understanding with which to rule out insane results.
If you ask for an image of a "lady sitting with crossed legs," the odds are quite high that you will get legs that don't connect, or legs that join at one knee with an extra leg thrown in under that double knee. This shows that the AI graphics system has no real comprehension of structure or physics; it draws pictures and makes assumptions. You might get a functional and beautiful image, but one leg or three legs will be almost as common as two. Take out the crossed legs and you will get much better results, but when you ask a text question, you probably don't have a clue what the equivalent of crossed legs is for a text-generating AI.
If you use this as a comparison for how text AI works, you will find your answer. AI can give you great answers, but you need to double-check them just in case. Because AI is organized a bit differently than we are, it can surface things you might not have seen; it can be very useful. It is also likely to fail dramatically, for the same reasons that images of hands and faces glitch so easily. AI does not exist in the same sort of environment that we do, and meaning for it is not the same as meaning for us.
There is another issue that could cause a lot of AI problems. As AI gets more common, AI will base more of its decisions on what previous AI came up with. If it uses the same sort of logic, the flaws that made sense to a previous AI are likely to be taken as good data. Call it confirmation bias. Confirmation bias messes up human logic all the time, and I expect it will end up as a very big issue for AI calculations.
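A toy simulation shows how fast that feedback loop can degrade things. This is a pure illustration, not a real training setup: each "generation" fits a trivially simple model (just a mean and a spread) to samples from the previous generation, the kind of degeneration researchers have started calling "model collapse":

```python
import numpy as np

# Toy feedback loop: each generation learns only from the previous one's output.
rng = np.random.default_rng(42)

data = rng.normal(0.0, 1.0, size=20)      # the original human-made data
mean, std = data.mean(), data.std()

for generation in range(200):
    synthetic = rng.normal(mean, std, size=20)     # output of the previous "model"
    mean, std = synthetic.mean(), synthetic.std()  # the next "model" fits to it

print(f"mean={mean:.3f}, std={std:.3f}")
# The spread typically collapses and the mean drifts: quirks of earlier
# generations get treated as ground truth instead of being corrected.
```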