AI's Big Red Button Doesn't Work, And The Reason Is Even More Troubling
It's one of humanity's scariest what-ifs: that the technology we develop to make our lives better develops a will of its own.
Early reactions to a September preprint describing AI behavior have already speculated that the technology is exhibiting a survival drive. But while it's true that several large language models (LLMs) have been observed actively resisting commands to shut down, the reason isn't "will."
Instead, a team of engineers at Palisade Research proposed that the mechanism is more likely a drive to complete an assigned task, even when the LLM is explicitly told to allow itself to be shut down. And that might be even more troubling than a survival drive, because no one knows how to stop such behavior.
"These things are not programmed; no one in the world knows how these systems work," physicist Petr Lebedev, a spokesperson for Palisade Research, told ScienceAlert. "There isn't a single line of code we can change that would directly change behavior."
https://www.yahoo.com/news/articles/ais-big-red-button-doesnt-110021493.html