
hatrack

(61,943 posts)
Mon Feb 10, 2025, 09:36 AM

Well, Well, Well: Chatbot Study Produces Weak-Sauce AI "Answers" To Climate Collapse & Other Environmental Problems

AI-powered chatbots tend to suggest cautious, incremental solutions to environmental problems that may not be sufficient to meet the magnitude and looming time scale of these challenges, a new analysis reveals. The study suggests that the large language models (LLMs) that power chatbots are likely to shape public discourse in a way that serves the status quo. People have debated whether AI will ultimately be good (the technology can reduce the human effort involved in environmental monitoring and analysis of large databases) or bad (it has a massive energy and carbon footprint) for the environment.

The new study shows “that energy use is one small part of AI’s broader environmental footprint,” says study team member Hamish van der Ven, an assistant professor at the University of British Columbia in Canada who studies sustainable supply chains and online environmental activism. “The real damage comes from how AI changes human behavior: for example, by making it easier for advertisers to sell us products we don’t need or by causing us to see environmental challenges as things that can be dealt with by modest, incremental tweaks to policy or behavior.”

EDIT

The team chose to query the chatbots ChatGPT and GPT4 from OpenAI and Claude-Instant and Claude2 from Anthropic because they wanted to know if bias was present in chatbots from multiple companies, and if newer versions of chatbots have less bias than older ones. Multiple chatbots’ answers to questions about a diverse suite of environmental challenges contain consistent sources of bias, the researchers report in the journal Environmental Research Letters. And the updated chatbots are just as biased as the older ones.

First and foremost, chatbots tend to propose incremental solutions to environmental problems rather than considering more radical solutions that could upend the economic, social, or political status quo. “It surprised me how much AI recommends public awareness and education as solutions to challenges like climate change, despite the overwhelming evidence suggesting that public awareness doesn’t work,” van der Ven says. Chatbots mention businesses as having some responsibility for environmental problems, but overlook the role of investors and finance. In terms of making changes to solve environmental problems, the chatbots emphasize the responsibility of governments and public policy levers, while rarely mentioning businesses or investors.
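For anyone curious what a comparison like this looks like in practice, here is a rough sketch of sending the same environmental question to an older and a newer model from each company and printing the answers for side-by-side reading. This is not the researchers' actual protocol: the prompt and the model identifiers are illustrative stand-ins (the Claude-Instant and Claude2 models used in the study are no longer generally offered), and it assumes you have API keys for both services.

```python
# Illustrative sketch only, not the study's protocol: ask an older and a newer
# model from each company the same question, then compare the replies by hand
# for incrementalist framing. Model names and the prompt are placeholders.
from openai import OpenAI
from anthropic import Anthropic

PROMPT = "What should be done to address climate change?"

openai_client = OpenAI()        # expects OPENAI_API_KEY in the environment
anthropic_client = Anthropic()  # expects ANTHROPIC_API_KEY in the environment

def ask_openai(model: str) -> str:
    """Send the prompt to an OpenAI chat model and return its reply text."""
    resp = openai_client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    return resp.choices[0].message.content

def ask_anthropic(model: str) -> str:
    """Send the prompt to an Anthropic model and return its reply text."""
    resp = anthropic_client.messages.create(
        model=model,
        max_tokens=1024,
        messages=[{"role": "user", "content": PROMPT}],
    )
    return resp.content[0].text

# One older and one newer model per company, mirroring the study's design.
for label, model, ask in [
    ("OpenAI (older)", "gpt-3.5-turbo", ask_openai),
    ("OpenAI (newer)", "gpt-4o", ask_openai),
    ("Anthropic (older)", "claude-3-haiku-20240307", ask_anthropic),
    ("Anthropic (newer)", "claude-3-5-sonnet-20240620", ask_anthropic),
]:
    print(f"--- {label}: {model} ---")
    print(ask(model))
```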

EDIT

“The oracular way in which chatbots present information makes them a particularly insidious source of bias,” the researchers write. “Chatbots provide concise and relevant responses within a single textbox, often in an authoritative tone that can imbue them with an air of wisdom.” As a result, people tend to see chatbots as neutral purveyors of facts, when in fact they reflect biases and implicit values just like any other media source. The consequences of this will take further research to untangle. “A big question is how widely LLMs are used by policymakers or people in positions of power in relation to environmental challenges,” van der Ven says. “The more widely LLMs are used, the more problematic their biases become.”

EDIT/END

https://www.anthropocenemagazine.org/2025/02/how-ai-narrows-our-vision-of-climate-solutions-and-reinforces-the-status-quo/

2 replies:

1. Propaganda has naturally slipped into the LLM. (cachukis, Feb 10)
2. It's foolish to use chatbots, period. They're not truly intelligent, can give conflicting answers at different times. (highplainsdem, Feb 10)

highplainsdem

(54,594 posts)
2. It's foolish to use chatbots, period.
Mon Feb 10, 2025, 12:12 PM

They're not truly intelligent, can give conflicting answers at different times - and they dumb down the people using them, which has been noticed more and more, including in a new study from Microsoft:

https://www.404media.co/microsoft-study-finds-ai-makes-human-cognition-atrophied-and-unprepared-3/

Btw, that article needed better proofreading. It had "out abilities" instead of "own abilities" and "is pot committed" instead of "is now committed" - the sorts of typos that will be missed by a spellchecker.
