Environment & Energy
Well, Well, Well: Chatbot Study Produces Weak-Sauce AI "Answers" To Climate Collapse & Other Environmental Problems
AI-powered chatbots tend to suggest cautious, incremental solutions to environmental problems that may not be sufficient to meet the magnitude and looming time scale of these challenges, a new analysis reveals. The study suggests that the large language models (LLMs) that power chatbots are likely to shape public discourse in a way that serves the status quo. People have debated whether AI will ultimately be good (the technology can reduce the human effort involved in environmental monitoring and analysis of large databases) or bad (it has a massive energy and carbon footprint) for the environment.
The new study shows that energy use is one small part of AI's broader environmental footprint, says study team member Hamish van der Ven, an assistant professor at the University of British Columbia in Canada who studies sustainable supply chains and online environmental activism. The real damage comes from how AI changes human behavior: for example, by making it easier for advertisers to sell us products we don't need, or by causing us to see environmental challenges as things that can be dealt with by modest, incremental tweaks to policy or behavior.
EDIT
The team chose to query the chatbots ChatGPT and GPT-4 from OpenAI and Claude Instant and Claude 2 from Anthropic because they wanted to know if bias was present in chatbots from multiple companies, and if newer versions of chatbots have less bias than older ones. Multiple chatbots' answers to questions about a diverse suite of environmental challenges contain consistent sources of bias, the researchers report in the journal Environmental Research Letters. And the updated chatbots are just as biased as the older ones. First and foremost, chatbots tend to propose incremental solutions to environmental problems rather than considering more radical solutions that could upend the economic, social, or political status quo. "It surprised me how much AI recommends public awareness and education as solutions to challenges like climate change, despite the overwhelming evidence suggesting that public awareness doesn't work," van der Ven says. Chatbots mention businesses as having some responsibility for environmental problems, but overlook the role of investors and finance. In terms of making changes to solve environmental problems, the chatbots emphasize the responsibility of governments and public policy levers, while rarely mentioning businesses or investors.
EDIT
The oracular way in which chatbots present information makes them a particularly insidious source of bias, the researchers write. Chatbots provide concise and relevant responses within a single textbox, often in an authoritative tone that can imbue them with an air of wisdom. As a result, people tend to see chatbots as neutral purveyors of facts, when in fact they reflect biases and implicit values just like any other media source. The consequences of this will take further research to untangle. "A big question is how widely LLMs are used by policymakers or people in positions of power in relation to environmental challenges," van der Ven says. The more widely LLMs are used, the more problematic their biases become.
EDIT/END
https://www.anthropocenemagazine.org/2025/02/how-ai-narrows-our-vision-of-climate-solutions-and-reinforces-the-status-quo/

cachukis
(2,923 posts)

highplainsdem
(54,594 posts)
different times - and they dumb down the people using them, which has been noticed more and more, including in a new study from Microsoft:
https://www.404media.co/microsoft-study-finds-ai-makes-human-cognition-atrophied-and-unprepared-3/
Btw, that article needed better proofreading. It had "out abilities" instead of "own abilities" and "is pot committed" instead of "is now committed" - the sorts of typos a spellchecker will miss.