General Discussion
Warning: If AI social media tools make a mistake, you're responsible
https://english.elpais.com/technology/2024-12-29/warning-if-ai-social-media-tools-make-a-mistake-youre-responsible.html
Platforms now include references to their generative artificial intelligence tools in their terms of service. They acknowledge that these tools may make errors, but place responsibility on the user for the content the tools generate.
PABLO G. BEJERANO
DEC 28, 2024 - 23:30 EST
Instagram and Facebook's terms of service will be updated on January 1, 2025. LinkedIn's terms of service were updated on November 20, 2024, X attempted to update its terms without prior notice, and other social networks are likely to follow suit. One common motivation for these changes is to incorporate frameworks for using generative artificial intelligence (AI) tools specific to each platform.
This is not about using ChatGPT or Google Gemini to generate content and post it on social media. In this case, it is Instagram, Facebook, or LinkedIn themselves offering their own artificial intelligence systems. These tools are integrated into the platforms and easily accessible to users. However, the three social networks shift the responsibility to the user if they share content generated by their AI that is inaccurate or even offensive.
They do so even though they admit that the answers offered by their generative AI programs may be wrong or misleading, an inherent issue with this type of technology. Meta's terms of service for Meta AI, present on Facebook and Instagram, state: "The accuracy of any content, including outputs, cannot be guaranteed and outputs may be disturbing or upsetting."
In LinkedIn's updated terms of use, the platform notes that content generated by its AI features "might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes." It encourages users to review and edit the generated content before sharing, adding that "you are responsible for ensuring it complies with our Professional Community Policies, including not sharing misleading information."
For Sara Degli-Esposti, a researcher at Spain's National Research Council (CSIC) and author of the book The Ethics of Artificial Intelligence, there is no doubt about the platforms' position. "This policy is along the lines of: we don't know what can go wrong, and anything that goes wrong is the user's problem. It's like telling them that they are going to be given a tool that they know may be defective."
(Why tech is groveling at t's knees: indemnify self against legal battles)
Ferryboat
(1,063 posts)
Widespread adoption of AI that generates false information or images seems like a recipe for disaster.
Passing the responsibility for false content onto others just shows how reliable it isn't. Sloppy results add confusion and uncertainty to an already messed-up information environment.
Big Tech is out to make money, consequences be damned.
cbabe
(4,333 posts)
highplainsdem
(52,903 posts)
and people have been made most dependent on.
And of course people's use of AI adds immensely to the data these companies gather, and allows the companies to manipulate users of their AI tools.
But the companies still want users to be legally responsible for any problems AI causes.
highplainsdem
(52,903 posts)
Microsoft made all users of its AI assume legal liability in August of 2023
https://www.democraticunderground.com/100218171203
and then in September of 2023 offered to shield paying customers (since business customers didn't want to be liable)
https://www.democraticunderground.com/100218261575
but IMO with wording giving MS wiggle room to argue the customer was still responsible for harm the AI caused because they didn't use the AI properly.
Ms. Toad
(35,682 posts)
If you use a tool, you are ultimately responsible for what you do with it. No big surprise.
dickthegrouch
(3,618 posts)
Sounds like standard software EULAs, which have forever denied any vendor responsibility for failures. Fitness for purpose is specifically denied, as is any responsibility for damage to data.
It's disgusting, and there is a movement to get better protections and annul some of the worst language, but it's taken a long time. Furiously fought, of course, by the software industry.
21 CFR Part 11 has some long-awaited, interesting clauses about that. I hope they get adopted by many more agencies.
Unladen Swallow
(133 posts)
I've seen what AI can do in the medical diagnostics arena, and I must say, as much as I am not a fan of/fear AI, the stuff I have seen is impressive. Its ability to accumulate, digest, sort, and then sift through *huge* amounts of data in an instant is simply mind-blowing.