
cbabe

(4,333 posts)
Mon Dec 30, 2024, 09:38 AM

Warning: If AI social media tools make a mistake, you're responsible

https://english.elpais.com/technology/2024-12-29/warning-if-ai-social-media-tools-make-a-mistake-youre-responsible.html

Warning: If AI social media tools make a mistake, you’re responsible

Platforms now include references to their generative artificial intelligence tools in their terms of service. They acknowledge that these tools may make errors, but place the responsibility on the user for the content they generate

PABLO G. BEJERANO
DEC 28, 2024 - 23:30 EST

Instagram and Facebook’s terms of service will be updated on January 1, 2025. LinkedIn’s terms of service were updated on November 20, 2024, X attempted to update its terms without prior notice, and other social networks are likely to follow suit. One common motivation for these changes is to incorporate frameworks for using generative artificial intelligence (AI) tools specific to each platform.

This is not about using ChatGPT or Google Gemini to generate content and post it on social media. In this case, it is Instagram, Facebook, or LinkedIn themselves offering their own artificial intelligence systems. These tools are integrated into the platforms and easily accessible to users. However, the three social networks shift the responsibility to the user if they share content generated by their AI that is inaccurate or even offensive.

This is even though they admit that the answers offered by their generative AI programs may be wrong or misleading, an inherent issue with this type of technology. Meta’s terms of service for Meta AI, present on Facebook and Instagram, state: “The accuracy of any content, including outputs, cannot be guaranteed and outputs may be disturbing or upsetting.”

In LinkedIn’s updated terms of use, the platform notes that content generated by its AI features “might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes.” It encourages users to review and edit the generated content before sharing, adding that “you are responsible for ensuring it complies with our Professional Community Policies, including not sharing misleading information.”

For Sara Degli-Esposti, a researcher from Spain’s National Research Council (CSIC) and author of the book The Ethics of Artificial Intelligence, there is no doubt about the platforms’ position. “This policy is along the lines of: ‘we don’t know what can go wrong, and anything that goes wrong is the user’s problem.’ It’s like telling them that they are going to be given a tool that they know may be defective.”

… more …

(Why tech is groveling at t's knees… indemnify self against legal battles)
7 replies

Ferryboat

(1,063 posts)
1. Why is everyone so eager to incorporate AI when it generates false information?
Mon Dec 30, 2024, 11:15 AM

Widespread adoption of AI that generates false information or images seems like a recipe for disaster.

Passing the responsibility for false content onto others just shows how unreliable it is. Sloppy results add confusion and uncertainty to an already messed-up information environment.

Big Tech is out to make money, consequences be damned.

highplainsdem

(52,903 posts)
4. Yes. It's all about money, power and market share. A race to have the one AI tool that's used most widely
Mon Dec 30, 2024, 11:32 AM

and people have been made most dependent on.

And of course people's use of AI adds immensely to the data these companies gather, and allows the companies to manipulate users of their AI tools.

But the companies still want users to be legally responsible for any problems AI causes.

highplainsdem

(52,903 posts)
2. Yes. They're following the lead of other tech companies peddling AI that did this much earlier. Microsoft
Mon Dec 30, 2024, 11:27 AM

made all users of its AI assume legal liability in August of 2023

https://www.democraticunderground.com/100218171203

and then in September of 2023 offered to shield paying customers (since business customers didn't want to be liable)

https://www.democraticunderground.com/100218261575

but IMO with wording giving MS wiggle room to argue the customer was still responsible for harm the AI caused because they didn't use the AI properly.

Ms. Toad

(35,682 posts)
5. Just like any other tool.
Mon Dec 30, 2024, 11:35 AM

If you use a tool, you are ultimately responsible for what you do with it. No big surprise.

dickthegrouch

(3,618 posts)
6. No different from all software licenses
Mon Dec 30, 2024, 12:34 PM

Which have forever denied any vendor responsibility for failures. Fitness for purpose is specifically disclaimed, as is any responsibility for damage to data.
It's disgusting. There is a movement to get better protections and annul some of the worst language, but it's taken a long time. Furiously fought, of course, by the software industry.

21 CFR Part 11 has some long-awaited, interesting clauses about that. I hope they get adopted by many more agencies.

Unladen Swallow

(133 posts)
7. I've been researching the use of AI
Mon Dec 30, 2024, 05:22 PM

in the medical diagnostics arena, and I must say, as much as I am not a fan of/fear AI, the stuff I have seen is impressive. Its ability to accumulate, digest, sort, and then sift through *huge* amounts of data in an instant is simply mind-blowing.
