
General Discussion


cbabe

(6,189 posts)
Mon Dec 30, 2024, 09:38 AM

Warning: If AI social media tools make a mistake, you're responsible

https://english.elpais.com/technology/2024-12-29/warning-if-ai-social-media-tools-make-a-mistake-youre-responsible.html

Warning: If AI social media tools make a mistake, you’re responsible

Platforms now include references to their generative artificial intelligence tools in their terms of service. They acknowledge that these tools may make errors, but place responsibility on the user for the content generated.

PABLO G. BEJERANO
DEC 28, 2024 - 23:30 EST

Instagram and Facebook’s terms of service will be updated on January 1, 2025. LinkedIn’s terms of service were updated on November 20, 2024, X attempted to update its terms without prior notice, and other social networks are likely to follow suit. One common motivation for these changes is to incorporate frameworks for using generative artificial intelligence (AI) tools specific to each platform.

This is not about using ChatGPT or Google Gemini to generate content and post it on social media. In this case, it is Instagram, Facebook, or LinkedIn themselves offering their own artificial intelligence systems. These tools are integrated into the platforms and easily accessible to users. However, the three social networks shift the responsibility to the user if they share content generated by their AI that is inaccurate or even offensive.

They do this even while admitting that the answers offered by their generative AI programs may be wrong or misleading, an issue inherent to this type of technology. Meta’s terms of service for Meta AI, present on Facebook and Instagram, state: “The accuracy of any content, including outputs, cannot be guaranteed and outputs may be disturbing or upsetting.”

In LinkedIn’s updated terms of use, the platform notes that content generated by its AI features “might be inaccurate, incomplete, delayed, misleading or not suitable for your purposes.” It encourages users to review and edit the generated content before sharing, adding that “you are responsible for ensuring it complies with our Professional Community Policies, including not sharing misleading information.”

For Sara Degli-Esposti, a researcher from Spain’s National Research Council (CSIC) and author of the book The Ethics of Artificial Intelligence, there is no doubt about the platforms’ position. “This policy is along the lines of: ‘we don’t know what can go wrong, and anything that goes wrong is the user’s problem.’ It’s like telling them that they are going to be given a tool that they know may be defective.”

… more …

(Why tech is groveling at t's knees … indemnify self against legal battles)