Warning: If AI social media tools make a mistake, you're responsible
https://english.elpais.com/technology/2024-12-29/warning-if-ai-social-media-tools-make-a-mistake-youre-responsible.html
Platforms now include references to their generative artificial intelligence tools in their terms of service. They acknowledge that these tools may make errors, but place the responsibility on the user for the content they generate
PABLO G. BEJERANO
DEC 28, 2024 - 23:30 EST
Instagram and Facebook's terms of service will be updated on January 1, 2025. LinkedIn's terms of service were updated on November 20, 2024, X attempted to update its terms without prior notice, and other social networks are likely to follow suit. One common motivation for these changes is to incorporate frameworks for using the generative artificial intelligence (AI) tools specific to each platform.
This is not about using ChatGPT or Google Gemini to generate content and post it on social media. In this case, it is Instagram, Facebook, or LinkedIn themselves offering their own artificial intelligence systems. These tools are integrated into the platforms and easily accessible to users. However, the three social networks shift the responsibility to the user if they share content generated by their AI that is inaccurate or even offensive.
This is even though they admit that the answers offered by their generative AI programs may be wrong or misleading, an inherent issue with this type of technology. Meta's terms of service for Meta AI, present on Facebook and Instagram, state: "The accuracy of any content, including outputs, cannot be guaranteed and outputs may be disturbing or upsetting."
In LinkedIn's updated terms of use, the platform notes that content generated by its AI features might be "inaccurate, incomplete, delayed, misleading or not suitable for your purposes." It encourages users to review and edit the generated content before sharing, adding that "you are responsible" for ensuring it complies with its Professional Community Policies, including not sharing misleading information.
For Sara Degli-Esposti, a researcher at Spain's National Research Council (CSIC) and author of the book The Ethics of Artificial Intelligence, there is no doubt about the platforms' position. "This policy is along the lines of: we don't know what can go wrong, and anything that goes wrong is the user's problem. It's like telling them that they are going to be given a tool that they know may be defective."