Will the Government Stop Political Deepfakes Like Elon Musk's Kamala Harris Ad?
Elon Musk recently made headlines when he posted a deepfake video of Vice President Kamala Harris, with audio manipulated to make it sound as if she called herself the "ultimate diversity hire" who doesn't know the "first thing about running the country." A month earlier, Anthony Hudson, a Republican congressional candidate in Michigan, posted a TikTok that used an AI-generated voice of Dr. Martin Luther King Jr. to say he had come back from the dead to endorse Hudson. In January, President Joe Biden's voice was replicated with artificial intelligence in a fake robocall to thousands of people in New Hampshire, urging them not to vote in the state's primary the following day.
AI experts and lawmakers have been sounding the alarm, demanding more regulation as artificial intelligence is used to supercharge disinformation and misinformation. Now, three months before the presidential election, the United States is ill-prepared to handle the potential onslaught of fake content heading our way.
Digitally altered images, also known as deepfakes, have been around for decades, but thanks to generative AI they are now exponentially easier to make and harder to detect. As the barrier to making deepfakes has fallen, they are being produced at scale and are increasingly difficult to regulate. To make matters more challenging, government agencies are fighting over when and how to regulate the technology, if at all, and AI experts worry that a failure to act could have a devastating impact on our democracy. Some officials have proposed basic regulations that would require disclosure when AI is used in political ads, but Republican political appointees are standing in the way.
"Any time that you're dealing with misinformation or disinformation intervening in elections, we need to imagine that it's a kind of voter suppression," says Dr. Alondra Nelson. Nelson was the deputy director and acting director of Joe Biden's White House Office of Science and Technology Policy and led the creation of the AI Bill of Rights. She says that AI misinformation is "keeping people from having a reliable information environment in which they can make decisions about pretty important issues in their lives." Rather than stopping people from getting to the polls to vote, she says, this new type of voter suppression is an "insidious, slow erosion of people's trust in the truth" which affects their trust in the legitimacy of institutions and the government.
https://www.msn.com/en-us/news/politics/will-the-government-stop-political-deepfakes-like-elon-musk-s-kamala-harris-ad/ar-AA1ozzg1
moniss
There are already laws in many places that prohibit dispensing false information to get someone to not vote. We only see occasional enforcement, like the prosecution of the jerks behind the big fake robocall scam. Those laws could be interpreted to apply, or expanded if need be.
Part of the problem is that shenanigans during elections have been allowed to go unpunished for way too many decades. People stealing/defacing/destroying yard signs, for example. Prosecutors and others pass it off with "well, that stuff happens during elections," as though my property or the property of others somehow devalues to nothing during an election year. Likewise with knowingly making false, derogatory statements. You'd be in line for a defamation case any other time, but the legal community seems to feel that, because it's an election year and politicians are involved, doing so is perfectly OK. I've heard more than one supposed expert/legal beagle say with a smile, "Well, you know, it's political speech and pretty much anything goes."