Microsoft, Copyright Office, and Lawmakers: Make Deepfakes Illegal

MONews

Artificial intelligence seems to be everywhere these days, doing good things, like helping doctors detect cancer, and bad things, like helping scammers deceive unsuspecting victims. On Wednesday, a day after Microsoft said the U.S. needs new laws to hold accountable those who abuse AI, the U.S. Copyright Office released the first part of a report on the legal and policy issues surrounding copyright and artificial intelligence, particularly deepfakes.

The report recommends that Congress enact a new federal law to protect people from the knowing distribution of unauthorized digital replicas of their voice or likeness, and offers recommendations on how such a law should be crafted.

“We believe there is an urgent need for effective nationwide protection against harm to reputation and livelihood,” said Shira Perlmutter, Register of Copyrights and Director of the U.S. Copyright Office. “We look forward to working with Congress as they consider our recommendations and evaluate future developments.”


The government report will be published in several parts, with later sections addressing copyright issues related to AI-generated material, the legal implications of training AI models based on copyrighted works, licensing considerations, and allocation of potential liability.

Microsoft calls for regulation

In a blog post Tuesday, Microsoft said U.S. lawmakers should pass a “comprehensive deepfake fraud statute” targeting criminals who use AI technology to manipulate or steal from ordinary Americans.

“AI-generated deepfakes are realistic, easy for almost anyone to make, and increasingly used for fraud, abuse, and manipulation—especially targeting children and the elderly,” Microsoft President Brad Smith wrote. “The biggest risk is not that the world does too much to address these problems, but that the world does too little.”

Microsoft’s regulatory appeal comes as AI tools proliferate across the tech industry, making it increasingly easy for criminals to gain victims’ trust. Many of these schemes abuse legitimate technologies designed to help people write messages, do research for projects, and create websites and images. In the hands of fraudsters, these same tools can generate fake forms and convincing-looking websites that deceive users and steal their money or information.

“The private sector has a responsibility to innovate and implement safeguards to prevent misuse of AI,” Smith wrote. But he said governments should also establish policies that “promote responsible development and use of AI.”

Already behind

It’s only been a few years since AI chatbot tools from Microsoft, Google, Meta, and OpenAI became widely available for free, but there’s already a ton of data on how they’re being abused by criminals.

Earlier this year, an AI-generated pornographic video of global music star Taylor Swift spread “like wildfire” online, garnering more than 45 million views on X, according to a February report from the National Sexual Violence Resource Center.

“Deepfake software was not explicitly designed to create sexual images and videos, yet it has become one of the most commonly used software tools today,” the group wrote. But despite widespread recognition of the problem, the group notes, “there are few legal remedies available to victims of deepfake pornography.”


Meanwhile, a report from the Identity Theft Resource Center this summer found that scammers are increasingly using AI to create fake job postings as a new way to steal people’s identities.

“The rapid improvement in the look, feel and messaging of identity fraud is almost certainly a result of the introduction of AI-based tools,” the ITRC wrote in its June trend report.

This is in addition to the rapid spread of AI-manipulated online posts that seek to tear apart our shared understanding of reality. One recent example came in early July, shortly after the assassination attempt on former President Donald Trump, when manipulated photos spread online depicting Secret Service agents smiling as Trump was rushed to safety. In the original photo, the agents have neutral expressions.

Last week, X owner Elon Musk shared a video using a cloned voice of Vice President and Democratic presidential candidate Kamala Harris to insult President Joe Biden and call Harris a “diversity hire.” X’s own service rules prohibit users from sharing manipulated content, including “media that could cause widespread confusion about public issues, affect public safety, or cause serious harm.” Musk defended his post as a parody.

Microsoft’s Smith said while many experts are focused on deepfakes being used for election interference, “the broader role that deepfakes play in these other types of crimes and abuses deserves equal attention.”
