Microsoft is calling on members of Congress to regulate the use of AI-generated deepfakes to protect against fraud, abuse, and manipulation. Microsoft vice chair and president Brad Smith is calling for urgent action from policymakers to protect elections, guard seniors from fraud, and shield children from abuse.
“While the tech sector and non-profit groups have taken recent steps to address this problem, it has become apparent that our laws will also need to evolve to combat deepfake fraud,” says Smith in a blog post. “One of the most important things the US can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans.”
Microsoft wants a “deepfake fraud statute” that will give law enforcement officials a legal framework to prosecute AI-generated scams and fraud. Smith is also calling on lawmakers to “ensure that our federal and state laws on child sexual exploitation and abuse and non-consensual intimate imagery are updated to include AI-generated content.”
The Senate recently passed a bill cracking down on sexually explicit deepfakes, allowing victims of nonconsensual sexually explicit AI deepfakes to sue their creators for damages. The bill was passed months after middle and high school students were found to be fabricating explicit images of female classmates, and trolls flooded X with graphic Taylor Swift AI-generated fakes.
Microsoft has had to implement more safety controls for its own AI products, after a loophole in the company’s Designer AI image creator allowed people to create explicit images of celebrities like Taylor Swift. “The private sector has a responsibility to innovate and implement safeguards that prevent the misuse of AI,” says Smith.
While the FCC has already banned robocalls with AI-generated voices, generative AI makes it easy to create fake audio, images, and video, something we’re already seeing during the run-up to the 2024 presidential election. Elon Musk shared a deepfake video spoofing Vice President Kamala Harris on X earlier this week, in a post that appears to break X’s own policies against synthetic and manipulated media.
Microsoft wants posts like Musk’s to be clearly labeled as a deepfake. “Congress should require AI system providers to use state-of-the-art provenance tooling to label synthetic content,” says Smith. “This is essential to build trust in the information ecosystem and will help the public better understand whether content is AI-generated or manipulated.”