Celebrity AI deepfakes draw scrutiny in Hollywood and Congress

Steve Harvey is best known for awarding money to “Family Feud” contestants and dispensing advice on his radio show. In recent years, though, he has also become a popular subject of AI-generated memes, many of them humorous and seemingly harmless depictions of him as a rockstar or fleeing from demons. But some people are misusing AI-generated versions of Harvey’s image, voice, and likeness for scams. Last year, Harvey, along with other celebrities such as Taylor Swift and Joe Rogan, had his voice mimicked by AI to promote a scam promising government funds.

Harvey is now advocating for legislation that would penalize those behind such scams and the platforms that enable them. Congress is considering several bills, including an updated version of the No Fakes Act, that would hold creators and platforms accountable for unauthorized AI-generated content. A bipartisan group of senators plans to reintroduce the act soon, with support from celebrities including Harvey. The Take It Down Act, which would criminalize AI-generated deepfake pornography, has also gained momentum in Congress and has the backing of First Lady Melania Trump.

Harvey said he is concerned about the rise of scams using his likeness and stressed the importance of protecting fans and others from harm. Other celebrities, including Scarlett Johansson, have been caught up in AI scandals of their own and are supporting legislative efforts to address the technology’s dangers. Harvey has joined industry stakeholders in advocating for laws that safeguard individuals against the misuse of AI.

Harvey said people lose their freedom when AI content is manipulated to misrepresent them, and he urged Congress to intervene before more harm is done. The senators planning to reintroduce the No Fakes Act are seeking support from online platforms, which could face penalties for hosting unauthorized AI content under the bill. The current version imposes fines of $5,000 per violation, which could add up to significant penalties for large platforms. While some platforms have not yet endorsed the bill, its sponsors say they remain committed to protecting the interests of the creative industries.

Critics, including public advocacy organizations, worry that the bill could introduce excessive regulation, infringe on First Amendment rights, and trigger a surge of lawsuits. Despite those concerns, the bill’s sponsors say it is aimed at curbing the misuse of digital replica technology and protecting individuals from harm.

As AI technology advances, companies such as Vermillio AI are helping celebrities identify and combat deepfake content. Vermillio’s platform, TraceID, tracks instances of AI-generated content and streamlines the process of requesting takedowns. With deepfakes proliferating across social media, celebrities struggle to monitor and remove such content on their own. Vermillio CEO Dan Neely explained that the company’s technology uses fingerprinting to distinguish authentic material from AI-generated material, enabling manipulated images to be identified.

Celebrities can readily afford services like Vermillio’s, but other creators may have limited resources. “The sooner we take action, the better off we’ll all be,” Harvey said. “Why wait? How many more people need to suffer before we take action?”
