ATLANTA (NCS) —
Steve Harvey is best known for awarding cash to “Family Feud” contestants or dispensing advice on his radio show.
But lately, he has also become a popular target of AI-generated memes, many of which are humorous and seemingly harmless, like depictions of Harvey as a rockstar or running from demons.
More sinister actors, however, are using AI-generated versions of Harvey’s image, voice and likeness for scams.
Last year, Harvey was among celebrities like Taylor Swift and Joe Rogan whose voices were mimicked by AI and used to promote a scam that promised people government-provided funds.
“I’ve been telling you guys for months to claim this free $6,400 dollars,” a voice that sounds like Harvey’s says in one video.
Now, Harvey is speaking up, advocating for legislation and penalties aimed at the people behind these scams and the platforms hosting them. And Congress appears to be listening; it is considering several pieces of legislation that would penalize those behind nefarious uses of AI, including an updated version of the No Fakes Act, which aims to hold creators and platforms accountable for unauthorized AI-generated images, videos and sound.
The bipartisan group of senators behind the act, including Democrats Chris Coons of Delaware and Amy Klobuchar of Minnesota and Republicans Marsha Blackburn of Tennessee and Thom Tillis of North Carolina, is planning to reintroduce it within the next few weeks, a source familiar with the matter told NCS. It joins other legislation aimed at criminalizing AI-generated deepfake pornography, known as the Take It Down Act, which is also before Congress and earned support from first lady Melania Trump this week.
In 2025, Harvey says, scams using his likeness are at “an all-time high.”
“I prided myself on my brand being one of authenticity, and people know that, and so they take the fact that I’m known and trusted as an authentic person, pretty sincere,” Harvey told NCS in an interview at Tyler Perry Studios between filming episodes of “Family Feud.” “My concern now is the people that it affects. I don’t want fans of mine or people who aren’t fans to be hurt by something.”

Major recording artists, actors and other celebrities have been caught up in AI scandals over the last two years as the technology rapidly evolves. A woman in France lost $850,000 after scammers used AI-generated images of Brad Pitt to con her into thinking she was helping the actor.
Actress Scarlett Johansson, who has openly grappled with AI imitating her likeness, has also thrown her support behind legislation.
“There is a 1,000-foot wave coming regarding AI that several progressive countries, not including the United States, have responded to in a responsible manner,” Johansson said in a February statement to NCS after an AI-generated video depicting a phony version of her responding to Kanye West’s antisemitic remarks went viral. “It is terrifying that the US government is paralyzed when it comes to passing legislation that protects all of its citizens against the imminent dangers of AI.”
Harvey said he also supports the legislation, which has garnered backing from the Recording Academy, the Screen Actors Guild, the Motion Picture Association, major talent agencies and some of the biggest names in Hollywood.
“It’s freedom of speech, it’s not freedom of, ‘make me speak the way you want me to speak,’” Harvey said. “That’s not freedom, that’s abuse. And Congress has got to get involved in this thing, because it’s going to end up hurting them, too.”
Before reintroducing the No Fakes Act, the senators also hope to gain support from online platforms, which could potentially be penalized for hosting that AI content under the bill. The current bill fines platforms $5,000 per violation, meaning a viral AI creation could quickly add up to millions of dollars in fines. A source familiar with the bill said the platforms would not be brought on board at the cost of lower penalties.
“We’ve been very clear with the platforms who have withheld their endorsement for now that we’re not going to make any changes unless the folks that we’re doing this bill for, the folks in the creative industries, are okay with it,” a person familiar with the bill told NCS. “We’re not going to sell them off to try to get the platforms on board.”
But critics of the bill, which include public advocacy organizations like Public Knowledge, the Center for Democracy and Technology, the American Library Association and the Electronic Frontier Foundation, worry the bill as written introduces too much regulation. In a letter to the senators last year, they warned it could endanger First Amendment rights and enable misinformation, while leading to a “torrent” of lawsuits.
“We understand and share the serious concerns many have expressed about the ways digital replica technology can be misused, with harms that can impact ordinary people as well as performers and celebrities,” they wrote. “These harms deserve the serious attention they are receiving, and preventing them may well involve legislation to fill gaps in existing law. Unfortunately, the recently-introduced NO FAKES bill goes too far in introducing an entirely new federal IP right.”

But as Congress goes through the motions, AI continues to evolve. And celebrities say they feel limited in how they can pursue imitators using their likeness, particularly anonymous online accounts.
That’s where companies like Vermillio AI come in. The company, which has partnered with major talent agencies and movie studios, uses a platform called TraceID that tracks AI instances of its clients and automates the often cumbersome takedown requests.
“Back in 2018 there were maybe 19,000 pieces of deepfake content,” Vermillio CEO Dan Neely told NCS in an interview. “Today, there are roughly a million created every minute.”
For celebrities, tracking deepfakes can be especially difficult because they can spread so quickly on social platforms. “So trying to find them, play this game of Whack a Mole, is quite complex,” Neely said.
Neely showed NCS Harvey’s Vermillio account, which included AI voice-generated chatbots meant to sound like Harvey and fake videos of the TV personality encouraging gambling.
Neely said the company’s technology uses a kind of “fingerprinting” to distinguish authentic content from AI-generated material. That involves crawling the internet for images that have been tampered with using large language models, better known as LLMs, which are the building blocks of many popular generative AI services.
“An image of you is made up of millions of pieces of data,” Neely said. “How do we use those pieces of data to go and find that where it exists across the internet?”
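Vermillio has not published how TraceID works internally. As a rough, hypothetical illustration of what image “fingerprinting” can mean in general, the sketch below reduces a picture to a short perceptual hash and measures how much two pictures differ; the file names and threshold are placeholders, and this is not Vermillio’s method.

```python
# A minimal, hypothetical sketch of perceptual image "fingerprinting"
# (an average hash). Generic illustration only; it assumes nothing about
# Vermillio's proprietary TraceID system.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Shrink and grayscale the image, then set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance suggests near-duplicate images."""
    return bin(a ^ b).count("1")


# Hypothetical usage: compare a known authentic photo against a crawled copy.
# original = average_hash("authentic_photo.jpg")
# candidate = average_hash("crawled_copy.jpg")
# altered = hamming_distance(original, candidate) > 10
```

Production systems rely on far richer signals than a tiny hash, but the underlying idea Neely describes is similar: reduce each image to comparable pieces of data, then search for close matches across the internet.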
Celebrities can afford a service like Vermillio. But for other creators, there are fewer resources.
“The sooner we do something, I think the better off we’ll all be,” Harvey said. “Because, I mean, why wait? How many people we got to watch get hurt by this before somebody does something?”