Pennsylvania lawmakers have proposed legislation that would target troublesome aspects of artificial intelligence. As AI advances rapidly, the urgency to introduce government regulation is growing.

State Rep. Robert Merski, a Democrat who represents part of Erie County, has his eyes on one AI horror scenario that has become increasingly common: scam phone calls that use deepfake technology to impersonate the voices of loved ones.

AI-generated audio is more sophisticated than ever. A report from McAfee, a computer security company, found that scammers need only a few seconds of online audio to impersonate a voice and make it say anything they want — such as pleading for financial support after a supposed car crash or kidnapping. 

A mom in Arizona, for instance, had a few minutes of panic when she received a frantic call from what sounded like her teenage daughter being held by a kidnapper. Thankfully, the mom called 911 and did not agree to send tens of thousands of dollars in ransom to the supposed kidnappers. But a wide range of Americans are already victims of phishing schemes, and improved deepfake technology creates a new way for scammers to operate.

A proposed bill sponsored by Merski and state Rep. Chris Pielli, D-Chester County, would make it a first-degree misdemeanor to spread an AI impersonation of somebody without consent. The crime would rise to a third-degree felony if “the dissemination is done with the intent to defraud or injure another person.”

Merski has read stories of people receiving realistic-sounding scam calls mimicking the voices of loved ones.

“I thought of senior citizens getting scammed with calls,” he said. “Imagine they were using their daughter’s voice or grandson’s voice.”

There is no easy way to stop and prosecute these sorts of scam calls, which often come from difficult-to-track international locations like Mexico, according to CNN. The McAfee report found that the AI technology can replicate a voice with 95% accuracy. 

Experts are working to create ways of identifying deepfakes, but with the technology advancing so quickly, it’s not an easy task, says Wei Gao, Ph.D., an associate professor who researches AI in the University of Pittsburgh’s Department of Electrical and Computer Engineering.

“There’s always a battle between the spear and the shield,” he said. AI “attackers” will always push to find a way around barriers.

One way to identify fake videos or audio is through digital “watermarks,” Gao said, noting that those watermarks will have to become increasingly sophisticated to combat fakes. He suggested a standardized watermark system for publicly distributed video or audio, similar to the way currency is printed. 

Murat Akcakaya, Ph.D., an associate professor who studies machine learning in the same department as Gao, agreed that some sort of standardization is key to regulating deepfakes. He noted that using secure encryption technology when people consume online media or receive a message could help flag fakes. 

Forcing users to authenticate themselves online could aid in tracking the spread of fake news and phishing schemes, but Akcakaya pointed out that there will be free speech and privacy-based opposition to any legislation in that form. 

Merski acknowledged the challenges. He sees anti-deepfake legislation as a first step — and supports further fact-finding missions to get a handle on AI’s capabilities and what the government can do.

“It’s not an exact science now. I don’t think anyone imagined it would be at the level it is now,” Merski said.

He’s a sponsor of a wider-ranging bill that would establish an advisory committee to study AI. The committee, consisting of engineers, academics and other experts, would be tasked with creating a report assessing “the development and use” of AI and making recommendations to the legislature on how best to approach regulation. Merski hopes the Pennsylvania House passes the two bills when legislators return in the fall.

The federal government also has started looking at ways to counter the potentially dangerous aspects of AI. U.S. Sen. Bob Casey, D-Pa., introduced a bill in late July that would restrict employers’ ability to rely on AI in hiring practices, forcing companies to use human checks on AI decision-making. U.S. Rep. Joe Morelle, a Democrat from New York, introduced legislation in May that would outlaw the nonconsensual sharing of intimate deepfake images. 

The potential impacts of AI are too numerous to tackle in one sweep. Merski said other impacts of deepfakes could prove concerning, such as false advertising and campaign fraud. To start, though, he is aiming for the “low-hanging fruit” of consumer protection from deepfake scams.

“I just think we have to get in front of this now,” he said.

Harrison, a rising senior at Denison University, is a Union Progress summer intern. Email him at hhamm@unionprogress.com.

Harrison Hamm