AI Fraud Bill Aims to Criminalize Deepfakes of Officials

Brandon Bent

Lawmakers are introducing the AI Fraud Deterrence Act, a proposal designed to combat the rising threat of artificial intelligence in fraudulent activities. The legislation, spearheaded by Representative Ted Lieu (D-CA) and Representative Neal Dunn (R-FL), would expand penalties for AI-related scams and criminalize the impersonation of federal officials using AI technology.

Proposed Changes to Fraud Legislation

The bill is set to be introduced on Tuesday and seeks to update existing criminal definitions and penalties to reflect the growing use of AI in fraudulent schemes. “As AI technology advances at a rapid pace, our laws must keep up,” Dunn stated in a press release regarding the bill.

Dunn further emphasized the importance of the legislation, saying, “The AI Fraud Deterrence Act strengthens penalties for crimes related to fraud committed with the help of AI. I am proud to co-lead this legislation to protect the identities of the public and prevent misuse of this innovative technology.”

According to Lieu, “The majority of American people want sensible guardrails on AI. They don’t think a complete Wild West is helpful,” highlighting the public’s demand for regulatory measures in the rapidly evolving AI landscape.

The proposed law would increase the maximum penalty for defrauding financial institutions from $1 million to $2 million when AI is knowingly used in the crime. Additionally, it would explicitly include AI-mediated deception in the definitions of both mail and wire fraud, allowing for the prosecution of individuals who utilize AI to commit these offenses. Violations could result in fines of up to $1 million and prison sentences ranging from 20 to 30 years.

Moreover, the draft legislation would criminalize the use of AI deepfakes to impersonate federal officials, referencing recent incidents where AI was used to mimic figures such as White House Chief of Staff Susie Wiles and Secretary of State Marco Rubio.

Experts warn that while fraud has existed for centuries, the advent of AI could significantly enhance its prevalence. “AI presents a scale, a scope, and a speed for fraud that is very, very different from frauds in the past,” noted Maura R. Grossman, a research professor of computer science at the University of Waterloo.

The FBI has previously issued warnings about the potential for AI to streamline fraudulent activities, stating that “generative AI reduces the time and effort criminals must expend to deceive their targets.” The agency cautioned that AI can also correct human errors that might otherwise serve as indicators of fraud.

As reported by The New York Times, companies specializing in expense and reimbursement management have begun implementing tools to detect fraudulent, AI-generated receipts. For instance, AppZen reported that approximately 14% of all fraudulent documents submitted in September were generated by AI, a significant increase from the previous year.

Concerns have been raised about whether existing institutions, including the judicial system, can keep pace with the rapid development of AI technologies. Hany Farid, a professor of computer science at the University of California, Berkeley, remarked, “AI years are dog years,” underscoring the urgency for legislative action.

The proposed bill also includes a provision that allows for the use of AI in satire or other First Amendment-protected activities, provided that such content is clearly labeled as not authentic. This aspect aims to strike a balance between regulation and creative expression.

