President Joe Biden signs an executive order on artificial intelligence in the East Room of the White House, Monday, Oct. 30, 2023, in Washington. Vice President Kamala Harris looks on at right. (AP Photo/Evan Vucci)

President Joe Biden on Monday signed an executive order designed to establish safety and security standards for artificial intelligence technology. The order includes provisions to help the public clearly identify false imagery generated by AI systems. Even as Biden makes the case for increased safeguards on the use of AI, the Republican Party and Republican presidential candidates have in recent months used AI-generated images in campaign materials.

“The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more,” the White House said in a release.

The order will require companies that develop AI systems to keep the federal government informed if the products they develop could be a risk to national security, public health or the economy. It also instructs the federal departments of Homeland Security and Energy to establish tools, tests and standards to regulate the security of AI systems.

As part of this effort, the Department of Commerce is being tasked with creating standards and practices that will allow the public to know when artificially generated content — videos and images commonly referred to as “deepfakes” — is in use.

The tools created by the department will be able to detect AI-generated content and digitally watermark it.

“Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world,” the White House release said.

The likelihood that misinformation and disinformation will be spread by seemingly authentic imagery has significantly increased as AI technology has advanced in sophistication. A report released in October by the human rights group Freedom House noted, “Over the past year, the new technology was utilized in at least 16 countries to sow doubt, smear opponents, or influence public debate.”

In a 2019 Pew poll, 63% of respondents said that altered videos and images create “a great deal of confusion” about the facts of current events, while another 27% said they create “some confusion.” Seventy-seven percent of those polled said that steps should be taken to restrict the use of deepfakes.

There have been several recent instances of false images and videos being used in political communications.

In April, after Biden announced that he would be running for reelection in 2024, the Republican National Committee released a video with AI-generated images depicting a dystopian view of what the country would look like in a second Biden term.

The video showed false images of international conflict, bank failures, street crime, and mass migration that would supposedly occur if Biden won the election.

Republican presidential candidate Gov. Ron DeSantis of Florida released a video in June featuring fake AI-generated images of former President Donald Trump hugging Dr. Anthony Fauci, who served as chief medical adviser to the president during the COVID-19 pandemic. The ad was intended to criticize Trump for keeping Fauci on staff. Trump himself posted a fake AI-generated image to his Truth Social account in March that depicted him kneeling and praying, and in 2017 he posted a digitally altered video showing him physically attacking a figure labeled “CNN.”

The Democratic National Committee released an AI-generated video in 2019 with then-chair Tom Perez, but the release was done with the acknowledgment that the video was false. The party was attempting to raise awareness of the possible harm that deepfakes pose to security.

Lawmakers continue to address potential harm to elections from deepfakes.

In July, a group of 50 congressional Democrats led by Rep. Adam Schiff (CA), Sen. Ben Ray Luján (NM) and Sen. Amy Klobuchar (MN) sent a letter to the Federal Election Commission calling for the agency to clarify its rules on deceptive campaign advertising and the role of AI-generated images.

Klobuchar in September introduced the “Protect Elections from Deceptive AI Act,” legislation that is intended to prohibit the use of deceptive AI-generated audio, images, and video in federal campaigns. The bill has five co-sponsors, including three Republican senators.
