WASHINGTON (AP) — With artificial intelligence at a pivotal moment of development, the federal government is about to transition from one that prioritized AI safeguards to one more focused on eliminating red tape.
That’s a promising prospect for some investors but creates uncertainty about the future of any guardrails on the technology, especially around the use of AI deepfakes in elections and political campaigns.
President-elect Donald Trump has pledged to rescind President Joe Biden’s sweeping AI executive order, which sought to protect people’s rights and safety without stifling innovation. He hasn’t specified what he would do in its place, but the platform of the Republican National Committee, which he recently reshaped, said AI development should be “rooted in Free Speech and Human Flourishing.”
It’s an open question whether Congress, soon to be fully controlled by Republicans, will be interested in passing any AI-related legislation. Interviews with a dozen lawmakers and industry experts reveal there is still interest in boosting the technology’s use in national security and cracking down on non-consensual explicit images.
Yet the use of AI in elections and in spreading misinformation is likely to take a backseat as GOP lawmakers turn away from anything they view as potentially suppressing innovation or free speech.
“AI has incredible potential to enhance human productivity and positively benefit our economy,” said Rep. Jay Obernolte, a California Republican widely seen as a leader in the evolving technology. “We need to strike an appropriate balance between putting in place the framework to prevent the harmful things from happening while at the same time enabling innovation.”
Artificial intelligence interests have been expecting sweeping federal legislation for years. But Congress, gridlocked on nearly every issue, has failed to pass any AI bill, producing only a series of proposals and reports.
Some lawmakers believe there is enough bipartisan interest around some AI-related issues to get a bill passed.
“I find there are Republicans that are very interested in this topic,” said Democratic Sen. Gary Peters, singling out national security as one area of potential agreement. “I am confident I will be able to work with them as I have in the past.”
It’s still unclear how much Republicans want the federal government to intervene in AI development. Before this year’s election, few showed interest in regulating how the Federal Election Commission or the Federal Communications Commission handled AI-generated content, worried that doing so would raise First Amendment issues at the same time that Trump’s campaign and other Republicans were using the technology to create political memes.
The FCC was in the middle of a lengthy process for developing AI-related regulations when Trump won the presidency. That work has since been halted under long-established rules covering a change in administrations.
Trump has expressed both interest in and skepticism about artificial intelligence.
During a Fox Business interview earlier this year, he called the technology “very dangerous” and “so scary” because “there’s no real solution.” But his campaign and supporters also embraced AI-generated images more than their Democratic opponents. They often used them in social media posts that weren’t meant to mislead, but rather to further entrench Republican political views.
Elon Musk, Trump’s close adviser and a founder of several companies that rely on AI, also has shown a mix of concern and excitement about the technology, depending on how it is applied.
Musk used X, the social media platform he owns, to promote AI-generated images and videos throughout the election. Operatives from Americans for Responsible Innovation, a nonprofit focused on artificial intelligence, have publicly been pushing Trump to tap Musk as his top adviser on the technology.
“We think that Elon has a pretty sophisticated understanding of both the opportunities and risks of advanced AI systems,” said Doug Calidas, a top operative from the group.
But Musk advising Trump on artificial intelligence worries others. Peters argued it could undercut the president.
“It is a concern,” said the Michigan Democrat. “Whenever you have anybody that has a strong financial interest in a particular technology, you should take their advice and counsel with a grain of salt.”
In the run-up to the election, many AI experts expressed concern about an eleventh-hour deepfake — a lifelike AI image, video or audio clip — that would sway or confuse voters as they headed to the polls. While those fears were never realized, AI still played a role in the election, said Vivian Schiller, executive director of Aspen Digital, part of the nonpartisan Aspen Institute think tank.
“I would not use the term that I hear a lot of people using, which is it was the dog that didn’t bark,” she said of AI in the 2024 election. “It was there, just not in the way that we expected.”
Campaigns used AI in algorithms to target messages to voters. AI-generated memes, though not lifelike enough to be mistaken as real, felt true enough to deepen partisan divisions.
A political consultant mimicked Joe Biden’s voice in robocalls that could have dissuaded voters from coming to the polls during New Hampshire’s primary if they hadn’t been caught quickly. And foreign actors used AI tools to create and automate fake online profiles and websites that spread disinformation to a U.S. audience.
Even if AI didn’t ultimately influence the election outcome, the technology made political inroads and contributed to an environment where U.S. voters don’t feel confident that what they are seeing is true. That dynamic is part of the reason some in the AI industry want to see regulations that establish guidelines.
“President Trump and people on his team have said they don’t want to stifle the technology and they do want to support its development, so that is welcome news,” said Craig Albright, the top lobbyist and senior vice president at The Software Alliance, a trade group whose members include OpenAI, Oracle and IBM. “It is our view that passing national laws to set the rules of the road will be good for developing markets for the technology.”
AI safety advocates during a recent meeting in San Francisco made similar arguments, according to Suresh Venkatasubramanian, director of the Center for Tech Responsibility at Brown University.
“By putting literal guardrails, lanes, road rules, we were able to get cars that could roll a lot faster,” said Venkatasubramanian, a former Biden administration official who helped craft White House principles for approaching AI.
Rob Weissman, co-president of the advocacy group Public Citizen, said he’s not hopeful about the prospects for federal legislation and is concerned about Trump’s pledge to rescind Biden’s executive order, which created an initial set of national standards for the industry. His group has advocated for federal regulation of generative AI in elections.
“The safeguards are themselves ways to promote innovation so that we have AI that’s useful and safe and doesn’t exclude people and promotes the technology in ways that serve the public interest,” he said.
___
The Associated Press receives support from several private foundations to enhance its coverage of elections and democracy, and from the Omidyar Network to support coverage of artificial intelligence and its impact on society. AP is solely responsible for all content. See more about AP’s democracy initiative and a list of supporters and funded coverage areas at AP.org.