Deepfake democracy: How AI is bamboozling Congress and threatening the 2024 election
WASHINGTON — America’s in the midst of its first AI-fueled election. Duping voters in 2024 — a year when “deepfakes” are expected to supplant our current meme-driven political unreality — will be easier than ever.
Bogus but hyper-realistic videos of Donald Trump secretly plotting with Russian President Vladimir Putin or President Joe Biden in a secret White House confab with antifa activists? Entirely fake speeches delivered by Rep. Marjorie Taylor Greene (R-GA) or Rep. Ilhan Omar (D-MN)?
All possible now. Just watch the wouldn’t-have-been-possible-in-2020 deepfake video starring a computer-generated Florida Gov. Ron DeSantis, who’s depicted as desperately trying to convince his colleagues in “The Office” that he’s not wearing women’s clothes. Donald Trump Jr. is among the people who’ve shared it on social media in recent days.
Among the most unprepared for AI-infused election shenanigans: members of Congress themselves.
“I haven't heard it talked about here,” Sen. Josh Hawley (R-MO) told Raw Story when asked about deepfakes and AI impacting Election 2024.
It’s not that the Capitol isn’t buzzing with AI regulatory chatter since OpenAI CEO Sam Altman testified before lawmakers last Tuesday — including telling Hawley that even he is “nervous” about large language models, such as his company’s ChatGPT, being used to manipulate voters. The problem: this was news to many at the Capitol.
That’s why experts are nervous, too, especially since AI technology is evolving at warp speed.
OpenAI CEO Sam Altman: "If this technology goes wrong, it can go quite wrong."
“Congress should have been proactive yesterday — decades ago,” Woodrow Hartzog, professor of Law at Boston University, told Raw Story.
Congress has a ton of catching up to do, mainly because U.S. policymakers — at the behest of Silicon Valley’s teams of Washington lobbyists — have dithered for years in writing rules for the digital road, more or less allowing tech companies to police themselves.
“At the very least, it needs to think about the fact that this is not just a technology and deepfakes problem, that the problem of deepfakes in our democracy is rooted in significantly broader structural concerns around tech accountability, generally, mixed with our laws surrounding privacy, surveillance, free expression, copyright law, equality and anti-discrimination,” Hartzog continued. “All of those seemingly disparate areas — and the cracks that have been growing in our protections around them — are part of this story.”
How dangerous, really?
Artificial intelligence holds great promise for taking humanity to new technological heights.
But the ability to create increasingly realistic fake media is also getting easier by the nanosecond. What formerly required specialized expertise — not to mention days or weeks of dedicated work — just to concoct clunky deepfakes is now available to all. The democratization of fakes has many experts freaked out.
It’s easy to see how AI-based deceptions, propaganda and scams could damage an election’s status as truly free and fair, even if just a small fraction of voters are affected.
Consider that the 2016 election was decided by some 80,000 votes across three states. Countless bots and Russian intelligence officers involved themselves (if Senate Republicans are to be believed). Campaign operatives — domestic and foreign, and as bad as they can be — have nothing on AI’s powers (if its creators are to be believed). Especially when combined with today’s always-improving deepfake technology, the ability to dupe is almost easy.
“Think about this as nuclear technology,” Siwei Lyu, a SUNY Empire Innovation Professor in the Department of Computer Science and Engineering at the University at Buffalo, told Raw Story. “Right now, instead of just the U.S. government and a few governments in the world knowing the techniques for making atomic bombs, like everybody now can have a toolkit off of Amazon to make their own atomic bombs. How dangerous that could be, right?”
Lyu continued: “Of course, somebody may use that as a generator to power up my house and then I don't need to be on the electricity grid anymore, but there are people for sure who will misuse it — and those are the things we have very little control over. So that's really where the problem is.”
The fear for Election 2024 isn’t, necessarily, one big, earth-altering digital atomic explosion; the fear is dozens, hundreds or even thousands of personal smart bombs — polished, powered and propelled by generative AI — being quietly dropped on susceptible-to-vulnerable populations in swing states.
They might originate from domestic sources: say, unscrupulous super PACs or lone-wolf political agitators unconcerned about the nation’s largely antiquated election laws and regulations that, in some cases, haven’t been updated since the dawn of the World Wide Web. If that.
Worse, they could come from foreign actors — think Russia, or perhaps Iran and North Korea — who’ve already demonstrated an insatiable appetite for sowing chaos in U.S. elections.
“The makers of deepfakes will create those fake media to reinforce, strengthen your belief, and then the recommendation algorithm will actually push that to you as a user so you will start to see more of this stuff,” Lyu said.
This will all be guided by the private data of millions of Americans, which Silicon Valley firms already have access to because of congressional inaction. When fed into generative AI platforms like ChatGPT, the algorithmic loop of fear-drenched, truthy-sounding falsehoods and fakes could prove infinite.
'Got to move fast'
Back on Capitol Hill, Senate Majority Leader Chuck Schumer is now a part of bipartisan negotiations – along with Sens. Martin Heinrich (D-NM), Todd Young (R-IN) and Mike Rounds (R-SD) – focused on legislating artificial intelligence.
“We can’t move so fast that we do flawed legislation, but there is no time for waste, or delay, or sitting back,” Schumer told his colleagues on the Senate floor after Altman testified. “We've got to move fast."
There’s only a short window to act, because generative AI is becoming more ubiquitous – more than 100 million people have already signed up for ChatGPT alone.
“And so while it is important for Congress to act, I hope that they realize that they can't just pass one anti-deepfake law of 2023 and dust their hands and call it a day, because this problem is one that is significantly larger than just a few algorithmic tools,” Hartzog, the BU law professor and co-author of Breached: Why Data Security Law Fails and How to Improve It, told Raw Story. “It's fundamental to our whole sort of media information distribution networks and free expression and consumer protection laws.”
Other lawmakers don’t feel the same pressure. Many assume America’s safer than other nations when it comes to AI-powered deepfakes.
“I think in a more advanced ecosystem, like our new system, it's probably easier for campaigns to jump on it pretty quickly and knock it down. I think in the developing world it could start riots and civil wars,” Sen. Marco Rubio (R-FL), the vice chairman of the Senate Intelligence Committee, recently told Raw Story.
Others in Congress – including party leaders – think the government is largely helpless when it comes to preventing the deepfake-ification of American elections.
“All we can do is tell the truth and appeal to the public not to believe everything they hear and see,” Sen. Dick Durbin (D-IL), the Senate majority whip, told Raw Story.
While 2020 was the “alternative fact” election, 2024 is primed to be the alternative reality election. “Fake news” isn’t just a bumper sticker anymore; it’s now reality.
“We’re in it,” Sen. Kirsten Gillibrand (D-NY) told Raw Story, “and AI is making it exponentially easier to create a false narrative, to project that false narrative worldwide, to make the false narrative believable by creating much more detailed and thorough content and it will be very hard to take something that’s disseminated worldwide and knock it down as false.”
Gillibrand has been calling for the creation of a new federal Data Protection Agency for years now, arguing the Federal Trade Commission is toothless when it comes to regulating big tech. The Federal Election Commission, meanwhile, often takes years to reach any agreement on even the most modest updates to its political advertising regulations.
“I think we have to keep focusing on the truth and making sure we have levers of government and a legal system to create accountability and oversight to make sure the truth is protected,” Gillibrand said.
Legislating "truth" in a post-truth political universe may prove impossible, but we really won’t know until the dust settles after Election 2024. That’s why many lawmakers, experts and privacy advocates are bracing for an election like no other in U.S. history.
“Every anti-democratic trick in the book will be played in 2024. No doubt,” Rep. Jamie Raskin (D-MD) – a Trump impeachment manager and member of the select Jan. 6 committee – recently told Raw Story. “The guy dines with racists and anti-Semites, Trump seems determined to prove that he can do anything he wants, including shoot somebody on Fifth Avenue, and his cult following will not budge. So this is where we are in the 21st century.”