AI disinformation is a threat to elections – learning to spot Russian, Chinese and Iranian meddling in other countries can help the US prepare for 2024
Elections around the world are facing an evolving threat from foreign actors, one that involves artificial intelligence.
Countries trying to influence each other’s elections entered a new era in 2016, when the Russians launched a series of social media disinformation campaigns targeting the U.S. presidential election. Over the next seven years, a number of countries – most prominently China and Iran – used social media to influence foreign elections, both in the U.S. and elsewhere in the world. There’s no reason to expect 2023 and 2024 to be any different.
But there is a new element: generative AI and large language models. These have the ability to quickly and easily produce endless reams of text on any topic in any tone from any perspective. As a security expert, I believe it’s a tool uniquely suited to internet-era propaganda.
This is all very new. ChatGPT was introduced in November 2022. The more powerful GPT-4 was released in March 2023. Other language and image production AIs are around the same age. It’s not clear how these technologies will change disinformation, how effective they will be or what effects they will have. But we are about to find out.
Election season will soon be in full swing in much of the democratic world. Seventy-one percent of people living in democracies will vote in a national election between now and the end of next year. Among them: Argentina and Poland in October, Taiwan in January, Indonesia in February, India in April, the European Union and Mexico in June and the U.S. in November. Nine African democracies, including South Africa, will have elections in 2024. Australia and the U.K. don’t have fixed dates, but elections are likely to occur in 2024.
Many of those elections matter a lot to the countries that have run social media influence operations in the past. China cares a great deal about Taiwan, Indonesia, India and many African countries. Russia cares about the U.K., Poland, Germany and the EU in general. Everyone cares about the United States.
And that’s only considering the largest players. Every U.S. national election since 2016 has brought with it an additional country attempting to influence the outcome. First it was just Russia, then Russia and China, and most recently those two plus Iran. As the financial cost of foreign influence decreases, more countries can get in on the action. Tools like ChatGPT significantly reduce the price of producing and distributing propaganda, bringing that capability within the budget of many more countries.
A couple of months ago, I attended a conference with representatives from all of the cybersecurity agencies in the U.S. They talked about their expectations regarding election interference in 2024. They expected the usual players – Russia, China and Iran – and a significant new one: “domestic actors.” That is a direct result of this reduced cost.
Of course, there’s a lot more to running a disinformation campaign than generating content. The hard part is distribution. A propagandist needs a series of fake accounts on which to post, and others to boost that content into the mainstream, where it can go viral. Companies like Meta have gotten much better at identifying these accounts and taking them down. Just last month, Meta announced that it had removed 7,704 Facebook accounts, 954 Facebook pages, 15 Facebook groups and 15 Instagram accounts associated with a Chinese influence campaign, and identified hundreds more accounts on TikTok, X (formerly Twitter), LiveJournal and Blogspot. But that was a campaign that began four years ago, producing pre-AI disinformation.
Disinformation is an arms race. Both attackers and defenders have improved, and the world of social media has changed. Four years ago, Twitter was a direct line to the media, and propaganda on that platform was a way to tilt the political narrative. A Columbia Journalism Review study found that most major news outlets used Russian tweets as sources for partisan opinion. That Twitter, with virtually every news editor reading it and everyone who was anyone posting there, is no more.