How worried should you be about AI disrupting elections?
Disinformation will become easier to produce, but it matters less than you might think
Politics is supposed to be about persuasion, but it has always been stalked by propaganda. Campaigners dissemble, exaggerate and fib. They transmit lies, ranging from bald-faced to white, through whatever means are available. Anti-vaccine conspiracies were once propagated through pamphlets instead of podcasts. A century before covid-19, anti-maskers in the era of Spanish flu waged a disinformation campaign, sending fake messages from the surgeon-general by telegram (the wires, not the smartphone app). Because people are not angels, elections have never been free from falsehoods and mistaken beliefs.
But as the world contemplates a series of votes in 2024, something new is causing a lot of worry. In the past, disinformation has always been created by humans. Advances in generative artificial intelligence (AI)—with models that can spit out sophisticated essays and create realistic images from text prompts—make synthetic propaganda possible. The fear is that disinformation campaigns may be supercharged in 2024, just as countries with a collective population of some 4bn—including America, Britain, India, Indonesia, Mexico and Taiwan—prepare to vote. How worried should their citizens be?
This article appeared in the Leaders section of the print edition under the headline "AI voted"