The dawning age of AI has brought excitement and trepidation in equal measure. The trepidation has mostly centred on AI-generated deepfakes and how they could sway voters' minds, influence elections and thus subvert democracy. This was especially true in India, where almost a billion voters were electing a new government. Many other countries due for polls, including the US and UK, confront similar fears.
While India is a developing economy, it has a digitally- and social media-savvy population. Political parties, too, have large IT and social media wings. Deepfakes are not new: AI technologies such as Generative Adversarial Networks (GANs) have been churning them out since the mid-2010s.
However, the advent of GenAI and the ubiquity of social media have made deepfakes far cheaper and easier to create, polish and spread at scale. This explains the concerns expressed by civil society, governments and the media.
In a few articles and comments on AI and deepfakes, I had taken a contrarian stance, arguing that AI, if used well, could help the electoral process and democracy. It can help detect fraud, optimize the complex logistics of booth and voter management, and build resource and cost efficiencies into the vast machinery of Indian elections.
GenAI can be used by politicians to reach out to voters in a more personalized and scalable manner, level the playing field, and make voting more accessible for differently-abled voters. Most commentators, however, tended to focus on the negatives and the supposed havoc that GenAI could wreak on Indian elections.
Now that Lok Sabha polling is over, it seems to have been much ado about nothing. A post facto analysis by two Harvard Kennedy School scholars, Vandinika Shukla and Bruce Schneier (bit.ly/4er8ISi), concluded that AI was mostly put to constructive use rather than the destructive use most of us expected.
They estimated that political parties spent $50 million on authorized AI-generated content, and used it for targeted communication aimed at their constituency’s voters. In Tamil Nadu, for instance, both Karunanidhi, with his trademark dark glasses, and Jayalalithaa were resurrected to appeal to voters, and this was openly authorized by their respective political parties, an example of ‘deepfakes without deception’.
Party workers at the lowest rungs of the ladder frequented small tech companies that created personalized ‘deepfake’ videos of them, which could then be distributed at scale. This created an unprecedented opportunity for young techies, who set up nimble outfits to serve political parties. India has 22 official languages and thousands of local dialects, and some politicians leveraged the Bhashini AI platform to dub their speeches.
Voice clones of candidates made millions of calls to voters, explaining their promises. Even Prime Minister Narendra Modi got into the act: in Tamil Nadu, he asked his audience to put on earphones so that his Hindi speech could be translated into Tamil in real time. Political workers used GenAI and other technologies to flood social media with localized and contextualized memes.
It was not all a bed of roses, though. There were many instances of deepfakes impersonating candidates, showing them spewing hatred and discord or saying things they had never said, making it difficult for voters to tell what was real and what was fake.
But, as James Thornhill writes in the Financial Times (bit.ly/3RBLgI8): “[It] could well be that the increasing use of AI tools by millions of users is itself deepening public understanding of the technology, inoculating people against deepfakes. The election did not appear to be disfigured by the digital manipulation.”
In a sense, the proof was in the pudding. There were more concerns about the heatwave killing poll workers and reducing voter turnout than about fake videos. The election results saw less IT-savvy parties winning in many places, showing that the wise Indian voter saw through not only the rhetoric of politicians but also the fake news that some of them generated.
In the end, the dire forecasts of doomsayers did not come true, as AI was used more for good than for bad. Harvard researchers Shukla and Schneier point out that “the technology’s ability to produce non-consensual deepfakes of anyone can make it harder to tell truth from fiction, but its consensual uses are likely to make democracy more accessible.”
James Thornhill wisely noted that “we should worry more about politicians spouting authentic nonsense than fake AI avatars generating inauthentic gibberish.” Indian voters this summer seemed to agree.