Manufactured ‘Realities’

AI-driven deepfake technology is playing a pivotal role in manipulating the electoral landscape, reshaping political narratives and amplifying misinformation, exposing regulatory failures in the AI domain.

Update: 2025-03-06 14:23 GMT

In the bustling streets of Kolkata, a corporate professional sat next to me in a lively food joint. He was intently scrolling through his phone, his expression marked by disbelief. A video of Bollywood actor Ranveer Singh allegedly making controversial political remarks had gone viral in his WhatsApp groups. But something felt off: the celebrity’s lip movements didn’t quite match the words, and the voice sounded slightly mechanical. “I’ve been a big fan of Bollywood since I was a child,” he said. “Celebrities who aren’t involved in politics rarely speak so bluntly on political matters.” His suspicions were right. The video was an AI-generated deepfake, one of thousands that flooded social media platforms and messaging services as the world’s largest democracy navigated its General Elections, followed by three state elections in Maharashtra, Jharkhand, and Delhi.

AI as the Driving Force

India’s 2024 general election, involving 969 million eligible voters and over 2,600 political parties, saw AI-driven campaigning take centre stage. Political parties invested an estimated $50 million in AI-generated content, leveraging synthetic media for outreach, cultural resonance, and emotional engagement. AI was used not just for voter outreach but also for misleading narratives, transforming political communication in ways unseen before.

One of the most striking examples was the use of AI-generated deepfakes to resurrect deceased political leaders. In Tamil Nadu, deepfake videos of former Chief Ministers M. Karunanidhi and J. Jayalalithaa were widely shared, showing them endorsing contemporary candidates. These videos exploited nostalgia, creating a false sense of legitimacy around campaign messages. Similarly, AI-powered speech synthesis tools allowed translation and customisation of speeches, making leaders sound fluent in regional languages where they had historically struggled to connect.

A viral AI-generated meme of Prime Minister Narendra Modi dancing spread across social media, drawing humorous reactions. In contrast, a deepfake video of West Bengal Chief Minister Mamata Banerjee showed her dancing in a saree, with background audio altered to mock defectors to Modi’s camp; the latter video prompted a police investigation. An AI-cloned audio message falsely claimed that Rahul Gandhi had resigned from the Indian National Congress (INC). In response, INC supporters released an AI-generated speech of Gandhi attacking Modi’s business ties, fuelling an AI-driven propaganda battle.

The Maharashtra and Delhi state elections also witnessed widespread circulation of AI-generated deepfake content. In Maharashtra, an AI-generated audio clip that circulated on November 19, 2024, falsely claimed that Supriya Sule, Nana Patole, and IPS officer Amitabh Gupta were involved in election fraud. Fact-checkers later debunked the claim, but by then it had already been amplified across social media platforms. Meanwhile, the Nationalist Congress Party (NCP) used AI-generated campaign videos featuring synthetic avatars promising free education and subsidised cooking gas. These videos blurred the line between political messaging and outright deception, creating false endorsements and exaggerated claims. The Delhi elections saw a surge in AI-driven disinformation cases, with multiple FIRs filed against the AAP for allegedly sharing AI-generated deepfake videos of PM Narendra Modi and Home Minister Amit Shah. One such video altered a 1990s Bollywood film scene, replacing the villains’ faces with BJP leaders and modifying the audio. Another widely circulated 27-second deepfake video depicted Arvind Kejriwal playing a guitar inside Tihar Jail, amplifying corruption allegations against him.

Professionalisation of India’s Deepfake Industry

India’s deepfake industry, valued at $60 million according to Wired, is rapidly evolving into a professionalised sector, with firms specialising in synthetic media for political campaigns. Polymath Synthetic Media Solutions has been pivotal in deepfake-driven campaigns, creating digital avatars and AI-generated audio clones of political figures for multiple parties. Likewise, Muonium AI, under Senthil Nayagam, has developed highly realistic AI-generated content using just 20 minutes of speech and a few photographs. Meanwhile, firms like iToConnect have transformed campaign economics, deploying 25 million AI-generated voter calls in just two weeks before the general elections in Telangana and Andhra Pradesh. At ₹0.5 per AI-driven call, compared to ₹4 for a human operator, they have significantly reduced campaign costs.

Ethical Concerns and Regulatory Failures

On May 6, 2024, just before the third phase of the General Election, the Election Commission of India (ECI) issued a directive to all political parties to remove AI-generated disinformation within three hours. However, enforcement was inconsistent and ineffective: the directive lacked clear legal penalties, and official party accounts themselves shared deepfake content. A report by India Civil Watch International (ICWI) and Ekō, published by The Guardian, revealed that Meta (parent company of Facebook, Instagram, and WhatsApp) approved 14 of 22 AI-generated political ads, many containing disinformation and communal hate speech. These ads included inflammatory messages, such as false claims that opposition leaders planned to “erase Hindus from India”, along with AI-generated images of burning religious sites and border crossings. Despite Meta’s stated policy against AI-generated misinformation, the findings showed serious lapses in moderation, exacerbating communal polarisation and electoral disinformation.

The Way Forward

Recognising the risks, India has initiated countermeasures, but gaps remain. The Misinformation Combat Alliance (MCA), in collaboration with Meta, launched the Deepfakes Analysis Unit (DAU) to monitor and analyse synthetic media. Through a WhatsApp tipline, it reviewed hundreds of user-submitted audio and video files, identifying a rise in “cheap fakes”, where synthetic audio is overlaid on unaltered visuals.

Meanwhile, the Ministry of Electronics and IT (MeitY) is drafting India’s first AI-specific legislation. Google India and NCERT are updating school curricula to include AI awareness programmes. Fact-checking organisations like Alt News and Boom Live continue to play a critical role in debunking viral disinformation. However, self-regulation by tech platforms is inadequate, and India’s legal framework remains outdated. The Election Commission must implement strict AI content disclosure rules, enforce penalties for disinformation, and mandate platform accountability. Without robust regulations and digital literacy programmes, AI-generated electoral disinformation will continue to undermine India’s democratic integrity. Recent election campaigns have demonstrated the urgency of AI governance; India must act now to prevent AI-driven manipulation from becoming the new norm in electoral politics.

Abhishek Roy Choudhury is a German Chancellor Fellow, Alexander von Humboldt Foundation, Germany. Views expressed are personal
