MillenniumPost
Exclusive

Game of real or fake

As India heads to the polls, technology races onto the political scene, with Artificial Intelligence emerging as a force with the potential to influence and reshape the perceptions of voters


In a year marked by political challenges and economic uncertainties, one technological force that is capturing attention and evoking both wonder and concern is Artificial Intelligence (AI).

While AI made a notable entrance onto the global stage in 2022, it truly became mainstream in 2023, pushing the boundaries of reality and prompting essential discussions about its future. In 2024, it is set to enter the bustling arena of electoral politics.

Social media platforms have become more than mere tools for communication these days. Among these platforms, X stands out as a prominent battleground where disinformation campaigns thrive, perpetuated by armies of AI-powered bots programmed to sway public opinion and manipulate narratives.

AI-powered bots are automated accounts designed to mimic human behaviour. Many are legitimate and integral to modern life, powering services on social media, chat platforms and conversational AI applications.

But some bots are crafted with malicious intent, and they constitute a significant portion of X’s user base. In 2017, it was estimated that the platform hosted approximately 23 million social bots, accounting for 8.5 per cent of total users, and that more than two-thirds of tweets originated from these automated accounts, amplifying the reach of disinformation and muddying the waters of public discourse.

Social influence is now a commodity that can be acquired by purchasing bots. Companies sell fake followers to artificially boost the popularity of accounts. These followers are available at remarkably low prices, with many celebrities among the purchasers.

Researchers have found bots posting hundreds of tweets offering followers for sale.

Through AI methodologies, malicious social bots manipulate social media, influencing what people think and how they act with alarming efficacy. In such challenging times, it has become all the more crucial to understand how both humans and AI disseminate disinformation, and how AI can be weaponised to spread it at scale.

Meanwhile, the Lok Sabha elections, which will run across seven phases from April 19 to June 1, are set to be one of the world’s biggest and most expensive elections. An estimated 970 million voters will decide the fate of the 543 members of the Lok Sabha, the lower house of Parliament, as Narendra Modi gears up for a rare third consecutive five-year term.

An AI-generated version of the Prime Minister that has been shared on WhatsApp shows the possibilities for hyper-personalised outreach in a country with nearly a billion voters. In the video — a demo clip whose source is unclear — Modi’s avatar addresses a series of voters directly, by name.

However, the technology is not perfect: in the video, Modi appears to wear two different pairs of glasses, and some portions are pixelated.

Not far behind are his workers sending out videos in which their own AI avatars deliver personal messages to specific voters about the government benefits they have received and ask for their vote.

Those video messages can be automatically generated in any language, and AI-powered chatbots can call constituents in the voices of political leaders to seek their support.

Such outreach requires a fraction of the time and money spent on traditional campaigning, and it has the potential to become an essential instrument in elections.

The onus ultimately falls on users to exercise caution and discern truth from falsehood, particularly during election periods. By critically evaluating information and checking sources, users can play a part in protecting the integrity of democratic processes from the onslaught of bots and disinformation campaigns on X. Every user is, in fact, a frontline defender of truth and democracy. Vigilance, critical thinking, and a healthy dose of scepticism are essential armour.

Disinformation is also frequently propagated through dedicated fake news websites. These are designed to imitate credible news sources. Users are advised to verify the authenticity of news sources by cross-referencing information with reputable sources and consulting fact-checking organisations.

Self-awareness is another form of protection, especially from social engineering tactics. Psychological manipulation is often deployed to deceive users into believing falsehoods or engaging in certain actions. Users should maintain vigilance and critically assess the content they encounter, particularly during periods of heightened sensitivity such as elections.

By staying informed, engaging in civil discourse and advocating for transparency and accountability, we can collectively shape a digital ecosystem that fosters trust, transparency and informed decision-making.

Around the world, elections have become a testing ground for the AI boom.

India currently has no laws against the misuse of AI, though regulations are being drafted. With the Lok Sabha elections beginning, however, the government has urged caution on the introduction of new generative AI tools that could allow manipulation. Critics argue this move could hinder innovation and restrict freedom of speech.

In March, Prime Minister Modi told supporters in Ahmedabad that the government was now requiring tech firms to seek permission before releasing under-tested generative AI models or tools. It has also cautioned companies against developing AI products that could compromise the integrity of the electoral process as India prepares to vote.

On March 25, India’s Misinformation Combat Alliance and Meta (Instagram and Facebook’s parent company) joined forces to introduce a deepfake analysis tool to help identify deepfakes in the run-up to the election.

This shift towards monitoring and regulating artificial intelligence departs from the government’s previous hands-off approach. Only a year ago, India’s Parliament was told there were no plans for legislation to govern AI practices. In the year since, Google’s Gemini, an AI chatbot, earned the ire of India’s Ministry of Electronics and Information Technology for its response to the prompt “Is Modi a fascist?” Gemini’s response referred to the implementation of policies seen as authoritarian, including crackdowns on dissent and violence against minorities.

The World Economic Forum’s Global Risks Report 2024 lists misinformation and disinformation as the top risk facing India over the next two years. A recent Indian media report identified Chinese and North Korean cyber agents seeking to influence the upcoming elections by planting disinformation among the electorate.

Views expressed are personal
