Privacy or protection?
Telegram CEO’s arrest reignites the debate on the distinction between privacy protection and accountability for illicit activities
At first glance, Pavel Valeryevich Durov could pass for a fitness model. Further digging reveals a pioneering career, a brazen and often public commitment to user privacy and freedom of speech, and, possibly, the paternity of 100 children via anonymous sperm donation! The Russian tech entrepreneur, often called Russia’s Mark Zuckerberg or Elon Musk, is most recently in the news for his arrest in France a few days ago. As founder of Telegram, a messenger service “with a focus on security and speed”, the multi-billionaire has found himself on sticky terrain. Over the past few years, Telegram has emerged as the platform of choice for users exchanging confidential, sensitive information. From the Russian government to anti-establishment dissidents and Ukrainian resistance groups, Telegram has hosted million-strong groups and channels. It has also been used to distribute child pornography and has played host to militants, cyber scammers, and drug dealers.
Parisian prosecutors stated that Durov “allowed” the distribution of child pornography, drug trafficking, and other illicit activities on the app. Unsurprisingly, global tech bros such as Musk have been up in arms, understandably alarmed by what it means for internet privacy in general, and for them in particular. You see, the tussle to control speech and expression on the internet is a raging global war. In some countries more than others, censorship is the order of the day. Governments across the world have passed legislation to bring tech platforms, particularly social media ones, within the ambit of the law of the land. What has followed has been years of standoffs, grand committees, and delegations that have met, parlayed, interrogated, or held tech founders accountable. Google, Facebook, Twitter, and now X have been nudged, gently or forcibly, to crack down on inflammatory or fraudulent posts, as well as on those critical of government dispensations.
So far, while policing public platforms has been commonplace, chats on messaging services have been considered private, and divulging them would therefore be in direct contravention of privacy rights. Durov’s arrest is also the first instance of criminal charges being pressed against a tech entrepreneur for activities on his platform. Tech entrepreneurs have rallied together and often pushed back against government demands to share user data. Law enforcement agencies and governments have argued that encryption enables the proliferation of illegal activities. Messaging platforms such as WhatsApp, Signal, and iMessage offer end-to-end encrypted services that keep exchanged messages secret. News reports suggest that Telegram’s encryption, however, was not as foolproof as the others’, contributing to its ongoing legal mess. It is also entirely plausible that tech companies can no longer avoid being held responsible for nefarious activities on their platforms.
Herein lies the dilemma: sweeping powers handed to authority figures could lead to the misuse of private data, political coercion, and risks to individual safety, while an unmonitored platform could enable deviance and criminal acts. Earlier this month, well-known British zoologist Adam Britton was jailed for raping and abusing more than 42 dogs over several years, and for possessing child pornographic material. Reports suggest that Britton used Telegram under the aliases “Monster” and “Cerberus” to engage with “like-minded” individuals and share videos of his perverted abuse of canines. There is no doubt that platforms such as Telegram are being weaponised. But for how long do we protect privacy and freedom of speech, and where is the sacrosanct line that cannot be crossed? Given the differences in cultures, who etches this demarcation? What is objectionable versus acceptable? Corporate policies of global companies must remain uniform; how does one ensure that while operating across multiple geographies, religions, and cultures? As famously said, one man’s terrorist is another’s freedom fighter…how do we make the distinction?
As a society, we must press for stricter content moderation, and hopefully, tech companies will have no option but to comply. Using artificial intelligence (AI) to scan for and flag child abuse material should become more widespread. However, consumers should expect some headaches too, because AI is not always accurate (read the numerous recent complaints that YouTube users post on Reddit about Google advertisements). Ensuring that platforms are moderated while people’s privacy is protected will be a tightrope walk. Given the far-reaching influence and access of social media platforms, tech companies must devote more time and resources to finding effective solutions.
The writer is an author and media entrepreneur. Views expressed are personal