It is naïve to suggest that a framework as weak and sparse as the Rules can do justice to an issue as multi-faceted as the use of social media

By Mahesh Uppal
Social media presents two conflicting scenarios. First is its immense popularity, evident from its near-ubiquitous use by individuals, communities, and government. The other, is a growing concern that the content can be deceptive, defamatory, threatening, paedophilic, hateful, inflammatory, or otherwise harmful. The government has the unenviable task of preventing harm without sacrificing the immense value, millions of users—including itself—derive from social media. Precisely because of the huge risks and the wide benefits, the authorities cannot afford to get the balance wrong. However, they have done just that in Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (Hereinafter, “Rules”) released on February 25. The balance must be restored as soon as possible.
The Rules, issued under the Information Technology Act, 2000, are intended to curb harmful content on social media, streaming services, and digital news services. They require the players involved to set up a grievance redressal system to address the concerns of users and the government. User complaints must be acknowledged within 24 hours, offending content taken down within 36 hours of a court order or government notification to do so, and government requests for data disclosure met within 72 hours. An intermediary must disable, within 24 hours of a user complaint, any content depicting non-consensual nudity or sexual acts, including morphed images, transmitted with malicious intent.
The Rules contain additional obligations for so-called Significant Social Media Intermediaries (SSMIs), defined as players with over 5 million users. An SSMI providing chiefly messaging services must "enable the identification of the first originator of the information on its computer resource". It must "deploy technology-based measures, including automated tools or other mechanisms to proactively identify information that depicts any act or simulation in any form depicting rape, child sexual abuse or conduct, whether explicit or implicit". It must also appoint a senior employee who would be criminally liable for non-compliance. The Rules do envisage giving this officer an opportunity to be heard before penal action but provide no details of how this would work.
There are several difficulties with the Rules. Clubbing additional players into the definition of intermediary is a clear case of overreach. Earlier, "intermediary" referred to any player who, on behalf of another person, "receives, stores or transmits that message or provides any service with respect to that message"; the emphasis was on handling content on someone else's behalf. Now, however, it includes players who curate and publish original content, such as digital news and video streaming services. The latter will also be subject to a three-tier grievance redressal system: complaints would first be dealt with internally, then by a self-regulating body of peers, and eventually by the government.
An effective system to address grievances is clearly necessary. However, it is worrying that a bureaucrat, instead of a judge, will be the final arbiter of grievances. The obligation to remove certain types of content without an order from a competent body, based purely on individual complaints, could be useful in many cases but could also be abused to settle personal scores. While there may be broad agreement on extreme cases (e.g., child abuse), there might not be on matters of sexuality and politics. India's governments, at the Centre and in the states, have a long history of hasty action that courts have frequently overturned. As has recently been highlighted, several important amendments to rules and legislation relating to intermediaries, privacy, and user content were necessitated by distortions due to weak or absent regulation.
The Rules pose many questions for which there are no answers. How credible and practical is the proposed redressal mechanism? Can it work effectively and efficiently given the likely number, complexity, and nuance of grievances? Would a lack of adequate expertise and staff mean that the Rules are applied selectively? This is a valid fear given the uneven implementation of a widely applicable law like the Income Tax Act, where controversies and allegations of abuse are common. We cannot afford such a scenario in matters arguably more important to our freedoms. Are there sufficient safeguards in the Rules against bureaucratic mistakes or arbitrariness? If not, is it justified that players deemed in violation of the Rules face criminal liability under the IT Act? Would it not be preferable to adopt an incremental approach and allow all stakeholders a period of preparation?
The provision requiring SSMIs to ensure identification of the "first originator" of information might seem a promising way to control false or malicious information. However, most experts believe it would compromise the end-to-end encryption that ensures privacy on major apps. They also argue that the "back doors" that would let authorities trace rogue players could be exploited by others, making networks more vulnerable.
It is often argued that privacy should concern only those who have “something to hide”, presumably their involvement in illegal activities. Others believe it is important only for specific data relating, for example, to children, health, or finance. Both these positions are misleading.
Traceability and privacy can be relevant even for seemingly routine messages, devoid of anything sinister or illegal. Criminality often lies not with the originator of a message but with the person who later shares it maliciously. Private messages often contain innuendo, suspicion, or threats. A message saying "I want to kill X" could suggest a conspiracy, a rant, or a joke. We do not communicate with our children, spouses, friends, teachers, bosses, doctors, and lawyers in the same way. Recipients 'decipher' messages based on their knowledge of the sender and the context, and act accordingly. The same message can take a sinister form if shared maliciously.
Sources can also be faked using techniques like SMS spoofing. Tracing a message back to the "first originator" may therefore reveal neither the identity of the source nor the motive behind the mischief. It will, however, seriously compromise privacy. There is little evidence that the Rules reflect any of these anxieties.
So, while checking rogue behaviour on social media is necessary, the collateral damage is unacceptable. What is an acceptable cost for a democracy, constitutionally wedded to freedom of speech and personal liberty? It is naïve to suggest that a framework, as weak and sparse as the Rules are, can do justice to an issue as multi-faceted as the use of social media.
What is needed is a framework that can protect our freedoms and reduce, if not eliminate, the threats posed by dangerous abuse of the internet. This is possible if we have extensive and informed consultation, based on a White Paper laying out the issues and their implications.
(The author has advised diverse clients in the telecom and internet industry)