May 18, 2025

Why laws fall short in combating the surge in cyber-bullying cases

In the aftermath of the Pahalgam terror attack, Himanshi Narwal, the wife of slain Navy Lieutenant Vinay Narwal, issued a heartfelt appeal for peace and firmly rejected the vilification of Muslims and Kashmiris. Soon after, the grieving newlywed became the target of a vicious trolling campaign on X (formerly Twitter). Anonymous accounts hurled slurs at her, questioned her loyalty to her late husband, and even called for the cancellation of her pension.

However, Ms. Narwal was not alone in facing such online vitriol. Following Foreign Secretary Vikram Misri’s announcement on May 10 that India and Pakistan had reached an understanding to halt military hostilities, his account on X was inundated with abusive comments, some of which targeted even his daughter. Mr. Misri was eventually compelled to lock his account, as several diplomats and politicians condemned the toxic trolling culture in unequivocal terms.

Emboldened by the anonymity of the internet, faceless trolls have turned into virtual vigilantes, punishing those who dare to question dominant narratives. What regulatory reforms, then, are necessary to ensure that such depravity is no longer met with impunity?

Regulatory loopholes

A range of terms has emerged to describe contemporary forms of cybercrime, including cyberbullying, stalking, hate speech, and doxxing. Doxxing, short for “dropping dox” (documents), involves the unauthorised online disclosure of private information, often with malicious intent. This may include home addresses, phone numbers, or sensitive images, leaving victims vulnerable to harassment and tangible real-world threats.

Studies show that such abuse disproportionately targets women and minorities, and that these attacks are often driven by organised political motives. The consequences can be severe, frequently escalating to rape and death threats.

India lacks a dedicated law specifically aimed at tackling online hate speech and trolling. Instead, a limited number of provisions under the Bharatiya Nyaya Sanhita (BNS), 2023, and the Information Technology (IT) Act, 2000, cover certain aspects of cyberbullying. The BNS contains provisions applicable to electronic communications, such as Section 74 (assault or criminal force against a woman with intent to outrage her modesty), Section 75 (sexual harassment), Section 351 (criminal intimidation), Section 356 (defamation), and Section 196 (promoting enmity between groups). The IT Act supplements these offences with provisions like Section 66C (identity theft), Section 66D (impersonation fraud), and Section 67 (publishing or transmitting obscene material electronically).

“The existing regulatory framework is functional but far from complete. No provision squarely criminalises sustained online abuse that does not qualify as ‘obscene,’ ‘threatening,’ or ‘fraudulent.’ Stalking under the BNS is gender-specific—limited to men targeting women—and hinges on intent to engage personally, failing to capture the collective harassment that defines much of online trolling. While cyberbullying may sometimes be shoehorned into offences like criminal intimidation or defamation, these require proof of threat or reputational harm and are ill-suited to counter the rapid, anonymous abuse unleashed by online mobs,” Apar Gupta, advocate and founder-director of the Internet Freedom Foundation, told The Hindu.

Moderation or censorship?

Mounting domestic and international pressure to curb disinformation and hate speech has compelled social media giants to moderate and remove harmful content. While many advocate for “self-regulation,” where platforms enforce their own community guidelines, this model has largely failed and faces growing scrutiny. Last year, Telegram founder and CEO Pavel Durov was arrested by French authorities for allegedly failing to moderate criminal activity on the platform, including the circulation of child sexual abuse material and fraudulent content. Telegram later amended its privacy policy to permit the disclosure of users’ IP addresses and phone numbers to law enforcement upon receipt of “valid legal requests.”


The challenge is further exacerbated by the gradual erosion of content moderation policies in favour of monetisation. In a damning report released earlier this month, the Centre for the Study of Organized Hate found that X had become a “high-velocity distribution channel” for hate speech and conspiracy theories, particularly targeting British-Pakistani men as well as other South Asian and immigrant communities. An analysis of 1,365 posts generating more than 1.5 billion engagements revealed that the platform played a central role in weaponising the “grooming gang” discourse to scapegoat Muslims in the U.K., despite police data showing that most offenders were white men.

In India, Section 69A of the IT Act empowers the government to issue blocking orders on grounds aligned with constitutionally permissible restrictions on speech, such as national sovereignty, friendly relations with foreign States, and public order. Platforms that fail to comply risk losing safe harbour protection under Section 79, which ordinarily shields intermediaries from liability for user-generated content. 

However, experts have warned that these provisions are increasingly being used as tools for online censorship. In recent years, the Union Government has frequently taken down content without notifying affected users — a practice that contravenes the Supreme Court’s 2015 ruling in Shreya Singhal v. Union of India. While the court upheld the constitutionality of Section 69A, it underscored that blocking orders must be accompanied by cogent reasons to enable judicial scrutiny.

Following the Pahalgam attack, X disclosed that it had been directed to block more than 8,000 accounts in India, but said the government had not specified which posts violated the law in most cases. In March, the company filed a lawsuit in the Karnataka High Court challenging the government’s reliance on Section 79(3)(b) to issue takedown orders, arguing that it bypasses the procedural safeguards under Section 69A. Unlike Section 69A, Section 79(3)(b) neither defines what constitutes an “unlawful act” nor provides for any review mechanism.

Meanwhile, the Ministry of Information and Broadcasting recently informed a Parliamentary Committee that it is reconsidering safe harbour protections for social media platforms in a bid to combat “fake news.”

X has received executive orders from the Indian government requiring X to block over 8,000 accounts in India, subject to potential penalties including significant fines and imprisonment of the company's local employees. The orders include demands to block access in India to…

— Global Government Affairs (@GlobalAffairs) May 8, 2025

‘Publicly available data’

In February last year, the Delhi High Court directed X to remove tweets revealing the personal and professional details of a woman who had reportedly posted a critical comment about Uttar Pradesh Chief Minister Yogi Adityanath. The post triggered a wave of online harassment, with details of her workplace, residence, and photographs being widely circulated. Although these disclosures raised privacy concerns, Justice Prathiba Singh ruled that the incident did not constitute doxxing, as the information was already publicly available.

However, the judge acknowledged that while doxxing is not yet a statutory offence in India, it poses a serious threat. She observed that it infringes upon the right to privacy and that courts could invoke tort law to offer redress. Accordingly, X was directed to disclose subscriber information associated with the offending posts.

This case highlights the contested nature of what qualifies as public information. The Digital Personal Data Protection (DPDP) Act, 2023, exempts from its ambit personal data that is made “publicly available”, either through voluntary disclosure by the individual or by entities under a legal obligation. However, this exemption is riddled with ambiguity, as the law offers no clear definition for what qualifies as “publicly available data.”

This lack of clarity may inadvertently enable cybercrimes such as doxxing, especially given the ease with which fragmented data from multiple platforms can be aggregated and used for harassment or intimidation.

Challenges ahead

Experts underscored that enforcement, or rather the lack of it, often determines whether victims can access remedies.

“All laws are only as effective as their enforcement. While posts and accounts are promptly removed when government directives are issued, the same urgency is rarely extended to ordinary users reporting harassment or abusive content,” Mishi Choudhary, technology lawyer and digital rights advocate, told The Hindu.

She pointed out that for victims of gendered online abuse, legal recourse is typically a last resort. “Survivors are often disbelieved or, worse, blamed for the abuse they face. The lack of awareness and institutional support has a profoundly detrimental impact, forcing many to navigate an uphill battle in their quest for justice,” she said.

Mr. Gupta agreed, highlighting challenges such as perpetrator anonymity, cross-jurisdictional hurdles, and limited cybercrime training. “While the BNS has modernised terminology and broadened the scope of online offences, gaps in legal clarity and enforcement persist. Merely creating new offences is insufficient and may even endanger journalists and rights defenders, especially given India’s weak rule of law framework,” he cautioned.

Published - May 17, 2025 03:32 pm IST