Tech Ticker Edition 76: Is everyone a publisher now?

A bunch of new things have taken place this past month. India's IT Rules got a fresh set of draft amendments. Karnataka became the first state to propose a social media ban for children. In Los Angeles, a jury held social media platforms liable for their design for the very first time.

And in the middle of all of it, we're doing something new ourselves! The first episode of The TechTicker Show is live on YouTube, opening with a conversation on online speech regulation between Nirmal Bhansali from the Ticker team and Manu Kulkarni of Poovayya & Co. Don’t miss it.

 

Created by Nirmal

Deep-dive

Decoding the new draft amendments to the IT Rules

Just weeks after the synthetic content rules came into force, the Ministry of Electronics and Information Technology (MeitY) is back at the drafting table, together with the Ministry of Information and Broadcasting (MIB). On March 30, 2026, MeitY invited public comments on a fresh set of proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The deadline to respond is April 14.

 

MeitY (and MIB) have described the changes as "clarificatory and procedural." The implications, however, suggest something more structural. This explainer looks at the major changes.

 

Your post about current events is now under scrutiny

 

The first change concerns Part III of the IT Rules.

 

Part III of the IT Rules regulates two categories of entities: publishers of news and current affairs (think news outlets like Newslaundry or The Wire), and publishers of online curated content (think streaming services like Netflix, Prime Video, JioHotstar). Part III also establishes a three-tier framework for regulating publishers. Under Rules 9(1) and 9(3), there is a Code of Ethics compliance obligation, and a three-tier grievance redressal system (to handle complaints over content). The Inter-Departmental Committee (“IDC”) sits at the apex of this system.

 

Crucially, this framework has been stayed by the Bombay High Court, and the Madras High Court has subsequently affirmed that the stay operates nationwide. All these cases are now consolidated before the Delhi High Court, where arguments commenced in November 2024. No final adjudication has been delivered yet.

 

The proposed amendment to Rule 8(1) would extend the application of Rules 14, 15 and 16 to intermediaries (Rules 15 and 16 already apply to them) and to users who share news and current affairs content but are not “publishers”.

 

Rules 14, 15 and 16 establish the procedural powers of the IDC. Under Rule 14, the IDC can recommend warnings, censures, admonishments, apologies, content reclassification, and modified age ratings. It can also recommend the takedown of content.

 

But “news and current affairs content” is defined broadly in the Rules. It can cover any content about current events, whether social, political, economic, cultural, or any other matter of public interest. If you share a news clip, post a video of a local incident, or offer commentary on a public-interest story, you could potentially fall within the ambit of a framework designed for publishers. No distinction is drawn between a large-reach media house and a first-time creator in a Tier 2 city commenting on a local election, or between a verified journalist's report and an ordinary post. The change mirrors the earlier draft Broadcasting Bill, which was withdrawn amidst public outrage over censorship concerns.

 

Experts have noted that this is likely to suppress legitimate speech. The overbroad definition and its potential implications could create a chilling effect: when users cannot tell whether the law applies to them, they are likely to stay silent for fear of penal consequences from the government.

 

Hardcoding soft law: all advisories are now obligatory

 

Currently, MeitY can issue advisories — guidance on what it expects from platforms — but these have historically been non-binding. The proposed Rule 3(4) would change that. The Rule makes compliance with "clarifications, advisories, directions, standard operating procedures and guidelines" a due diligence obligation tied to safe harbour protection under Section 79 of the IT Act. Safe harbour is the legal shield that protects a platform from being held liable for content its users post. Lose it, and every meme, video, and article hosted on the platform becomes the platform's legal problem if a case is filed.

 

Section 79(2)(c) does empower MeitY to issue guidelines for intermediaries — but that power comes with conditions. Such guidelines must be placed before Parliament under Section 87(3) and must stay within the four corners of the IT Act. Advisories, SOPs, and clarifications do not go through any of this. Rule 3(4) effectively gives these instruments the force of law without the procedural safeguards the IT Act requires and without the scrutiny that makes delegated legislation legitimate. If intermediaries do not comply with these instruments, it could be read as a failure of due diligence, putting safe harbour at risk.

 

People within the industry have highlighted that this gives the government the ability to shape platform obligations in real time, without formal rulemaking. Critics have flagged that such advisories have been non-binding and, at times, go beyond what is allowed within the IT Act or Rules. The practical concern reinforces the legal one. MeitY's March 2024 advisory on AI licensing requirements (revised shortly after issuance following industry pushback) is an example of why informal communications are not always the most stable vehicle for binding legal obligations.

 

If every soft law instrument carries safe harbour consequences, intermediaries face compliance obligations that can shift without notice, without consultation, and without parliamentary scrutiny.

 

A committee that can issue orders without complaints

 

The third critical change seems procedural but is impactful. The Inter-Departmental Committee — an oversight body set up to handle complaints about digital content — would no longer be limited to responding to grievances and complaints. Under the proposed amendment, it could also take up matters referred directly by the MIB, initiating content reviews on its own without waiting for a formal complaint to land first. This shifts the regulatory posture from reactive to proactive and gives the government an earlier entry point into the content lifecycle. Critics have noted that this expansion raises questions about institutional independence and risks creating a censorship apparatus.

 

Clarifying data retention

Apart from the above three major changes, the amendments also clarify how long intermediaries must retain user registration information and information about removed content, whether removed pursuant to court orders or government requests under Rule 3(1)(d), on the basis of a user grievance, or voluntarily by the platform. The default period under the IT Rules is 180 days. Where another law — the DPDP Act, the Companies Act, the Income Tax Act, or applicable labour laws — prescribes a longer period, that longer period will apply.
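In effect, the interplay of retention periods reduces to a longest-period rule. The sketch below models that logic only for illustration: the 180-day default comes from the IT Rules, but the statute names and day counts passed in are hypothetical placeholders, not actual statutory periods.

```python
# Illustrative model of the proposed retention logic, not legal advice.
# The 180-day figure is the default under the IT Rules; any other
# applicable law prescribing a LONGER period overrides it.

DEFAULT_RETENTION_DAYS = 180  # default period under the IT Rules

def effective_retention_days(other_statutory_periods):
    """Return the governing retention period in days.

    other_statutory_periods: mapping of statute name -> retention period
    in days (hypothetical values for illustration). The longest applicable
    period wins; shorter statutory periods never cut the default short.
    """
    return max([DEFAULT_RETENTION_DAYS, *other_statutory_periods.values()])
```

So a hypothetical statute prescribing 365 days would govern, while one prescribing 120 days would leave the 180-day default in place.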

What to watch out for next?

MeitY and MIB held two rounds of stakeholder meetings on April 7: one with industry, including Meta, Google, YouTube, Snapchat, ShareChat, IAMAI, and NASSCOM, and a separate session with civil society. Early reporting from those discussions suggests that revisions to the draft are likely. On advisories, the government is considering reissuing past advisories and ensuring they stay within the remit of the IT Act and the IT Rules. On Rule 14, MIB has signalled that its intent is not to treat intermediaries as directly accountable for user-generated content, and that the language around the IDC's scope will likely be tightened. Intermediaries may also be asked to provide access to user information about unregistered news publishers.

The government is also open to extending the timeline for consultation, which is currently 14 April. The final rules, when notified, will be the document to watch.

Connecting the dots 

 

Telangana turns up the heat on hate speech online

Telangana's government introduced the Hate Speech and Hate Crimes (Prevention) Bill, 2026 in the state assembly on March 29, positioning it as a necessary tool against the spread of divisive content on social media. The bill proposes imprisonment of one to seven years and a fine of INR 50,000 for hate crimes, with repeat offenders facing up to ten years in prison. If enacted, Telangana would become the second state in India, after Karnataka, to pass dedicated legislation targeting hate speech and hate crimes.

Members across party lines — including from the ruling Congress — raised concerns over its scope, safeguards, and potential for misuse, prompting its referral to a Select Committee. Critics from the BJP and other opposition parties allege the provisions are broad enough to target dissenting voices rather than genuine hate speech.

Running in parallel, Hyderabad City Police have placed an order for an AI-powered social media monitoring platform called "Blura Saga," designed to conduct real-time sentiment analysis, track misinformation, and flag potential threats. These moves signal that Telangana is not waiting for central regulation to act and is steadily building a state-level framework to regulate online speech.

NHRC issues notice to government ministries over misuse of children’s data

 

On March 25, the National Human Rights Commission (NHRC) issued notices to MeitY, the Ministry of Education, and the Ministry of Communications. The trigger? A complaint alleging that 14 major platforms—including Meta’s Instagram and WhatsApp, and AI players like Gemini and Perplexity—are violating the DPDP Act 2023. The NHRC flagged a lack of verifiable parental consent and tracking mechanisms for children's data, demanding a compliance report within 15 days. Industry bodies like IAMAI argue the notice is premature, as the DPDP’s child-safety provisions (Section 9) don’t take effect until May 2027. The NHRC’s move signals a broadening enforcement landscape. By asserting jurisdiction over data protection through a human rights lens, the Commission is extending regulatory pressure well beyond MeitY’s traditional mandate—effectively creating a parallel track of accountability for data protection issues.

 

Peeking into the Parliament

Parliamentary committee weighs in on AI: the SCIT's AI report and what it means

India's Standing Committee on Communications and Information Technology (SCIT) released its report on the impact of artificial intelligence in March, offering a comprehensive parliamentary examination of AI governance in India. Spanning 29 recommendations, it covers everything from compute infrastructure to deepfakes — but two areas stand out.

On AI governance, the Committee endorsed the government's pro-innovation posture, anchored in the IndiaAI Mission and the AI Governance Guidelines (finalized in November 2025). It recommends that the government explore comprehensive legislation to regulate AI and urges Parliament to pass the Digital India Act (currently in formulation) to replace the IT Act.

On social media, the Committee backed exploring age-based restrictions for certain platforms, framing them as a safeguard against AI misuse. This aligns with broader momentum, across Indian states and at the Union level, toward active platform regulation. The SCIT's recommendations carry significant weight: the government has cited the need for parliamentary consensus before introducing national-level age restrictions, signalling intensifying legislative focus on child safety measures such as age verification and usage limits.

 

Report card on cyber safety of women

 

The Fourth Report of the Parliamentary Standing Committee on the Empowerment of Women (the “Committee” or CEW), titled “Cyber Crimes and Cyber Safety of Women” (the Report), was presented before both Houses of Parliament on 23 March 2026. The Report is based on inputs received from the Ministry of Home Affairs (MHA), MeitY, the Centre for Development of Advanced Computing (CDAC), the Cyber Peace Foundation, and Social Media Intermediaries (SMIs) such as Google and Meta, across four Committee sittings held between June and August 2025. It examines the vulnerabilities women and children face in the digital space and evaluates the legal, institutional, and technological response to the rise of cybercrimes against them. It also scrutinises the role and accountability of SMIs. We have a detailed summary of this report on our website here.

 

From the courtroom to your inbox 

 

  •      The DPDPA faces further scrutiny: The Digital Personal Data Protection Act, 2023, and the Rules notified late last year, are now facing a constitutional challenge before the Supreme Court. The Court has issued notice on the petition, with a bench led by CJI Surya Kant observing that data privacy has become a pressing global concern — and raising a question that cuts to the heart of the legislation: what exactly counts as "public" data versus "personal" data? The CJI's pointed query — how do we protect individuals when the Act contains sweeping provisions? — signals that the Court intends to examine not just the procedural validity of the law, but its substantive design. This challenge is significant given that the DPDP Rules were finalised only recently and are yet to be operationalised. A ruling here could reshape the architecture of data protection in India before it has even had a chance to take effect.
  •     When AI Training meets copyright: India's first test case awaits a verdict: The Delhi High Court has reserved judgment in ANI v. OpenAI—India's first legal test of whether training AI models on copyrighted content violates the Copyright Act, 1957. After 32 hearings since November 2024, the core question remains: does scraping news articles to build ChatGPT constitute fair dealing, or infringement? The ruling will determine whether India's copyright law accommodates machine learning or demands explicit consent before content fuels artificial intelligence. You can read more Ticker coverage of this case here and here.

  •      The Fact Check Unit is back in court: The government's Fact Check Unit (FCU) — struck down by the Bombay High Court in September 2024 for being vague and capable of chilling free speech — is back in legal reckoning. The Supreme Court has agreed to hear the Centre's challenge to that judgment, issuing notices to respondents including comedian Kunal Kamra and the Editors Guild of India. The bench, led by CJI Surya Kant, declined to stay the High Court's ruling for now but signalled it would hear the matter on merits, framing the core question as one of balance: how do you combat misinformation without silencing legitimate speech? The Centre maintained that the FCU was never meant to target satire or criticism — only to flag false information about government functions. Petitioners counter that when the government itself decides what is "fake," platforms may feel compelled to remove content regardless, making the chilling effect all but inevitable.
  •     Silence isn’t due diligence: The Rajasthan High Court held that social media platforms cannot stay passive when unlawful content appears. Justice Farjand Ali directed coordination with Meta to remove obscene images of a minor on Instagram, stressing that intermediaries have a “positive and continuing obligation” under the IT Act and 2021 Rules. The case followed a father’s complaint that the content remained online despite police action. The Court ruled that once flagged, platforms must act promptly to remove or block such material—delay amounts to failure of due diligence. Safe harbour under Section 79 applies only if this duty is fulfilled. Highlighting the lasting harm of non-consensual content, the Court called it an “enduring digital scar” and warned of its chilling effect on online expression. The message: intermediaries are expected to actively curb unlawful content and not wait for court orders.

Global tech stories

The LA Verdict and what Section 230 has to do with it

On March 25, a Los Angeles jury found Meta and YouTube liable for the harm caused to a young woman who began using Instagram at age nine and YouTube at age six. The jury concluded that the platforms were deliberately built to be addictive and that executives knew this — awarding $6 million in damages, with Meta bearing 70% of the liability. It was the first time a jury has held social media companies accountable for the design of their platforms, not just the content on them.

The plaintiff’s argument was that features like infinite scroll, auto-play, push notifications, and beauty filters made the apps equivalent to a "digital casino" — and that courts should treat these as product defects, not editorial choices. Section 230 of the Communications Decency Act has long protected platforms from liability for what users post, but the jury found that this immunity did not shield the platforms' algorithmic design and held them liable for it. The platforms are expected to challenge the verdict.

Not everyone sees the verdict as a clean win. Critics argue that the distinction between platform design and content is unstable — that when users are "addicted" to social media, what they are addicted to is ultimately third-party content, and that Section 230 should therefore apply to design decisions as well. Digital rights groups have raised a different concern — that the legal theories used against Meta could be turned against a far wider range of platforms, threatening foundational internet speech protections.

Reading reccos   

  •       On Platformer, Casey Newton closely analyzes the social media trials taking place in the US and their implications on Section 230.
  •       Reuters has reported on the many ways in which Indian movie studios are adopting AI, and how that is rewiring conventional business models.
  •       With Assembly Elections approaching, The Hindu examines the influx of AI-generated content and misinformation on platforms like Instagram.

 Shout-outs!  

Launch of the TechTicker Show:

·    We are excited to announce the launch of The TechTicker Show on YouTube! This podcast takes the themes of our newsletter into a deeper conversation at the intersection of tech, law, and policy. Our inaugural episode features Manu Kulkarni, Partner & Head of Dispute Resolution at Poovayya & Co. Manu represents global giants in landmark Indian court battles and has a sharp view on internet regulation in the country. You can watch the episode here.

In the spotlight

·    Nehaa Chaudhari and Sreenidhi Srinivasan are both part of the Cyber 20 Ranked Lawyers by Rise Rated Lawyers 2026.

In the media

  •      Nehaa Chaudhari spoke to the Indian Express about the recent policy conversations in Karnataka on a potential social media ban for children.
  •    Pallavi Sondhi was quoted in the Hindustan Times sharing her views on the recent draft IT Amendments and how it expands the power of the MIB. 

Challenge
the status quo

Sparking Curiosity...