The DPDP Rules have landed | Issue 72: November 2025

The year is wrapping up, and while some of us may be dreaming about a winter break, tech policy isn’t slowing down. After a long wait, the DPDP Rules are finally notified—which means the TechTicker team must, with a heavy heart, officially retire our stash of "waiting for the rules" memes.

But the DPDP Rules weren’t the month’s only headline. We also saw a potential proposal to widen the IT Rules with a new definition of “obscene digital content,” an advisory urging platforms to step up after the Red Fort blasts, new AI governance guidelines, fresh movement on gender-sensitive cyber safety, and a full slate of pre-summit activity leading up to the IndiaAI Impact Summit. So, grab your sweaters, get cozy, and let’s walk through everything November threw at tech policy in India.

Deep dive

The DPDP Rules, finally here

After months of refreshing the Gazette like a reflex, the Digital Personal Data Protection Rules, 2025 are here, giving organisations their first view of what implementation under India’s privacy law will look like. We have a more detailed breakdown of the Rules here. Here is a quick overview of what the Rules say:

  •     The countdown begins: The Rules roll out in stages. The provisions needed to set up the Data Protection Board are now in force. Consent managers and the Board’s powers activate in twelve months, and the rest of the framework — from notice standards and breach reporting to retention, children’s data, SDF obligations, and cross-border rules — goes live in eighteen months, on 13 May 2027. For organisations, the clock is ticking: better to start preparing now than to scramble in 2027.
  •     Valid notice and consent: The Rules require a standalone, plain-language notice that itemises what data is being collected, sets out the specified purpose(s) of processing, and clearly describes what the processing enables. It must also provide simple pathways to withdraw consent and exercise rights. The shift to “specified purpose(s)” may appear minor, but it suggests that some degree of purpose bundling (for consent) may be permissible. We’ll have to wait and see how market practice evolves.
  •     Data retention gets an update: The Rules introduce a new baseline for retention (which wasn’t present in the draft version). Every organisation must now retain personal data, associated traffic data, and logs for at least one year for defined lawful purposes before deletion. This sits alongside a separate rule requiring large platforms — e-commerce, gaming and social media intermediaries — to erase data after three years of user inactivity, with a 48-hour pre-deletion notice.
  •     Significant Data Fiduciaries (SDFs): SDFs fall under the Act’s enhanced compliance tier. They must conduct annual Data Protection Impact Assessments and audits, share key findings with the Board, and verify that the technical and algorithmic tools they use do not create risks to individuals’ rights. The criteria for SDF designation remain broad, and there are still questions about when the government will operationalise them. Many large or data-intensive companies should begin assessing themselves now.
  •     What next: With the Rules finally notified, the uncertainty is over — and the real work begins. Over the next 18 months, organisations will have to map their data flows, update notices and vendor contracts, prepare for breach reporting, put proper retention and deletion systems in place, revisit how they handle children’s data, and assess early whether they might be designated as SDFs. They’ll also need to keep an eye on future government directions on cross-border transfers and SDF categories, which could shape compliance in unexpected ways.
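For teams starting that mapping exercise, the retention and erasure timelines above can be sketched in code. This is a hypothetical illustration, not a compliance tool: the durations come from the Rules as described, but the function names are our own, and the fixed-length timedeltas are a simplifying assumption (the Rules count in years, not 365-day blocks).

```python
from datetime import datetime, timedelta

# Illustrative sketch of the DPDP Rules' retention timelines described above.
# The one-year and three-year periods are modelled as fixed-length timedeltas.
MIN_RETENTION = timedelta(days=365)           # baseline: keep data/logs at least 1 year
INACTIVITY_ERASURE = timedelta(days=3 * 365)  # large platforms: erase after ~3 years inactive
PRE_DELETION_NOTICE = timedelta(hours=48)     # notify the user 48 hours before erasure

def erasure_schedule(last_active: datetime) -> dict:
    """Earliest erasure date and the notice deadline for an inactive user."""
    erase_on = last_active + INACTIVITY_ERASURE
    return {"erase_on": erase_on, "notify_by": erase_on - PRE_DELETION_NOTICE}

def may_delete(collected_on: datetime, now: datetime) -> bool:
    """Baseline check: data may only be deleted once the one-year floor has passed."""
    return now - collected_on >= MIN_RETENTION

schedule = erasure_schedule(datetime(2025, 11, 13))
print(schedule["notify_by"])  # 48 hours before the computed erasure date
```

Real deployments would anchor these clocks to per-record events (last login, collection date) and reconcile the one-year retention floor with the three-year inactivity ceiling, but even this toy version shows why data-flow mapping has to start early.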

If you want a more in-depth analysis, you can check out our blog post here, and our recently held AMA, which answered various questions around data processing, notice, consent, and early implementation strategies.

Connecting the dots

A new ethics code for social media users?

Remember the Samay Raina controversy that made its way to the Supreme Court earlier this year? (If not, our previous deep dive has you covered.) The court had asked the Solicitor General to explore a regulatory framework for “obscene” online content — one that could respond to public outrage without trampling on free speech.

Reportedly, the Ministry of Information and Broadcasting has submitted to the Supreme Court that the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules, 2021) should be amended to include a definition of “obscene digital content” and a separate code of ethics on obscenity covering all digital content. The proposal aims to extend rules originally designed for broadcast TV to the entirety of the internet, with an impact that could resemble the earlier versions of the now-shelved Broadcasting Bill.

In the latest hearing of this case, the Supreme Court expressed concern about how harmful user-generated content often goes viral without any accountability. The court highlighted the need for stronger safeguards, including better age-verification systems, and even noted that a neutral, independent regulatory body may eventually be needed. It directed that the MIB’s proposed guidelines be released for public consultation and suggested that an expert committee be set up to study the issue in detail.

MeitY wants platforms on high alert after the Red Fort blast

On 21 November, just nine days after the Red Fort blast, MeitY released a critical advisory for intermediaries such as social media platforms to curb content that could disrupt public order.

MeitY noted that some platforms were still hosting posts that justified the alleged attackers or, even more concerning, described how explosives are made. It reminded intermediaries of their due-diligence obligations under the IT Rules, 2021 and urged them to take such content down quickly and proactively. The note made clear that leaving this material online isn’t a neutral act: it can inflame violence, disturb public order, and undermine national security. And while the advisory didn’t spell out specific BNSS sections, it did caution that non-compliance could attract penal consequences under the new code.

Even before MeitY’s advisory, the Ministry of Information and Broadcasting had urged cautious reporting on sensitive matters like the Red Fort blast. And with Operation Sindoor still fresh in political memory, it’s clear that security incidents now trigger a sharper, more sustained phase of digital scrutiny.

After the Pahalgam attack earlier this year, authorities blocked multiple social media accounts and a Parliamentary committee launched a wider inquiry into platforms and influencers seen as “working against the national interest.” This underscores the increased regulatory expectations around proactive, real-time moderation by platforms – especially for content linked to public order and national security.

The NCW wants the internet to be safer for women

On 4 November 2025, the National Commission for Women (NCW) submitted a detailed, year-long study to four Union ministries, urging a clearer, more gender-sensitive approach to India’s cyber laws. After holding consultations across eight states and two national forums, the NCW’s report offers more than 200 recommendations grounded in the lived realities of online harm.

The recommendations range from mandatory account verification and quicker takedown timelines for non-consensual content, to formally recognising harms like cyberbullying, trolling, and deepfakes, and even treating metadata and temporary files as proper digital evidence. The report also proposes stronger support systems, including dedicated victim-assistance mechanisms at the district level.

While these ideas aren’t law yet, they signal that the conversation is heading towards a more structured, safety-first approach, where platforms may face stronger expectations beyond their existing trust & safety policies. Regulatory pressure may next centre on platforms’ ability to keep users safe online.

An AI governance framework with an innovation-first approach

After releasing a draft version in January 2025, the government has finally published the IndiaAI Governance Guidelines.

The guidelines lay out an innovation-first vision for how India wants AI to grow. They introduce seven guiding principles (“Sutras”) for responsible AI — including safety, transparency, accountability, fairness, and keeping humans in the loop — without turning them into rigid mandates. They map out sector-wise risks (like bias in hiring, unsafe medical AI, and opaque financial models) and propose a risk-classification system to help spot high-risk use cases early.

Importantly, the Guidelines also take stock of India’s current legal landscape. They note that most AI risks can be handled within existing laws, but call for a systematic review to plug gaps, especially around intermediary liability for generative models, content provenance, and copyright issues, and to address them through targeted amendments.

The Guidelines also lay out an action plan: expanding AI skilling in tier-2/3 cities, improving access to compute and datasets through national assets like AI Kosh and DPIs, setting up the AI Safety Institute, and piloting regulatory sandboxes for high-risk applications.

Put simply: build fast, mitigate sensibly, and don’t smother innovation with rules unless you absolutely must.

IndiaAI Impact Summit watch

Pre-summit events galore

If you felt a ripple of déjà vu this November, you’re not alone. India’s AI calendar has become so full that ‘pre-summit events’ now feel less like warm-ups and more like regularly scheduled programming: the kind where industry, academia, and policymakers meet often enough to have favourite seats. Still, a few gatherings stood out this month for nudging the broader narrative forward in meaningful ways.

In New Delhi, the BSA & Ikigai Law AI Pre-Summit Forum unveiled the Enterprise AI Adoption Agenda, spotlighting responsible governance, workforce readiness, and smart pathways to scale innovation. We had the privilege of hosting Secretary S. Krishnan and Additional Secretary Abhishek Singh (MeitY) for keynote addresses, and IBM India’s Managing Director Sandip Patel set the stage for a rich discussion. With conversations anchored in the IndiaAI Impact Summit’s Social Good and Human Capital pillars, the Forum highlighted actionable pathways from adoption to real-world impact.

NASSCOM’s CXO Roundtables — from Pune to Kolkata — continued to anchor industry conversations around workforce skilling, resilience, and how AI can strengthen existing business models, especially outside the usual metro hubs.

Meanwhile, MozFest 2025 in Barcelona added an academic twist, championing inclusive Responsible AI with low-compute models and multilingual learning tools fit for real-world classrooms.

In the healthcare space, the CII Annual Health Summit zoomed in on AI and healthcare, exploring how technology can extend not just data streams, but meaningful, healthier years of life.

Inside the new IndiaAI experience zone

The “IndiaAI Zone,” established by Minister Shri Jitin Prasada at the newly inaugurated ‘MeitY Pavilion’ during IITF 2025, turns India’s booming AI ecosystem into an immersive, walk-through experience. Framed by the themes of Digital India, IndiaAI and MyGov, it showcases an “Action to Impact” journey across the seven pillars of the IndiaAI Mission. From AI Kosh to startups and safe AI, visitors get a lively preview of flagship initiatives, talent programmes and high-stakes global challenges, all in one engaging sweep.

From the courtroom to your inbox

The Sahyog portal tussle continues: X has challenged the Karnataka High Court’s recent decision upholding the Sahyog Portal. The Karnataka High Court had (among other things) described Sahyog as “an instrument of public good” and ruled that Article 19 protections apply only to Indian citizens—not foreign firms like X.

X’s appeal now challenges this interpretation, arguing that the portal effectively sidesteps the structured safeguards of Section 69A by subjecting platforms to a constant flow of takedown requests from multiple government departments. According to X, this turns content moderation into a system of rapid compliance, compressing due process and raising concerns around transparency, accountability, and free speech. The case thus places a spotlight on whether speed-driven enforcement can coexist with the legal guardrails meant to keep state power in check. Joining the challenge, DigiPub News India Foundation and journalist Abhinandan Sekhri have also appealed the High Court’s ruling, arguing that blocking via the portal curtails freedom of speech under Article 19(1)(a). 

Online money gaming funds terror: Amidst hearings on the constitutionality of the Promotion and Regulation of Online Gaming Act, the Centre has filed an affidavit responding to the batch of petitions by gaming companies, asserting that certain real-money gaming platforms are doubling as vehicles for terror financing and money laundering. The Act itself cites such terror-financing links in its reasoning for enactment. The government noted that these inputs were collected from several investigations, including those revealing the use of bank accounts belonging to individuals like students or homemakers as "shields" for criminal organisations. It has even offered to place classified material before the bench if required. Beyond questions of innovation and consumer harm, national security concerns will also have to be closely addressed in this case.

The rise of personality rights injunctions: In November, Indian courts continued to sharpen their focus on protecting identity and reputation in the digital age. The Delhi High Court ordered Google to take down deepfake-driven YouTube channels targeting senior journalist Rajat Sharma and share subscriber and monetisation details, while also blocking sites misusing Jaya Bachchan’s images. The High Court also granted podcaster Raj Shamani interim relief, recognising that his name, likeness, voice and image are protectable personality rights and restraining unauthorised use, including AI deepfakes. Shortly after, the Madras High Court issued interim protection in the case of Ilaiyaraaja, prohibiting digital platforms from using his name, image, voice or AI-generated likeness without consent — signalling that modelled or morphed content of a person’s attributes may breach personality rights. A growing list of creators and public figures appear to see these rulings as door-openers to tighter control over how their persona is used online. At this pace, “don’t use my face” might become the most sought-after injunction.

Global tech stories

Europe’s pivot on AI and privacy

Europe, long seen as the global standard-bearer for strict digital rights, is signalling a nuanced shift in its regulatory approach through the Digital Omnibus Proposal. The European Commission has proposed easing parts of its AI and privacy regime: delaying stricter rules for high-risk AI to December 2027, relaxing cookie-consent requirements, and clarifying when data ceases to qualify as personal. Beneath this lies a strategic recalibration: with rising AI competition from the U.S. and Asia, Europe appears keen to give businesses more room to innovate without constant regulatory friction.

Activists and civil-rights groups, however, call this a massive rollback of digital rights, arguing that flexibility for industry could come at the expense of long-standing protections. Other critics assert that the European Commission is responding not just to innovation concerns, but to external pressure, namely from the Donald Trump administration in the U.S. and powerful tech-industry lobbies. They warn this isn’t mere simplification but a capitulation that could weaken precisely the safeguards Europe spent a decade constructing.

The Digital Omnibus proposal still needs the blessing of EU member states and the European Parliament, so the ink is far from dry. For companies, it offers a two-fold opportunity: a longer runway on timelines and more flexibility in compliance, potentially signalling how even the EU is adapting to the realities of global AI competition.

Reading reccos

  • Deepak Varuvel Dennison writes a thoughtful essay in Aeon on how generative AI omits vast swathes of indigenous knowledge and what that means for preserving culture.
  • In TechPolicy.Press, Mark McCarthy analyzes the AI Bubble and what policymakers must prepare for.
  • Mint has good reporting on the impact of the Labour Codes on quick commerce and delivery platforms, and what it means for gig workers on the ground.

Shout-outs!

Event spotlight

Truth, Trust & Technology – A Policy Dialogue: As conversations around online speech, misinformation, deepfakes and digital media accountability grow increasingly urgent across India, Ikigai Law and the National Law School of India University convened Truth, Trust & Technology in Bengaluru to bring clarity to the chaos. The policy dialogue created a shared space for policymakers, journalists, academics and legal experts to unpack how India should navigate the delicate balance between regulating harmful content and preserving free expression. The response was phenomenal.

Shri Priyank Kharge, Karnataka’s Minister for Electronics, IT/BT & RDPR, set the tone in a candid keynote and Q&A. He outlined how the state is thinking about misinformation, walked through the draft Misinformation Bill, and shared details of the proposed Information Disorder Tracking Unit. This was followed by two panels that examined the constitutional limits on state-level speech regulation, interrogated the nature of online speech rules, and explored practical, less punitive ways to tackle misinformation beyond criminal law and state-controlled fact-checking.

If you want the full picture, you can read more in our blog post, “Truth, Trust & Technology: A Policy Dialogue on Online Speech Regulation.”

In the media

Signing off,

Ticker team for this edition: Ayush, Nehaa, Nirmal, Vijayant

Image Credit: Created by Nirmal

Challenge the status quo

Sparking Curiosity...