TechTicker Issue 73: December 2025

Wishing you a very happy new year from the TechTicker team! We hope this edition finds you in that brief pocket of calm before calendars fill up, and the inevitable “circling back” from the winter break gets going.

 


Tech policy, of course, didn’t take a winter break. December delivered a steady stream of developments — from the government finally laying out some of its thinking on copyright and AI, to Parliament flagging unresolved questions in digital governance, and pre-summit conversations picking up pace as the IndiaAI Impact Summit draws closer.

So, grab your coffee and settle in. There’s a lot to catch up on — and plenty to unpack as 2026 gets underway.

Deep-dive

The price of training AI

In December, the Department for Promotion of Industry and Internal Trade (DPIIT) attempted to answer a global question on AI and copyright: can AI systems be trained on copyrighted material, and if so, on what terms?

Through its Working Paper on Generative AI and Copyright (WP or Paper), open for public comment until February 6, 2026, DPIIT proposes a new hybrid licensing system for AI training.

The WP is the product of a dedicated Committee constituted amidst ongoing conversations around AI and copyright in India, from the recently released IndiaAI governance framework to the existing dispute between OpenAI and ANI. Alongside officials from DPIIT and the Ministry of Electronics and Information Technology (MeitY), the Committee includes representatives from industry bodies such as NASSCOM, practicing lawyers, and academic experts.

The Paper is only Part I, which deals with copyright infringement in AI training; Part II, focusing specifically on the copyrightability of AI outputs, is in the works.

So what's on the table? Here are the most important takeaways from the report:

a.   Copyright law wasn’t built for AI

The Working Paper begins by mapping how AI training interacts with the Copyright Act, 1957. Training large language models involves copying, storing, and processing massive volumes of data—often through multiple temporary reproductions. All these activities fall squarely within the exclusive rights of copyright owners, meaning they are likely to constitute infringement under existing copyright law.

While AI developers argue that training only extracts statistical patterns rather than expressive content, the Paper notes that Indian courts have never recognised a blanket exemption for such “non-expressive” use. Whether training amounts to infringement remains a fact-specific, case-by-case determination—an uncertainty that creates real legal risk for AI developers operating at scale.

The Working Paper is equally sceptical of relying on fair dealing under Section 52 as a solution. It highlights that India’s fair dealing framework is narrow, cannot be invoked proactively, and offers little predictability for commercial AI training. Expanding fair dealing, the Committee concludes, is neither sufficient nor desirable as a long-term policy response.

Source: Created by Nirmal

b.   Licensing everything sounds good (until you try)

The Working Paper systematically evaluates global approaches and finds most of them wanting. Blanket text and data mining (TDM) exceptions are rejected for stripping creators of both control and compensation. Opt-out TDM regimes are viewed as administratively unworkable, dependent on perfect transparency and machine-readable signals that do not yet exist.

Voluntary and direct licensing models are rejected as structurally impossible. Negotiating licences with millions of rightsholders would raise transaction costs to prohibitive levels and entrench incumbents who can afford large licensing deals. Even collective and extended collective licensing models are treated cautiously, with the Committee warning against hold-outs, excessive royalty demands, and veto power that could choke AI innovation.

c.   DPIIT’s core proposal is a hybrid licensing model

The most important recommendation is a hybrid statutory licensing model for AI training. Under this framework, AI developers would be permitted to train on all lawfully accessible copyrighted works without seeking individual permissions. In return, copyright holders receive a statutory right to remuneration and cannot opt out of AI training.

The Committee presents this as a deliberate trade-off. Developers gain legal certainty and access to diverse datasets; creators gain a guaranteed, economy-wide revenue stream linked to AI commercialisation. The model is also framed as startup-friendly, reducing transaction and compliance burdens while helping mitigate dataset bias through broader access.

To operationalise this system, the Paper proposes a new umbrella body—the Copyright Royalties Collective for AI Training (CRCAT)—to collect, allocate, and distribute royalties through existing copyright societies.

d.   How should you pay for AI training?

Recognising that usage-based licensing is impractical for AI systems, the Working Paper proposes a flat revenue-share model. Royalties would be calculated as a percentage of the global gross revenue generated by a commercial AI system, not per-work usage.

Rates would be set by a government-appointed committee and revised periodically, with judicial review as a safeguard. Developers would also be required to make high-level disclosures about training data categories. Notably, the framework is retrospective. Existing AI systems trained on copyrighted works would be brought within the royalty regime—a design choice that is likely to be one of the most debated aspects during public consultation.
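To make the arithmetic of a flat revenue-share model concrete, here is a minimal sketch. The 2% rate, the revenue figure, and the split among copyright societies are purely hypothetical illustrations (the Paper leaves actual rates to a government-appointed committee and distribution to the proposed CRCAT); only the structure — a percentage of global gross revenue rather than per-work usage — comes from the Working Paper.

```python
# Illustrative sketch of a flat revenue-share royalty scheme.
# All numbers below are hypothetical; the Working Paper does not fix them.

def annual_royalty(global_gross_revenue: float, rate: float) -> float:
    """Royalty owed by a commercial AI system: a flat percentage of its
    global gross revenue, independent of how many works were used."""
    return global_gross_revenue * rate

def allocate(royalty_pool: float, society_shares: dict[str, float]) -> dict[str, float]:
    """Split a collected pool among copyright societies (the distribution
    role the proposed CRCAT umbrella body would coordinate)."""
    assert abs(sum(society_shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return {name: royalty_pool * share for name, share in society_shares.items()}

# Hypothetical: a system earning $500m globally at an assumed 2% rate.
pool = annual_royalty(global_gross_revenue=500_000_000, rate=0.02)
payouts = allocate(pool, {"text": 0.5, "music": 0.3, "images": 0.2})
print(pool, payouts)
```

The point the sketch makes is the trade-off the Committee describes: the developer’s liability depends only on its own revenue and a published rate (predictable, low transaction cost), while valuation of individual works disappears into the allocation step — which is precisely what publishers worry about.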

e.   Nobody’s fully happy and the road is unclear

Predictably, the response has been mixed. NASSCOM, representing firms like Google and Microsoft, has dissented, calling the scheme a “tax on innovation” and urging a broad text-and-data-mining exception instead. BSA (which represents other tech giants such as Adobe and IBM) made a similar call, urging the government to adopt a TDM exception in the interest of AI innovation. Some startup founders warn that compliance costs and bureaucracy could negatively impact India’s young AI ecosystem. Creators are cautiously receptive but uneasy. Legal experts welcome the certainty but worry about the loss of consent. Publishers, in particular, fear losing control over their catalogues, facing opaque valuation methods and government-set rates that may underprice their work.

What next

There are real concerns with the Working Paper. But this is one of the first serious attempts by a government to clearly articulate how AI training and copyright might coexist—and to explore the trade-offs involved. Whether this proposed model survives consultation, revision, and implementation is an open question. For now, the conversation is finally on the table. And Part II is still coming.

For a more detailed legal and operational breakdown of the proposed framework—including CRCAT’s structure, royalty distribution mechanics, and the suggested enforcement design—you can read Ikigai Law’s full blog post on the DPIIT Working Paper.

Connecting the dots

Grok draws scrutiny from MeitY

Grok, the AI chatbot on X, has come under the scanner not just of the government in India, but of regulators around the world.

On 2 January 2026, the Ministry of Electronics and Information Technology (MeitY) issued a formal notice to X after reports that Grok was being used to generate and circulate obscene and sexually explicit images and videos of women and children. The platform was given 72 hours (later extended to 7 January) to submit a detailed report on its internal practices. The notice flagged potential violations of the IT Act, 2000 and the IT Rules, 2021, and warned that failure to meet due-diligence obligations could lead to loss of safe-harbour protection under Section 79.

Officials were not satisfied with the response submitted by X, which asserted its compliance with the law and highlighted its content moderation practices. MeitY has requested more details, indicating that the matter is far from closed. Elon Musk, for his part, has responded with a firm warning on X to users creating such content. The platform has echoed that stance, promising swift removals and permanent bans for rule-breakers. And India isn’t alone in raising an eyebrow – countries including the UK, France, and Malaysia have raised concerns about how Grok is being used.

This episode reflects a continuing trend of heightened scrutiny of AI-enabled platforms. Safe harbour protections no longer seem automatic — they may increasingly depend on how actively platforms prevent, detect, and respond to content misuse.

IT Ministry reminds intermediaries that safe harbour is conditional

The Grok notice comes close on the heels of a recent advisory from MeitY. The advisory urged intermediaries to tighten compliance with their existing due-diligence obligations for preventing the circulation of obscene, vulgar, pornographic, and other unlawful content online. Framed explicitly as a reminder — not a rule change — the advisory reiterates responsibilities under the IT Act, 2000 and the IT Rules, 2021, while making clear that enforcement will be stricter. Non-compliance, the Ministry warned, could trigger loss of safe-harbour protection and penal action under the IT Act and other applicable laws.

This advisory also builds on MeitY’s October 2025 SOP on non-consensual intimate imagery (NCII) (covered in our earlier issue of the Tech Ticker), which tightened takedown timelines, mandated safeguards against re-uploads, and required closer coordination with the Indian Cybercrime Coordination Centre (I4C). Faster takedowns, stronger moderation systems, and demonstrable compliance may increasingly determine whether safe harbour immunity will be applicable.

States are stepping up to regulate hate speech on their own terms

While the Centre is grappling with deepfakes, AI-generated content, and platform-level safeguards, state governments are increasingly willing to step into the terrain of speech regulation. In Karnataka, the Hate Speech and Hate Crimes (Prevention) Bill has been passed by the legislature and is currently awaiting the Governor’s assent. The Bill adopts a stringent approach, combining criminal penalties with powers for state authorities to direct the takedown or blocking of online content deemed to constitute hate speech or hate crimes.

The move has triggered pushback on multiple fronts. Technology companies and industry bodies had urged Karnataka to open the Bill up for consultations. Simultaneously, the opposition party in Karnataka is attempting to prevent the bill from becoming law and has reportedly staged a protest, submitting a memorandum to the state government, calling the legislation unconstitutional. In neighbouring Telangana, Chief Minister A. Revanth Reddy has announced plans to introduce a state law to curb hate speech targeting any religion or community, signalling that more such interventions may follow.

These moves raise questions about how a patchwork of state-level speech regulations will affect online content. As part of a policy dialogue we organised in November, we examined the states’ approach of entering the domain of speech regulation. You can read more about that discussion here.

Peeking into the Parliament

The Winter Session of Parliament ran from December 1 to December 19, 2025, packing 15 sittings into under three weeks. Unsurprisingly, technology, digital policy, and governance featured across a range of discussions.

  •     Private Members’ bills flag gaps in digital governance: During the recent Winter Session of Parliament, several Private Members’ Bills (PMBs) were introduced to highlight regulatory gaps in India’s digital governance framework. While PMBs rarely become law, they often flag emerging policy concerns in the country. One such bill, the Regulation of Deepfake Bill, 2024, was tabled in the Lok Sabha by Shiv Sena MP Shrikant Shinde to regulate deepfakes and protect individuals from malicious use of synthetic media by mandating prior consent and penalties for misuse. It also sought a dedicated Deepfake Task Force to study national security and privacy implications and develop detection technologies. Another bill was the Artificial Intelligence (Ethics and Accountability) Bill, 2025, introduced by BJP MP Bharti Pardhi in the Lok Sabha, which seeks to establish an ethical and accountability framework for AI systems, including transparency obligations, audits for algorithmic bias, and oversight by a statutory ethics committee — with penalties up to ₹5 crore and potential criminal liability for misuse.
  •    Broadcasting Bill still on the cards: The Broadcasting Bill has not been shelved, with the government confirming in Parliament that it remains under active consideration. In response to questions in the Rajya Sabha, Minister of State for Information and Broadcasting Dr. L. Murugan said the draft is being revised after wide and extensive stakeholder consultations, countering speculation that the legislation was dead. The proposed bill — the first version of which was released for public comment in November 2023 — seeks to overhaul the regulatory framework for broadcasting services, including television and digital content, and has attracted industry and creator commentary due to its scope and potential impact on online platforms.

IndiaAI Impact Summit watch

As December drew to a close, activity around the India AI Impact Summit 2026 (scheduled for February 15–20, 2026, in New Delhi) picked up noticeable momentum.

  •      Summit’s vision and framing: In late December 2025, the Ministry of Electronics & Information Technology and the IndiaAI Mission formally briefed media on the Summit’s core vision: “Democratising AI, Bridging the AI Divide.” The focus is on making AI widely accessible as a horizontal technology that supports inclusive development and bridges global divides in capability. This emphasis mirrors the position articulated by the Office of the Principal Scientific Advisor (OPSA), which pushed for the democratisation of AI through a whitepaper released around the same time.
  •     Global momentum and participation: Reports indicate that 100+ global CEOs and top tech leaders may attend the Summit, signalling strong international interest in India’s AI leadership role. Ahead of the Summit, India plans to present six compendiums of sectoral AI case studies to the world — spanning health, energy, gender, agriculture, education, and disability — developed in collaboration with multilateral bodies like the WHO and World Bank.
  •  Regional pre-summit events and showcases:

o   Working group meetings: Early January saw thematic preparations intensify. At the Human Capital Working Group meeting held at IIT Guwahati on January 6–7, senior policymakers, academics, industry experts and practitioners gathered to discuss education, workforce transitions, and human-centric AI adoption. The discussions — convened by MeitY, the IndiaAI Mission and state partners — are intended to feed into national AI policy outcomes that will be showcased at the Summit.

o   Rajasthan’s AI conference: Rajasthan released its state-level AI Policy at the Rajasthan Regional AI Impact Conference. Union Minister Ashwini Vaishnaw (participating virtually) and other officials highlighted India’s intent to drive transformation at scale and expand training for one million young people in AI skills under the India AI Mission. In parallel, industry and regional ecosystem events like Rajasthan DigiiFest (January 4–6) positioned local startups and entrepreneurs in the broader AI conversation, signalling that pre-Summit engagement is stretching beyond the national capital into state-level innovation hubs.

  • AI4ALL Initiative: In collaboration with Meta, Ikigai Law conducted two pre-summit events as part of the AI4ALL Initiative series. The roundtables in New Delhi and Bengaluru set out to examine how AI can enhance inclusion, and the challenges of ensuring inclusive AI is deployed at population scale. These will be followed by a concluding roundtable in Mumbai (you can find more details here).

From the courtroom to your inbox

  •       Delhi High Court continues to draw lines on AI misuse to protect celebrities: On December 22, 2025, actor R. Madhavan became the latest public figure to secure interim protection, with the Court restraining digital platforms and websites from using his name, photographs, or related material without prior permission. Multiple websites were also directed to take down objectionable content linked to him. Notably, Madhavan first approached social media platforms directly — reflecting the Court’s earlier ruling in the case involving Ajay Devgn, which clarified that individuals seeking urgent takedowns must first exhaust statutory grievance redressal mechanisms under Rule 3 of the IT Rules, 2021 before moving the courts. The High Court has also signalled that it is now engaging with broader legal questions around AI-generated content, with the matter listed for further hearing in May 2026.
  •       Apple vs. Competition Commission of India: Apple's battle with India's competition watchdog rolled into December, but the stakes remain unchanged. At the Delhi High Court, the company is challenging provisions that let the Competition Commission of India levy penalties based on a company's global turnover for abuse of dominance or anti-competitive conduct. Apple argues this 2023 amendment oversteps constitutional boundaries. The CCI's counter? Without global-turnover penalties, companies with negligible India revenue would slip through the enforcement net entirely. With the next hearing set for January 27, 2026, this case will shape how India penalizes competition violations, and whether a company's global success can factor into its penal liability.
  •       NCLAT draws limits on WhatsApp – Meta data sharing: The National Company Law Appellate Tribunal (NCLAT) clarified that WhatsApp cannot use user data for advertising without users' explicit, revocable consent. The Tribunal stressed that users must be given a genuine choice to opt in or out of data sharing, along with clear explanations of how their data is used and shared across entities. The clarification follows an earlier ruling that lifted a five-year ban on WhatsApp sharing user data with Meta for advertising purposes – after the CCI had found that WhatsApp's 2021 privacy policy forced users into a "take-it-or-leave-it" corner, undermining autonomy while strengthening Meta's advertising dominance.

Reading reccos

  •      Akshat Agrawal in SpicyIP presents a thoughtful argument on how copyright is ill-suited to deal with the issues Generative AI poses for content creators.
  •     Nikhil Pahwa has a great essay about how AI is fundamentally disrupting the way we use the internet.
  •       In Platformer, Casey Newton walks readers through an extremely popular Reddit post by a “whistleblower” that turned out to be entirely AI-generated.

Shoutouts

Upcoming events

  •      ITech Law Conference is happening: The ITech Law Conference is coming to India this month, and our Partner Nehaa Chaudhari is one of the co-chairs for the conference. It’s happening between January 29 and 30 in Bengaluru. If technology law, digital regulation, or policy questions keep finding their way into your work (or your group chats), this is very much a mark-your-calendar kind of event. Early bird registrations close on January 9, 2026; details are available on the conference website.

In the spotlight

  •      We’re delighted to share that Ikigai Law was recognized by Chambers & Partners as a band 2 ranked practice in the Technology, Media & Telecommunications (TMT) category as well as the FinTech category.
  •       On the individual front, our Managing Partner Anirudh Rastogi has been recognized by Chambers & Partners as a band 1 ranked practitioner in FinTech and a band 2 ranked practitioner in TMT, while our Partner Nehaa Chaudhari has been recognized as a band 3 ranked practitioner in TMT.
  •     We’re also proud to see Aparajita Srivastava recognized as a band 3 ranked practitioner in FinTech, and Astha Srivastava being named among the “Associates to watch” in the FinTech category.
  •       Astha Srivastava also spoke at the Cyber Security Grand Challenge 2.0, engaging with founders on regulatory and compliance considerations for cybersecurity startups.

 

In the media

 

  •      Pranav Mody and Chikita Shukla co-authored an opinion piece on “Balancing India’s AI Ambitions: CCI’s approach to competition and innovation,” discussing the Competition Commission of India’s AI market study and its implications for competition and innovation.

 Signing off,

Ticker team for this edition: Ayush, Nehaa, Nirmal, Vijayant

 For any queries or information reach out to us at contact@ikigailaw.com

Challenge the status quo