Tech Ticker Edition 75 (February – 2026): India’s AI Moment

February was anything but incremental for tech policy. India hosted the world during the India AI Impact Summit 2026, showcasing the country’s growing AI ecosystem and announcing a slate of commitments by the end of the week. The Summit did something far harder than setting records in attendance and investment: it established India as an indispensable voice in deciding how AI is built, governed, and shared – not just within its borders, but globally. The month also saw new regulatory developments – the IT Rules were amended, and the conversation on restricting children’s access to social media platforms gained momentum.

As summer begins, grab some cool watermelon juice, settle in, and join us as we unpack all of this in this month’s edition.

After the India AI Summit

Source: Created by Ayush

Deep dive

AI is Everything, Everywhere, All at Once

Somewhere between the infamous robot dog and the $250 billion in investment pledges, a quiet but significant milestone slipped by: for the first time, the world's AI conversation had moved south. The IndiaAI Impact Summit 2026, held from 16–20 February in New Delhi, was the first summit in this series hosted by a Global South nation, and it drew delegations from over 100 countries and 20+ international organisations.

The AI summits that came before this one – Bletchley in 2023, Seoul in 2024, Paris in 2025 – were anchored in the language of risk, safety, and action. New Delhi focussed on impact: where should this technology go, and whom should it serve?

The picture is not without its complications. India at this summit was playing three distinct roles simultaneously: the diplomat, shaping how the world governs AI; the consumer, leveraging its scale to attract unprecedented capital, talent and infrastructure; and the builder, showcasing its sovereign and technical capabilities. Each role carries its own tensions.

The diplomat: Reframing the global conversation, and the work still needed to lead it

India's contribution at this summit was not merely a launch of new technology or investment announcements. It was a narrative and strategic reframing, one that more strongly inserted the Global South into a conversation that had, until now, been shaped largely by countries from the West.

For context: Bletchley Park drew around 28 signatories; Seoul convinced 10 countries for the Declaration and 27 for the broader ministerial statement. New Delhi brought in over 90 – including, notably, both the United States and China. India has, in one summit, more than tripled the breadth of the global AI governance conversation and brought in the nations that represent the majority of the world's people. The Declaration they signed together embeds a principle that reflects exactly that demographic weight: that AI's benefits must be equitably distributed, not merely equitably accessible.

Over five days at Bharat Mandapam, the summit produced a remarkably tangible set of outputs for a gathering of this scale:

  • New Delhi Declaration on AI Impact – The headline multilateral statement, endorsed by 89+ countries and organisations, built around seven pillars: democratising AI resources, economic growth, safe and trusted AI, AI for science, social empowerment, human capital, and resilient systems. Non-binding but expected to shape follow-on guidance globally.
  • Charter for the Democratic Diffusion of AI – Paired with the MAITRI platform, a federated Digital Public Good that countries can adopt and build upon to expand access to AI infrastructure.
  • Framework for the Trusted AI Commons – A shared repository of technical AI safety resources to support responsible deployment across the full lifecycle of AI systems.
  • Guidance Note on AI Governance – Eight principles for national governments on building proportionate, adaptive, and multi-stakeholder AI governance frameworks.
  • Alliance for Advancing Inclusion Through AI – A voluntary forum to advance AI-for-inclusion initiatives, operating across three tracks: a use case repository, knowledge-sharing, and a digital literacy resource hub.
  • Network of AI for Science Institutions – A global network to connect researchers and institutions and accelerate AI-driven scientific discovery across disciplines.
  • Guiding Principles on Skilling & Reskilling – A human-centric framework for workforce transition in the age of AI, covering lifelong learning, job quality, and employer-led reskilling.
  • Guiding Principles on Resilient, Innovative & Efficient AI – Focused on energy-efficient AI, environmental transparency, and international cooperation on sustainable infrastructure; accompanied by a new Playbook and the Resilient AI Challenge.
  • New Delhi Frontier AI Commitments – Two forward-looking pledges: one on generating anonymised insights into real-world AI deployment, and another on strengthening multilingual and contextual AI evaluations, particularly for under-represented languages.
  • Thematic tracks on AI & Child Safety, AI & Climate, and Human Flourishing with AI – Each producing dedicated recommendations, with the Child Safety track notably proposing age tokens, interoperable parental controls, and a Youth Safety Advisory Council for India.
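Of the Child Safety track's proposals, "age tokens" are the most concrete technical idea, though no specification has been published. As a purely illustrative sketch – every name, key, and token format below is our hypothetical assumption, not anything the track proposed – one plausible shape is a signed attestation from an age-verification provider that a platform can check without ever learning the user's identity or date of birth:

```python
import base64
import hashlib
import hmac
import json

# Hypothetical demo key; a real scheme would use an age-verification
# provider's asymmetric signing key, not a shared secret.
SECRET = b"shared-demo-key"

def issue_age_token(over_16: bool) -> str:
    """Issuer signs a minimal claim: whether the holder is over 16.
    Crucially, the token carries no name or date of birth."""
    claim = base64.urlsafe_b64encode(json.dumps({"over_16": over_16}).encode())
    sig = base64.urlsafe_b64encode(hmac.new(SECRET, claim, hashlib.sha256).digest())
    return (claim + b"." + sig).decode()

def check_age_token(token: str) -> bool:
    """Platform verifies the signature, then reads the yes/no claim."""
    claim_b64, _, sig_b64 = token.encode().partition(b".")
    expected = base64.urlsafe_b64encode(
        hmac.new(SECRET, claim_b64, hashlib.sha256).digest()
    )
    if not hmac.compare_digest(sig_b64, expected):
        return False  # forged or tampered token
    return json.loads(base64.urlsafe_b64decode(claim_b64))["over_16"]

token = issue_age_token(over_16=True)
assert check_age_token(token) is True
```

A production scheme would need expiry, revocation, and far stronger privacy engineering; the point of the sketch is only that an "age token" can carry a verifiable yes/no claim rather than personal data.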

To spotlight the ethical boundaries that must be upheld, India articulated the MANAV doctrine – a values-driven framework centred on human dignity, responsibility, and societal benefit. India also launched VoicERA, an open-source voice AI stack built on national language infrastructure, helping speakers of diverse languages access the benefits of AI and thereby widening its societal reach.

Groups such as Amnesty International and Mozilla argued that civil society, labour groups, and directly affected communities had limited representation in high-level deliberations. Critics pointed out that India's ambition to speak for the Global South is harder to sustain when the summit's spotlight remained on large corporations, mirroring the power asymmetries the Declaration sought to address.

A fair critique of summits like these is that voluntary, non-binding outcomes aren’t as durable as the world needs them to be. But a closer read of the New Delhi deliverables suggests India was aware of that trap. Several outcomes come with built-in continuity mechanisms – the Alliance for Advancing Inclusion Through AI establishes a standing forum with three operational tracks; the Trusted AI Commons is designed as a living repository, not a one-time document; the Network of AI for Science Institutions is structured around sustained collaboration rather than a single convening. In a domain where alignment is as much a geopolitical question as a technical one, that architecture matters. The harder task now is to keep the momentum going – institutionalising these forums and ensuring periodic reviews, so that they become a real and recurring fixture in the global AI calendar rather than a footnote to a summit.

The consumer: Turning scale into strategy, not just attraction

India entered the summit with a form of leverage that few countries can rival: the sheer scale of its population and its rate of AI adoption. The evidence of its attractiveness was impossible to miss. OpenAI highlighted 100 million weekly users in India and formally launched operations in the country from the summit stage. Google committed $15 billion over five years, including a gigawatt-scale compute hub and a subsea cable connecting India directly to the US. Reliance pledged $110 billion toward national AI infrastructure; the Adani Group another $100 billion. No AI company operating at global scale can afford to treat India as a secondary market.

But market scale and technological agency are not the same thing. Analysts and industry have increasingly noted that when advanced chips, frontier models, and hyperscale cloud infrastructure remain concentrated abroad, and when data generated by hundreds of millions of Indian users contributes to model development elsewhere, size does not automatically translate into domestic capability or control. Over time, favourable entry terms, partnerships, and pricing from foreign firms can deepen dependencies — through rising costs, platform lock-in, and data safety risks — consolidating market power rather than dispersing it.

The critical question, then, is whether India is using its scale to negotiate stronger terms and build long-term capability — or whether it risks remaining a large and lucrative market that others build on top of. Initiatives such as Pax Silica signal a genuine intent to prioritise AI sovereignty and reduce supply chain dependence, but they address only part of the challenge. India's scale must translate into systematic domestic ownership across the AI stack — from data, chips, and models to platforms and standards — so the country does not remain only a consumer but becomes a long-term builder and owner of AI systems.

The builder: Real progress, and the follow-through it demands

The summit also offered India a stage to demonstrate progress under the IndiaAI Mission. The signals of progress were tangible: Sarvam AI launched 30- and 105-billion-parameter models trained on domestic infrastructure, achieving state-of-the-art benchmarks on criteria relevant to India. BharatGen's Param2 (a 17-billion-parameter multimodal foundation model supporting 22 Indian languages across text, speech, and vision) was presented as public digital infrastructure rather than a proprietary product. The expanded access to 58,000 GPUs at subsidised rates points to a growing foundation for sovereign capability. These are not token gestures – they represent real investment in building at the foundational layer.

Several Indian startups also showcased domain-specific AI solutions aimed at tangible impact – from voice AI capable of multilingual speech synthesis to AI platforms targeting challenges in education, diagnostics, fintech, agriculture, and manufacturing. These on-the-ground innovations reflected an ecosystem that is beginning to build across the stack, from core models to applications.

At the same time, the summit offered a candid look at where the gaps remain. A robot dog in the exhibition halls (a Chinese-manufactured product showcased as an indigenous creation) became an inadvertent symbol of the distance between indigenisation in policy and indigenisation in practice. The earlier reduction in the IndiaAI Mission's budget raised some questions about the sustainability of public funding commitments, even as foreign investment interest grows.

While the frontier AI commitments addressed important issues such as transparency in sectoral AI deployment and support for diverse languages and cultural contexts, other challenges critical to Global South builders – including interoperability standards, funding parity, and workforce development – remained largely confined to working-group discussions and document-level outcomes.

None of this diminishes the progress on display, but it highlights that India still has a lot more left to do. The "third way" India has sought to promote – avoiding both Western dominance and exclusionary protectionism (as practised by China, for instance) – is still a valuable position to hold, but whether the Summit becomes a turning point or a missed opportunity will depend on how consistently India converts ambition into institutional depth and sustained capacity.

Three roles, one trajectory for India

The India AI Impact Summit highlights India's growing importance as a diplomat, a consumer, and a builder. As a diplomatic actor, India's credibility depends on institutional follow-through and on ensuring that the voices it claims to represent are genuinely included even after the Summit. As a consumer, its scale is a genuine asset – one that works better as a negotiating position than as a passive invitation. As a builder, the foundations are visible: there is compute access, there are domestic models, and there is an expanding startup ecosystem. What matters now is sustained funding, deep research capacity, coordination, and long-term integration across the AI stack.

If there’s one takeaway from the Summit, it is that AI is no longer the domain of a few players. It is being shaped, contested, and built in India simultaneously.

Connecting the dots  

Labelling deepfakes and quicker takedowns – new amendments to the IT Rules

If you use AI to generate a meme, a voice clip, or a video before posting it on social media — you may soon need to declare that before it goes live. And if a platform fails to act on a complaint about it within three hours, it could lose the legal protections that have long shielded it from liability.

MeitY's amendments to the IT Intermediary Rules, which took effect on 20 February 2026, are India's sharpest regulatory move on synthetic media to date. At one end of the spectrum, four categories of AI-generated content are now flatly prohibited and must be taken down by platforms to protect users: deepfakes of real people, synthetic voices used to impersonate, AI-fabricated documents, and non-consensual intimate imagery. Everything else that is AI-generated — the meme, the AI-polished video, the synthetically enhanced audio — must be clearly labelled so you know what you're looking at. Only genuinely routine edits are excluded: adjusting the brightness of a photo, trimming background noise from a video, or using a filter on a selfie don't cross the threshold.

What this means in practice: platforms must label AI-generated content you encounter in your feed, embed provenance markers so its origin can be traced, and — for larger platforms — ask you to declare upfront if what you're posting was AI-generated. The idea is that by the time synthetic content reaches you, it should be identifiable; and if it isn't, the platform bears the consequences.
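The provenance-marker requirement is the most technical piece of this. The Rules do not prescribe a format (real deployments would likely lean on an industry standard such as C2PA), but a minimal sketch – with all function names and the manifest fields being our hypothetical assumptions – shows the core idea: bind the AI-generated label and origin metadata to a cryptographic hash of the content, so the label travels with the content and any tampering is detectable:

```python
import hashlib
import json

def make_provenance_manifest(content: bytes, generator: str) -> str:
    """Attach an AI-generated label and origin info to a content hash.
    (Illustrative format only; the Rules prescribe no specific schema.)"""
    manifest = {
        "ai_generated": True,
        "generator": generator,  # e.g. the model or tool that produced it
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    return json.dumps(manifest)

def verify_manifest(content: bytes, manifest_json: str) -> bool:
    """Check that the manifest still matches the content it labels."""
    manifest = json.loads(manifest_json)
    return manifest["content_sha256"] == hashlib.sha256(content).hexdigest()

clip = b"synthetic video bytes"
m = make_provenance_manifest(clip, generator="example-model")
assert verify_manifest(clip, m)                 # untouched content verifies
assert not verify_manifest(b"edited bytes", m)  # edits break the binding
```

In practice, standards like C2PA additionally sign the manifest and embed it inside the media file itself; the sketch above only illustrates why a hash-bound label is harder to strip or forge than a caption.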

The open question is whether the rules will work as intended. Digital policy experts warn that the tight timelines – three hours for general takedowns, two for NCII content – could push platforms toward imperfect automated systems that remove legitimate content alongside harmful content, creating new problems for ordinary users even as they try to solve old ones. Companies such as Google, Meta and ShareChat, alongside industry bodies including the Internet and Mobile Association of India (IAMAI), Broadband India Forum (BIF), and the US-India Strategic Partnership Forum (USISPF), have flagged operational concerns. MeitY, for its part, held firm at an industry meeting on 25 February, signalling that the compliance clock will not slow down. Whether the infrastructure behind these rules – human, technical, and institutional – is ready to protect users without catching them in the crossfire remains to be seen.

Source: Created by Ayush

Growing up offline: A potential ban on children’s use of social media

Across India, a growing number of states are re-examining how children and adolescents access social media, signalling a decisive shift from platform self-regulation to state-led intervention. Karnataka has emerged as a focal point: its cabinet has cleared ₹67.2 crore for an AI-based social-media monitoring system even as it weighs restrictions on social media and smartphones for children under 16. Similar debates are underway in Andhra Pradesh and Goa, while Bihar is exploring policies to curb screen time amid concerns over addiction. Maharashtra, meanwhile, has constituted an expert task force to assess the scale and nature of digital addiction among children under 16, with recommendations expected soon.

At the Union level, some policy makers are openly endorsing age-based limits. The Chief Economic Adviser has asserted that India should consider such thresholds, and IT Minister Ashwini Vaishnaw has confirmed discussions with platforms. The Economic Survey, too, frames social media through a public-health and societal-harmony lens.

This domestic momentum mirrors global moves. Germany’s ruling party has backed curbs for children, Norway is planning an under-15 ban, and the UK is seeking stronger regulatory powers. France, too, has urged India to join a coalition for safer social media for children – President Macron called for a "new coalition of the willing" on child digital safety, urging Prime Minister Modi to join, and made it a stated priority for France's G7 presidency.

These developments suggest that age-based access controls and active monitoring are fast becoming a global regulatory norm – one that Indian states are increasingly willing to test on the ground.

From the courtroom to your inbox  

ANI vs OpenAI – the dispute continues: In ANI Media Pvt. Ltd. v. OpenAI, the Delhi High Court continued hearing India’s first major generative-AI copyright dispute on 20 February 2026. ANI, a leading Indian news agency, alleges that OpenAI’s training of ChatGPT involved unauthorised copying and storage of its news content, in violation of the Copyright Act, 1957. In the latest hearing, Senior Counsel Mr. Amit Sibal, appearing for OpenAI, argued that training LLMs involves abstract statistical information (vectors and parameters) and does not reproduce original expressive works, stressing that copyright protects expression, not ideas or grammar. He separated the “training claim” from the “output claim”, arguing that neither involves substantively reproducible content, and that any snippets in ChatGPT outputs are transformed and materially distinct from ANI’s works. ANI’s counsel Mr. Sidhant Kumar focused on raw copying during scraping and storage, asserting that this itself constitutes infringement and thwarts the Section 52 fair dealing defence, especially given alleged continuing harm via model outputs and search functionalities.

The Court has asked both sides to file concise notes interpreting Section 52 of the Act – a central issue in determining whether AI training qualifies as permissible fair dealing – and listed further arguments, including from intervenors, for 20 March 2026.

Reading reccos   

  • Taylor Lorenz writes about the potentially grave consequences of banning social media for children.
  • In Understanding AI, Timothy B. Lee explains the recent conflict between Anthropic and the Pentagon – and what that means for frontier AI labs.
  • Zeynep Tufekci has a great keynote on generative AI, its spectacular rise, and its potential to destabilise society.

Shout-outs!  

In the spotlight

  • Ikigai Law had an exceptionally productive week at the AI Impact Summit 2026. Between putting together key reports, organising and speaking at panels, hosting a mixer, and engaging across a packed schedule of sessions and events, it was a week fuelled by equal parts coffee and adrenaline. You can read more about everything we were involved in at the AI Impact Summit here.

 In the media

  • Aman Taneja was quoted in Business Standard on the IT Ministry tightening takedown timelines for intermediaries and the implications for AI-generated content regulation.
  • Aman Taneja also spoke to Moneycontrol about the new three-hour takedown window for deepfakes, flagging risks of over-removal and collateral censorship.
  • Rahil Chatterjee was quoted in The Indian Express on the government’s move to mandate a three-hour deadline for social media platforms to act on AI-generated content, and the difficulty of operationalising such a requirement.
  • Rutuja Pol was quoted in Mint on amendments to the IT Rules requiring AI content labelling and faster takedowns, and how they perpetuate flaws in intermediary classification.

Signing off,

Ticker team for this edition: Ayush, Nehaa, Nirmal, Vijayant

 

Challenge
the status quo

Sparking Curiosity...