Trick or treat?
October was lit — and we mean literally. Between Diwali diyas, good ol’ fairy lights in our homes, glowing pumpkin lanterns, and policy fireworks in the inbox — it’s been a packed festive season. For those of you just easing back into work after your time off, welcome back! We know you’re probably feeling a little like Ross...
...but don’t worry! We’ve got you covered.
In this month’s edition, we unwrap the latest facelift by the IT Ministry — from tweaks to the takedown mechanism to new draft rules on labelling AI-generated content. Add to this: new gaming rules on the block, CCI’s awaited AI market study, and states doubling down on misinformation — you’ve got quite the candy spread. Meanwhile, as the India AI Impact Summit continues to shape up — with this edition, we introduce a shiny new section to track all the pre-summit action.
So, grab your basket — this issue is your policy candy bag, packed with enough tricks, treats, and AI titbits to keep the festive spirit alive.
Deep dive
Adding a ‘Made by AI’ tag?
The IT Ministry dropped the Draft IT Amendment Rules, 2025. Released on October 22, 2025 (and open for public feedback until November 6, 2025), these draft rules expand platform obligations around AI-generated and synthetic content, called Synthetically Generated Information (SGI). Basically, if something is created, modified, or polished by AI or digital tools but seems “authentic or true”, it needs to wear a name tag saying so. That magazine-worthy living room mock-up that has you rethinking your furniture? It might need to be labelled ‘synthetic’.
Trigger points
The government’s intention is clear: to prevent generative AI from creating convincing falsehoods through deepfake audio, videos and synthetic media going viral. It doesn’t seem like the Draft Amendment’s timing is an accident: the Parliamentary Standing Committee had already called for tougher rules on fake news, Minister Vaishnaw had expressed a desire for a deepfake law, the Supreme Court is hearing a PIL on deepfakes, and the ECI had warned political parties to steer clear of AI fakery ahead of the Bihar polls.
What the draft says
Platforms allowing creation or carrying of SGI will need to:
· Label it loud and clear: If an intermediary platform permits or facilitates the creation, generation, modification or alteration of SGI, every such piece of information must be labelled or embedded with permanent, unique metadata or an identifier.
· The 10 percent rule: For video content, the label, metadata or identifier should be visibly displayed within the SGI, covering at least 10 percent of the visual display. For audio content, a prominent audible disclosure must play during the initial 10 percent of the audio’s duration. Platforms must embed clear, tamper-proof identifiers in the metadata or as permanent watermarks, so that no amount of cropping, clipping, or clever editing can make the label or identifier disappear.
- SSMI-specific asks: For every user uploading content, significant social media intermediaries (SSMIs), i.e. platforms with more than 5 million registered users, will have a few additional obligations. These include: (i) prior to display, upload or publication, asking users to declare if the information is SGI; (ii) deploying reasonable and appropriate measures to verify the accuracy of users’ declarations; and (iii) where a declaration or technical verification confirms that information is synthetically generated, ensuring an appropriate label or notice is clearly and appropriately displayed. Contravention of these will be deemed to mean that the SSMI has failed to exercise its due diligence obligations.
- Skip the label, lose the shield: Failure to comply with the rules can lead to a platform losing the safe harbour protection (which shields intermediaries from liability for third-party content) under Section 79 of the IT Act.
ECI joins the bandwagon
In its latest advisory, the ECI requires political parties to clearly label AI-generated campaign material. Like the IT Ministry’s draft rules, the advisory calls for 10 percent of the screen space (or the first few seconds of an audio clip) to carry a visible disclosure of SGI. Parties also have to name the creator in the caption or metadata and keep logs for verification.
Reactions galore
No big policy update is complete without a flurry of takes — and these new changes are no exception. Some say the definition of “synthetically generated information” might be a little too vague — broad enough to include everything from deepfakes to your favourite Instagram filter or that cinematic Diwali reel with VFX sparkles. Others are asking where platforms draw the line: do AI chatbots and model developers count as intermediaries too? And will “better safe than sorry” moderation end up sweeping up creative content in the process?
Connecting the dots
Ctrl + del — but with documentation
During X’s challenge to the government’s takedown powers (struck down by the Karnataka High Court), it was argued that ‘millions of officers’ could direct content takedowns with little oversight. Soon after, Secretary Krishnan emphasized that the government must exercise these powers cautiously and prudently. Springing into action, the IT Ministry released the IT Amendment Rules, 2025 on October 22, 2025, to tighten the government’s takedown mechanism. With this amendment, only Joint Secretary-level officials can pull content down, and they must write down why. This includes an explanation of the legal basis, the nature of the violation, and the specific URLs or content identifiers targeted. Further, all takedown actions will undergo monthly scrutiny by a secretary-level officer to ensure legality and proportionality. The amended rules come into effect on November 15, 2025.
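For illustration, the documentation trail the amended rules demand could be modelled as a simple record like the one below. The field names, the rank check, and the validity helper are our own hypothetical sketch, not language from the rules.

```python
from dataclasses import dataclass, field


@dataclass
class TakedownOrder:
    """Hypothetical record of what a takedown direction must now
    document: who issued it, on what legal basis, for what violation,
    and which URLs or content identifiers it targets."""
    issuing_officer_rank: str           # must be Joint Secretary level or above
    legal_basis: str                    # e.g. the IT Act provision invoked
    nature_of_violation: str
    targeted_urls: list[str] = field(default_factory=list)
    content_identifiers: list[str] = field(default_factory=list)

    def is_fully_documented(self) -> bool:
        # A direction without reasons or specific targets would fail
        # the documentation requirement on its face. (Simplified: we
        # check for an exact "Joint Secretary" rank string here.)
        has_target = bool(self.targeted_urls or self.content_identifiers)
        return (self.issuing_officer_rank == "Joint Secretary"
                and bool(self.legal_basis)
                and bool(self.nature_of_violation)
                and has_target)


order = TakedownOrder(
    issuing_officer_rank="Joint Secretary",
    legal_basis="Section 69A, IT Act",
    nature_of_violation="Impersonation via deepfake video",
    targeted_urls=["https://example.com/post/123"],
)
print(order.is_fully_documented())  # True
```

The point of the structure: every takedown leaves a paper trail that the monthly secretary-level review can actually audit.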
Online gaming gets a rulebook
India’s gaming sector levels up on regulation — with the Promotion and Regulation of Online Gaming Act, 2025 (PROGA), which received Presidential assent and has been in effect since October 1, 2025. PROGA imposes a comprehensive ban on online money games, while calling for the promotion of online social games and e-sports. Key takeaways include: the establishment of a gaming authority, registration for online social games, and a three-tier grievance redressal mechanism for users, among others.
In true fast and furious fashion, on October 2, 2025, the government released the Draft PROG Rules, 2025, meant to operationalize PROGA (check out our summary of the rules here). The sudden rollout of the gaming framework has led to a series of petitions in the Supreme Court challenging PROGA’s constitutionality. A fresh PIL has also been filed in the Supreme Court seeking a nationwide ban on gambling platforms posing as e-sports or social games. While these disputes unfold, gaming platforms are incurring financial losses, leading to layoffs, as the sector adjusts to this new normal.
When AI meets antitrust
This month, the competition watchdog, the Competition Commission of India (CCI), dropped its awaited market study on AI and competition. Partnering with the Management Development Institute Society, the CCI surveyed 100+ stakeholders, tech firms, and investors to map India’s AI stack. The study finds that global tech giants still dominate AI’s building blocks (i.e. compute, data, infrastructure), while Indian startups are thriving in applied AI. It flags algorithmic collusion, AI-driven price discrimination, and self-preferencing as rising risks, especially for smaller players short on compute and capital. At the same time, the study notes that most of these risks can be addressed within the existing competition framework and the upcoming Digital Competition Bill. The study nudges the ecosystem toward self-audits, transparency, capacity building, and tighter cooperation between regulators at home and abroad. Interestingly, in August 2025, the Standing Committee on Finance had urged market studies to map digital competition and pushed for ex-ante rules.
IndiaAI Impact Summit Watch
- Taking the centre stage: Following the previous summit editions in Bletchley (2023) and Paris (2025), the India AI Impact Summit 2026 is the first to be hosted by a nation from the Global South. Scheduled for 19–20 February, 2026 in New Delhi, the summit will be a defining moment in India’s AI journey — bringing together heads of state, leading tech companies, multilateral bodies, startups, and civil society. The two-day Summit, organised by India’s IT Ministry, will feature high-level plenaries, global hackathons, and an AI Expo. It will be preceded by over 200 pre-summit events that have already commenced across India and internationally. The Summit’s agenda builds a forward-looking AI narrative around the guiding principles, or Sutras, of People, Planet, and Progress, and seven thematic Chakras spanning skills, inclusion, trust, innovation, science, democratized resources, and development outcomes.
- Come one, come all: India’s preparations for the summit embody a broad, inclusive approach. So far, the ministry has hosted five virtual consultations engaging civil society, global organizations (including UNDP, UNESCO, OECD), major tech firms, startups, and academia, alongside a public consultation in June 2025. A precursor panel was held at the Internet Governance Forum in Norway, and global participation is expanding, with invitations being rolled out to global leaders.
- Buzz building up: Pre-summit events have been building momentum, sparking discussions on AI across the globe. Some of these include: Bharat AI Shakti in Delhi, Qualcomm Innovation Forum in Bengaluru, AI Fusion in Andhra Pradesh, Inclusive AI in San Francisco, and Shaping AI Governance in Argentina. Top leaders and government officials have also been participating in these events. External Affairs Minister, Dr. S. Jaishankar delivered the keynote address at the Trust and Safety India Festival 2025 and Secretary Krishnan spoke at the India Mobile Congress 2025.
- Flagship events to watch:
o AI Pitch Fest (UDAAN) showcasing startups from Tier-II and III cities
o AI for All Global Impact Challenge offering prize money, compute credits, and mentorship for the top 10 winners
o AI by HER Global Impact Challenge for women-led AI teams offering awards and mentorship bootcamps
o YUVAi Global Youth Challenge (ages 13-21) with prize pools and fully sponsored travel to the summit for the top 20 teams
o International Research Symposium fostering collaborations between Indian and Global South researchers alongside international thought leaders.
- States stepping up in India’s AI race: States have increasingly started participating in the leadup to the main summit as well. Uttarakhand hosted its first AI Impact Summit as a pre-summit event in Dehradun on October 17, 2025.
From the courtroom to your inbox
- Industry body backs OpenAI in copyright dispute: The Broadband India Forum (BIF) — which includes Meta, Google, Reliance Jio, and Amazon among its members — has argued in support of OpenAI in the copyright dispute between ANI and OpenAI. BIF’s main point: LLMs aren’t photocopiers. They don’t put out copyrighted articles word-for-word. Instead, they look at tons of information, learn patterns, and then create new, original responses — kind of like how we read a bunch of things and then explain them in our own words. According to BIF, that fits within what the Copyright Act already allows, like temporary storage and fair dealing. It also maintained that if publishers don’t want their content used, they have tools like paywalls and blockers to prevent scraping. So, if something is publicly accessible online, there’s at least some expectation that it may be referenced or summarized. The Court will hear final arguments from both sides on November 7 and 21, 2025. Stay tuned — this one is shaping up to be a landmark decision for how generative AI will be governed.
- Courts are keeping a close eye on celebrity deepfakes: Indian courts are getting increasingly strict about AI-generated deepfakes and fake celebrity endorsements. Big names like Aishwarya Rai Bachchan, Abhishek Bachchan (we mentioned this last month), and Karan Johar have all filed suits to stop their image and voice rights from being used without permission in AI-made content. Recently, both the Bombay and Delhi High Courts have backed celebrities’ “personality rights” — basically the right to control how their image and identity are used. The courts even described deepfakes as a “depraved abuse of technology”. Actors like Suniel Shetty, Akshay Kumar, and Hrithik Roshan — as well as journalist Sudhir Chaudhary and singer Kumar Sanu — have all had similar rulings in their favour. In the case of Punjab Chief Minister Bhagwant Mann’s deepfake videos, social media platforms were asked to remove the videos within 24 hours and file compliance reports within 10 days. The overall trend? Courts seem to be leaning toward proactive protections to prevent AI from being used to mimic or misrepresent celebrities without consent.
- New SOP to tackle non-consensual intimate imagery: The government has rolled out a Standard Operating Procedure (SOP) to help quickly remove non-consensual intimate imagery (NCII) from online platforms. Under the new guidelines, platforms must take down such content within 24 hours of receiving a complaint. This move follows a Madras High Court case where a woman sought urgent removal of her private images online. The SOP doesn’t just stop at takedowns — platforms are also required to detect, remove, and prevent the re-upload of the same content. They must also share relevant information with the Indian Cybercrime Coordination Centre (I4C), which helps monitor and address online abuse.
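The “prevent re-upload” obligation is typically met in industry with hash-matching against removed content. Here is a minimal sketch of that idea; real systems use perceptual hashes that survive re-encoding and compression, so the exact SHA-256 match below is our simplification for illustration, not the SOP’s prescribed mechanism.

```python
import hashlib


class ReuploadBlocklist:
    """Minimal sketch of hash-based re-upload prevention: once content
    is removed after a complaint, its fingerprint goes on a blocklist
    checked at upload time."""

    def __init__(self):
        self._blocked_hashes: set[str] = set()

    @staticmethod
    def _fingerprint(content: bytes) -> str:
        # Exact-match fingerprint; production systems would use a
        # perceptual hash robust to cropping and re-encoding.
        return hashlib.sha256(content).hexdigest()

    def block(self, content: bytes) -> None:
        # Called when content is taken down following a complaint.
        self._blocked_hashes.add(self._fingerprint(content))

    def is_blocked(self, content: bytes) -> bool:
        # Checked before an upload goes live.
        return self._fingerprint(content) in self._blocked_hashes


bl = ReuploadBlocklist()
bl.block(b"removed-image-bytes")
print(bl.is_blocked(b"removed-image-bytes"))  # True
print(bl.is_blocked(b"different-image"))      # False
```

The design point: detection happens before publication, so the same content never resurfaces after its first takedown.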
Global Tech Stories
- California draws the line on AI risks: Earlier this month, California became the first U.S. state to regulate both the safety of powerful AI models and AI companions. The two new laws, SB 53 (Transparency in Frontier Artificial Intelligence Act) and SB 243 (AI Companion Chatbots Act), establish AI safety benchmarks that will be watched closely worldwide.
1. SB 53 requires large AI companies to be transparent about their safety and security protocols. Specifically, they must show how they will prevent their models from causing catastrophic harms, such as being used to commit cyberattacks on critical infrastructure or build bioweapons. Additionally, the law requires developers to report critical AI safety incidents, with civil penalties for non-compliance.
2. SB 243 focuses on chatbot safeguards, particularly for minors. It mandates clear disclosure that chatbots aren’t human, requires suicide-prevention protocols, and bans sexually explicit content in conversations with under-18 users.
Together, the laws mark a pivotal moment: Regulators in the U.S. are no longer debating whether to govern AI, but how to enforce AI safety and accountability in practice.
- Europe wants kids off social media: The European Union is considering a proposal to add age-related restrictions on access to social media. Members of the European Parliament have adopted a report by the Internal Market and Consumer Protection Committee that looks at social media access. It proposes an EU-wide digital minimum age of 16 for social media, video sharing platforms, and AI companions. With parental consent, teenagers could access these at a younger age, but the proposal recommends a complete ban on social media for children under the age of 13. Some EU countries are already introducing restrictions: in the Netherlands, authorities recommend banning children and teenagers under 15 from social media platforms such as TikTok and Instagram and limiting the time they spend on mobile devices. The EU’s move mirrors momentum around the world, notably in Australia, which in 2024 became the first country to push for a full under-16 social media restriction and platform-led age verification. Expect age restrictions to become a recurring theme worldwide, as governments weigh the internet’s harms to children.
Reading reccos
- On Platformer, Casey Newton carefully tracks different platforms’ efforts to curb and handle AI-generated slop on users’ feeds.
- Derek Thompson breaks down whether the AI industry is a bubble waiting to pop.
- On Ex Machina, Rahul Matthan argues that instead of labelling AI-generated content, it might be better to call out content that is real.
Shout-outs!
Upcoming events
- Truth, Trust & Technology – A Policy Dialogue: Karnataka’s IT minister Priyank Kharge will be gracing us with his presence at our event on misinformation, fake news, and hate speech. With the National Law School of India University (NLSIU), we are organizing a policy dialogue that brings together lawmakers, technologists, journalists, academics, and policy leaders to explore the constitutional boundaries, practical challenges, and real-world alternatives to regulating online speech. It’s happening on November 7, 2025, right in the heart of Bengaluru. If you’re around, feel free to join us. You can sign up here!
- AI Pre-Summit Forum: Ikigai Law, in collaboration with the Business Software Alliance (BSA), is hosting BSA’s AI Pre-Summit Forum on 6 November 2025. The focus will be on how AI can drive social good, modernize governance, and build a future-ready workforce for India. This is happening in Delhi, and you can register here!
In the spotlight
- Ikigai Law has been recognized by Asia Law among its “recommended” practices in the Aviation sector and “highly recommended” practices in Technology & Telecommunications. Managing Partner Anirudh Rastogi has also been named a ‘Distinguished Practitioner’ in the Technology & Telecommunications space.
In the media
- Astha Srivastava was quoted in MediaNama’s report on RBI constituting a payments regulatory body.
- Aman Taneja spoke to Economic Times about the new draft amendment to the IT Rules introducing transparency obligations on platforms for synthetic content. Listen to Aman share his views for TRT World on surge of celebrities filing suits against deepfakes and seeking protection for their personality rights.
- Nehaa Chaudhari wrote in the Economic Times on how the rise of deepfakes makes identity management a critical governance issue for companies and boards.
- Rahil Chatterjee was part of a panel discussion organized by CCAOI that focused on the PROG Act and Rules.
- Rutuja Pol will be in Spain for the 5th Spain–India Forum. She will be part of a session focused on technological innovation and development in the healthcare sector.
Signing off,
Ticker team for this edition: Ayush, Nirmal, Vidushi, Vijayant
Image credits: Created by Vidushi and Nirmal