Introduction
Ikigai Law and the National Law School of India University (NLSIU), Bengaluru, jointly hosted Truth, Trust & Technology: A Policy Dialogue on Online Speech Regulation on November 7, 2025, at WelcomHotel, Central Bengaluru.
The event brought together approximately 140 participants, including journalists, academics, policy professionals, legal experts, students, and researchers. The discussions focused on the rise of misinformation and the role of both central and state governments in shaping regulatory responses.
The event featured a keynote address by Shri Priyank Kharge, Hon. Minister for Electronics, IT/BT and RDPR, Government of Karnataka, followed by a Q&A, and two panel discussions with experts from law, media, policy, and politics.
This blog summarises key themes and highlights from (i) the keynote address, (ii) the Q&A, (iii) Panel I, and (iv) Panel II.
Highlights
Shri Priyank Kharge’s Keynote Address
- The scale and severity of the misinformation crisis: Minister Kharge opened by describing India's misinformation ecosystem. He warned that misinformation, disinformation, malinformation, and fake news have become weaponised forms of deception, threatening democracy, public safety, national security, and institutional credibility. Citing the World Economic Forum's Global Risks Report 2024, he noted that misinformation ranks as India's top short-term risk. Quoting national leaders, including the Chief Justice of India, the Prime Minister, the Chief Election Commissioner, and the Chief of Defence Staff, he underscored that the erosion of public trust now cuts across institutions. He pointed to India's vast digital ecosystem (over 500 million users each on WhatsApp and YouTube, 481 million on Instagram, and average daily use of 2.5 hours per person), arguing that sheer scale magnifies harm. He cited MeitY data showing that WhatsApp accounts for 64% of misinformation spread, Facebook for 18%, and Twitter for 12%. The Minister illustrated the real-world consequences of misinformation with the Delhi riots (2020), Bhima Koregaon (2018), the Paresh Mesta incident (2017), and the 2025 Pahalgam deepfake episode, showing how online falsehoods have fuelled violence, communal tension, and institutional mistrust.
- The categories of harmful content: The Minister described Karnataka's framework for classifying harmful content into misinformation (shared carelessly), disinformation (created to mislead), and malinformation (true information used maliciously), illustrating each with examples. He warned that disinformation, being intentional, is the most corrosive of the three. He argued that cheap AI and synthetic-media tools now allow anyone to create deepfakes, cloned voices, and forged documents that appear real, and remarked that platforms often amplify such content in violation of their own public policies, making them indirectly responsible.
- Karnataka's policy approach and the proposed Misinformation Bill: The Minister clarified that the forthcoming Karnataka Misinformation Bill will not criminalise dissent or creativity. It excludes satire, parody, opinion, and art, focusing instead on falsehoods that cause law-and-order disruptions or public harm. He highlighted the State's Information Disorder Tracking Unit (IDTU) pilot, run during the 2024 parliamentary elections: over 400,000 pieces of content were analysed, about 700 cases were flagged as serious, and around 30 criminal cases (approximately 18 ongoing) were filed. All actions were legally vetted, supported by AI start-ups and research scholars who traced origins and verified content. He emphasised that the state's approach is grounded in accountability over censorship, guided by credible verification, legal process, and alignment with constitutional and global best practices. The state, he said, wants to "name, shame and pull up people who deliberately mislead the public".
- Timeline and intent: The Minister announced that the Bill would potentially be introduced in the Belagavi Assembly Session in December 2025. He closed by emphasising that Karnataka's goal is to build a society rooted in truth, scientific temperament, and trust.
Q&A with Shri Priyank Kharge
- Existing legal framework and implementation challenges: Responding to a question on why a new law is necessary given existing provisions in the Bharatiya Nyaya Sanhita (BNS) and prior Supreme Court jurisprudence on hate speech, Kharge said the government's effort is not to reinvent the wheel but to connect the dots across existing laws. He cited recent enforcement action in coastal Karnataka, where individuals who had routinely delivered hate speeches were, for the first time, summoned to police stations and charged under applicable provisions. Acknowledging that implementation takes time, he said the government has circulated handbooks on handling hate-speech cases to district police and superintendents. He further clarified that the draft bill was not developed in isolation but through consultations involving tech companies, lawyers, and think tanks, and that the government is willing to take further written inputs. The aim, he added, is to produce a law that does not get stayed in court, as happened with the 2023 IT Amendment Rules.
- Jurisdiction, technology, and the role of the Information Disorder Tracking Unit: Addressing a techno-legal question on jurisdiction and manipulated IP addresses, Kharge explained that the IDTU employs AI and machine-learning firms to trace the origins of online falsehoods, especially during sensitive periods such as religious festivals or politically charged moments. While cross-border servers cannot be accessed directly, public platforms hosting such misinformation can be blocked when it poses a law-and-order threat. He added that the government not only identifies false information but also debunks it publicly through official channels and its website, ensuring that verified facts are accessible. Anyone who continues to spread such information after it has been debunked, he said, will be held accountable.
- Safeguards against misuse and ensuring transparency: Responding to concerns about the misuse of disinformation laws around the world, Minister Kharge gave a detailed account of the IDTU's internal structure and review process. Every flagged item, he explained, is bucketed as misinformation, disinformation, malinformation, or fake news. A multidisciplinary committee that includes independent journalists assesses whether content is harmless, and a legal team within the unit determines whether it violates the law. Only after legal vetting is a case referred to the cyber team from the Department of Home for action, ranging from notices to criminal proceedings. He emphasised that no enforcement action occurs without legal review and that the process is open to external scrutiny: the government will publish online why content is categorised as misinformation or disinformation and which legal provisions were invoked. He acknowledged that any law can be abused, but underscored that the state's intention is to build a framework grounded in accountability and openness.
- AI regulation, central coordination, and the future framework: When asked about enforcement of the draft synthetic-content labelling amendments and the liability of individuals who unknowingly share misinformation, Kharge clarified that Karnataka will not act in isolation from the Centre's IT framework. The state plans to establish a committee for responsible AI to align its approach with the new central draft rules, and will work with public platforms on synthetic content. He described a recent case involving fake news of a farmer's suicide in Haveri, highlighting how the incident illustrates why a systemic tracking framework is essential: to trace the full chain of dissemination and ensure verified information is prominently circulated. Admitting that the process will evolve, he said it is important to get started even if it isn't perfect.
Panel I – Regulating Speech in Karnataka: A Constitutional Tug of War
Speakers: Nehaa Chaudhari (Partner, Ikigai Law – Moderator), Prof. Sudhir Krishnaswamy (Vice-Chancellor, NLSIU), Jayna Kothari (Co-Founder, Centre for Law & Policy Research), Alok Prasanna Kumar (Co-Founder, Vidhi Centre for Legal Policy), Manu Kulkarni (Partner, Poovayya & Co.), Dr. Malavika Prasad (Lead Counsel, Sadananda & Prasad).
- Framing the problem: beyond more law: The speakers agreed that the harm from disinformation and malinformation is real but differed on remedies. Alok Prasanna Kumar argued the core failure is institutional: weakened, non-independent institutions and poor government transparency create fertile ground for falsehoods. He suggested that the hard solution is to rebuild critical-thinking capacity (through education and civic literacy) and restore openness (proactive disclosure, strong RTI institutions, timely release of drafts), rather than defaulting to new criminal prohibitions.
- The limits of criminal law as a solution: Several panelists cautioned against criminal-law-led approaches to speech. Alok called criminalisation the easy but incorrect option; Prof. Sudhir Krishnaswamy urged abandoning the criminal-law model and avoiding new offences, noting India's track record of arrests and bail denial in speech cases. Jayna Kothari stressed that, even where criminal remedies exist, they often become unusable for vulnerable groups; she recommended adding civil remedies (e.g., injunctions to halt ongoing harm) alongside any criminal tools.
- Constitutional competence and federal prudence: Dr. Malavika Prasad explained that Karnataka's bill rests on Entries 1 and 2 of the State List (public order and police) and can draw on the Concurrent List to legislate on criminal law and actionable wrongs. She stressed that states should experiment within their competence and use Article 254 creatively to survive repugnancy tests, though the question remains one of prudence rather than power. Prof. Krishnaswamy added that fragmentation is inevitable but manageable: as in the U.S., a "California effect" could see the strictest state law become the de facto standard for platforms nationwide.
- State power and the risk of defining truth: Manu Kulkarni warned that governments should not become arbiters of truth. He urged Karnataka to avoid replicating the Centre's fact-check unit, struck down by the Bombay High Court for overreach. He argued that laws should target actual harm rather than disagreement. He and Alok Prasanna Kumar stressed that transparency and public consultation are essential to avoid misinformation about the law itself.
- Platforms: amplification, liability, and independence: A central tension emerged around platform responsibility. Krishnaswamy contended there is no constitutional "right to amplification", so platforms can be required to take responsibility for amplification effects (without being made arbiters of truth). Manu Kulkarni countered that amplification is inherent to the historic right of the press and is now democratised; he warned that over-regulating platforms risks shredding protected speech and "shooting the messenger" rather than bad actors. He also pointed to existing non-criminal avenues (transparency reports, grievance appellate pathways on platforms) that already largely constrain harmful content.
Panel II – Tackling Misinformation in Practice: Risks, Responsibilities and Alternatives
Speakers: Shrabonti Bagchi (National Features Editor, Mint – Moderator), Aiyshwarya Mahadev (Chairperson, Karnataka Congress Social Media Department), Surabhi Hodigere (Public Policy Professional and Spokesperson, BJP Karnataka), Dhanya Rajendran (Co-Founder and Editor-in-Chief, The News Minute), Rajneil Kamath (Vice President, Trusted Information Alliance), Siddharth Narrain (Assistant Professor of Law, NLSIU).
- Defining misinformation and the nature of the problem: Speakers agreed that misinformation in India is not merely a technological challenge but a deeply social and political one. Dhanya Rajendran argued that the issue has evolved beyond "fake news" into a broader disorder of false and misleading information, driven by both media dynamics and state narratives. She cautioned that governments claiming to fight misinformation often seek to define truth themselves, which undermines credibility. Surabhi Hodigere called misinformation a digitised version of bad governance, where technology amplifies pre-existing biases. Aiyshwarya Mahadev added that unregulated amplification has already caused tangible harm, from communal panic and the exodus of North-Eastern residents from Bengaluru to lynchings and ostracism during the pandemic, making some regulation necessary.
- Balancing regulation, accountability, and free expression: The discussion then turned to the challenge of designing accountability without undermining free speech. Aiyshwarya argued that regulation must deter deliberate falsehoods while remaining non-arbitrary, non-partisan, and protective of legitimate criticism. Surabhi questioned whether Karnataka's draft Bill focuses on prevention or punishment and warned that a punitive approach could undermine trust; she called for broad-based public consultations before any law is introduced. Siddharth Narrain situated the debate within global experience, referencing Singapore, Australia, and UN special-rapporteur studies that recognise misinformation as a policy issue, but reiterated that the central question remains: who decides what counts as misinformation? Any government-controlled truth panel, he warned, risks losing legitimacy from the start.
- Fact-checking, independence, and the limits of technology: Rajneil Kamath emphasised that credible fact-checking relies on replicable methodology and transparency, not official decrees. AI tools can detect reused images or timestamps but cannot adjudicate truth between conflicting accounts. Kamath and Rajendran noted that most independent fact-checkers declined to join the government's IDTU: nineteen of the twenty IFCN-accredited organisations chose to remain independent. They contrasted India's approach with the European Union's model, where public funds support independent fact-checkers without state interference. Kamath warned that conflating official denial with verification undermines credibility.
- Media literacy, culture, and systemic bias: The discussion also focused on the social conditions that sustain misinformation. Narrain described a fragmented information ecosystem in which audiences are siloed within algorithmic and partisan feeds, eroding any shared reality. He stressed that media literacy and civic education are long-term defences. Hodigere observed that falsehoods spread not because people are uninformed but because they are influenced by bias and identity. Mahadev added that social accountability and a measured fear of consequence are necessary to deter those who deliberately propagate falsehoods.
- Towards constructive and non-partisan solutions: The panel concluded that misinformation is as much a cultural and institutional problem as a technological or legal one. Speakers called for independent multi-stakeholder oversight rather than state-controlled fact-checking units, transparency in official communication, and stronger public access to information. They also underlined the importance of education, digital literacy, and civic responsibility. Rajendran closed the discussion by noting that governments must model the transparency they demand.
Author credits: Ikigai Law
For more information reach out to us at contact@ikigailaw.com