Tech Ticker Issue 68 — July 2025

Don’t Ju-ly (read: you lie), you missed us, right? This edition marks a whole year of this new Ticker team, and we are so grateful for all the love you have shown us! We hope you enjoy reading this as much as we enjoy writing it. Sharing a quick BTS of what the past year has looked like:

Created by Vidushi (Based on true events)

And on that teary-eyed (but rain-happy) note — we bring you a digital downpour of tech updates. Grab your chai, dip that Parle-G and dig in.

Deep-Dive

Karnataka’s double trouble on the grid?

Heads up: we totally blame the (alleged) Bollywood movie Ta Ra Rum Pum’s recently released spin-off for the F1 metaphor overload in this deep dive.

Last month, the Karnataka government proposed two draft laws targeting online harms: the Misinformation and Fake News (Prohibition) Bill, 2025 and the Hate Speech and Hate Crimes (Prevention and Control) Bill, 2025. Currently undergoing inter-departmental consultations, the bills are likely to be tabled in the monsoon session of the legislature — scheduled to begin next month.

We break down this legislative double feature for you, comparing the Fake News Bill and the Hate Speech Bill under each head below:

First lap — the what and why?

Content red flag (Fake News Bill): Prohibits fake news and misinformation on social media and introduces new no-go content categories.

Swipe left on hate (Hate Speech Bill): Cracks down on hate speech and identity-based violence — covering religion, caste, gender, language, sexual orientation, and more.

Key definitions

Fake News Bill: Fake news includes misquoting, distorting facts through edited media, or publishing entirely fabricated content. Misinformation means knowingly or recklessly sharing false or misleading factual statements, even partially.

Hate Speech Bill: Hate crime means any act causing or inciting emotional, physical, social, economic, or psychological harm, or spreading hatred, based on identity factors. Hate speech involves: (i) intentionally publishing, promoting, or communicating identity-based content inciting harm; (ii) electronically sharing such content; or (iii) displaying content accessible to likely victims. The content can be written or oral words, illustrations, visuals, or any other representation or reference.

Race control (a.k.a. enforcement authorities)

Under the Fake News Bill:

New authority in town: Proposes setting up a state-appointed Fake News on Social Media Regulatory Authority, comprising primarily government officials, to oversee implementation of this draft law. Its key functions are to:

· Ban fake news on social media

· Prohibit abusive, anti-feminist, or obscene content

· Block content disrespecting Sanatan symbols

· Curb superstitious content

· Allow only research-backed content on sensitive topics such as history and religion

· Enable penalties under the Bharatiya Nyaya Sanhita, 2023

Special courts with special powers: To fast-track the trial of offences, the special courts will have powers to issue correction and disabling directions to intermediaries, publishers, broadcasters, or others with regulatory/supervisory control over communication mediums in Karnataka.

Under the Hate Speech Bill:

Mic drop, quite literally: District magistrates can ban gatherings, processions, or loudspeakers in areas at risk of public unrest.

Penalties

Under the Fake News Bill:

· Fake news? Real consequences: Think before you post — because if the Authority decides your “forwarded as received” masterpiece counts as fake news, you could be looking at up to 7 years in jail, a ₹10 lakh fine, or both — a pretty steep price for going viral.

· Misinformation mayhem: Post something that messes with public health, public safety, or free and fair elections in Karnataka? You can face criminal penalties.

· When companies catch the blame: If a company’s account is behind the offence, the company and the people running the show could be held responsible. The only way out? Proving you had no clue what was going on or that you actually tried to prevent it. Oh — and if someone in the office kinda sorta let it happen, they’re also deemed guilty.

Under the Hate Speech Bill:

· Hate speech or a hate crime? Up to 3 years behind bars or a ₹5,000 fine — maybe both. And nope, you can’t bail your way out.

· Intermediaries can be held liable if they host hate content — even unknowingly or despite due diligence. Those who fund, abet, or assist in such offences aren’t spared either. Penalty? Up to 3 years in jail and/or fines.

DRS zone for free speech (F1 speak for exemptions)

Satire safe for now? (Fake News Bill): Opinions, parody, sermons, and comedy are excluded — provided a reasonably sensible person won’t mistake them for facts.

Testing faith? (Hate Speech Bill): Art, research, reporting, and religious takes are safe — provided they don’t stir up hate. Also, there is an immunity pass for government officials for anything done “in good faith”.


Red flags waved by stakeholders

· Digital rights groups say the proposals are vague, unconstitutional, and likely to enable over-censorship. Their concern? Definitions like “anti-feminist” and “disrespecting Sanatan” could push platforms towards over-policing, and legitimate speech might end up getting caught in the dragnet.

· Legal experts have questioned whether state governments should be in the business of deciding what’s fake news.

· Even the IT, Home, and Law departments are reportedly asking for a rethink.

Time will tell whether Karnataka’s government plans to hit the accelerator or take a pit stop with these two proposals.

Created by Vidushi (In my defence, I have a new hobby because of Brad Pitt)

Connecting the Dots

· Right here waiting (for the data protection law): We may now be waiting longer for India’s privacy law than Richard Marx was in his iconic song. In yet another delay (cue dramatic sigh), the Digital Personal Data Protection (DPDP) Act (which was passed way back in August 2023) is being sent to the Attorney General for legal review. The reason? Mounting pressure over concerns that the law weakens the Right to Information (RTI) Act by removing a key safeguard: allowing disclosure of personal data when it serves the public interest. Some 120 lawmakers, along with civil society groups and journalists, argue that this restricts citizens’ access to information needed to expose corruption. Journalists are also concerned that the Act could undermine their constitutional freedom of speech and their right to practice any profession. Here’s the kicker: earlier versions of the law included exemptions for journalism — but the final version seems to have quietly dropped them. TL;DR: The privacy law is still stuck in limbo — caught between personal data protection and the public’s right to know.

Created by Vidushi

· Karnataka re-entering the online gaming arena: Karnataka is gearing up for Round 2 on online gaming regulation after its 2021 law was struck down by the High Court for being unconstitutional, vague and excessive. Reportedly, this fresh attempt is being made through the Karnataka Police (Amendment) Bill, 2025. Under this draft law, the Karnataka Online Gaming and Betting Regulatory Authority will be established — which will license operators, monitor compliance, and crack down on violators through the police. The bill draws a clear line between games of skill (like fantasy sports, chess, and quizzes) and games of chance, the latter being criminalised regardless of whether bets are placed in cash, crypto, or tokens. It also includes whistleblower protections to encourage reporting of illegal betting platforms and focuses on awareness and sensitization around responsible gaming. Notably, Karnataka accounts for nearly 25% of India’s online gaming market — so all eyes are on how this plays out.

· New number, who dis?: On June 24, the telecom department dropped a fresh draft to amend the Telecom Cyber Security Rules, 2024 — and it’s all about making sure that the mobile number you gave that app belongs to you. The proposed Mobile Number Verification framework also covers platforms that use phone numbers for user verification — think fintechs, delivery apps, e-commerce sites, and of course, your favourite social media doom-scrollers. These newly coined Telecom Identifier User Entities may soon have to verify user numbers via a centralised verification platform, respond to government requests, and even suspend sketchy IDs. It’s not clear which sectors will face mandatory compliance first. Translation: it’s time to lawyer up, audit your backend, and prepare for a lot of “Hi, we’re updating our KYC” emails. Public comments on this framework are open until July 24, 2025.

· Gig workers, assemble!: Telangana has rolled out the Gig and Platform Workers (Registration, Social Security and Welfare) Bill, 2025. This bill makes registration mandatory for all gig workers and aggregators, and proposes a Welfare Board with reps from the government, platforms, and workers. It also sets up a Social Security Fund, bankrolled by the state, workers, and a cut from every aggregator transaction. Perks on paper include termination notices, transparent algorithms, on-time payments, safety protocols, and a proper grievance redressal system. Labour groups have welcomed the move but want sharper teeth on enforcement and AI accountability. Telangana now joins Karnataka and Rajasthan in trying to make gig work a little less… giggy.

· Structure for logging AI crashes: India’s Telecom Engineering Centre has unveiled a draft taxonomy for an AI incident database — aimed at telecom and critical digital infrastructure. Think of it as a formal system to record AI mishaps, with 28 data fields covering what happened, why it happened, and how bad the damage was. The design draws inspiration from the OECD AI Incident Monitor and is built to be sector-agnostic and globally interoperable.

From the courtrooms to your inbox

· X’s challenge on the spot (again): Seems like it's getting crowded in the Karnataka High Court. The Court has allowed X to widen its challenge — not just against content takedown powers under Section 79(3)(b) of the IT Act, but also against the constitutionality of Rule 3(1)(d) of the IT Rules, 2021, which requires intermediaries to pull down content when directed by the government or a court. While the Union government had filed objections to this change on merits, its lawyer appears to have had a change of heart, orally informing the court that the government has no objections. In parallel, the court heard intervention pleas from digital media body DigiPub (which represents the likes of Scroll, Newslaundry, The Ken, The Wire, among others) and Newslaundry co-founder Abhinandan Sekhri. DigiPub told the court that takedown orders hurt not just platforms, but digital publications too. This is of course not the first time the IT Rules have been challenged. Petitions filed across various High Courts have been clubbed to be heard in the Delhi High Court, and the Bombay, Kerala and Madras High Courts had also granted stays.

· Encrypted, but not immune: Proton Mail appeals blocking order: Swiss end-to-end encrypted email service Proton has challenged the Karnataka High Court’s order from April 2025 directing the Centre to block the service under Section 69A of the IT Act. The reason? Its platform was reportedly used to send obscene emails — and the judge pointed to prior misuse cases like hoax bomb threats. The judge also noted that Proton's anonymous, encrypted communication, combined with its lack of any physical or infrastructural presence in India, rendered it largely unaccountable to Indian enforcement authorities. Proton claims: it wasn’t served properly, it is cooperating under MLAT (because Swiss servers), and it already blocked the sender once it got the heads-up. The Central Government told the court that Proton hasn’t been blocked yet — proceedings are still ongoing.

Tech Stories

Fair use or fair game?

In a pivotal month for the AI industry, two major court rulings from California handed partial victories to tech giants Anthropic and Meta in their respective copyright cases. While both outcomes favoured AI companies, the reasoning behind each decision was notably different.

In the Anthropic case, the court indicated that AI training under certain conditions is transformative and qualifies for fair use. In contrast, the Meta ruling noted that transformative use alone does not guarantee fair use; even so, the court dismissed the case, highlighting the lack of any evidence of infringement or market harm.

So, while both judgments offer some support for AI training under certain conditions, they don’t establish a uniform precedent. If anything, they highlight the outer edges of this legal conundrum: from clear transformative use on clean data, to dismissal for lack of any evidence of harm.  

Side bar: uhm, what is fair use?

Fair use is a legal doctrine in the US that allows limited use of copyrighted material without permission from the copyright owner. Basically, fair use is a defence against someone claiming copyright infringement. There is no mathematical formula for it: courts weigh factors such as the purpose and character of the use (including whether it is transformative) and its effect on the market for the original, on a case-by-case basis. That’s where things get murky.

India, by contrast, doesn’t follow the fair use model. Instead, it applies a narrower concept called fair dealing, which allows limited use of copyrighted content for specific purposes like research, private study, criticism, or news reporting – each of which is listed in the Copyright Act, 1957.

Okay, back to the US cases.

Clean data, clean conscience

In Bartz v. Anthropic, Judge Alsup gave a thumbs up to training AI on legally acquired books, calling it “spectacularly transformative.” He likened Anthropic’s Claude model to a human reader: someone who reads widely, learns patterns and structures, and then produces something new. Alsup also held that scanning books into digital form—so long as they were legally obtained—was an acceptable, incidental step in that transformation.

But the ruling wasn’t a total win for Anthropic. The court refused to bless Anthropic’s stash of 7 million pirated books, downloaded from piracy sites like Z-Library and kept in a permanent centralized digital “library”. The court drew a hard line: fair use doesn’t cover pirated goods, no matter how cool your model is. That part’s heading to trial this December.

No proof, no case

A day after the Anthropic judgement, in Kadrey v. Meta, Judge Chhabria dismissed a lawsuit against Meta by 13 authors (including Sarah Silverman and Ta-Nehisi Coates), saying they failed to show how Meta’s LLaMA models copied their works or harmed them.

For Judge Chhabria, the transformative-ness of LLMs alone doesn’t guarantee fair use. In his view, unlike in the Anthropic decision, the most important factor is whether the use impacts the market value or potential market for the copyrighted works. On the facts, the judge didn’t find any merit in the plaintiffs’ claims: they failed to provide concrete examples showing that the models produced infringing output, nor did they show that Meta’s AI had caused any market harm. Without that, the case had no legal ground to stand on.

While the rulings offer some direction, they stop short of drawing clear lines—leaving future courts to fill in the blanks.

So, what’s next?

Neither court addressed the next frontier clearly: output-based infringement. That is, whether AI-generated responses that imitate or replicate protected works can themselves violate copyright law. These questions are now centre stage in pending cases against OpenAI and Midjourney, which focus not just on the data fed into the model, but what comes out.

For now, U.S. courts are sketching only the outlines of a legal playbook for AI companies, but this is just the beginning and it’s far from over.

Reading Reccos 

· Tiger Feathers has a detailed breakdown of Urban Company’s business model. 

· Ananya Bhattacharya in Rest of World explains why WhatsApp couldn’t crack the fintech market in India.

· Meg Jones in The Conversation provides a thoughtful analysis of the US Supreme Court’s recent decision allowing age verification mandates on adult websites – and what it means for the rest of the internet.

Shout-outs!

· We have a new partner in town!: We’re thrilled to share that Rutuja Pol has been promoted to Partner, where she will continue to lead and build our Government Affairs practice. From navigating regulatory challenges for our clients to building trusted relationships with government stakeholders, she has led from the front — with clarity, curiosity, and an unflinching sense of purpose. With this elevation, Rutuja joins our majority-women partnership. We’re excited for what lies ahead — and proud to have Rutuja leading the way. 

· DPDP roadshow: Our DPDP workshop rolled into Mumbai and what a session it was. After successful runs in Delhi and Bengaluru, this flagship workshop once again delivered what it’s known for: a hands-on, practitioner-led experience that goes well beyond the standard webinars and lectures. Mumbai’s in-house legal community brought sharp insights, fresh perspectives, and distinct negotiation styles to the table. Same format, same energy, but a whole new flavor. We’re hosting this again in Delhi. You can fill out this form to show your interest. 

· Coffee catch-up, anyone?: Vidushi from our team — who is the Her Forum Chapter Lead for Bengaluru — is organising a networking mixer for women lawyers, “Brewing Conversations”, on July 18, 2025, at Ajji House by Subko in Bengaluru. You can register here to attend. If you are in town, drop by for a cuppa!

· Start-up talk: Our VC lead, Nimisha, spoke at ‘India v. US HQ: Making the right choice for global growth’ — a talk organised by Qapita, AVA Insights and Ikigai Law.

· Bringing civil society into the room: We partnered with the Startup Policy Forum for a closed-door AI Policy Baithak on AI innovation and copyright law.

· Talks with the National Health Authority (NHA): Astha spoke at a consultation on the Ayushman Bharat Digital Mission, organised by the NHA in Bengaluru. The session focused on the implications of the DPDP Act for health tech, the role of anonymisation, and the need for a national digital health law.

Signing off,

Ticker team for this edition: Nehaa, Nirmal, Pallavi, Rahil, and Vidushi.

Challenge the status quo
