Keeping track of tech evangelists’ “In 5 years AI will….” predictions is a full-time job. One that probably needs high-scale compute of its own. Even Sam Altman once joked that at OpenAI, if you ask ten researchers what Artificial General Intelligence (AGI) is, you’ll get fourteen different answers.
The truth? No one really knows. And aside from those visibly in the grip of the Dunning-Kruger effect (the cognitive bias where people with limited ability overestimate their competence), that uncertainty isn’t necessarily a bad thing.
Because, yes - there’s chaos. Beautiful, wild, exhilarating chaos. AI is already reshaping entire industries: writing code, generating marketing ideas, even creating eerily realistic avatars of you doing the cha-cha. Some of these fields may never return to what they were.
But not everything needs or wants to be optimized. For some, the tipping point was ChatGPT’s Ghibli art moment. For us? It’s music. It feels almost impossible, unthinkable really, to accept, let alone appreciate or even tolerate, the idea that music of the future might be flattened into nothing more than patterns and instructions, executed by a machine.
Still, we’re not purists. Not all algorithmic art needs outright rejection. Case in point: Microsoft’s The Next Rembrandt project. The idea was both bonkers and brilliant. Train AI on the essence of Rembrandt and get it to paint like the master himself (300 years posthumously, no copyright pressure). The result? Surprisingly soulful. And more than just an artistic experiment - it sparked national conversations in the Netherlands about art, tech, and what creativity really means. Oh, and it wasn’t just talk. By 2018, it had 1.8 billion media impressions, 2,000 articles, and €12.5 million in earned media value to support the arts. As historian Gary Schwartz aptly summarized it: “While no one will claim that Rembrandt can be reduced to an algorithm, this technique offers an opportunity to test your own ideas about his paintings in concrete, visual form.”
And then there are some industries whose algorithmic reduction goes beyond mere public perception - it also makes the regulators uncomfortable.
We cover one such sector in this edition’s Main Course – the Capital Markets! Where the rise of AI isn’t just disrupting workflows but forcing regulators to confront entirely new questions about accountability, risk, and the role of human judgment in financial decision-making.
Let’s dive in!
Main Course: exploring how SEBI’s AI regulations are reshaping capital markets and how to rewrite India’s Fast Payments story to prevent digital payments outages.
Dessert: sweet news on RBI’s new framework for the formulation of regulations.
Mints: a refresher about recent fintech developments.
Main Course 🍱
Post-Human Discretion: The Case of Capital Markets
Okay, we started on somewhat of a gloomy note. But we are not tech-regressionists. We’re as bullish on AI as anyone who has had front-row tickets to the onset of several disruptions. But this time, it’s different - the speed, the scale, the sheer chaotic brilliance.
By now, it’s no secret: generative AI doesn’t just find language patterns - it learns them, mimics them, and sometimes has the audacity to do it better than us.
Historically, machines took over repetitive or mechanical human tasks. Now AI, powered by extreme computational capabilities and large datasets, has blitzed into rooms that were supposed to be human-only clubs - by law, design, and legacy.
Exhibit-A: Capital Markets.
(Uh-oh, maybe we should have given a trigger warning with all the bloodbath in the markets lately!)
For decades, we’ve trusted the suits: research analysts (RA), asset managers, investment advisors (IA) – all carefully vetted (and SEBI-regulated, of course). [Quick refresher: RAs prepare general research reports and offer stock tips. IAs offer personalized investment advice.] Human hands holding your money. Human brains making the calls. But imagine swapping out that seasoned advisor for... a chatbot. A really smart one. A ChatGPT but for stock tips.
The fintech industry is already experimenting with such tools as we speak. Today’s AI tools are a whole different beast. We’re talking about machine learning, natural language processing, and sentiment analysis. Tools that can scan headlines, smell geopolitical tension, and pivot your investments accordingly. BlackRock’s AI tools, for example, aren’t just crunching numbers - they’re reading vibes. They could detect rising oil prices, identify a surge in energy stocks, and proactively recommend rebalancing a portfolio towards that sector.
The Regulator Steps In
Naturally, someone had to play hall monitor. SEBI noticed. It usually does. In 2019, it began collecting reports from all AI/ML-using regulated entities. And it didn’t just sit on its findings. The last few months saw a flurry of rules around AI use:
· November 2024: SEBI issues a consultation paper calling for assigning responsibility for AI use by regulated entities.
· December 2024: Amendments explicitly allow IAs and RAs to use AI - on one condition: they remain fully accountable. You use AI, you own its advice - warts, hallucinations, and all.
· January 2025: SEBI clarifies that any content based on real-time securities market data is regulated. And any educational content must only use data older than 3 months.
· February 2025: SEBI publishes algo trading rules for retail investors. Transparent ‘white-box’ algos are treated more leniently. But opaque ‘black-box’ ones? Heavier scrutiny and more obligations. The message is clear: tread carefully.
Liability and Unintended Consequences.
Looking at these regulations (and more), one can easily see why registered IAs and RAs need compliance professionals. Such a mountain of obligations!
Now, imagine expecting a pure AI chatbot to meet these obligations. Yeah…woof!
Naturally, all these rules lead to the million-dollar question: who pays when the AI (inevitably) makes a mistake?
One might argue that if the output is mechanically automated, why not just blame the machine? Sure, but SEBI’s not buying the whole ‘blame the algorithm’ defence. And unlike the internet in the '90s, it appears that AI might not get its grace period.
To SEBI’s credit, its 2024 amendments allowed AI use, but placed responsibility squarely on IAs and RAs. Still, SEBI missed a critical nuance - the current IA and RA framework wasn’t designed for autonomous, unpredictable technology giving research analysis or investment advice. For example, IAs and RAs must disclose conflicts of interest, conduct KYC, and explain recommendations. RAs, in particular, must avoid crossing the line between offering research and providing personalized advice. These rules are not optimal for AI tools performing IA/RA functions because:
First: large language models (LLMs) like ChatGPT or Gemini evolve through training, personalizing and fine-tuning. Even deployers can’t always predict outputs. Yet the law expects IAs and RAs to ensure full compliance, even when the tech is inherently autonomous.
Second: Rules like disclosing every conflict or explaining every recommendation turn into logistical nightmares. LLM outputs aren’t clean formulas - crafty prompts or rogue inputs can skew results. And flooding investors with endless disclosures? That kills transparency.
(Case in point: Privacy Policies & Platform T&Cs: Do we really read them all?)
Third: Access to real-time market data is critical. But AI models today often rely on slightly stale data. Ask, ‘Should I invest in Stock-A?’ - and you’ll get something articulate, maybe insightful, but often generic or dangerously dated.
Markets move fast - wars, tariffs, annual reports, budget speeches, CEO meltdowns (we said what we said) - shift sentiment in seconds. Yet SEBI’s January 2025 rules don’t permit general AI models to scrape live data unless the deployer is regulated.
Fourth (and most importantly): Over-regulation risks stifling innovation. Sophisticated, regulated players - the ones best positioned to use AI responsibly - might end up tiptoeing around these tools, terrified that one unexpected AI output could trigger penalties or even cost them their license. Meanwhile, unregulated players, outside SEBI’s purview, could move fast and happily push AI-powered services with zero compliance headaches. The result? Investor protection takes a hit while innovation runs wild in all the wrong corners.
Safe to assume that these decade-old regulations have not aged well in the AI era.
So, can we ever have a ChatGPT moment for capital markets?
Technically? Yes. But legally? It depends (The lawyer in us couldn’t resist). Depends on what SEBI decides and how the industry responds.
So what’s an innovator supposed to do?
Entities looking to deploy AI in capital markets will need a cautious, proactive strategy:
· Vet inputs and algorithms: Curate training data carefully and test algorithm behaviour to reduce unpredictable outputs. For example, design the system so that every AI recommendation (buy/sell/hold) can be backed with clear reasoning - financial ratios, market trends, and data sources like earnings reports and technical indicators should all be part of the explanation.
· Manage scope tightly: Keep research analysis and personalized advice clearly separated. Meaning, unless the entity deploying the AI tool has a valid IA license, ensure that no matter the prompts, the tool never spits out personalized outputs.
· Engage early with SEBI: Leverage regulatory sandboxes to test tools safely and incorporate regulatory feedback.
· Pre-train for compliance: Where possible, embed compliance requirements into AI models from the start. In fact, considering the cross-border ease of deploying AI models, undertake an AI benchmarking exercise to align your AI practices with global standards, anticipate regulatory changes, and stay compliant with emerging best practices.
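To make the first two points above concrete, here’s a minimal sketch of what “explainable recommendations” and “scope guardrails” could look like in code. Everything here - the class, the marker list, the function names - is our own hypothetical illustration, not anything SEBI or any vendor prescribes:

```python
from dataclasses import dataclass

# Hypothetical markers suggesting a prompt seeks personalized advice.
PERSONAL_MARKERS = ("my portfolio", "should i buy", "for me", "my salary")

@dataclass
class Recommendation:
    ticker: str
    action: str            # "buy" / "sell" / "hold"
    reasoning: list[str]   # e.g. financial ratios, market trends
    data_sources: list[str]  # e.g. earnings reports, technical indicators

    def is_explainable(self) -> bool:
        # A recommendation without reasoning and sources should never ship.
        return bool(self.reasoning) and bool(self.data_sources)

def guard_prompt(prompt: str, has_ia_licence: bool) -> bool:
    """Return True if the research tool may answer this prompt."""
    looks_personal = any(m in prompt.lower() for m in PERSONAL_MARKERS)
    # Without an IA licence, personalized queries must be refused outright.
    return has_ia_licence or not looks_personal

rec = Recommendation(
    ticker="STOCK-A",
    action="hold",
    reasoning=["P/E above sector median", "flat revenue growth"],
    data_sources=["Q4 earnings report", "50-day moving average"],
)
assert rec.is_explainable()
assert guard_prompt("Summarize the energy sector outlook", has_ia_licence=False)
assert not guard_prompt("Should I buy Stock-A?", has_ia_licence=False)
```

A real deployment would, of course, need far more robust intent classification than keyword matching - but the principle stands: refuse at the boundary, and never emit a recommendation without its audit trail.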
And what should SEBI consider?
SEBI will need to strike the right balance. Overregulation could chill innovation; under-regulation could expose investors to serious risks. Its early moves were important, but from here, collaboration isn’t optional - it’s critical.
To start, SEBI could lean into industry-driven solutions. An industry code could set clear guardrails for sourcing and processing training data - defining what’s fair game, how to protect privacy, and how to tackle bias. This isn’t just theoretical. The Fintech Open-Source Foundation (FINOS), part of the Linux Foundation, already does this. Its AI Governance Framework, developed through its AI Readiness Special Interest Group, helps financial institutions adopt AI without losing control or accountability.
But codes and frameworks aside, SEBI may also need to take a hard look at its liability model. AI tools evolve, sometimes unpredictably. Expecting human-level certainty from AI outputs may be asking for the impossible. SEBI could help by issuing clearer guidance on how liability applies when outputs aren’t fully human-driven.
Here’s where regulatory sandboxes could play a starring role. Firms experimenting inside SEBI’s sandbox - subject to SEBI’s supervision - could get a regulatory seal of approval. Once they meet the set standards, liability could be limited, encouraging responsible AI without leaving innovators exposed.
Alternatively, SEBI could take cues from the industry. Major tech players like Microsoft have embraced shared responsibility frameworks for AI. Their approach is simple: developers, deployers and users - all share the load. Responsibility is distributed across the AI supply chain, ensuring no single actor is unfairly burdened.
SEBI could do something similar. It could push for distributed responsibility through industry-led Self-Regulatory Organizations (SROs). These SROs could then help regulated entities bake AI governance right into their contracts with developers and deployers. The result? Clearer roles, shared burdens, and a governance model that scales with innovation.
So, where does that leave us?
These amendments make one thing clear: for now, the liability rests on the entity using/deploying AI. No safe harbour. No shrugging and pointing at the machine. And SEBI is not alone. Regulators in the USA, UK, and Canada are all moving in this direction. The SEC has even penalized investment advisory firms for AI-washing - overstating their use of AI to market their services.
The global trend is clear - regulators believe that no matter how autonomous the system, there's always a human-in-the-loop. And the buck? It has to stop somewhere.
So, coming back to the same question - can capital markets have their ChatGPT moment? Sure. But not until the law catches up. And right now, it’s not in a rush.
But it’s not just capital markets feeling the tremors of change. Even one of India’s most celebrated innovations – UPI – is showing signs of strain. As we turn to the next big story, we explore how India’s fast payments backbone is being tested under its own weight, and what it might take to future-proof it.
Time to Rewrite India’s Fast Payments Story
UPI recently broke another record – but not the kind you'd celebrate. Over a short span of three weeks, UPI faced four outages. It was an unprecedented moment in its otherwise stellar run. The Finance Minister took quick note. She’s asked all stakeholders to fix the underlying gaps and reminded them of the bigger mission: a billion UPI transactions per day within the next 2-3 years.
The recent outages stemmed from a mix of technical limitations within the UPI ecosystem and the inability of NPCI’s/banks’ servers to handle growing transaction loads.
Here’s what happened:
· 26 March 2025: Fluctuations in telecom networks connecting NPCI, PSPs, TPAPs, and users led to transaction failures.
· 31 March 2025: Bank servers overloaded during the financial year-end rush, causing drop-offs in UPI transactions.
· 2 April 2025: Bank UPI servers slowed down, spiking latency across the UPI network.
· 12 April 2025: Banks flooded UPI servers with transaction status requests. The UPI infrastructure currently lacks a system to limit these requests. The excessive traffic overburdened UPI servers, leading to a 5-hour UPI outage. NPCI has since asked banks to implement processes to prevent a recurrence.
· 12 May 2025: PhonePe – a leading TPAP – reported disruption in its service due to a network capacity issue. The root cause was a new server handling all UPI transactions while the usual systems underwent a disaster recovery drill to avert cybersecurity risks arising from the India-Pakistan conflict.
These incidents aren’t random blips. They show signs of stress on UPI’s infrastructure - but for now, we believe that the gaps are still fixable.
The MDR Link
Before NPCI could issue a statement, conspiracy theorists had a field day. Some even claimed that the outages were ‘engineered’ by NPCI, RBI, and the Government to drum up support for MDR’s comeback. Wild theories aside, there is a connection between UPI outages and MDR.
Ever since the zero MDR policy kicked in, the industry has flagged its sustainability issues. No MDR means no revenues for banks and UPI service providers - and no revenue means no real incentive to invest in upgrading UPI’s infrastructure or introducing new innovations.
The recent outages might be the first visible cracks in the system — cracks that, partly, MDR can fix. If MDR is reinstated, a portion of that revenue can be reinvested into strengthening the UPI rails.
Globally, India’s zero MDR policy is an outlier. Countries like Thailand, Singapore, Malaysia, the U.K., Switzerland, and Brazil allow service providers to charge merchants. Even Malaysia - which initially implemented a zero MDR policy for its fast payment system DuitNow – reintroduced MDR. It now has a tiered MDR policy, with higher MDR for large merchants and lower MDR for small ones.
Fuelling Innovation in the UPI Space
MDR is one lever. Startup-driven innovation is another.
As UPI scales, it will encounter increasingly complex technical challenges — like the server-side issues behind the 12 April outage. Instead of firefighting later, the Government/RBI/NPCI could encourage the industry to identify and address such issues pre-emptively. For instance, the Government could launch a UPI Innovation Grant — funding startups that specialize in diagnosing and solving UPI-specific technical problems.
We’ve already seen promising examples of startup innovation in the UPI space. Take MissCallPay - a small startup that pioneered UPI payments via UPI123Pay, allowing users without internet access to transact digitally.
These specialized startups could become the ‘white blood cells’ of the UPI ecosystem – fighting off vulnerabilities before they become systemic.
Building Alternatives to UPI
Lastly, we need alternatives. UPI today processes a staggering 83% of India’s digital transactions - clocking 7,000 transactions per second.
Recognizing this concentration risk, RBI had proposed New Umbrella Entities (NUEs) back in 2020. NUEs were supposed to create competing retail payment systems. However, the plan was shelved because most applications just mirrored UPI without offering anything truly new.
There’s a big reason why NUEs didn’t take off. Since UPI was being offered for free, NUEs had limited means to make money. The business case for building new payment rails just didn’t add up. But now, the conversation is shifting. If MDR is reintroduced, the UPI ecosystem offers a viable revenue model — and so do competing systems. That opens the door for real innovation to enter the market.
The Government is also planning Net Banking 2.0 - an independent payment system. Unlike UPI, it won’t rely on existing rails like Immediate Payment Service (IMPS). It’s being built ground-up with its own infrastructure. If done right, this could be a strong secondary rail to support India’s growing digital economy.
To sum up:
· Reinstate MDR - let’s make UPI economically sustainable.
· Promote nimble startups - let’s solve tomorrow’s technical issues today.
· Relaunch NUEs which build real alternatives, not just shadows of UPI.
India’s fast payments story is just getting started, and its next chapter must be written a little more carefully.
While UPI’s future hinges on infrastructure upgrades and recalibrated policy choices, RBI appears to be rethinking its playbooks – from RBI’s revamped digital lending guidelines to the framework of regulations. Our next section serves up a quick dessert on the evolving fintech-rulebook.
Dessert 🍮
Presenting: RBI’s Regulatory Tiramisu
Like a well-cooked Tiramisu that needs precise layering and the right balance of ingredients, the RBI’s new Framework for ‘Formulation Of Regulations’ (Framework) brings a refreshing approach to India’s financial landscape.
The main ingredients of the Framework’s recipe include:
· Regulatory Impact Analysis (RIA): the RBI will now evaluate impact before implementing new regulations - ensuring precision and foresight in policy decisions.
· Global best practices: regulations will now incorporate international standards while maintaining local relevance, creating a balanced regulatory approach.
· Public feedback integration: the RBI’s commitment to integrate public feedback in rule-making is a significant shift toward transparency and inclusivity.
· Dynamic review process: regulations will undergo periodic reviews to stay relevant and effective in our changing financial landscape.
Mints 🌿
💼 LSPs under RBI lens
The Economic Times recently reported that the RBI is now directly auditing Loan Service Providers (LSPs). Until now, RBI queries were routed through Regulated Entities (REs). But this time, during annual RE inspections, the RBI reportedly called in LSP employees directly. Questions were centred around tech stacks, onboarding flows, KYC, and disclosure practices. Check out our post for a detailed overview of the actions LSPs must now take in light of this development.
🔄 UPI API Rules Reset
NPCI has issued a circular to address the situation arising from banks initiating ‘Check Transaction Status’ API requests at very high rates, which led to the UPI downtime on 12 April 2025. To ensure the system operates as expected, new guidelines require members to monitor and moderate API traffic. Banks must now follow a new rhythm: first check after 90 seconds, maximum three attempts within 2 hours, and no batch processing for non-financial APIs.
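The pacing rules above reduce to a simple gate. Here’s a hypothetical sketch of that logic - the constants mirror the circular as summarized above, but the function and its name are our own illustration, not NPCI’s actual specification:

```python
# Pacing rules for 'Check Transaction Status' calls, per the summary above.
FIRST_CHECK_S = 90           # first check no earlier than 90 seconds
MAX_ATTEMPTS = 3             # at most three attempts...
WINDOW_S = 2 * 60 * 60       # ...within a two-hour window

def may_check_status(elapsed_s: float, attempts_so_far: int) -> bool:
    """Return True if one more status check is allowed for this transaction."""
    if attempts_so_far >= MAX_ATTEMPTS:
        return False         # attempts exhausted
    if elapsed_s < FIRST_CHECK_S:
        return False         # too early for the first check
    if elapsed_s > WINDOW_S:
        return False         # outside the two-hour window
    return True

assert not may_check_status(30, 0)     # too early
assert may_check_status(90, 0)         # first check allowed at 90s
assert not may_check_status(600, 3)    # three attempts used up
assert not may_check_status(7300, 1)   # window closed
```

A throttle like this at each bank’s edge is exactly the kind of backpressure whose absence let status-request floods overwhelm the UPI servers in April.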
⚡UPI gets faster
NPCI slashes API response times to speed up UPI transactions. Request payment now needs just 15 seconds (down from 30 seconds), while transaction status checks drop to 10 seconds. Banks must upgrade systems by 16 June 2025 without compromising on technical success rates. These changes aim to make UPI fast while maintaining reliability - think instant coffee, but for payments.
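For a bank tuning its systems, the revised windows are effectively per-API deadlines. A minimal sketch, assuming the numbers above (the dictionary keys and function name are hypothetical, not NPCI’s API identifiers):

```python
# Revised response windows, per the summary above (names are illustrative).
API_DEADLINES_S = {
    "request_pay": 15,        # down from 30 seconds
    "check_txn_status": 10,   # transaction status checks
}

def within_deadline(api: str, started_at: float, now: float) -> bool:
    """Check whether an API response arrived inside its revised window."""
    return (now - started_at) <= API_DEADLINES_S[api]

assert within_deadline("request_pay", 0.0, 12.0)
assert not within_deadline("check_txn_status", 0.0, 11.0)
```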
🔍 UPI Name Lock
NPCI tightens beneficiary name rules to boost payment security. Only official banking names allowed now - no more custom names or QR code variations. Apps must remove name editing features, ensuring what you see is who you pay. This is aimed at making UPI transactions more trustworthy - like checking ID before handing over cash.
🧭 RBI’s New Digital Lending Directions
The RBI has introduced the Digital Lending Directions, 2025. Key updates include mandatory reporting of Digital Lending Apps (DLAs) to a new RBI portal by 15 June 2025, stricter monitoring of LSPs by REs, and conditions for offshore data processing. These changes aim to improve transparency and borrower protection. Check out our post for a detailed breakdown.