Launching Inclusive by Design: India’s AI4ALL Playbook

In February, we launched Inclusive by Design: India’s AI4ALL Playbook (Playbook), with support from Meta, at the India AI Impact Summit 2026.  

The central finding of the Playbook is simple: AI can be a force multiplier – and a powerful leveller. The Playbook maps AI’s potential role in advancing inclusion across India.

Across three cities — Delhi, Bengaluru and Mumbai — we asked deployers, founders, investors, policymakers and communities a simple question: how can AI improve lives regardless of barriers such as language, culture, age, gender or disability? 

You can read the full Playbook here. 

This post offers a snapshot of the Playbook’s key findings, along with themes that emerged during the panel discussion at the Summit. 

Brief findings of the Playbook: 

  • Inclusion does not automatically follow deployment: While there is broad recognition that well-designed AI systems can meaningfully advance inclusion, the research makes clear that deployment alone does not deliver inclusion. The real barriers are often institutional rather than technical: funding structures that treat inclusion as an afterthought, procurement timelines that move at a pace AI development cannot match, and datasets that fail to reflect the diversity of the populations they are meant to serve. The Playbook finds that AI-led inclusion depends on three levers working in concert – design, access, and investment. When any one of these is missing, initiatives tend to stall at the pilot stage and rarely move beyond it. 

  • What the use cases show: The Playbook does not examine these questions in abstract. It analyses a range of real-world use cases across sectors in India to understand how inclusion is being operationalised in practice. These use cases show that AI-enabled inclusion is not about scale alone, but about specific design choices that respond to real constraints. For instance, Shishu Maapan reimagines newborn measurement by replacing heavy equipment and fragile manual workflows with a smartphone-native tool that works offline and integrates into existing public health systems. Similarly, agriculture-focused tools such as Dhenu.ai demonstrate how voice-first, vernacular interfaces trained on domain-specific data can deliver expert advice directly to smallholder farmers, bypassing literacy and connectivity barriers. In both cases, inclusion is achieved not by retrofitting features later, but by designing from the outset for the hardest user and the most resource-constrained setting. 

  • Language as foundational infrastructure: One of the Playbook’s sharpest findings concerns language. Language infrastructure is foundational to population-scale inclusion and must be embedded into service design from day one, not layered on later as translation. Multilingual and voice-based systems are not optional accessibility features, and English-trained models, regardless of capability, cannot serve diverse populations by default. 

  • Designing for real-world conditions: Various stakeholders during our consultations noted that AI systems must be built for real-world operating conditions – low-bandwidth environments, feature phones, noisy public settings, frontline health visits, crowded courtrooms, and varying levels of digital literacy. Designing for countries like India means accounting for unstable connectivity, non-standard speech patterns, dialect variation, and contexts where users may not be typing on a screen at all. In other words, contextual design also determines whether inclusion scales. 

  • Universal design as a principle: The Playbook also makes a pointed case for universal design, reframing it not as accommodation for a minority of users but as a principle that strengthens products for everyone. Features built with the hardest constraints in mind tend to become standard over time. This is as much a business argument as it is an equity one. 

  • Building institutional capacity: Then there is the institutional piece. Across the roundtables, stakeholders repeatedly emphasised that AI-led inclusion depends on building institutional capacity within government. This includes capacity for dataset preparation and standardisation, clearer procurement pathways, and importantly, the technical skillsets required to evaluate, deploy, and oversee AI systems. Participants noted that without internal technical understanding of model limitations, data quality, evaluation metrics, and integration challenges, even well-designed tools struggle to move beyond pilots. Initiatives like AIKosh are a step in the right direction, making representative and diverse datasets more accessible to developers, but the work of building that infrastructure is ongoing, and the investment required is significant. 

Reflections from the panel discussion 

The panel reflected a deliberate mix of perspectives across engineering, enterprise deployment, public systems, and global innovation policy.  

Agustya Mehta (Director of Hardware Engineering, Meta) brought experience from building AI-enabled hardware, Archana Joshi (Global Head, AI Value Management, Xoriant) contributed insights on enterprise AI implementation and value realisation, Arghya Bhattacharya (Co-Founder, Adalat AI) shared lessons from deploying AI within India’s justice system, and Olivier Twagirayezu (Director, AI Scaling Hub, Rwanda Centre for the Fourth Industrial Revolution) offered a government-led view on scaling AI through national innovation ecosystems.  

You can watch the panel discussion here. Several themes emerged: 

  • Build for real constraints, not ideal conditions: Across multiple deployments in emerging markets, the products that gained real traction were those that solved urgent, undeniable pain points first. In one case, this meant digitising court processes in jurisdictions where nearly 90% of judges did not have access to stenographers. In another, it meant designing agricultural advisory tools for farmers using basic feature phones, operating on unstable networks, and communicating only in Kinyarwanda. When the benefit is immediate and the alternative is genuinely broken, adoption follows. The lesson was clear: design for the hardest user in the most difficult environment. 

  • Inclusion that is retrofitted always costs more: Some speakers explained that companies usually fall into two groups. One group builds inclusion into the product from the start. The other focuses on quick returns and postpones inclusion. For example, designing a product to work offline because connectivity often fails shows early planning. Launching a customer bot only in English in a largely Hindi-speaking market because it is easier to show early ROI shows the opposite. When inclusion is built in early, the product works for more users from day one. When it is postponed, companies often must redesign systems and rebuild features to re-earn user trust. Retrofitting inclusion almost always costs more.  

  • Procurement cycles are misaligned with how AI develops: Classical procurement, built for infrastructure and roads, does not work for technology that evolves at breakneck speed. The most successful deployments had found ways around this — by co-developing with institutions in real time rather than waiting for formal procurement cycles, and by building enough on-the-ground familiarity to eventually shape better, more informed RFPs. It was also observed that countries such as Rwanda have taken a more coordinated and innovation-friendly policy approach, where regulators and implementers engage early to enable controlled experimentation within public systems. Panellists further highlighted the role of nonprofit organisations in building trust, particularly in sectors such as law, health and disability. By working closely with communities and institutions, nonprofits can help align incentives and strengthen credibility, which in turn supports more durable adoption. 

  • Framing inclusion as CSR keeps inclusion underfunded: When inclusion is positioned as a social responsibility initiative, it does not make for good products or good economics. The harder truth is that diverse datasets are expensive, and until that cost curve shifts through government-led data infrastructure, enterprises will continue to defer inclusion. As such, beyond the moral case, the financial case for inclusion must be made. 

  • The promise of India’s purple economy: One key takeaway was the size and importance of the “purple economy” – the market for products and technologies built for persons with disabilities. India has one of the largest populations of persons with disabilities in the world, and the assistive technology market here alone is estimated at around USD 150 billion. These are not fringe users; they are consumers with purchasing power and real needs. Building accessible products is not charity. It is a significant market opportunity. 

  • Universal design: Universal design makes products better for everyone. The flatbed scanner, OCR, and text-to-speech all began as accessibility efforts before becoming industry-wide standards. Teams that build for the lowest-resource users and the most underserved contexts tend to produce the most durable and widely useful products. Universal design is not a regulatory checkbox; it is where good design begins. 

You can read the Playbook here.   

Authors: Ikigai Law Team 

For any queries, reach out to us at contact@ikigailaw.com
