Project BUILD – Takeaways from our bilateral, multistakeholder, and interdisciplinary exploration on building inclusive AI in healthcare

 

A. Project BUILD:

 

Through “Project BUILD – Building Inclusivity by Design in AI/ML-Powered Healthtech” (Project BUILD),[1] we aimed to provide policy recommendations and partnership ideas for the Indian and Australian governments.[2] We sought to address a critical gap in global AI policy: how do you leverage global AI principles to build AI that does not exacerbate known and unknown barriers to providing, accessing, and benefiting from healthcare? How should AI developers and deployers elicit and accommodate the lived experiences of users, providers, and end-beneficiaries in AI development and deployment?

 

A powerful takeaway from Project BUILD: it became clear that AI design, deployment, and governance can and should be participatory and iterative, especially in the context of healthcare. Lived experiences, when elicited and leveraged ethically and meaningfully, can shape AI tools to be fit for purpose. There is a strong business case for such collaborative design, risk and impact evaluation, deployment, incident reporting, and post-market evaluation. The reports from Project BUILD lay the path for such collaboration.

 

Our findings are captured in two reports – one for policymakers, and the other for developers and deployers of AI in healthcare – along with supplementary materials:

  1. Building Inclusivity by Design – Governance, Processes and Partnerships: A Memorandum for National, Regional and Inter-Governmental Officials (Report here).
  2. Co-Designing Inclusive AI in Healthcare: A Toolkit for Developers and Deployers (Report here).
  3. Supplementary Materials (here) – a repository of qualitative research methods (e.g., to evaluate the inclusivity of AI tools), definitions of inclusion, and a summary of each exploration undertaken during the exchange tours.

B. Key findings from Project BUILD:

1. Identifying the issues and solutions impacting building and deploying inclusive AI in healthcare: Through our conversations we identified a clear and pressing need to unpack “inclusion” and what it would mean for an AI tool in healthcare to be inclusive. Providing such clarity is essential for developers and deployers to ensure AI is inclusive by design, by understanding what steps to incorporate through the AI’s lifecycle and who should be tasked with those steps. Co-design, or a participatory approach to designing AI, was also identified as central to ensuring the AI system accounts for the lived experiences of consumers (e.g., patients) and the realities of providing healthcare across different contexts (e.g., urban versus rural), specialities (e.g., triaging or a super-speciality such as cardiology), and healthcare professionals (e.g., clinicians or community health workers).

 

2. Providing conceptual clarity through a definition: Based on the issues and solutions identified, we proposed a definition for “inclusive AI in healthcare” that reaffirmed the importance of digital health equity.[3] The definition (presented in the graphic below) is intended to guide developers, deployers, and governments to set up standard operating procedures to build, use, and exercise oversight over AI in healthcare.

 

 

3. Implementing the definition through “tenets of inclusive AI in healthcare”: The tenets are the components which together can ensure the AI system does not leave any end user behind – that is, the AI system built is fit for purpose and allows for iterative improvement and customisation.

 

Tenets of Inclusive AI in Healthcare:
1. Proactive approach to problem solving: Embed inclusive AI principles into organizational decisions (team qualifications, research, governance, partnerships) and all product design choices (data sourcing, training protocols).
2. Defining the problem statement: For example, the consumer needs a summary to enable easier access to healthcare; the doctor needs support to complete summaries in a timely and smart manner that does not add to their workload.
3. Addressing barriers to inclusion: Evaluate if the AI scribe exacerbates healthcare access barriers for vulnerable communities using qualitative methods like a digital health equity framework, including caregiver reliance, associated costs, and vernacular transcription capabilities. Also assess clinicians' IT infrastructure (e.g., fast internet to minimize summary upload lag, powerful mics/speakers for thorough consultation capture).
4. Participatory design with stakeholders: Conduct think-aloud sessions with clinicians to understand workflow, and interviews with patients to understand help-seeking behaviours and needs.
5. Inclusivity-use case alignment: Developers can build a minimum viable product (e.g., an AI scribe) and then improve its inclusivity within the context of clinicians needing a smart solution for medical summaries and consumers needing accessible summaries for continuity of care.
6. Risk and impact assessment: AI impact and risk assessments can be based on whether the AI is solving for the problem statement – for instance, whether the AI tool mishears symptoms or incorrectly categorises information (e.g., treating important information as less relevant, or categorising information as clinical history instead of current state of health).
7. Representative datasets: Ensure that automatic speech recognition and natural language processing datasets and training instructions cover the accents, languages, and local slang used in the region.
8. Continuous improvement: The AI continues to absorb new terminology and vernacular used by consumers and clinicians during its deployment, which the developers ensure are incorporated into the AI’s algorithms.

C. Policy recommendations from Project BUILD:

We proposed five foundational policy requirements for developers and deployers to successfully co-design AI:

1. Policies must incentivise dataset sharing, break silos (e.g., disease-specific registries), and enforce ethical data stewardship benefiting data subjects.

2. Regulations should balance privacy with ethical data sharing from lived experiences to build inclusive AI models.

3. Base regulation and transparency on sector- and model-specific risks (e.g., lower scrutiny for patient support AI vs. mental health AI).

4. Policymakers must enable feedback incorporation across the AI lifecycle via participatory design (e.g., think-aloud methods).

5. Government should fund skills training for clinicians and technicians (e.g., AI expert-partnered courses) to ensure safe "humans-in-the-loop" use.

 

D. Partnership ideas from Project BUILD:

We proposed the following national-level and bilateral partnerships and collaborations to drive inclusive AI in healthcare:

1. Set up institutions for policymaking and for evaluating the impact of AI used for public health (e.g., a Centre of Excellence at the All India Institute of Medical Sciences New Delhi, the National Health Authority, Healthdirect, and the Validitron SimLab);

2. Provide training and upskilling across stakeholders on what inclusive AI in healthcare entails through the lifecycle (e.g., by Skill India Digital Hub’s MOOCs or by industry bodies such as Tech Council of Australia, Nasscom, NatHealth, and Confederation of Indian Industry);

3. Build knowledge-sharing repositories for Australia and India to share learnings on inclusive AI in healthcare (e.g., incident reporting databases built by India’s and Australia’s medical devices regulators – the Central Drugs Standard Control Organisation and the Therapeutic Goods Administration respectively); and

4. Build partnerships between Australian and Indian ministries and reputed academic institutions to evaluate the inclusivity of specific AI tools and to design scoring or evaluation strategies for such evaluations (e.g., between the Commonwealth Scientific and Industrial Research Organisation (CSIRO) and the Indian Council of Medical Research (ICMR)).

 

E. Overview of the Panel Discussion at the Summit:

We launched the reports at an official pre-summit event at the Australian High Commission on 28 January 2026, and conducted a panel discussion – “Towards global cooperation for equitable AI in healthcare” – at the India AI Impact Summit 2026[4] to unpack Project BUILD’s findings.[5] The panellists[6] focused on practical pathways towards implementing and enabling inclusive design and deployment. They highlighted participatory design approaches, policy and institutional mechanisms to enable participatory development and deployment of AI systems, and cross-border alignment to support equitable access and responsible implementation.

 

Key takeaways included:

  1. Fit-for-purpose approach towards AI: Design should start with patients and providers, rather than starting with a product and searching for or customising to users afterwards.
  2. Iteration and building ‘with’ communities: Building with intended users and beneficiaries through the lifecycle will enable better AI solutions. The “curb-cut effect” was used to explain why centring disabled people improves innovation for all (e.g., closed captioning was a disability-driven innovation that has since become broadly useful). Additionally, the need to go beyond diversity as a numbers exercise and towards “epistemic diversity” was flagged. Epistemic diversity allows examination of who holds decision-making power, and counters tokenisation (i.e., where intended beneficiaries/users are included as a tick-box exercise). Ableism and socioeconomic inequality were also identified as under-addressed dimensions of AI bias.
  3. Team composition’s role in ensuring inclusion: It was argued that diverse teams catch bias early. For instance, having team members with chronic illnesses or disabilities or people from different skill sets (e.g., anthropology, sociology) will help in the design of research, development, deployment, and oversight of AI used in healthcare to ensure equity and trust.
  4. Incentivising participatory design through institutional guidance: WHO’s emphasis on multi-stakeholder engagement and cross-sector collaboration was described, with the observation that no single discipline (e.g., legal, clinical, economic) alone can deliver these outcomes. The UK’s AI assurance approach was floated as an implementation toolset and positioned as the “how” of responsible AI: the use of methods to evaluate, measure, and communicate how risks are mitigated in AI systems.
  5. Improving dataset availability and accuracy: Efforts to improve the availability and benchmarking of medical datasets and taxonomies in India were described. Initiatives such as a DBT-linked imaging biobank effort, an ICMR-linked effort (which focusses on “gold standard” datasets and taxonomy benchmarking), and AI Kosh were flagged as examples. Plans for challenge or contest formats were also noted. It was also observed that when startups and academia partner for innovation (e.g., within an academic incubator such as the CMIE at AIIMS), the startup has more seamless access to datasets for innovation, because the administrative overheads associated with privacy and data-sharing are reduced.
  6. Cross-border technical collaboration (federated learning and cultural/language variation): The need for cross-border learning in healthcare AI was underscored because diseases and patient needs are global, even where the context (e.g., type of healthcare) differs. Cultural, linguistic and geographic variations also shape symptom expression. Federated learning was suggested as an approach through which models can “travel” across centres to learn from varied datasets without sharing raw data. 

You can read the Project BUILD reports (report 1 and report 2) and supplementary materials here.

 

Authors: Shambhavi Ravishankar (Counsel, Ikigai Law), Rutuja Pol (Partner, Ikigai Law), Nirmal Bhansali (Associate, Ikigai Law)

For any queries, reach out to us at contact@ikigailaw.com



[1] Project BUILD was completed with NALSAR University of Law and the Centre for Artificial Intelligence and Digital Ethics, University of Melbourne, and funded by the Australian Department of Foreign Affairs and Trade under the Australia-India Cyber and Critical Technology Partnership. The Project team included Rutuja Pol (Partner, Ikigai Law), Shambhavi Ravishankar (Counsel, Ikigai Law), Nirmal Bhansali (Associate, Ikigai Law), Prof. Srikrishna Deva Rao (Vice Chancellor, NALSAR University of Law), Prof. Krishna Ravi Srinivas (Adjunct Professor, NALSAR University of Law), Prof. Jeannie Paterson (Professor, CAIDE, University of Melbourne), and Piers Gooding (Associate Professor, La Trobe University). We conducted two mutual learning exchange tours, taking Australian and Indian technology, healthcare, and disability experts to India and Australia respectively to meet with their counterparts. The tours were aimed at exploring how individuals from academia, civil society, government, and industry were approaching “inclusion” in the context of building, deploying, evaluating, and governing AI in healthcare.

[2] Project BUILD was funded by the Australia-India Cyber and Critical Technology Partnership Grant and completed with NALSAR University of Law and Centre for AI and Digital Ethics, University of Melbourne.

[3] The definition is a modification of the CSIRO Data61 team’s definition on “Diversity and Inclusion in AI”, modified to fit the context of using AI for healthcare. In our report “Co-Designing Inclusive AI in Healthcare: A Toolkit for Developers and Deployers” we proposed the addition of “equity” to the CSIRO definition, to affirm digital health equity which is the fair, just opportunity for all individuals to access, use, and benefit from digital health technologies. Data61 is led by Professor Didar Zowghi, whom we met with during the Australian Exchange Tour, and who later became an Australian cohort member for the India Learning Tour. Please see our Supplementary Materials document for more on our explorations on defining “inclusive AI in healthcare”.

[4] Discussion was organised with Microsoft and the Centre for Digital Transformation of Health, University of Melbourne (CDTH).

[5] You can see a recording here.

[6] The panellists were Dr. Krithika Rangarajan, Associate Professor, AIIMS-New Delhi; Kanika Kalra, Technical Officer, AI for Health, World Health Organization; Tess Buckley, Senior Programme Manager, Digital Ethics and AI Safety, techUK; and Mohit Jain, Principal Researcher, TEM Group, Microsoft India.
