The RBI wants digital lenders to follow ethical AI standards. But what does this mean in practice? In this article, we explore how digital lenders can adopt ethical AI standards before hardline regulation causes business disruption.
“In our early days at Upstart, we couldn’t know for certain whether our model would be biased. It wasn’t until loans were originated that we were able to demonstrate that our platform was fair. As an early-stage startup, this was a risk worth taking, but it’s not a risk a large bank would have considered.”
That’s the CEO of Upstart testifying before the US Congress in 2019. Upstart was the first digital lender to receive a ‘no-action letter’ from the Consumer Financial Protection Bureau (CFPB). The letter enabled Upstart to undertake AI-based credit underwriting without risking penalties under anti-discrimination laws. As part of the arrangement, Upstart also agreed to regularly share data about its AI model with the CFPB, including a comparison of the AI model’s results with those of traditional processes. The data indicated that everyone, including disadvantaged groups, was better off when Upstart’s AI model was used instead of traditional processes. This story is a useful reminder that while AI-based decision-making is far from perfect, it may still be better than what we have at present. If AI models can at least improve the status quo, we shouldn’t let perfect become the enemy of good.
Closer home, the RBI may soon require digital lenders to comply with ethical AI standards. The RBI indicated as much in its 10 August press release on implementation of the digital lending guidelines. The ethical AI standards must ensure transparency, impartiality, inclusion, reliability, security and privacy. The RBI has adopted this guideline in principle; it isn’t immediately implementable, and the RBI will deliberate further before notifying more specific requirements.
Digital lenders currently use AI for processes like credit-risk assessment, loan application processing, collection and monitoring, customer support, and fraud prevention. With the Account Aggregator system gaining traction, the volume of data available for digital lending will only increase. And so will the use of AI to process this data. So, before regulators adopt a hardline stance, the digital lending industry must proactively set its house in order.
It’s difficult to retrofit ethical AI standards to an AI model after it has been designed and deployed. So, digital lenders must prepare for upcoming regulations. These are some best practices to help digital lenders understand how ethical AI principles can be implemented in their business.
- Use-case selection –
Not all use-cases may be suitable for AI based decision-making. Digital
lenders must assess if use of AI is appropriate based on the (a)
importance of the function, (b) consequences of errors, (c) usability of
available data sets, and (d) capability of AI models.
- Data quality –
AI models rely on a large number of data points, ranging from device operating system to font choices, to assess creditworthiness. It’s therefore crucial to ensure the data fed into AI systems is accurate and representative. The RBI has also emphasized this requirement in its digital lending guidelines. Alternative data may be inaccurate because these data points may have been sourced from third parties who originally collected them for unrelated purposes. Further, data may be unrepresentative if it doesn’t feature certain demographic groups. For example, since women have historically lacked access to financial services, existing data sets may provide far less information about women than about men. This could in turn affect the AI model’s ability to accurately predict women’s financial behaviour. Data may also be unrepresentative if it doesn’t account for different stages of the economic cycle.
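As a minimal sketch of what such a representativeness check could look like (the attribute name, reference shares and tolerance below are hypothetical, not prescribed by any guideline), a lender could compare the demographic mix of its training data against a reference population before each retraining run:

```python
from collections import Counter

def representation_gaps(records, attribute, reference_shares, tolerance=0.10):
    """Flag groups whose share in the training data falls short of their
    share in a reference population by more than `tolerance`
    (an absolute difference in proportions)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if expected - observed > tolerance:
            gaps[group] = {"expected": expected, "observed": round(observed, 3)}
    return gaps

# Hypothetical example: women are ~48% of the reference population
# but only 20% of this training sample.
sample = [{"gender": "M"}] * 80 + [{"gender": "F"}] * 20
print(representation_gaps(sample, "gender", {"M": 0.52, "F": 0.48}))
```

A gap flagged here wouldn’t by itself make the model unusable, but it tells the lender where extra validation, or extra human oversight, is needed.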
- Fairness –
AI models must not produce biased results which discriminate against
disadvantaged groups like women, religious minorities, oppressed castes,
persons with disabilities etc. This is easier said than done: even if gender, religion, caste, disability status etc. aren’t used directly, AI models may rely on other variables which act as proxies. For example, a person may be considered less creditworthy not because she’s a woman but because she reads an online magazine mostly read by women. Technical solutions, like pre-processing the data used to train the AI, evaluating counterfactual fairness, making post-processing modifications and introducing an adversarial AI system to counter bias, are being explored to make AI systems fairer.
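One simple fairness diagnostic, sketched below with hypothetical approval numbers, is the adverse-impact ratio: each group’s approval rate divided by the most-favoured group’s rate. Ratios below roughly 0.8 (the ‘four-fifths’ rule of thumb borrowed from US fair-lending practice, not an Indian regulatory threshold) are a conventional red flag:

```python
def adverse_impact_ratio(approvals_by_group):
    """approvals_by_group maps group name -> (approved, total applications).
    Returns each group's approval rate divided by the highest rate;
    ratios below ~0.8 conventionally signal possible disparate impact."""
    rates = {g: a / t for g, (a, t) in approvals_by_group.items()}
    best = max(rates.values())
    return {g: round(r / best, 3) for g, r in rates.items()}

# Hypothetical outcomes from a model back-test.
ratios = adverse_impact_ratio({"men": (60, 100), "women": (40, 100)})
print(ratios)  # women's ratio of 0.667 falls below the 0.8 rule of thumb
```

A check like this only detects outcome disparity; it doesn’t locate the proxy variable causing it, which is where the pre- and post-processing techniques above come in.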
- Transparency –
Explainability of AI-made decisions is an ongoing effort. Even creators of AI models struggle to explain how different inputs interact with each other to generate an output. But it’s still important for digital lenders to explain how their AI systems work, to the extent possible. For example, some industry players publish model cards
which are similar to nutritional labels. These model cards provide
information about the AI model’s training process, evaluative factors,
uses and known biases.
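A model card can be as simple as a structured record published and reviewed alongside the model. The sketch below is illustrative, with field names loosely following the headings commonly used in published model cards; the entries themselves are hypothetical:

```python
# Illustrative model card; every entry below is a hypothetical example.
model_card = {
    "model": "credit-underwriting-v2 (hypothetical)",
    "intended_use": "Unsecured personal-loan underwriting for salaried applicants",
    "training_data": "Bureau records and bank statements, 2019-2022 (illustrative)",
    "evaluation_factors": ["gender", "age band", "geography"],
    "known_limitations": [
        "Under-represents first-time borrowers",
        "Not validated across a full economic cycle",
    ],
}

def validate_card(card, required=("model", "intended_use", "known_limitations")):
    """Return the list of missing sections; a card with missing
    sections shouldn't be published."""
    return [f for f in required if not card.get(f)]

print(validate_card(model_card))  # [] -> card is complete
```

The value of the card is less in the format than in the discipline: forcing teams to write down known biases and limitations before deployment.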
- Human oversight –
Human oversight is essential to compensate for the inadequacies of any AI
model. This is especially important for edge cases which the AI hasn’t
been trained for. For example, if the data set used to train the AI model had
less representation of women, human oversight should be greater when the
AI model assesses a female customer. Depending on the use-case, the AI
model’s role may be limited to offering recommendations which must be approved by a human. Such human approvers must also be trained to guard against the tendency to over-rely on or misinterpret AI outputs.
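In practice, this kind of oversight can be wired directly into the decision pipeline: the model auto-decides only when its confidence is high, and everything else, including applicants from segments known to be under-represented in the training data, is queued for a human. A minimal sketch, with hypothetical thresholds:

```python
def route_application(score, confidence, under_represented_segment,
                      approve_at=0.7, min_confidence=0.9):
    """Return 'auto-approve', 'auto-decline' or 'human-review'.
    Applicants from under-represented segments always go to a human,
    as does any decision the model is unsure about.
    Thresholds here are illustrative, not regulatory values."""
    if under_represented_segment or confidence < min_confidence:
        return "human-review"
    return "auto-approve" if score >= approve_at else "auto-decline"

print(route_application(0.85, 0.95, under_represented_segment=False))  # auto-approve
print(route_application(0.85, 0.95, under_represented_segment=True))   # human-review
print(route_application(0.85, 0.60, under_represented_segment=False))  # human-review
```

The routing rule is deliberately boring; the hard part is the training and incentives that stop human reviewers from rubber-stamping whatever the model recommends.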
- Impact assessment and audits – Before deploying an AI model, digital
lenders must conduct an impact assessment. This exercise must be conducted
periodically, even after deployment. Since AI models are self-learning, regular audits are also necessary to detect model drift and identify the need for retraining. The RBI’s digital lending guidelines
also require AI models to be auditable so that minimum underwriting
standards and discriminatory factors can be identified.
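One widely used audit metric for drift, sketched here with hypothetical score distributions, is the population stability index (PSI). It compares the distribution of a model input or score at training time against production; as a rule of thumb, values under 0.1 are considered stable, 0.1–0.25 worth watching, and above 0.25 worth investigating or retraining:

```python
import math

def psi(expected_shares, observed_shares, eps=1e-6):
    """Population stability index across matching bins.
    Each list of shares should sum to ~1; eps guards against log(0)."""
    total = 0.0
    for e, o in zip(expected_shares, observed_shares):
        e, o = max(e, eps), max(o, eps)
        total += (o - e) * math.log(o / e)
    return total

# Hypothetical score distribution at training time vs. in production.
baseline = [0.25, 0.25, 0.25, 0.25]
current = [0.10, 0.20, 0.30, 0.40]
print(round(psi(baseline, current), 3))  # 0.228 -> in the 'watch' zone
```

A drift check like this is cheap to run on every scoring batch, which is what makes periodic auditing practical rather than an annual ritual.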
- Governance – Digital lenders must create an internal system of checks and balances for use of AI. Individuals and/or committees must be assigned the responsibility of creating and implementing ethical AI standards which include key performance indicators. Cross-functional collaboration between technical, business and legal teams is also necessary for effective AI governance.
Have more questions about ethical AI standards and their relevance for your business? Contact our fintech team at firstname.lastname@example.org.
(This post has been authored by the fintech team at Ikigai Law.)