Abstract
The growing reliance on artificial intelligence (AI) has led to its rapid adoption across several functional domains in the financial services sector.
While delivering significant business benefits and potential for innovation, AI has also added new dimensions of risk. Besides data privacy and security-related vulnerabilities, the lack of transparency in AI systems demands increased accountability and governance on the part of financial institutions. Because opaque models can produce biased, less-explainable, and unfair outcomes, global regulators recognize the black-box risks posed by automated decision systems, which can aggravate systemic impacts in the absence of sufficient governance mechanisms.
Regulatory formulations aimed at creating governance standards and guidelines for managing AI risks are currently underway in different jurisdictions. As part of their AI strategies, banks need to evaluate inherent risks and establish well-defined policies and governance frameworks to augment the transparency and reliability of AI systems. We outline how a robust governance framework can help banks harness the full potential of intended business benefits while prudently managing the associated risks.
[1] The term ‘artificial intelligence (AI)’ in this paper recognizes all associated cognitive and self-learning technologies – including machine learning (ML), natural language processing (NLP), and voice recognition among others.
Business innovation
Given the prevalence of data and intelligence-driven ecosystems, financial institutions have been actively adopting AI and machine learning (ML) in diverse areas of business.
These include customer-focused products and services, customer profiling, risk management and capital allocation, investment and trading, and regulatory compliance, as well as a few non-conventional areas such as data quality monitoring and document interpretation. When it comes to business innovation and productivity gains, AI is used in credit approval, product recommendation, product pricing, risk modeling, fraud detection, know your customer (KYC), anti-money laundering (AML), and regulatory data reporting, among others.
Risks of AI
AI promises financial institutions significant opportunities to boost innovation and enhance efficiency.
However, it also brings new vulnerabilities and unconventional risks such as inaccurate credit scores, improper product recommendations, denial of service, and discriminatory pricing. The nuances of AI risks are hard to predict, interpret, and control in a timely manner. If not sufficiently addressed, these risks can lead to unexpected consequences at both the bank and the financial system level. Without proper design and validation, self-learning and adaptive methods may introduce discriminatory bias driven by age, gender, race, or social behavior patterns into models, causing unfair and inequitable outcomes. AI risks are less obvious, hard to audit, and have limited explainability; they can therefore exacerbate data privacy, security, model governance, and third-party risks.
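To make the bias risk concrete, one widely used screening heuristic is the disparate impact ratio: the lowest favorable-outcome rate across groups divided by the highest. Below is a minimal, purely illustrative sketch in Python; the column names ("group", "approved") and the 0.8 threshold (the common "four-fifths" rule of thumb) are assumptions for the example, not a regulatory standard.

```python
import pandas as pd

# Illustrative disparate-impact check on model decisions (hypothetical
# schema). A ratio below ~0.8 is a screening signal for review, not a
# legal determination of discrimination.
def disparate_impact_ratio(df: pd.DataFrame,
                           outcome: str = "approved",
                           group: str = "group") -> float:
    """Ratio of the lowest to the highest favorable-outcome rate across groups."""
    rates = df.groupby(group)[outcome].mean()
    return float(rates.min() / rates.max())

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})
ratio = disparate_impact_ratio(decisions)
print(f"Disparate impact ratio: {ratio:.2f}"
      + ("  (below four-fifths threshold)" if ratio < 0.8 else ""))
```

Checks of this kind are typically run as part of model validation, with any flagged model routed to human review rather than blocked automatically.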
As definitions and standards of risk categorization, risk mitigation, and governance norms evolve in the industry, newer dimensions of risk inherent in AI systems emerge across their lifecycle (see Figure 1).
Regulatory standards
The risks involved in AI systems are unpredictable.
Regulators must therefore balance the twin policy priorities of fostering innovation and technological advancement in the industry, and ensuring sound, responsible adoption of technological capabilities to avoid potential systemic impact.
Ensuring customer protection through fair and transparent decisions and safeguarding data privacy and cybersecurity become equally important considerations.
In recent times, global regulators have been formulating standards to establish accountability and governance for the prudent use of AI systems. The EU’s harmonized rules on AI, the UK Information Commissioner's Office’s (ICO) guidance on the AI auditing framework, the US National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework, the US Federal Trade Commission’s guidelines on truth, fairness, and equity in the use of AI, and Singapore’s Model AI Governance Framework are some of the notable regulatory frameworks. Their objective is to institutionalize a holistic governance framework encompassing the AI system and its functioning across the lifecycle—design, development, validation, and monitoring.
Financial institutions’ AI adoption journey is largely driven by their business model, customer segment focus, product portfolio, and regulatory status, besides innovation ambitions.
Proliferation of data (traditional and non-traditional) and the resultant mining of business insights and intelligence have accelerated the adoption of AI across a wide spectrum. Financial institutions must gain a broad-level understanding of AI-centric use cases in their specific business context, the adoption level of AI along with associated risks, and the degree of its impact across key risk dimensions (see Table 1).
AI risk governance
Regulatory standards in the financial services industry are evolving rapidly.
An enhanced governance framework for oversight of potential risks from AI systems is therefore crucial to improve accountability, auditability, and transparency.
As part of firms' AI-centric programs, a few critical imperatives need to be addressed as a priority to holistically fulfill regulatory obligations. These include establishing appropriate risk management and control measures, policies and procedures, data governance, and documentation and record-keeping norms, alongside procedures for human oversight and transparent information to users (see Figure 2).
Priorities of an AI risk governance framework
Financial institutions’ AI risk governance model and supervision framework rest on the key pillars of people, process, and technology, each with distinct focus areas (see Table 2).
AI risk governance model
Financial services firms must establish a governance foundation with guiding principles and control thresholds for the optimized functioning of AI systems. A robust AI risk governance framework involves oversight of the AI model lifecycle, enterprise policies, and data governance norms across all AI programs. To align with regulatory frameworks, the governance framework should spell out clear objectives, guidelines, and standard criteria for model adoption, monitoring, and control processes, besides data and model security aspects.
Firms also need to outline resilient model governance procedures covering functions such as model validation, model quality testing, model monitoring, and model output analysis (explainable outcomes and trends). To avoid known or unknown risks and mitigate their unwanted consequences, the model governance guidelines must consider business objectives, decision consequences, data patterns, testing adequacy, and data privacy and security aspects.
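As one illustration of what ongoing model monitoring can look like in practice, the population stability index (PSI) compares the score distribution a model sees in production against the distribution it was validated on. The sketch below is a hypothetical example, not a prescribed method; the variable names, synthetic data, and the 0.2 alert threshold are common industry conventions rather than a regulatory requirement.

```python
import numpy as np

# Illustrative drift check: PSI between a reference (validation) score
# distribution and recent production scores. PSI > 0.2 is a common
# rule-of-thumb trigger for model review.
def population_stability_index(expected: np.ndarray,
                               actual: np.ndarray,
                               bins: int = 10) -> float:
    # Decile edges from the reference distribution, widened so that
    # out-of-range production scores still fall into an end bin.
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0] = min(edges[0], actual.min())
    edges[-1] = max(edges[-1], actual.max())
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    e_frac, a_frac = e_frac + 1e-6, a_frac + 1e-6  # avoid log(0)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
training_scores = rng.normal(0.50, 0.10, 10_000)
production_scores = rng.normal(0.55, 0.12, 2_000)  # drifted population
psi = population_stability_index(training_scores, production_scores)
print(f"PSI = {psi:.3f}" + ("  (significant shift)" if psi > 0.2 else ""))
```

A breach of the threshold would typically feed into the model monitoring and output analysis functions described above, prompting revalidation rather than automatic retraining.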
In terms of cybersecurity, the governance framework must consider risk scenarios across model and data protection, model extraction attacks, and training data poisoning. Additionally, firms must lay down overall assessment and audit processes to swiftly identify and remediate gaps. To keep AI governance functioning well, business groups should work closely with data governance teams and ensure that data quality, metadata management, and data traceability norms are fully adhered to.
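In operational terms, such data quality norms are often enforced as an automated gate before a dataset is released to a model pipeline. The following is a minimal sketch under assumed conventions; the dataset, column names ("loan_id", "income", "age"), and thresholds are hypothetical.

```python
import pandas as pd

# Illustrative data-quality gate (hypothetical schema and thresholds)
# run before training data is released to a model pipeline.
def quality_report(df: pd.DataFrame, key: str, required: list) -> dict:
    return {
        "row_count": len(df),
        "duplicate_keys": int(df[key].duplicated().sum()),
        "null_rates": {c: float(df[c].isna().mean()) for c in required},
    }

def passes_gate(report: dict, max_null_rate: float = 0.01) -> bool:
    return (report["duplicate_keys"] == 0
            and all(r <= max_null_rate for r in report["null_rates"].values()))

loans = pd.DataFrame({
    "loan_id": [101, 102, 102, 104],            # one duplicated key
    "income":  [52_000, None, 61_000, 48_000],  # one missing value
    "age":     [34, 41, 29, 57],
})
report = quality_report(loans, key="loan_id", required=["income", "age"])
print(report, "-> release" if passes_gate(report) else "-> block")
```

Persisting each report alongside the dataset version also supports the metadata management and traceability norms noted above.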
Augmented data governance for de-biasing and improved accountability
As data is the most critical input to the design and training of AI models, the effectiveness of an AI system depends entirely on a sound data quality and governance framework. Augmented data governance processes can boost the learning capability of AI models and eliminate data-induced biases to produce fair and consistent outcomes. This necessitates a decisive shift in key dimensions of data management, covering the effective planning, provisioning, and accessibility of requisite datasets across the AI lifecycle.
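As a simple illustration of data-level de-biasing, instance reweighing (in the spirit of Kamiran and Calders) assigns each training record a weight so that the protected attribute and the label become statistically independent under the weighted dataset. The sketch below is hypothetical; the column names ("group", "hired") are assumptions for the example.

```python
import pandas as pd

# Illustrative reweighing: each (group, label) cell receives weight
# P(group) * P(label) / P(group, label), equalizing favorable-outcome
# rates across groups in the weighted training set.
def reweighing_weights(df: pd.DataFrame, group: str, label: str) -> pd.Series:
    n = len(df)
    weights = {}
    for (g, y), cell in df.groupby([group, label]):
        p_g = (df[group] == g).mean()
        p_y = (df[label] == y).mean()
        weights[(g, y)] = p_g * p_y / (len(cell) / n)
    return df.apply(lambda row: weights[(row[group], row[label])], axis=1)

data = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "hired": [1,   1,   0,   1,   0,   0],
})
data["weight"] = reweighing_weights(data, "group", "hired")
print(data)  # group A positives downweighted, group B positives upweighted
```

The resulting weights can be passed to most training routines (for example, as per-sample weights), leaving the underlying records unaltered and auditable.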
Conclusion
AI adoption is fast emerging as a strategic imperative for financial institutions given its ability to help tailor personalized customer experiences.
In addition, as banks move to ecosystem business models, their reliance on AI to unlock data insights from varied sources will only increase. However, AI risks pose complex challenges; to overcome them, financial institutions must ensure that AI systems comply with evolving regulatory guidelines as well as risk management standards and tolerance limits. To this end, financial institutions must establish a foundational AI framework with robust governance to exploit AI-led innovations, gain an edge in the market, and stay ahead of the competition.