The need for AI governance has never been stronger as technology permeates all walks of business and everyday life.
AI has taken the world by storm, with organizations adopting the technology at a rapid pace. When it comes to AI governance, however, there is still a long way to go. There are some exceptions, of course, especially when AI models are deployed by businesses under heavy regulatory scrutiny, such as lending or healthcare. But in general, even as businesses build out their AI capabilities, governance has not been a top priority.
AI governance must go hand in hand with AI adoption. Companies today use AI to make decisions that impact their customers and employees, to measure key metrics like profitability and customer satisfaction, and to drive innovation and growth. Clearly, AI impacts an organization’s bottom line and brand. To build trust and get full value from AI investments, organizations need a comprehensive governance framework spanning every stage of AI development and usage.
Organizational objectives may be contextual, but there are common foundational purposes of AI governance that enterprises can consider.
At a high level, AI governance should:
Drive organizational alignment: AI as a technology is versatile and has many use cases. However, care should be exercised to ensure that the selected use cases align with the organization’s core values, goals, and overall strategy.
Ensure ethical AI: Biased data, or human bias in labeling data, can lead to AI model bias. These biases may, in turn, affect how a customer or employee is treated by an AI-powered process. For instance, consider an AI model that decides on consumer loan applications. Let's assume the model was trained on both prior loan application data and the approval and rejection decisions made by underwriters. If any of those past decisions were biased toward or against a specific demographic, the model would internalize the same bias. Robust governance helps businesses exercise reasonable caution, ensuring that ethical gaps are mitigated throughout the AI life cycle and that the conclusions AI models reach are fair.
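One concrete check a governance process might mandate for a loan-decisioning model is comparing approval rates across demographic groups. The sketch below illustrates a disparate impact ratio; the decisions, group labels, and 0.8 review threshold are hypothetical examples, not a complete fairness audit.

```python
def disparate_impact_ratio(decisions, groups, protected_group):
    """Ratio of the protected group's approval rate to everyone else's."""
    protected = [d for d, g in zip(decisions, groups) if g == protected_group]
    others = [d for d, g in zip(decisions, groups) if g != protected_group]
    return (sum(protected) / len(protected)) / (sum(others) / len(others))

# Hypothetical model decisions: 1 = approved, 0 = rejected
decisions = [1, 0, 0, 1, 0, 1, 1, 1, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(decisions, groups, protected_group="A")
# A common (but not universal) heuristic flags ratios below 0.8 for review.
if ratio < 0.8:
    print(f"Potential bias flagged for review: ratio = {ratio:.2f}")
```

A check like this is cheap to run before every deployment, which is exactly the kind of control a governance committee can standardize across use cases.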
Establish transparency: AI models often end up becoming black boxes: how a model reaches a certain conclusion, and which factors it takes into consideration, are rarely well understood or reviewed. AI governance should provide guidelines and supervision to ensure the transparency and explainability of AI models prior to deployment. This may include documentation that captures the internal workings of an AI model (such as feature importance) and reviews by business, legal, and compliance stakeholders. This way, key stakeholders can determine how a model arrived at a certain conclusion or why it made a certain recommendation.
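For simple models, the feature-importance documentation mentioned above can be generated directly. The sketch below shows one naive way to do this for a hypothetical linear scoring model, ranking each feature by the size of its weight times the spread of its values; the model, features, and data are illustrative assumptions.

```python
from statistics import pstdev

# Hypothetical linear credit-scoring model: score = sum(weight_i * feature_i)
weights = {"income": 0.8, "debt_ratio": -1.5, "years_employed": 0.3}
training_data = {
    "income": [40, 55, 70, 90, 120],          # in $1,000s
    "debt_ratio": [0.1, 0.3, 0.2, 0.5, 0.4],
    "years_employed": [1, 4, 7, 10, 20],
}

def feature_importance(weights, data):
    """Importance = |weight| * standard deviation of the feature, normalized."""
    raw = {f: abs(w) * pstdev(data[f]) for f, w in weights.items()}
    total = sum(raw.values())
    return {f: v / total for f, v in raw.items()}

report = feature_importance(weights, training_data)
for feature, share in sorted(report.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {share:.1%}")
```

Even a rough report like this gives legal and compliance reviewers something concrete to examine before sign-off; for complex models, dedicated explainability techniques would be needed instead.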
Drive risk management: Once transparency is established, financial, operational, and security risks can be evaluated and managed by the respective stakeholders. For example, the business can weigh in on how well exceptions are handled or data stewards could assess data usage permissibility.
There will be other mandates for AI governance too, depending on the type of organization and its goals. The governance function might not be able to act directly on all these fronts. However, it can define processes, standards, and best practices for AI development and oversee adherence to them. Let's look at how organizations can start their AI governance journey.
Essentials for setting up an effective AI governance foundation
First, sufficient funding needs to be made available for AI governance. The capital required will vary depending on the volume and nature of AI use cases. As a rule of thumb, organizations can expect to spend approximately 5–10% of their AI budget on governance. This includes resource costs and documentation costs toward AI model design or to support governance processes for a particular use case.
Second, it is necessary to establish a cross-functional AI governance committee that includes business, technical, legal, and risk experts. This committee should define policies and guidelines for AI development as well as oversee the AI application life cycle. Depending on the organization size and level of AI adoption, the AI governance committee could either be standalone or be embedded within the data governance function.
For the AI development process and approach, the committee can define the controls to be put in place by type, risk, or complexity of the AI application. For instance, it can define the types of applications where augmented AI can be used for risk mitigation. In this approach, AI applications are implemented with human oversight to ensure accuracy and to make corrections as appropriate. The corrections made through human intervention then become the basis for further AI model improvement.
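The augmented-AI pattern described above can be sketched as a simple routing rule: predictions below a confidence threshold go to a human reviewer, and any corrections are logged as training examples for the next model iteration. The model, threshold, and reviewer logic below are hypothetical placeholders.

```python
CONFIDENCE_THRESHOLD = 0.85
retraining_log = []  # human-corrected examples for future model improvement

def model_predict(application):
    """Placeholder model: returns (decision, confidence)."""
    score = application["score"]
    return ("approve" if score >= 0.5 else "reject"), abs(score - 0.5) * 2

def human_review(application):
    """Placeholder for a human underwriter's judgment."""
    return "approve" if application["score"] >= 0.4 else "reject"

def decide(application):
    decision, confidence = model_predict(application)
    if confidence < CONFIDENCE_THRESHOLD:
        final = human_review(application)
        if final != decision:
            # Corrections feed back into the next training cycle
            retraining_log.append((application, final))
        return final
    return decision

print(decide({"id": 1, "score": 0.95}))  # high confidence: auto-decided
print(decide({"id": 2, "score": 0.45}))  # low confidence: routed to a human
```

The governance committee's role here would be setting the threshold and defining which application types require this human-in-the-loop control at all.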
Another key area that needs attention is the data that is being used as input for AI applications. Traditionally, when developing AI models, the focus has been on the modeling approach. Today, data-centric AI development, or carefully curating the data used for building models for accuracy and fairness, is emerging as a key theme. Governance should ensure that the data used in AI development is unbiased, representative of the population, and of the highest quality. Standards must be defined for sourcing, preparing, and labeling data that is used for training AI models.
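A representativeness standard like the one above can be made operational with a simple comparison of the training data's demographic mix against a population benchmark. The groups, benchmark shares, and 5-point tolerance below are hypothetical illustrations.

```python
from collections import Counter

def representation_gaps(samples, benchmark, tolerance=0.05):
    """Return groups whose share in the data deviates from the benchmark."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected in benchmark.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = {"expected": expected, "actual": round(actual, 3)}
    return gaps

# Hypothetical training records labeled by demographic group
training_groups = ["A"] * 70 + ["B"] * 20 + ["C"] * 10
population_benchmark = {"A": 0.50, "B": 0.30, "C": 0.20}

flagged = representation_gaps(training_groups, population_benchmark)
print(flagged)  # groups over- or under-represented beyond the tolerance
```

Embedding checks like this into the data preparation standards gives the governance function an auditable gate rather than a purely aspirational guideline.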
AI governance shares some common traits with data governance, but there are differences that businesses need to keep in mind.
In some enterprises, AI governance could function as a sub-component of data governance. In others, the specialized skills involved may warrant a standalone function. At least in the initial stages, it may make more sense to augment existing data governance with the right capabilities and expertise to oversee AI development and deployment. Once this is done, the organization can begin defining the policies, processes, and standards that serve as the foundation for its AI governance.
When left unchecked, the outcomes of AI deployment may not always be favorable. To minimize the risks and maximize the benefits of AI, choosing use cases that align with the organization’s overall business needs is key. Transparency and security built into AI models will also result in better adoption and user trust.