Financial institutions are continuously assessing the adoption of the latest technologies to remain competitive. Artificial intelligence (AI) and machine learning (ML) are now more widely available and offer innovative ways for entities to gain a deeper understanding of their businesses and customers, elevate the customer experience and augment the way decisions are made.
While AI, from chatbots that auto-respond to general customer inquiries to those handling data collection and analysis, brings significant operational benefits, it also has associated risks. Any proposed technological implementation should take place only after thorough consideration of potential operational vulnerabilities and a clear understanding of the AI’s decision-making algorithm. This is important because entities take full responsibility for the governance, operations and risk management of any new technology used in their operations, whether managed internally or through third parties.
Recent Real-Life Challenges
AI was one of the issues raised in the Financial Industry Regulatory Authority’s (FINRA) case against Robinhood in June 2021. FINRA claimed that Robinhood engaged in inappropriate use of computer algorithms and relied on bots to approve customers for options trading rather than depending on due diligence and oversight from Robinhood’s registered principal officers. It was noted that the algorithm used was based on illogical or inconsistent information, resulting in inappropriate approvals. Robinhood neither admitted nor denied the claims and agreed to settle for a $70 million penalty.
The Difficulty with Bias in AI Algorithms
It is important that organizations using AI and machine learning do not rely blindly on the results of the process and ensure there is transparency within their AI systems. In a speech during a trade group discussion in June 2021, U.S. Secretary of Commerce Gina Raimondo reiterated the promise of AI but also noted the harmful results it can produce, such as discriminatory outcomes. She is co-chair of the Trade and Technology Council (TTC), recently launched by the U.S. and the European Union to boost innovation and global trade, with technology and AI among the key topics.
The National Institute of Standards and Technology (NIST) recently issued a draft proposal for identifying and managing bias within AI. It acknowledges the various modeling and predictive approaches embedded in machine-learning and data techniques. It focuses on how bias can be better addressed across various stages of the AI lifecycle, as follows:
Pre-Design
The biases embedded in the decisions made during this stage, who makes them, the quality of the data and the ways in which limited points of view can affect the later stages and results.
Design and Development
The decisions on which models to choose should not focus solely on model accuracy but consider broader context in the selection process.
Deployment
A risk exists that the deployed AI is either not fully tested, potentially oversold, or based on questionable or non-existent science.
NIST aims to identify opportunities and contribute to the development of key practices and tools for AI management.
Addressing AI Risk
Earlier this year, the U.S. federal regulatory agencies issued a Joint Request for Information and Comment on Financial Institutions’ Use of Artificial Intelligence, Including Machine Learning. While they recognize and support responsible innovation through the use of new technologies and techniques, the agencies seek to understand how financial institutions address the various challenges arising from AI through their risk management practices, including the following key areas:
Some challenges around the use of AI include management’s ability to explain the outcome of technology-driven decisions and operating results. Management should have a clear understanding of the applicability of the algorithm used in order to reduce uncertainty and potential challenges during review and validation, as well as to manage governance and regulatory risk.
The use of data, whether traditional or alternative data not commonly used by financial institutions in their decision-making processes, is significant in AI development and deployment. Controls around data quality and the reliability of data processing are top considerations for financial institutions. The data used in the AI’s design and development stage dictates the outcome, as the algorithm learns from the dataset it is given. Biased or incomplete data affects the results, which can be difficult to identify afterwards.
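As a concrete illustration of how biased outcomes might surface from a dataset, the following is a minimal sketch of a disparate-outcome check using the common "four-fifths" rule of thumb from U.S. fair-selection guidance. The group labels, decisions and threshold below are illustrative assumptions, not data from any institution discussed here:

```python
# Hypothetical sketch: flag groups whose approval rate falls below
# four-fifths of the best-treated group's rate. Illustrative data only.

def selection_rates(decisions):
    """decisions: list of (group, approved_bool) pairs -> rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def four_fifths_violations(rates, threshold=0.8):
    """Groups whose rate is below threshold x the highest group rate."""
    top = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * top]

# Illustrative decisions: group A approved 8/10, group B approved 5/10.
decisions = [("A", True)] * 8 + [("A", False)] * 2 \
          + [("B", True)] * 5 + [("B", False)] * 5
rates = selection_rates(decisions)
flagged = four_fifths_violations(rates)  # B: 0.5 < 0.8 * 0.8
```

A check like this only surfaces disparities in outcomes; it does not explain their cause, which is why the review and validation practices described above remain necessary.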
Certain AI approaches have been developed to continually learn and evolve as new data is fed into them. Over time, AI models can generate different results from similar inputs, or the context in which the model was developed can change without a corresponding update. Organizations should monitor both input data drift and model performance to understand the outcomes and determine whether the model is still operating as intended or whether changes need to be made.
Algorithms used in AI and machine learning can, in many ways, produce outcomes that are unwarranted, inaccurate or discriminatory. Further, financial institutions using AI developed by a third party, rather than developed and managed internally, face additional challenges in ensuring the explainability, reliability, safety and security of the technology. The outreach and discussions among public and private groups are intended to spark further action, and we expect to see guidelines, standards or a risk-based framework for the use of AI soon. Given the significant benefits of AI adoption, organizations should maintain robust, documented controls and processes and should have in place the governance and oversight necessary to understand and manage AI-related implementation risks.
Disclaimer of Liability
The information provided here is for general guidance only, and does not constitute the provision of legal advice, tax advice, accounting services, investment advice or professional consulting of any kind. The information provided herein should not be used as a substitute for consultation with professional tax, accounting, legal or other competent advisers. Before making any decision or taking any action, you should consult a professional adviser who has been provided with all pertinent facts relevant to your particular situation.
Mazars USA LLP is an independent member firm of Mazars Group.