Identifying and managing AI risk is vital for all organizations

Working on AI/ML initiatives is the dream of many individuals and companies. Stories of amazing AI initiatives are all over the web, and those who lead them are sought after as speakers, offered attractive positions, and command respect from their peers.

In reality, AI work is highly uncertain and carries many types of risk.

“If you understand where risks may be lurking, ill-understood, or simply unidentified, you have a better chance of catching them before they catch up with you.” — McKinsey [1].

In this post I'll summarize the 7 types of risk and how to mitigate their negative impact. For those who would like a compact view, here's a slide that I put together.

7 Dimensions of AI Risk — Babar Bhatti
  1. Strategy Risk — As I wrote in an earlier post, crafting an AI strategy is not simple, and a mistake at this early stage sets the stage for downstream problems. Unfortunately, strategy is often in the hands of those who lack a thorough understanding of AI capabilities. This category includes risks from selecting the wrong (infeasible) initiative relative to the organization's ground reality, lack of executive support, uncoordinated policies, friction among business groups, overestimation of AI/ML potential (contrary to the hype, ML is not the best answer for every analytics or prediction problem), and unclear goals or success metrics. The most obvious mitigation is to align the AI leadership and the executive team on the strategy and its associated risks. Understand exactly how AI will impact people and processes, and decide in advance what to do when things don't go well.
  2. Financial Risk — A common but rarely discussed assumption is that the cost of model development is the main financial factor in AI/ML: if only we had the best data/AI scientists, we would be all set. As discussed in this paper, the full lifecycle of AI is far more complex and includes data governance and management, human monitoring, and infrastructure costs (cloud, containers, GPUs, etc.). There is always an uncertainty factor with AI work, which means that, compared with traditional software development, the process is more experimental and non-linear, and even after all the expensive development work the end result is not always positive. Make sure your finance leadership understands this and does not treat AI/ML as just another tech project.
  3. Technical Risk — This is the most visible challenge. The technology is not mature or ready, but the technologists or vendors want to push it. Or perhaps business leadership wants to impress the media or competitors with a forward-looking vision. Technical risks come in many forms. Data: quality, suitability, representativeness, rigid data infrastructure, and more. Model: capabilities, learning approach, poor performance or reliability in the real world, and concept drift, that is, a change in the market or environment over time or due to an unexpected event (a minimal drift-check sketch follows this list). Competence: lack of the right skills or experience, implementation delays, errors, insufficient testing, lack of stable or mature environments, lack of DataOps and MLOps, an IT team not up to speed with ML/AI deployment or scaling needs, security breaches, and IP theft. The best way to mitigate these risks is to build your team's skills, invest in modern data infrastructure, and follow ML best practices.
  4. People and Process Risk — It should go without saying that the right organizational structure, people, and culture are critical to the success of your work. With competent people and a supportive culture, you can catch problems early and face challenges. A sign of a bad culture is that people hide problems and avoid ownership; where there are politics and tensions between teams, problems get amplified. Typical risks in this category include skills gaps, rigid mindsets, miscommunication, and old-school IT that lacks the operational knowledge to scale AI; gaps in processes, coordination issues between IT and AI teams, fragmented or inconsistent practices and tools, and vendor hype; lack of data and knowledge sharing (often a consequence of org structure), missing input or review from domain experts, and lack of oversight, policy controls, and fallback plans; third-party model dependency, insufficient human oversight, missing learning feedback loops, and weak technical foundations.
  5. Trust and Explainability Risk — You did all the work, but the end users of your AI-powered application are hesitant to use or adopt the model. It is a common challenge. Reasons include poor performance of the model under certain conditions, opaqueness of the model (lack of explanation of its results), lack of help when questions arise, poor user experience, lack of incentive alignment, and major disruption to people's workflows or daily routines. As ML/AI practitioners know, the best-performing models, such as deep neural networks, are often the least explainable (a model-agnostic explainability sketch follows this list). This leads to difficult questions, such as: what is more important, model performance or its adoption by the intended users?
  6. Compliance and Regulatory Risk — AI/ML can cause major headaches for use cases or verticals that must comply with rules and regulations. There's a fine line here: if you don't act, competitors may get too far ahead; when you do act, you must protect against unforeseen consequences and against investigations or fines by regulators. The financial and healthcare industries are good examples of such tensions. The explainability factors discussed above are key here. Mitigation: ensure that risk management teams are well aware of the AI/ML work and its implications, and allocate resources for human monitoring and corrective action.
  7. Ethical Risk — Your AI/ML project has great data and a superstar technical team, the benefits are clear, and there are no legal issues. But is it ethical? Take the example of facial recognition for police work: vendors pushed it as a revolutionary way to improve policing, but the initial models lacked the robustness needed to make fair and accurate predictions, resulting in clear bias against certain minority groups. Credit scoring and insurance models have suffered from bias for a long time, and with the growth of ML-powered applications it has become a much bigger problem (a simple bias-check sketch follows this list).
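
To make the concept-drift point under Technical Risk concrete, here is a minimal sketch of one common monitoring approach: comparing the distribution of a production feature against its training-time distribution with a two-sample Kolmogorov-Smirnov test. The synthetic data, the single feature, and the 0.01 alert threshold are all illustrative assumptions, not a prescribed setup.

```python
# A minimal drift-check sketch, assuming you retain a sample of training
# data and can sample the same feature from production traffic.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # stand-in for training data
live_feature = rng.normal(loc=0.4, scale=1.0, size=1_000)   # stand-in for recent production data

# Two-sample Kolmogorov-Smirnov test: has the feature's distribution shifted?
result = ks_2samp(train_feature, live_feature)
if result.pvalue < 0.01:  # alert threshold is a judgment call, not a standard
    print(f"Possible drift: KS statistic={result.statistic:.3f}, p={result.pvalue:.4f}")
else:
    print("No significant shift detected in this feature")
```

In practice you would run a check like this on a schedule across many features and route alerts into your MLOps monitoring, rather than printing to the console.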
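On the opaqueness point under Trust and Explainability Risk, a model-agnostic technique such as permutation importance can at least tell users which inputs drive a black-box model's predictions. Below is a minimal sketch using scikit-learn; the dataset and model are placeholders chosen only to keep the example self-contained, not a recommendation.

```python
# A minimal explainability sketch: permutation importance measures how much
# shuffling each feature degrades held-out performance, even for models
# that offer no built-in explanations.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)  # placeholder dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the most influential features -- a starting point for explaining
# model behavior to hesitant end users.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<25} {result.importances_mean[idx]:.4f}")
```

Per-prediction explanation methods (SHAP values, for instance) go further, but even a global importance ranking gives users something concrete to reason about.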
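Finally, for the bias concern under Ethical Risk, one simple audit is the disparate impact ratio: the selection rate for a protected group divided by that of the reference group. The decisions below are synthetic, and the 0.8 cut-off is the commonly cited "four-fifths" rule of thumb; a real fairness audit would examine many metrics, not one.

```python
# A minimal bias-check sketch on synthetic model decisions for two groups.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,                    # protected attribute (synthetic)
    "approved": [1] * 60 + [0] * 40 + [1] * 42 + [0] * 58,    # model decisions (synthetic)
})

rates = df.groupby("group")["approved"].mean()   # selection rate per group
disparate_impact = rates["B"] / rates["A"]       # protected group vs. reference group
print(f"Selection rates: {rates.to_dict()}")
print(f"Disparate impact ratio: {disparate_impact:.2f}")
if disparate_impact < 0.8:
    print("Below the four-fifths rule of thumb -- investigate for bias")
```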

Each of the risk areas mentioned above is a huge domain in itself, and becoming familiar with it requires extensive reading and hands-on experience. I encourage you to check out the references below for additional perspective on handling AI risk.

Notes:

[1] Confronting the risks of artificial intelligence, McKinsey. https://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/confronting-the-risks-of-artificial-intelligence

[2] AI and risk management, Deloitte. https://www2.deloitte.com/global/en/pages/financial-services/articles/gx-ai-and-risk-management.html

[3] Data Science Strategy for Dummies, Ulrika Jägare, 2019.

[4] Derisking machine learning and artificial intelligence, McKinsey. https://www.mckinsey.com/business-functions/risk/our-insights/derisking-machine-learning-and-artificial-intelligence

[5] Understanding Model Risk Management for AI and Machine Learning, EY. https://www.ey.com/en_us/banking-capital-markets/understand-model-risk-management-for-ai-and-machine-learning