How to apply AI to risk management

4 min read
Jul 12, 2023

Leveraging AI to revolutionise risk management is a goal among progressive CROs and risk functions right now. These thought leaders are eager for risk management to provide genuine value to their business, and to elevate the practice from a tick-box compliance exercise to one that influences and informs business decision-making.

The generally agreed mission of these organisations? To understand how AI can augment and enhance risk management processes, so that risk management data is put to better use.

With better data analysis and intelligence insights comes stronger organisational resilience.

In a recent member meeting, risk leaders from global organisations shared the approaches and considerations they're taking to use AI in their risk management:

1. Build a language-based AI tool

One organisation described the language-based AI tool they're building in-house. Its purpose is to help explain (risk) concepts to employees, which strengthens stakeholder understanding of risk processes and documents. Other functions of the tool include:

  • Clarifying key terms
  • Aggregating different definitions and descriptions (e.g. risk appetite) across the company
  • Summarising strategy documents and organisational charts in easy-to-understand ways
  • Searching company risks and presenting their interconnectivities (the member is at the alpha stage for this function).

The risk function sees the tool predominantly helping with data gathering and doing a lot of groundwork.
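To give a concrete feel for the "aggregate different definitions" function, here is a minimal sketch of how such a tool might collect every sentence mentioning a term (such as "risk appetite") across company documents. The function name, the sample documents and the sentence-splitting approach are all illustrative assumptions, not the member's actual implementation.

```python
import re

def aggregate_definitions(term, documents):
    """Collect sentences mentioning `term` across a set of documents.

    `documents` maps a source name (e.g. a policy title) to its text.
    Returns {source: [matching sentences]} so differing definitions
    can be compared side by side. (Illustrative sketch only.)
    """
    pattern = re.compile(rf"\b{re.escape(term)}\b", re.IGNORECASE)
    found = {}
    for source, text in documents.items():
        # Naive sentence split on terminal punctuation; real tools
        # would use an NLP library or an LLM for this step.
        sentences = re.split(r"(?<=[.!?])\s+", text)
        matches = [s.strip() for s in sentences if pattern.search(s)]
        if matches:
            found[source] = matches
    return found

# Hypothetical sample documents
docs = {
    "Group Risk Policy": "Risk appetite is the level of risk the board will accept. Reviews occur annually.",
    "Finance Manual": "We define risk appetite as tolerance for earnings volatility.",
}
result = aggregate_definitions("risk appetite", docs)
```

In practice a language-model layer would sit on top of this kind of retrieval, summarising and reconciling the collected definitions rather than just listing them.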

2. Run predictive models

Another organisation has developed a proof of concept: a tool that lets the risk function input decades' worth of data, run predictive models and forecast risk exposures and costs.

3. Use AI for finance risk forecasting

Another member organisation uses AI to shape its probabilistic models and identify risks, supporting better funds management.
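A probabilistic model for funds management typically boils down to estimating the chance that losses exceed available reserves. The Monte Carlo sketch below illustrates the shape of such a calculation; the event frequency, loss-severity distribution and reserve figure are invented for illustration, not drawn from the member's model.

```python
import random

def simulate_fund_shortfall(n_trials=10_000, reserve=5.0, seed=42):
    """Monte Carlo estimate of the probability that annual losses
    exceed a reserve. All parameters here are illustrative.
    """
    rng = random.Random(seed)
    breaches = 0
    for _ in range(n_trials):
        # Roughly monthly chances of a loss event occurring
        n_events = sum(rng.random() < 0.3 for _ in range(12))
        # Each event's severity drawn from a lognormal distribution
        total_loss = sum(rng.lognormvariate(0, 1) for _ in range(n_events))
        breaches += total_loss > reserve
    return breaches / n_trials

p_shortfall = simulate_fund_shortfall()
```

Where AI enters in practice is in choosing and calibrating the distributions from real data, rather than hand-picking parameters as done here.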


4. Bot development

One practitioner explained that their "Risk Assistant" bot "joins" internal meetings. When it recognises jargon being spoken, it posts the meaning of those terms into a group chat. Future development aims to enable the bot to define risk themes and topics mentioned in meetings, answer certain questions raised, or locate the appropriate stakeholder in the business and, based on that stakeholder's calendar availability, invite them to the meeting in real time to answer those questions.
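The jargon-spotting core of such a bot can be sketched very simply: match each utterance against a glossary and emit chat messages for any hits. The glossary entries and matching logic below are illustrative assumptions; a production bot would use speech-to-text and smarter term recognition than naive substring matching.

```python
# Hypothetical glossary of risk jargon the bot can explain
GLOSSARY = {
    "rcsa": "Risk and Control Self-Assessment, a process where business units assess their own risks and controls.",
    "kri": "Key Risk Indicator, a metric that signals changing risk exposure.",
}

def explain_jargon(utterance, glossary=GLOSSARY):
    """Return chat messages defining any glossary terms heard in an
    utterance. Uses naive substring matching for illustration only.
    """
    lowered = utterance.lower()
    return [f"{term.upper()}: {definition}"
            for term, definition in glossary.items()
            if term in lowered]

messages = explain_jargon("Let's review the RCSA results before the KRI report.")
```

The planned extensions (identifying themes, routing questions to stakeholders, checking calendars) would layer language-model classification and calendar-API calls on top of this basic loop.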

What happens at virtual meetings?

We facilitate bespoke meetings (small private groups) and network meetings (usually about 5-10 risk leaders) to allow our members to share practical experiences.

The meetings take place under the Chatham House Rule, so you can speak freely about the challenges that you face. Sometimes a member will share a case study, other times we operate a workshop-style format. But the focus is on sharing solutions, advising on lessons learned and working through new ideas.

Usually a meeting takes about 90 minutes, and we circulate a write-up of the key issues raised in the meeting afterwards.

Our members are requesting more collaboration on AI as it becomes a bigger priority for them. To find out more about harnessing the expertise of your peers on AI-related risk issues, and how you can join a meeting about a challenge that is a priority for you, fill in this form.


5. A central hub for learning operational codes

Another organisation has developed an in-house central hub that learns the organisation's operational codes, standards and thresholds, then sends signals and alerts to the risk team in real time. This requires bringing together all relevant data from across the company (tagged appropriately) and teaching the system to recognise interconnectivities between these data points.
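Stripped of the learning component, the alerting core of such a hub is a comparison of tagged metric readings against operational thresholds. The metric names and limits below are invented for illustration; in the member's system the thresholds themselves are learned from company data rather than hard-coded.

```python
def check_thresholds(readings, thresholds):
    """Compare tagged metric readings against operational thresholds
    and return an alert for each breach, as a central hub might signal
    the risk team in real time. (Illustrative sketch.)
    """
    alerts = []
    for metric, value in readings.items():
        limit = thresholds.get(metric)
        if limit is not None and value > limit:
            alerts.append(f"ALERT: {metric} at {value} exceeds threshold {limit}")
    return alerts

# Hypothetical thresholds and a snapshot of live readings
thresholds = {"open_incidents": 20, "settlement_fails_pct": 2.0}
readings = {"open_incidents": 27, "settlement_fails_pct": 1.1}
alerts = check_thresholds(readings, thresholds)
```

The harder part, as the member notes, is upstream: tagging data consistently across the company so that the hub can connect readings like these in the first place.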

FAQs from risk leaders on the topic

What is the starting point for introducing AI into an organisation?
Practitioner thoughts: What data do I want to interrogate, and where is this located across the organisation? One practitioner described this challenge as "like a scratch card: once you start scratching the surface, there are actually many vast pockets of quality data across your organisation that most companies do not yet join up." For many organisations, the first step also involves cleaning up the data collected and making sure it's fit for purpose ("if you put rubbish in, you'll get rubbish out").
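The "rubbish in, rubbish out" step usually starts with basic validation: rejecting records that are missing required fields before they feed any model. The record shape and field names below are hypothetical, meant only to show what a first-pass data-quality gate looks like.

```python
def validate_records(records, required_fields):
    """Split records into clean and rejected (with reasons) — a first
    pass at making collected risk data fit for purpose.
    """
    clean, rejected = [], []
    for rec in records:
        missing = [f for f in required_fields if not rec.get(f)]
        if missing:
            rejected.append((rec, f"missing: {', '.join(missing)}"))
        else:
            clean.append(rec)
    return clean, rejected

# Hypothetical risk-register extracts
records = [
    {"risk_id": "R1", "owner": "Finance", "impact": "High"},
    {"risk_id": "R2", "owner": "", "impact": "Low"},
]
clean, rejected = validate_records(records, ["risk_id", "owner", "impact"])
```

Checks like this are mundane, but they are exactly the groundwork the practitioners describe before any AI tool can be trusted with the data.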

Will AI replace the GRC system?
Practitioner thoughts: The GRC system is transactional: it brings together internal controls, risk assessment and governance data. It's an operational tool that ensures you've collected and used your data in a cohesive manner. AI tools are more about creating intelligence and predicting future trends from that data.

Does AI replace a lot of our workforce? From a risk culture perspective, what People risks are ahead?
Practitioner thoughts: It's more about upskilling the existing workforce and adapting ways of working than replacing humans. The messaging around AI's use and role within a company is vital to get right; many companies want to avoid newer-generation staff seeing AI as a way to cut corners or a replacement for individual growth and development.

What's the future of AI?
Practitioner thoughts: Many AI tools currently focus on supporting risk management processes in the short to medium term. Another lens to consider is, how can AI support the use of data and intelligence to influence long-term business strategy setting? How can we apply AI to long-term scenarios to help convince executives to make the right changes to their strategic decisions?

To request to join our discussions on AI, please
fill in this form.
