Public awareness of artificial intelligence exploded in 2023, as technologies that had bubbled away in the background for years suddenly became the focus of international debate. Yet with disruption comes opportunity. Artificial intelligence and machine learning are changing the world, and there is plenty of upside risk for organisations to consider alongside the threats.
In a recent network meeting, risk leaders at some of the world's largest technology companies discussed how they are preparing for both the threats and the opportunities these technologies present.
AI and risk management meeting summary
Seven technology CROs discuss how they're preparing for AI-related risks.
We've extracted five key actions risk leaders can take now to prepare for tomorrow.
1. Develop an ethical position with executive and board
One risk leader advised other members to step back and think about AI from an ethical standpoint:
“Instead of starting down in the weeds, trying to solve each operational issue as it arises, convene a session now with your board and executive and get ahead of the game.”
Risk Leadership Network member
CRO at FTSE organisation
Decide as a company what you will stand for in this space. If you don't have in-house expertise, hire a facilitator to run a workshop. Be specific about the output you're looking for.
2. Define your AI risk appetite
Companies that have defined their AI risk appetite are more likely to innovate in this space.
Three of the seven organisations in our discussion had already quantified their AI risk appetite and were actively embracing AI risk-taking in their business.
One risk leader noted that the AI appetite statement had also enabled him to say 'no' to a couple of projects, while offering practical guidance on proposing alternative solutions.
3. Establish AI governance
There is no need to implement anything too advanced at this stage, but setting up a regular forum to discuss AI with key stakeholders can be a useful first step. This may help avoid 'broom cupboard' projects taking off beyond your control.
A few members said they had published internal AI guardrails; one risk leader presides over a formal AI governance committee, and another, whose company is more advanced in developing AI capabilities, has a full Responsible AI team.
Advice from experienced practising CROs and heads of risk at some of the world's largest tech firms
This insight came from a network discussion among seven of our members at large technology firms around the world.
Many of them had approached Risk Leadership Network for support in managing AI-related risk, so we arranged a collaborative virtual meeting so they could discuss how they're tackling AI and what lessons they've learned so far.
We'll continue to work with our members on AI-related issues. To get involved, please fill in this form.
5. Separate the tactical from the existential (both could end your business)
Several members said that while their boards were focused on strategic, existential questions about the future of the company and AI, directors might be missing the fact that a tactical AI misstep could just as easily end the business. Some members proposed creating two separate workstreams so that each receives adequate priority.
Collaborate with your peers on AI
Would you benefit from regular knowledge sharing, benchmarking and validation with progressive risk leaders at the world's largest organisations, on the issues that matter most to you? Find out more about how we work with members, take a look at the full meeting notes from our recent discussion on AI and risk management, or request to get involved in our next collaboration on AI.