The Bank of England’s Financial Policy Committee (FPC) is sounding the alarm on potential financial stability risks as artificial intelligence becomes increasingly embedded in trading, investment, and customer-facing functions across the financial sector. While acknowledging the transformative potential of AI, the FPC warns that the rapid adoption of the technology could introduce systemic vulnerabilities if not properly managed.
Among the committee’s chief concerns is the risk of hidden flaws in AI models or underlying data, which could lead to firms misjudging their exposures. Such errors might only surface during periods of market stress, potentially amplifying shocks and destabilising financial markets.
The FPC also highlights the danger of widespread dependence on a small number of open-source or vendor-supplied models. This concentration could lead to firms behaving similarly under stress, creating feedback loops that exacerbate volatility. Additionally, reliance on a handful of vendors for critical AI services could pose significant operational risks. A major outage, for example, could disrupt vital services like time-sensitive payments across the system.
Cybersecurity is another focal point. While AI offers tools to strengthen defences against cyber threats, it also introduces new attack vectors. Malicious actors may leverage AI to mount sophisticated attacks on financial institutions and infrastructure.
The FPC stresses that active oversight and risk monitoring are essential. “The effective monitoring of AI-related risks is essential to understand whether additional risk mitigations might be warranted in support of safe innovation, what they might be, and at what point they may become appropriate,” the committee stated.
As financial institutions continue to invest heavily in AI, the FPC is urging regulators to maintain a balanced approach: fostering innovation while standing ready to act if these new technologies threaten the broader financial system.