
UK Regulators “Exposing Consumers to Serious Harm” as AI Oversight Gaps Widen — Committee Warns

2026/01/20 20:22
3 min read
For feedback or concerns about this content, contact us at crypto.news@mexc.com.

Regulators in the U.K. are being cautioned that their existing approach to artificial intelligence in financial services may expose consumers to severe harm, as regulatory gaps widen while AI adoption accelerates across the industry.

The Treasury Select Committee issued the warning, saying the Bank of England, the Financial Conduct Authority, and HM Treasury have relied too heavily on a wait-and-see strategy even as AI moves to the heart of financial decision-making.

In a report published on January 20, the committee said the pace of AI adoption has outstripped the regulators’ ability to manage its risks.

Approximately 75% of financial services companies in the UK currently use AI, with adoption most intense among insurers and major global banks.

Although MPs acknowledged that AI can improve efficiency, speed up customer service, and strengthen cyber defences, they concluded that these benefits are being undermined by unaddressed risks to both consumers and financial stability.

Lawmakers Say UK’s AI Approach in Finance Is Too Reactive

Currently, there is no AI-specific legislation for financial services in the UK. Instead, regulators apply pre-existing rules, arguing they are flexible enough to cover new technologies.

The FCA has pointed to the Consumer Duty and the Senior Managers and Certification Regime as providing sufficient protection, while the Bank of England has said its role is to respond when problems arise rather than regulate AI in advance.

The committee rejected this position, saying it places too much responsibility on firms to interpret complex rules on their own.

AI-driven decisions in credit and insurance are often opaque, making it difficult for customers to understand or challenge outcomes.

Automated product tailoring could deepen financial exclusion, particularly for vulnerable groups. Unregulated financial advice generated by AI tools risks misleading users, while the use of AI by criminals could increase fraud.

The committee said these issues are not hypothetical and require more than monitoring after the fact.

Regulators have taken some steps, including the creation of an AI Consortium and voluntary testing schemes such as the FCA’s AI Live Testing and Supercharged Sandbox.

However, MPs said these initiatives reach only a small number of firms and do not provide the clarity the wider market needs.

Industry participants told the committee that the current approach is reactive, leaving firms uncertain about accountability, especially when AI systems behave in unpredictable ways.

AI Risks Rise as UK Regulators Lag on Testing and Oversight

The report also raised concerns about financial stability, as AI could amplify cyber risks, concentrate operational dependence on a small number of US-based cloud providers, and intensify herding behavior in markets.

Despite this, neither the FCA nor the Bank of England currently runs AI-specific stress tests. Members of the Bank’s Financial Policy Committee said such testing could be valuable, but no timetable has been set.

Reliance on third-party technology providers was another focus.

Although Parliament created the Critical Third Parties Regime in 2023 to give regulators oversight of firms providing essential services, no major AI or cloud provider has yet been designated.

This delay persists despite high-profile outages, including an Amazon Web Services disruption in October 2025 that affected major UK banks.

The committee said the slow rollout of the regime leaves the financial system exposed.

The findings land as the UK continues to promote a pro-innovation, principles-based AI strategy aimed at supporting growth while avoiding heavy-handed regulation.

The government has backed this stance through initiatives such as the AI Opportunities Action Plan and the AI Safety Institute.

However, MPs said ambition must be matched with action.

Disclaimer: Articles republished on this site come from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes third-party rights, contact crypto.news@mexc.com for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.