UK Treasury Committee Criticises FCA and Bank of England Over Slow AI Regulatory Framework Development

The United Kingdom's Treasury Committee has issued a damning report criticising the Financial Conduct Authority and Bank of England for inadequate progress on developing artificial intelligence regulatory frameworks for financial services. MPs warn that regulatory uncertainty is hampering British competitiveness as banks deploy AI systems without clear governance guidelines.

Parliamentary Criticism of Regulatory Progress

The cross-party Treasury Committee report highlights significant delays in establishing clear AI governance frameworks for the financial services sector. Despite AI deployment accelerating rapidly across British banking, insurance, and investment management, regulators have failed to provide industry with definitive guidance on acceptable use cases, risk management requirements, and accountability structures.

Committee members expressed particular concern that whilst UK banks are deploying AI for credit decisioning, fraud detection, and customer service automation, the regulatory framework governing these systems remains fragmented and unclear. This creates both compliance uncertainty for financial institutions and potential consumer protection gaps.

Key Regulatory Concerns Identified

  • Regulatory delay: Inadequate progress on AI framework development
  • Agencies criticised: FCA and Bank of England
  • Risk area: Deployment proceeding faster than regulation
  • Impact: Competitiveness concerns and consumer protection gaps

The Regulatory Framework Gap

The FCA published initial guidance on AI governance in financial services in 2023, but MPs argue this has not been sufficiently updated to reflect rapid advances in AI capabilities, particularly the emergence of large language models and generative AI systems. The Bank of England has conducted research on AI risks to financial stability but has not translated this into comprehensive regulatory requirements.

This regulatory lag creates particular challenges for British financial institutions competing with international peers. US banks operate under sector-specific guidance from the Federal Reserve and the Office of the Comptroller of the Currency (OCC), whilst EU institutions must comply with the comprehensive AI Act. UK banks face uncertainty about which standards will ultimately apply, potentially disadvantaging them in AI deployment.

Specific Areas of Regulatory Uncertainty

The Treasury Committee report identifies several areas where regulatory clarity is urgently needed. Algorithmic bias and fairness testing requirements remain undefined, leaving banks uncertain about what testing and validation processes will satisfy regulators. Model explainability requirements for AI credit decisions are similarly unclear, particularly for complex machine learning systems.
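
The report does not specify what such testing should look like. As a purely illustrative sketch, the snippet below computes a disparate impact ratio on credit approval rates, a check of the kind fair lending analysts already run; the 0.8 threshold follows the US "four-fifths rule" and is an assumption here, not an FCA standard.

    # Illustrative sketch only: a simple disparate impact check of the kind
    # a bank might run on AI credit-decision outputs. The 0.8 threshold
    # follows the US "four-fifths rule" and is NOT an FCA requirement.

    def disparate_impact_ratios(approvals: dict[str, tuple[int, int]]) -> dict[str, float]:
        """Map each group's approval rate to its ratio against the
        highest-approving group; ratios below ~0.8 are a common
        heuristic flag for potential adverse impact."""
        rates = {g: approved / total for g, (approved, total) in approvals.items()}
        best = max(rates.values())
        return {g: rate / best for g, rate in rates.items()}

    # Hypothetical outcome counts from an AI credit-decisioning model.
    outcomes = {"group_a": (720, 1000), "group_b": (540, 1000)}
    for group, ratio in disparate_impact_ratios(outcomes).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"{group}: ratio={ratio:.2f} [{flag}]")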

Data governance requirements for AI training datasets have not been comprehensively addressed, creating tension between data protection obligations and the data-intensive nature of AI development. Liability frameworks for AI-driven decisions remain ambiguous, raising questions about accountability when automated systems make errors.
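
To make the data protection tension concrete, here is a minimal, purely hypothetical sketch of pseudonymising applicant records before they enter a training dataset. The field names are invented for illustration, and salted hashing of this kind would not on its own amount to full anonymisation under UK GDPR.

    import hashlib

    # Hypothetical applicant record fields; names are illustrative only.
    DIRECT_IDENTIFIERS = {"name", "address", "national_insurance_number"}

    def pseudonymise(record: dict, salt: str) -> dict:
        """Replace direct identifiers with salted hashes so records can
        still be linked for model training without exposing raw personal
        data. (Hashing alone is not full anonymisation under UK GDPR.)"""
        out = {}
        for key, value in record.items():
            if key in DIRECT_IDENTIFIERS:
                digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
                out[key] = digest[:12]
            else:
                out[key] = value
        return out

    applicant = {
        "name": "Jane Doe",
        "address": "1 High Street",
        "national_insurance_number": "QQ123456C",
        "income": 42000,
        "credit_score": 710,
    }
    print(pseudonymise(applicant, salt="per-dataset-secret"))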

Industry Response and Competitive Concerns

UK financial services industry bodies have echoed the Treasury Committee's concerns, arguing that regulatory uncertainty is slowing AI adoption and investment. Several major British banks have delayed AI deployments pending clearer guidance, whilst others have proceeded with implementations that may require significant modification once detailed regulations emerge.

The competitiveness dimension is particularly acute. British banks face intense competition from US technology firms entering financial services with AI-powered products, and from Asian banks operating in regulatory environments more permissive of AI experimentation. Regulatory uncertainty may disadvantage UK institutions in this competitive landscape.

"The FCA and Bank of England have failed to provide industry with the regulatory clarity needed for responsible AI deployment. This uncertainty hampers UK competitiveness whilst creating potential consumer protection gaps."

FCA and Bank of England Response

In evidence to the Treasury Committee, FCA representatives defended their principles-based approach to AI regulation, arguing that overly prescriptive rules would quickly become outdated given the pace of technological change. The regulator emphasised its preference for outcome-based regulation focused on consumer protection and market integrity rather than specific technical requirements.

The Bank of England similarly defended its approach, noting that AI risks to financial stability are still emerging and that premature regulation could stifle beneficial innovation. The central bank highlighted ongoing work on AI governance frameworks within the Basel Committee on Banking Supervision.

The Principles Versus Rules Debate

The disagreement reflects broader debates about regulatory approaches to emerging technology. The FCA and Bank of England favour principles-based frameworks that provide flexibility, whilst industry and parliamentarians increasingly call for more specific requirements that provide clearer compliance pathways.

This tension is complicated by the UK's post-Brexit position. Freed from EU regulatory alignment, Britain has the opportunity to develop bespoke AI frameworks suited to its financial services sector. However, excessive divergence from EU or US approaches could create compliance burdens for international banks operating in the UK market.

Workforce and Employment Implications

Whilst the Treasury Committee report focuses primarily on regulatory frameworks and competitiveness, it notes concerns about workforce impacts of AI deployment in financial services. Several committee members questioned whether regulators should require banks to assess employment impacts of AI systems as part of governance frameworks.

The FCA indicated this falls outside its traditional remit of consumer protection and market integrity, suggesting employment considerations are matters for broader government policy. This response drew criticism from MPs concerned that AI-driven workforce transformation in banking is proceeding without adequate regulatory oversight or worker protection.

International Regulatory Developments

The report places UK regulatory efforts in international context. The EU's AI Act establishes comprehensive requirements for AI in financial services: creditworthiness assessment systems are classified as high-risk applications requiring conformity assessment, human oversight, and extensive documentation. US regulators have issued sector-specific guidance through the Federal Reserve, OCC, and Consumer Financial Protection Bureau (CFPB).

Asian financial centres including Singapore and Hong Kong have established AI regulatory sandboxes allowing supervised testing of AI systems. The Treasury Committee suggested the UK should consider similar approaches, though the FCA's existing innovation sandbox could potentially be adapted for this purpose.

Recommended Next Steps

The Treasury Committee report makes several specific recommendations. The FCA should publish updated AI governance guidance by mid-2026 addressing algorithmic bias, explainability, and accountability requirements. The Bank of England should accelerate work on AI risks to financial stability and establish clear expectations for AI risk management frameworks at systemically important institutions.

Both regulators should coordinate internationally to avoid fragmentation whilst developing frameworks suited to UK market characteristics. The committee also recommends establishing formal mechanisms for ongoing dialogue between regulators, industry, consumer groups, and worker representatives on AI governance challenges.

Read original source: UK Parliament Treasury Committee