FCA Opens Applications for Second Cohort of AI Live Testing Service for UK Fintech Innovation

The Financial Conduct Authority has opened applications for the second cohort of its AI Live Testing service, allowing British fintech firms to test artificial intelligence systems under regulatory supervision. The initiative aims to accelerate responsible AI deployment whilst maintaining consumer protection standards across United Kingdom financial services.

The AI Live Testing Framework

The FCA's AI Live Testing service builds on the regulator's established Digital Sandbox programme, creating specific pathways for testing AI systems in controlled financial services environments. Participating firms can deploy AI applications with real customers under FCA supervision, receiving regulatory feedback and guidance throughout the testing process.

The second cohort announcement follows the successful completion of the inaugural programme, in which 12 fintech companies tested AI applications ranging from algorithmic credit assessment to fraud detection systems. The FCA reports that the programme successfully balanced encouraging innovation with maintaining consumer protection.

AI Live Testing Programme Details

  • Programme: FCA AI Live Testing Service - Second Cohort
  • First cohort: 12 fintech companies successfully tested AI systems
  • Applications: Now open for second cohort
  • Approach: Supervised real-world AI testing with regulatory guidance

Eligible AI Applications and Use Cases

The FCA has broadened the scope of eligible AI applications for the second cohort, reflecting lessons learnt from the initial programme. Firms can apply to test AI systems for credit decisioning and lending automation, fraud detection and prevention, customer service chatbots and virtual assistants, and investment advice and portfolio management.

Additionally, the second cohort welcomes applications for regulatory compliance automation, including anti-money laundering systems and transaction monitoring platforms. This expansion acknowledges growing industry interest in using AI to manage regulatory obligations more efficiently.

Application Requirements and Selection Criteria

Applicants must demonstrate genuine innovation in their AI approach, clear consumer benefit propositions, and robust governance frameworks addressing algorithmic bias and fairness. The FCA particularly welcomes applications from firms addressing financial inclusion challenges or serving underserved customer segments.

Technical requirements include comprehensive testing documentation, explainability mechanisms for AI decisions, and data governance frameworks compliant with UK GDPR. Firms must also demonstrate adequate resources to complete the testing programme, including technical expertise and management commitment.

Regulatory Support and Guidance

Participating firms receive direct access to FCA supervisors throughout the testing period, typically lasting 6-12 months. This includes regular check-ins on AI performance, consumer outcomes, and governance effectiveness. The regulator provides feedback on whether proposed AI approaches align with regulatory expectations and consumer protection requirements.

Crucially, the programme offers regulatory forbearance for genuine mistakes made during testing, provided firms demonstrate good-faith efforts at compliance and consumer protection. This creates a safe space for experimentation that would be difficult in standard regulatory environments.


Impact on UK Fintech Competitiveness

The AI Live Testing programme positions the United Kingdom favourably in global fintech competition. Whilst the EU's comprehensive AI Act establishes strict requirements, the FCA's sandbox approach provides flexibility and regulatory engagement that attracts innovative firms.

This matters particularly for startups and scale-ups that lack the resources for extensive compliance teams. Direct regulatory access and feedback reduce uncertainty about whether AI approaches will pass scrutiny, enabling faster iteration and deployment decisions.

Comparison with International Approaches

Singapore's Monetary Authority has operated similar AI sandbox programmes successfully, creating models that the FCA has studied. The United States lacks a coordinated federal approach, with individual agencies taking varying stances on AI testing frameworks.

The FCA's model differs from EU approaches by emphasising principles-based regulation and regulatory dialogue over prescriptive technical requirements. This creates both opportunities and risks for firms that must eventually navigate multiple regulatory regimes.

Implications for Traditional Financial Services

Whilst the AI Live Testing programme targets fintech firms, traditional banks and insurers are watching closely. Several major British financial institutions have enquired about participating, though the programme's design favours smaller, more agile organisations.

Lessons from the programme will inform broader FCA guidance on AI governance that will apply to all regulated firms. Traditional institutions are therefore studying successful sandbox participants to understand regulatory expectations before deploying their own AI systems at scale.

Workforce and Employment Considerations

The AI systems being tested often involve some degree of workforce automation. Customer service chatbots reduce demand for contact centre staff, whilst automated credit assessment systems may diminish roles for loan officers and underwriters.

The FCA has not explicitly incorporated workforce impact assessment into its AI testing criteria, focusing instead on consumer outcomes and market integrity. However, participating firms report that regulators ask questions about employment implications during supervision dialogues.

Consumer Protection Balance

The programme's fundamental challenge remains balancing the encouragement of innovation with consumer protection. The FCA must avoid stifling beneficial AI developments whilst ensuring that systems do not discriminate against vulnerable customers or make decisions that consumers cannot understand or challenge.

First cohort experiences suggest this balance is achievable but requires careful calibration. Some AI applications proceeded smoothly through testing, whilst others required significant modifications to address fairness or explainability concerns identified during supervised deployment.

Application Timeline and Next Steps

Applications for the second cohort close in March 2026, with successful firms notified by May 2026. Testing programmes typically commence in June-July 2026, running for 6-12 months depending on application complexity and testing scope.

The FCA plans to publish anonymised case studies from successful second cohort participants, providing broader industry guidance on regulatory expectations for AI governance in financial services. These case studies will supplement existing FCA guidance documents on AI and machine learning.

Read original source: Financial Conduct Authority →