MHRA Expands AI Airlock Funding To Support Safer AI in UK Care

MHRA Secures £3.6m Backing For AI Innovation Programme

The Medicines and Healthcare products Regulatory Agency (MHRA) has confirmed a significant expansion of its AI Airlock programme after securing £3.6 million in funding over the next three years.

Backed by the Department of Health and Social Care (DHSC), the investment will provide £1.2 million annually from 2026 to 2029. While relatively modest in financial terms, the shift to multi-year funding is notable. It removes some of the short-term constraints that have previously limited how regulatory innovation programmes can operate.

For organisations working in home care technology and digital health, this signals a more stable and predictable environment in which AI-enabled tools can be tested and refined before entering the market.

A Regulatory Sandbox Designed For Real World Care Challenges

First launched in 2024, the AI Airlock was set up as the UK's first regulatory sandbox focused specifically on Artificial Intelligence as a Medical Device. Its purpose is straightforward but ambitious: to allow regulators, developers and healthcare providers to work through the real-world implications of AI technologies before they are rolled out at scale.

The programme is delivered in collaboration with partners including the NHS AI Lab and Team AB, a consortium of UK Approved Bodies. Together, they create an environment where emerging technologies can be tested against regulatory expectations while still evolving.

This approach reflects a wider shift in how digital health innovation is handled. Rather than regulating products after they reach the market, the sandbox model brings oversight into the development phase. For the UK care sector, where new tools are increasingly used in people’s homes and community settings, that early engagement is becoming more important.

Early Findings Expose Complexity Of AI In Care

The initial phases of AI Airlock have already exposed a number of challenges that are highly relevant to care providers and community health services.

One of the most pressing issues has been how to manage risks that are unique to AI systems. Unlike traditional medical devices, AI tools can evolve over time, particularly when they rely on machine learning models. This creates uncertainty around consistency and reliability, especially in environments such as home care where conditions are less controlled than in hospitals.

The programme has also drawn attention to the importance of grounding AI outputs in verified clinical information. Ensuring that systems produce accurate and evidence-based responses is essential, particularly when they are used to support frontline decision-making.

Another area of focus has been explainability. If care professionals cannot easily understand how an AI system has reached a recommendation, it becomes difficult to trust or challenge it. This is a critical issue not only for clinicians but also for social care staff who may be using digital tools without extensive medical training.

There is also growing recognition of the need for ongoing monitoring once technologies are deployed. AI systems do not remain static, and their performance can change over time. In community health technology and home-based care, where continuous oversight may be limited, this raises important questions about accountability and safety.

Collaboration On AI In Healthcare

The collaborative nature of the AI Airlock programme has been widely welcomed by those involved in developing care technology.

James Pound, Executive Director for Innovation and Compliance at the MHRA, described the funding boost as “a pivotal moment” for both the programme and the wider adoption of AI in healthcare. He noted that the initiative has already demonstrated how real-world testing can reveal regulatory challenges early, helping to bring safe and effective technologies to patients more quickly.

From an industry standpoint, developers have emphasised the value of direct engagement with regulators. Dr Dom Pimenta, CEO and co-founder of TORTUS AI, highlighted how the programme facilitates shared learning at a time when AI capabilities are advancing rapidly. He pointed to the cross-sector dialogue as a key benefit, enabling companies to better understand regulatory expectations while contributing their own technical expertise.

This kind of interaction is particularly relevant for companies building tools for home care and community health services, where regulatory pathways have historically been less clearly defined than in acute care.

Policy Alignment And Implications For The UK Care Sector

The expansion of AI Airlock sits within a broader policy context, aligning with several major government strategies including the AI Opportunities Action Plan and longer-term health system reforms.

For local authorities, NHS community teams and care providers, the implications are practical rather than abstract. As more care is delivered outside hospital settings, there is increasing reliance on digital health tools to manage demand, support staff and improve outcomes.

AI has the potential to play a significant role in areas such as remote monitoring, early diagnosis and care coordination. However, without clear regulatory frameworks, adoption can be slow or inconsistent.

By shaping how AI medical devices are assessed and approved, the AI Airlock programme could help reduce uncertainty for organisations considering new technologies. This is particularly relevant for smaller care providers, who often lack the resources to navigate complex regulatory requirements on their own.

At the same time, the programme reinforces the importance of safety and oversight. In social care settings, where vulnerable individuals may rely on technology in their daily lives, ensuring reliability is essential.

Phase two of the programme has explored a diverse range of technologies, reflecting the breadth of AI applications in healthcare.

These include diagnostic tools for conditions such as cancer and rare diseases, as well as voice-based systems and large language models. The inclusion of these technologies highlights how quickly the field is evolving, and how regulatory approaches must adapt in response.

A particular area of focus has been how AI systems can change over time. Pre-determined change control plans, or PCCPs, are being explored as a way to manage planned updates and modifications without requiring full reapproval each time. This could prove important for maintaining the pace of innovation while ensuring safety.

For community health technology providers, this flexibility may make it easier to improve products based on real-world feedback without facing significant regulatory delays.