
Navigating the ‘Regulatory Lasagna’: How the EU’s New AI Act Impacts Healthcare and Medical Devices

Join us at Labquality Days to hear two talks by legal expert Erik Vollebregt, who will demystify Europe’s AI Act and explain how it layers atop existing medical device regulations.

A New Layer of Regulation for AI in Healthcare

On 1 August 2024, the EU’s new Artificial Intelligence Act came into force. Over the next 6 to 36 months, its provisions will gradually apply to various industries – including healthcare and medical devices. If your device contains or relies on AI, you now face additional requirements on top of the already demanding Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR).

“Think of it as an extra approval step, on top of the already demanding MDR/IVDR,” explains Erik Vollebregt, a legal specialist who advises both government and industry on life science regulations. “While it’s great that Europe wants safe and transparent AI, this ‘one-size-fits-all’ approach can complicate – and sometimes conflict with – existing medical tech regulations.”

He continues:

“We’re essentially piling legislation on legislation. The AI Act doesn’t replace MDR or IVDR; it sits on top of them, creating what I like to call a ‘regulatory lasagna’ of overlapping requirements.”

The Challenge of “Regulatory Lasagna”

Under the MDR and IVDR, manufacturers already navigate risk management, post-market surveillance, and CE marking. The AI Act introduces yet another set of these same obligations – technical documentation, risk management, and more – sometimes with slightly different definitions and processes.

“So, which version of ‘risk management’ or ‘CE marking’ applies?” Vollebregt asks.

“It’s like making a curry and the recipe says to use bay leaves and lime leaves – they might serve a similar function, but you end up confused because each regulation calls for a slightly different approach.”

Moreover, the AI Act fails to distinguish between “AI for good” – like clinical decision support tools – and AI that can harm, such as social-media algorithms that amplify dangerous content. This omission can slow the development of beneficial medical AI, precisely because every system faces extra layers of scrutiny.

“They tried to design a law that would regulate Facebook’s AI the same way it regulates AI that diagnoses cancer,” notes Vollebregt, “and that just doesn’t translate well to a healthcare environment where you want to foster innovation, not stifle it.”

In-House Devices: Even More Confusion

To encourage rapid innovation and research, the MDR and IVDR often exempt in-house devices – those developed by hospitals or labs for their own use – from full CE marking. The AI Act, however, does not recognize this distinction. Initially, that meant an in-house AI might be exempt under MDR/IVDR but still subject to certification under the AI Act.

Legislators tried to address this by labelling in-house AI as “low risk,” but this can lead to illogical scenarios. Imagine an AI that diagnoses a life-threatening condition: simply because it’s used within one hospital, it might be deemed “low risk” under the AI Act.

“A great example is a test for Lassa virus,” says Vollebregt. “It’s extremely high risk, yet under the AI Act’s current wording, an in-house AI test for Lassa might skip crucial checks. It shows how the Act’s conflict mechanism is miscalibrated.”

Clinical Trials, Sandboxes, and Real-World Testing

One of the biggest oversights, according to Vollebregt, is how the AI Act treats clinical investigations and performance studies. Normally, to bring a new medical device or IVD into clinical use, you conduct a clinical investigation under the MDR or a performance study under the IVDR.

“The AI Act states you can only work with a non-CE-marked AI in a ‘regulatory sandbox’ or via ‘real-world testing,’” Vollebregt explains. “But the real-world testing provisions don’t align with clinical investigation or performance study requirements, leaving the sandbox as the only option.”

However, a regulatory sandbox can only be established by a national authority. If no sandbox is created at the national level, hospitals and companies wanting to run valid clinical or performance studies for AI software find themselves in a legal grey area – unable to proceed, or forced to improvise and hope for the best.

“It’s a classic case of regulatory oversight,” Vollebregt says. “Healthcare needs the flexibility to trial and refine AI in clinical settings. But as it stands, the new Act’s structure doesn’t align with existing medical regulations in a practical way.”

Balancing Innovation and Patient Safety

Vollebregt is quick to emphasize that oversight itself isn’t bad. Europe’s stringent regulations often set a global standard – just look at the success of the GDPR in shaping data privacy worldwide. Rigorous checks can keep harmful AI out of circulation.

However, the AI Act may overshoot, creating a “chilling effect” on medical innovation, just as the EU now admits happened with the MDR and IVDR. Small companies, universities, and hospital labs might find it too expensive or complicated to develop advanced clinical AI in Europe.

“We’ve seen this story before,” Vollebregt cautions. “When the EU drastically tightened the rules for gene, cell and tissue therapies, many innovators moved to the U.S., or adapted their business model to sell the company at an early stage rather than bring a product to market, because market access had become prohibitively complex and expensive.

“We don’t want Europe turning into a museum of stalled clinical development, with all real progress in advanced AI happening elsewhere.”

Will the AI Act Give Europe a Competitive Edge?

The EU hopes the AI Act will do for artificial intelligence what the GDPR did for data protection – set a global gold standard. This could encourage other regions to adopt similar safeguards, making a European “CE-marked AI” a badge of worldwide quality.

However, Vollebregt warns that this outcome hinges on proper calibration:

  • Too cumbersome, and developers will flee, harming Europe’s leadership in innovation.
  • Too lax, and it fails to protect citizens from the risks and dangers of powerful AI.

“I can see it going both ways,” he says. “If we get the AI Act right in the implementation, Europe becomes a global leader in AI safety and trustworthiness. If we get it wrong, we’ll watch the world’s top AI innovators quietly slip away to friendlier markets.”

What to Expect from Erik Vollebregt’s Talks at Labquality Days

If your organization is building or using AI-powered devices – or even considering it – these new rules will affect you. In two sessions at Labquality Days, Erik Vollebregt will offer:

Clarity on a Complex Topic
-> Get an understandable breakdown of how the AI Act stacks atop MDR/IVDR and identify pitfalls most likely to derail healthcare innovators.

Insights on In-House Devices
-> Learn what the AI Act means for developing AI tools within your lab or hospital. Where do you really need a CE mark, and where do alternative regulatory options exist?

Strategy for Clinical Testing
-> Get practical tips on navigating sandboxes and real-world testing for AI-based diagnostics – and see how to align them with MDR/IVDR rules.

Risk Management Simplified
-> Discover how to harmonize slightly different CE marking requirements so you don’t end up duplicating effort.

Future-Proofing Your Innovation
-> Find out how to stay ahead of ever-shifting regulations, from the European Health Data Space to the Batteries Regulation.

“My goal is to empower attendees so that regulation becomes a tool, not a barrier. If you know the rules – or know who to ask – you suddenly have options instead of roadblocks,” Vollebregt says.

Don’t Miss These Sessions at Labquality Days!

In an era when artificial intelligence is reshaping diagnostics, treatment, and patient outcomes, being informed and prepared is more critical than ever. Erik Vollebregt’s presentations promise practical tips, real-world examples, and lively discussions on the future of European healthcare innovation.

Come learn how to thrive under the new AI Act – and ensure your cutting-edge work stays both compliant and impactful.

 

AI was used to transcribe and help in writing this interview.

