
The EU AI Act was the world’s first benchmark for AI regulation – but recent decisions from Brussels to dilute the landmark regulation and simplify other data laws reveal a deeper tension between innovation and oversight.
Europe is no stranger to this dilemma. Mario Draghi’s EU competitiveness report, published in September 2024, highlights the challenges and opportunities well: “with the world on the cusp of an AI revolution, Europe cannot afford to remain stuck in the ‘middle technologies and industries of the previous century’.” He also notes the importance of unlocking Europe’s innovative potential, with the aim of matching the US. But when personal health data and privacy are on the line, is the pursuit of competitiveness worth the potential cost?
The potential of AI for drug discovery, diagnoses, screenings, and clinical trials is clear, but a recent report from the World Health Organisation (WHO) highlights the gap between ambition and reality: while 66% of countries have national cross-sector AI strategies, only 8% have health-specific ones.
Now, potential changes to the EU AI Act may add to the uncertainty for businesses and regulators alike – uncertainty that could stall innovation, compromise patient safety, and undermine trust in the digital ecosystem.
So, how can healthcare and life science organisations maintain patient safeguards while navigating this complex regulatory environment?
Mapping the current regulatory landscape
Europe has made positive steps to regulate the use of AI in healthcare. Under the Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR), companies developing AI-powered medical devices and diagnostics must provide strong clinical evidence, cybersecurity measures, and ongoing post-market surveillance. The European Health Data Space (EHDS), which entered into force in 2025, also aims to facilitate secure sharing and secondary use of health data for research and innovation, underpinned by strict safeguards to protect patient rights.
But the technology’s rapid development is challenging regulators’ ability to keep pace, and under pressure from big tech companies to ease regulation, the EU has scaled back some of its existing rules.
The EU AI Act, which entered into force in August 2024, was the world’s first comprehensive AI regulation. It classifies AI systems by risk and imposes stringent requirements on high-risk applications, including those in healthcare and life sciences. For life sciences, diagnostic tools, patient management systems, and drug discovery platforms may be designated as high-risk. In practice, providers must evidence robust risk management, data and model governance, transparency to users, and post-market monitoring.
However, the EU has indicated that certain aspects may be softened and implementation delayed. The EU also recently announced a package of measures, including the digital omnibus and the Data Union Strategy, that would simplify digital rules and give tech companies greater access to data for AI systems. This includes changes to the General Data Protection Regulation (GDPR), a cornerstone for AI systems processing sensitive health data.
The EU’s intentions are good: reducing the regulatory burden on companies to unlock digital innovation and growth. However, the lack of clarity and sector-specific guidance raises concerns about the safety and security of patient data.
Navigating murky waters
This regulatory uncertainty is one of the biggest obstacles to AI adoption in health. Well-designed policies can offer clarity and confidence, providing a trusted framework for growth. But when mixed signals are sent, regulations are announced and then retracted, or new rules overlap with existing ones, it only adds to the challenge.
Many sophisticated models, such as deep learning algorithms, operate as “black boxes,” making it difficult for clinicians and regulators to understand or trust their recommendations. To ensure investment in innovative tools and patient access to cutting-edge care, regulators must also consider time-to-market and post-market monitoring.
Another key challenge lies in data quality and bias. AI models rely on large volumes of high-quality, representative data to function safely and effectively. In healthcare, gaps or biases in training data can lead to inaccurate predictions, unfair outcomes, and risks to patient safety, particularly for underrepresented groups.
A University of Oxford study of UK healthcare records found that one in ten patients lack ethnicity records, while around 12% have conflicting ethnicity codes in their records. Representative data is crucial because the prevalence and severity of diseases vary across ethnicities. If AI systems are trained on unrepresentative data, they risk misdiagnosing patients and contributing to inequitable healthcare outcomes.
This is why clear policies are crucial to ensure factors like transparency, market access, and bias are accounted for in the development and implementation of AI systems in health.
Creating positive patient outcomes
Clear and consistent regulation can unlock innovation across Europe’s digital health landscape and help drive more positive patient outcomes. To do this, the EU must focus on creating stability and support not only for established players but also for small and medium-sized enterprises (SMEs) and startups. When innovators understand the requirements for compliance, they can invest more confidently in new AI solutions, accelerating the pace at which promising technologies reach the market. We’ve seen this with the success of European health startups like Quantev, a platform for health and life insurers to automate and streamline their claims operations, which raised €30 million in its last funding round and is looking to expand its global footprint.
The push for harmonised standards and the development of the EHDS are also opening doors for more cross-border collaboration. Secure and standardised data sharing would allow researchers and companies to conduct pan-European studies, scale their products, and deploy AI tools across different healthcare systems. This collaborative approach is essential for tackling complex health challenges, adapting solutions to different cultures and regions, and putting patient outcomes at the centre of AI development.
Ultimately, this is where AI’s greatest opportunity lies. AI-powered tools can enhance diagnostic accuracy, enable more personalised treatment plans, and streamline clinical workflows, helping healthcare professionals deliver better, faster, and more efficient care. As regulatory frameworks mature, they will play a crucial role in ensuring that these benefits are realised safely and equitably for all patients.
Turning values into a competitive advantage
Europe’s strength lies in principled leadership, and now is its moment to prove that those principles can drive real progress. The opportunities for AI to deliver real human impact are huge, but Europe needs to pair trusted governance with a clear, supportive regulatory framework tailored to sector-specific challenges.
Regulation and innovation are not at odds. If Europe actively backs its innovators, it can deliver positive patient outcomes and shape a distinctive regulatory model that others can follow.
