With the entry into force of the AI Act, new rules apply to medical devices based on artificial intelligence (AI). Here is what they mean for pharmaceutical companies with regard to Software as a Medical Device (SaMD).

What is the AI Act?

The AI Act, officially known as the EU Artificial Intelligence Act, is a regulation of the European Union (Regulation (EU) 2024/1689) that aims to establish a regulatory framework for the use of artificial intelligence (AI) within the EU to ensure safety, transparency, and compliance with fundamental rights.

This regulation was proposed by the European Commission on April 21, 2021, and approved by the European Parliament on March 13, 2024. It entered into force on August 1, 2024, marking a significant step in the regulation of AI globally.

The regulation lays down harmonized requirements for the design, development, and placing on the market of AI systems to ensure their safe and ethical use in highly regulated sectors such as healthcare. The new act particularly affects Software as a Medical Device (SaMD), introducing a new level of responsibility for manufacturers, healthcare service providers, and regulatory bodies.

Which software is considered a Medical Device?

According to the Medical Device Regulation (MDR), software falls into the category of a medical device (Software as a Medical Device, SaMD) if it meets three criteria (a simplified checklist sketch follows the list):

  • it has a specific intended use related to the diagnosis, prevention, or treatment of diseases;
  • it processes complex data, producing relevant outputs for medical purposes;
  • it is used on humans to generate diagnostic or therapeutic information.
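To make the three criteria concrete, here is a minimal sketch that treats them as a checklist. The class, field names, and function are illustrative assumptions, not an official qualification tool; actual MDR qualification always hinges on the manufacturer's stated intended purpose.

```python
from dataclasses import dataclass


@dataclass
class SoftwareProfile:
    """Hypothetical profile of a software product being assessed under the MDR."""
    has_medical_intended_use: bool   # intended for diagnosis, prevention, or treatment of disease
    produces_medical_outputs: bool   # processes data into outputs relevant for medical purposes
    used_on_humans: bool             # generates diagnostic or therapeutic information for patients


def qualifies_as_samd(profile: SoftwareProfile) -> bool:
    """Return True only when all three MDR criteria listed above are met."""
    return (
        profile.has_medical_intended_use
        and profile.produces_medical_outputs
        and profile.used_on_humans
    )


# Example: an app whose stated intended purpose is supporting a diagnosis
app = SoftwareProfile(
    has_medical_intended_use=True,
    produces_medical_outputs=True,
    used_on_humans=True,
)
print(qualifies_as_samd(app))  # True -> treated as SaMD under the MDR
```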

The increasing complexity of software, especially software that relies on AI, presents new challenges. In particular, integrating AI means that a SaMD must also comply with the new AI Act rules.

What is Artificial Intelligence according to the AI Act?

The AI Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” (Article 3(1)). This definition is crucial for determining when a SaMD integrates artificial intelligence and must therefore comply with the AI Act.

When does a SaMD need to comply with the AI Act?

SaMDs fall under the scope of the AI Act if they exhibit the following characteristics:

  • variable autonomy: the system operates with some degree of independence from human involvement, drawing on techniques such as computer vision, natural language processing, speech recognition, intelligent decision-support systems, and intelligent robotics;
  • adaptability: the device can continue to evolve after it is placed on the market through direct interaction with new inputs and data;
  • definition of objectives, which may be:
    • explicit, set directly by humans;
    • implicit, derived from training data or from rules specified by humans;
    • not fully known in advance (for example, recommendation systems that use reinforcement learning to gradually refine a model of individual user preferences);
    • tied to generating outputs such as recommendations, predictions, and decisions.

Therefore, a SaMD falls within the scope of the AI Act if it integrates AI with learning and adaptation capabilities, that is, systems capable of evolving after commercialization and that use algorithms to support complex clinical decisions. If a SaMD is based on AI, it must comply with both the MDR and the AI Act, integrating the two sets of requirements to ensure safety and regulatory compliance.
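As a rough illustration of the combined assessment, the sketch below screens a product against both questions: does it qualify as a SaMD under the MDR, and does it integrate AI as defined by the AI Act? The flag names and decision logic are simplifications assumed for the example, not a legal test.

```python
def applicable_regulations(
    is_samd: bool,                 # qualifies as SaMD under the MDR (see checklist above)
    operates_with_autonomy: bool,  # works with some independence from human involvement
    infers_outputs: bool,          # infers predictions, recommendations, or decisions from inputs
    adapts_after_release: bool,    # may keep evolving after being placed on the market
) -> list[str]:
    """Illustrative screening: which regulations apply to a software product."""
    regulations = []
    if is_samd:
        regulations.append("MDR")
    # Under the AI Act definition, autonomy and inference are central;
    # adaptiveness after deployment is typical but not strictly required.
    if operates_with_autonomy and infers_outputs:
        regulations.append("AI Act")
    return regulations


# Example: an adaptive, AI-based diagnostic support tool
print(applicable_regulations(True, True, True, True))  # ['MDR', 'AI Act']
```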

Risk classification in the AI Act

The AI Act introduces a regulatory framework that classifies artificial intelligence systems by level of risk, assessed in terms of the probability and severity of negative impacts on the rights, health, and safety of individuals and society. The risk can be minimal (not regulated by the AI Act), limited, unacceptable, or high, and each category requires different control and compliance measures.

Limited risk. Refers to AI systems that do not pose a significant danger to safety, fundamental rights, or the health of individuals. Examples of limited-risk AI include chatbots. These systems are subject only to minimal transparency obligations: users must be clearly informed about how the AI operates and makes decisions, so that they can make informed choices.

Unacceptable risk. Relates to AI systems that violate fundamental human rights or threaten safety. These include, for example, systems that use subliminal or manipulative techniques to influence human behavior without awareness, exploit personal vulnerabilities such as age or disability, use social scoring to unfairly discriminate against individuals or groups, profile crime risk on a purely predictive basis, or infer emotions and personal characteristics from biometric data in unauthorized contexts. Such systems are considered incompatible with safety and the protection of fundamental rights and are therefore prohibited by the AI Act.

High risk. Concerns AI systems that can significantly affect the health, safety, or fundamental rights of individuals. This includes systems intended to provide information used for diagnostic or therapeutic decisions or to monitor physiological processes. In this case, there are additional obligations for users and manufacturers, including a conformity assessment by a Notified Body.
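The mapping below summarizes, in code form, how the four tiers described above translate into obligations. Each entry is a simplified paraphrase for illustration, not the regulation's wording.

```python
# Simplified summary of the AI Act risk tiers discussed above (illustrative paraphrase).
RISK_TIER_OBLIGATIONS = {
    "minimal": "Not regulated by the AI Act.",
    "limited": "Transparency: users must be informed they are dealing with an AI system.",
    "high": "Notified Body assessment, risk management, documentation, post-market monitoring.",
    "unacceptable": "Prohibited: the system may not be placed on the market or used.",
}


def obligations_for(tier: str) -> str:
    """Look up the simplified obligations for a given risk tier."""
    return RISK_TIER_OBLIGATIONS.get(
        tier.lower(), "Unknown tier: re-check the AI Act classification."
    )


print(obligations_for("high"))
```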

Implications for industry operators

If an AI-based Software as a Medical Device is considered high risk (whether as a safety component of another product or as a stand-alone product), both the manufacturer and the user must comply with the obligations of the AI Act.

Manufacturers are required to demonstrate compliance with the new rules by involving a Notified Body for safety and quality verification. They must adopt and implement risk assessment and management processes and provide detailed documentation on how their AI systems operate. They must also ensure the ethical use of data and fulfill post-market monitoring, correction, and information obligations regarding the system’s operation.

Users of AI-based medical devices (deployers, in the AI Act’s terminology) must be aware of the legal responsibilities associated with these technologies. This means training staff to use the system correctly, monitoring the AI system continuously, and following operational protocols to prevent errors or malfunctions. They must also adhere to safety standards, maintain transparency, and document any issues related to the use of the device, managing risks proactively.

Additionally, Notified Bodies must acquire specific expertise to assess AI systems and to verify the compliance of high-risk AI systems before they are placed on the market. Providers and importers must ensure compliance with EU rules, while distributors are obliged to verify that their suppliers comply and may be held liable in case of non-compliance.

AI Act: key implementation dates

The AI Act, effective from August 1, 2024, has a phased implementation schedule with key milestones:

  • February 2, 2025: Application of Chapters I and II of the AI Act. Chapter I promotes human-centered and trustworthy AI to protect health, safety, and fundamental rights; it also requires providers and deployers to ensure adequate AI literacy, so companies must make sure that their staff, and anyone involved in operating AI systems, receive suitable training. Chapter II lists the prohibited AI practices deemed dangerous, including certain real-time biometric identification technologies, which may no longer be placed on the market or used.
  • May 2, 2025: Deadline for completing the codes of practice, which will help companies comply with the AI Act’s requirements.
  • August 2, 2025: Application of Chapter V, which lays down governance rules and obligations for general-purpose AI models, along with the provisions on penalties.
  • August 2, 2026: Full application of the AI Act. Most high-risk AI systems must comply with the regulation by this date. Companies will need to establish control systems and implement effective monitoring plans to continue operating in compliance.
  • August 2, 2027: Final deadline for SaMD classified as Class IIa, IIb, and III to conform to the AI Act.

Additionally, the regulation is scheduled for review by August 2, 2028, and every four years thereafter.