The AI Act introduces a framework for regulating the deployment and use of AI within the EU. It establishes a standardised process for placing single-purpose AI (SPAI) systems on the market and putting them into service, ensuring a cohesive approach across EU Member States. The regulation adopts a risk-based approach, categorising AI systems by use case and setting compliance requirements according to the level of risk they pose to users. It bans certain AI applications deemed unethical or harmful, imposes detailed requirements on high-risk AI applications to manage potential threats effectively, and sets out transparency requirements for AI technologies classified as limited risk.
The legislation places a strong emphasis on AI ethics and is drafted to remain adaptable to as yet unknown iterations of AI technology. However, the widespread public use of general-purpose AI prompted the legislator to distinguish between single-purpose AI and general-purpose AI. The AI Act regulates the market entry of general-purpose AI models regardless of the risk-based categorisation of their use cases, setting out comprehensive rules on market oversight, governance, and enforcement to maintain integrity and public trust in AI innovation. Given its abstract nature, the legislation contains areas that are yet to be fully defined.
Download the report to learn more.