Understanding AI Literacy
With Article 4 of the EU Artificial Intelligence Act (AI Act) applying since 2 February 2025, organisations are now required to take measures to ensure that all individuals in the AI value chain possess a sufficient level of AI literacy. Under the AI Act, AI literacy covers the skills, knowledge and understanding that enable providers, deployers and affected persons to make informed decisions about the deployment of AI systems and to recognise the opportunities and risks they entail.
This article presents an overview of the AI literacy requirement, outlining the responsibilities it places on organisations and offering insights into achieving compliance.
Key Obligations and Initiatives Under the AI Act
Article 4 of the AI Act extends beyond the employees of a provider or deployer to anyone operating or using an AI system on their behalf, such as contractors, service providers and clients. Organisations must therefore identify all persons covered by Article 4 and include them in their AI literacy initiatives, ensuring that everyone involved has the knowledge and skills they need.
Article 4 does not require everyone to be an AI expert; its purpose is to equip individuals with the knowledge needed to engage with AI systems responsibly. Because AI systems are used differently across organisations, the required skillsets vary, making it difficult to establish a universal standard for AI literacy.
Measures to be Taken
AI literacy should not be treated merely as a compliance obligation, but as a vital aspect of AI risk management and robust governance. Organisations therefore need to integrate AI literacy into their broader governance frameworks. The following sections set out the key components we recommend for an AI literacy programme and the steps organisations should take to comply with Article 4 of the Act.
Identification: Organisations should begin by identifying the AI systems deployed within their operations, the contexts in which these systems are used, and the individuals who interact with them. At this stage it is also essential to assess the risks associated with the deployed AI systems, especially where high-risk AI systems are involved. This assessment determines the requisite level of AI literacy and guides the development of the skills and knowledge needed to ensure a safe and responsible AI landscape.
Goal Setting: Once the organisation has defined what AI literacy means in its specific context, it should establish clear and measurable targets for its AI literacy initiatives. These targets should reflect the organisation's role as a provider or deployer of AI systems and align closely with the identified risk profiles. It is also important to develop metrics for evaluating progress towards these goals, ensuring they are specific, achievable, and directly address the identified risks and opportunities.
Implementation: Once risks have been identified and targets and metrics developed, organisations should design and implement strategies to achieve these objectives. Training initiatives should be tailored to the varying levels of technical knowledge, experience and education among the organisation's staff and other relevant individuals, and must be contextually relevant and targeted to the sector and purpose for which the AI systems are used.
Evaluation: Organisations should establish robust mechanisms to monitor and evaluate the effectiveness of their AI literacy programmes. These mechanisms are crucial for assessing the impact of training initiatives and ensuring they achieve the set objectives. Regular reviews of feedback and performance metrics allow the programmes to be continuously refined and enhanced. The evaluation process should verify that stakeholders actually acquire the skills, knowledge, and understanding targeted by the initiatives. Organisations should also maintain comprehensive internal records of training and guidance initiatives; this supports accountability and allows the programme to adapt to the diverse nature of AI systems and the varying expertise of those interacting with them.
Sanctions for Non-Compliance with Article 4 of the AI Act
Effective from 2 August 2025, providers and deployers may incur fines and penalties for non-compliance with the AI Act. Penalties vary with the severity and nature of the infringement and may include financial penalties, restrictions on AI system use, or mandatory corrective measures. Although the AI Act does not prescribe specific penalties for breaches of Article 4, it indicates that the degree of responsibility of the operator, including the technical and organisational measures implemented, will be taken into account when determining fines. AI literacy, as an organisational measure, will therefore be examined to establish whether a breach stems from inadequately AI-literate staff or from individual error. Organisations should accordingly prioritise compliance with Article 4, both to foster a sustainable AI ecosystem and to mitigate potential legal and financial repercussions.
How Deloitte Can Assist
Deloitte Legal can help ensure AI literacy within your organisation. We assist in developing tailored AI literacy programmes, creating governance documents, and managing implementation and evaluation processes for compliance with Article 4 of the AI Act. Our team is committed to cultivating a responsible AI ecosystem, enabling your organisation to leverage AI technologies while maintaining ethical and legal standards.
Author: Hanna Folkunger