Key Considerations When Implementing GenAI in Your Organisation

Generative artificial intelligence (“GenAI”) refers to artificial intelligence systems that, by identifying patterns within existing data, are capable of creating new content such as text, images, or music. Advancing at an unprecedented rate, these models utilise advanced algorithms, including deep learning, to generate outputs that emulate human creativity and innovation, presenting significant opportunities for advancement, efficiency, and market leadership. According to the Deloitte article “What's next for AI”, approximately 70% of organisations are exploring or implementing large language models (LLMs). However, organisational change can only occur at a certain pace, and 31% of respondents indicate that their organisations are not yet ready to deploy GenAI. As GenAI becomes integral to business strategy, organisations that are slow to prioritise this technology risk falling behind those already leveraging its extensive benefits.

To fully leverage GenAI, organisations must understand which GenAI systems to use, how to effectively create value, and how to stay informed of the regulatory landscape and associated risks. Deloitte’s survey “The State of Generative Intelligence in Nordics” shows that regulatory compliance is the greatest challenge when implementing and deploying GenAI technology, with 46% of Nordic organisations reporting it as their biggest concern. Other barriers include not achieving expected value (34%), the risk of mistakes with real-world consequences (30%), and loss of trust due to bias and inaccuracies (29%).

To prepare organisations for the implementation of GenAI, effective governance over its development, integration and use is essential. This overview highlights key aspects to prepare your organisation for GenAI, covering potential risks and mitigation strategies as well as approaches to fully leverage its potential.

Regulatory aspects to consider when using GenAI 

As a general starting point, the regulatory risks of GenAI stem from its development and ongoing function. GenAI is trained on vast amounts of data and continuously learns from new data to improve its performance. Given the operational nature of GenAI, what potential regulatory risks are associated with its use? 

Safeguarding of personal data

If personal data is processed during the use of GenAI, adherence to the General Data Protection Regulation (GDPR) is required. The GDPR’s concepts of “personal data” and “processing” are both interpreted broadly, meaning that any input or output containing data that can identify a natural person constitutes a processing activity. However, even if the organisation does not intend to use GenAI for the processing of personal data, adherence is still required if the model has been trained using personal data.

One of the GDPR’s core principles, purpose limitation, stipulates that personal data must be processed for specific, explicit, and legitimate purposes. This implies that while GenAI is developed for its capacity to serve diverse purposes, its mere application cannot constitute a lawful purpose in itself. Hence, without clearly defined and lawful use cases, the utilisation of GenAI might compromise the principle of purpose limitation. Additionally, using a GenAI model developed externally typically involves transferring personal data to the provider or developer. In such instances, there is a risk that data is transferred to countries outside the EU/EEA, particularly because many GenAI providers are based in the US. Such transfers are only permitted if certain conditions of the GDPR are met.

Another risk associated with the utilisation of GenAI stems from the methods used to train the models. GenAI models are frequently trained using deep learning – a technique capable of processing large datasets in a manner that is challenging for humans to oversee. While the input and output are comprehensible, the internal workings and the rationales behind specific decisions might not always be transparent. Consequently, if the organisation cannot explain how personal data has been processed, such use of GenAI might later compromise the data subjects’ rights of information and access as protected under the GDPR.

Copyright  

The use of GenAI also raises copyright questions when the model is trained and utilised with copyrighted material. As GenAI generates new content through generalisations of the learned data, it may reproduce training and input material. Unless certain copyright exceptions apply, such use might raise concerns regarding potential copyright infringement.

Another key consideration is whether the generated output might be copyrighted. EU copyright law centres around human authorship, meaning a natural person must be involved in the creation process. With GenAI outputs, the question arises as to whether these can have an author at all, as the selection, disposition and composition of elements in the output have been chosen by the model rather than being sufficiently concretised by the user of the GenAI. Given the difficulty in determining whether the user has made a meaningful contribution to the output, ownership and usage rights need to be secured through contractual agreements with third parties and the provider of the GenAI model.

Discrimination and biased outputs

In addition, as outlined in the introduction, concerns regarding the potential loss of trust due to inaccurate and biased outcomes have emerged as barriers to implementing GenAI. These risks do not inherently originate from GenAI itself but primarily result from biases present in the data used to train the model.

If GenAI models are implemented into AI systems classified as high-risk under the EU AI Act – such as certain systems used for recruitment or employee evaluation processes – the Act requires the deployer, in certain circumstances, to ensure that the data used to train the model is relevant and sufficiently representative. Additionally, high-risk AI systems containing GenAI technology must be designed and developed in such a way that they can be effectively overseen by natural persons during the period in which they are in use. The deployer should ensure that individuals with the necessary competence, training and authority are available to mitigate potential automation biases.

Strategies to minimise risks and maximise benefits

To ensure safe and responsible usage of GenAI – in other words, to comply with the applicable regulatory requirements – and at the same time maximise GenAI’s potential for business objectives, organisations must establish robust guidelines for its use by the business and employees. These guidelines can be organised into a set of security documentation, including an AI strategy, an AI governance plan, and an AI policy.

While these security documents may not be legally required, the AI Act does impose certain obligations on deployers of AI systems. These include ensuring that the systems are used correctly and according to instructions, assigning human oversight, ensuring that input data is relevant, and monitoring the system's operation. With robust security documentation in place, the deployer can ensure the responsible and effective integration of GenAI, aligning with operational needs while simultaneously assuring compliance with regulatory standards. 

AI strategy 

To fully harness GenAI's potential, organisations need to understand how it should be applied within their unique business environment. An AI strategy can act as a roadmap for AI adoption, ensuring alignment with broader business goals. An effective AI strategy should address the intricacies of integrating AI into workflows, developing essential skills among users, and aligning AI adoption with business objectives. It should clearly define organisational goals, pinpoint problems to be solved, and set metrics for improvement. Key components include data management, analysis, and utilisation, as well as identifying the talent needed for AI development. In other words, an AI strategy should embrace a holistic approach, enabling organisations to tackle unique challenges, uncover deeper insights from data, boost operational efficiency, and enhance customer and talent experiences.

AI governance plan 

Once an AI strategy is established, an AI governance plan should be implemented to align with the strategy's objectives and aims. This plan should serve as a comprehensive framework for managing AI deployment, encompassing ethical considerations, risk management, compliance, and accountability. 

Assigning roles and responsibilities within the organisation, along with drafting policies and procedures for controlling and monitoring GenAI, is key to minimising risks and achieving organisational accountability. Central to this governance plan are ethical guidelines, which play a crucial role in addressing biases in AI outputs stemming from the data they are trained on. Without human oversight, these biases can perpetuate norms that may not reflect societal values. Therefore, the plan must tackle ethical risks head-on and ensure that everyone is aware of the nature of the outputs being produced. The governance plan should also incorporate routines for monitoring and evaluating AI systems to ensure they comply with relevant laws, regulations, and industry standards. It is essential that users are informed about each system's functions, capabilities, and risks, thereby aligning its use with the organisation's goals and aspirations.

Lastly, it is vital to establish clear ownership and responsibilities for reviewing and updating AI systems, and to ensure that all employees are well-informed about AI's risks and benefits. This is especially important since Nordic organisations, as stated above, have highlighted potential errors and their real-life consequences as a significant concern when deploying GenAI. By implementing an AI governance framework, organisations can foster safe and responsible AI use, empowering themselves to thrive and swiftly adapt to technological advancements.

AI policy  

A crucial component of the governance plan is the AI policy. This policy provides detailed operational rules and instructions for the use of AI by its users, similar to a handbook. The AI policy should clearly outline procedures for handling data, including privacy and security measures, protocols for reporting issues or concerns related to AI systems, and guidelines on which systems are allowed for use and which are not. Furthermore, the AI policy must specify the types of data that can be input into the AI systems and detail the necessary training required for personnel to use these systems effectively and safely. This ensures that all users are adequately prepared and informed about the proper utilisation of AI technologies within the organisation.

Essential participants in GenAI security documentation

Leadership plays a crucial role in crafting guidelines and rules to ensure ethical and regulatory compliance as well as smooth AI implementation. By collaborating with relevant stakeholders, management can establish a comprehensive set of AI security documentation that paves the way for successful integration and future adaptation. Involving key persons within the company is essential, as they possess a knowledge of the company's aspirations and hold the mandate to decide business goals and objectives. Their insights ensure that the AI strategy aligns with the organisation's vision and mission, ultimately driving success in leveraging AI technologies.

How can Deloitte assist?  

Do you need advice on implementing GenAI in your organisation? Deloitte Legal’s team are experts in navigating the current landscape surrounding the implementation and utilisation of GenAI. The team frequently assists clients in developing governance documents tailored to their organisations and has extensive experience managing implementation processes.

Authors: Hanna Folkunger and Elin Petersson