There is currently a great deal of momentum around the regulation of AI and how organisations should ensure the safe and trustworthy use of AI systems. One of the main talking points is the use of personal data and the associated privacy risks. However, another concern that is not receiving the same level of attention, but is just as important, is cybersecurity. The importance of securing the development and use of AI systems cannot be overstated.
While there are many important topics at the intersection of cybersecurity and AI, three dimensions are inherently related to both security and privacy: 1) the security of AI systems, especially those that process personal data; 2) the use of AI tools and systems by malicious actors to circumvent security measures; and 3) the use of AI tools and systems for cybersecurity purposes.
Each of these topics presents unique challenges that data protection officers, lawyers and cybersecurity professionals must overcome. And while each poses its own challenges, the three topics are interconnected. An understanding of each topic, both independently and collectively, is therefore required.
The novelty, power and inherent characteristics of AI systems expand the threat landscape of organisations. It should first be stressed, however, that AI systems sit within an organisation’s larger IT ecosystem, and the security of the entire network is therefore in scope when considering the security of AI systems. AI frameworks and international standards reflect this understanding by capturing an organisation’s entire cyber ecosystem. Organisations should therefore assess their cybersecurity capabilities before developing or deploying AI tools. For example, to better protect against data poisoning or model manipulation, organisations should ensure that they have an adequate privileged access management solution in place. In short, AI systems can increase both the impact and the likelihood of a risk, and organisations need to match these increases with the appropriate security controls.
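To make the point concrete, the sketch below shows one simple control of this kind: verifying the integrity of training data against a previously approved manifest before retraining, so that silently altered files are caught before they can poison a model. It is a minimal illustration only; the paths, manifest format and surrounding pipeline are hypothetical.

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, streaming to keep memory use low."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_training_data(data_dir: str, manifest_file: str) -> list[str]:
    """Compare current training files against a signed-off manifest.

    The manifest is a hypothetical JSON mapping of file names to expected
    SHA-256 digests, e.g. {"transactions.csv": "ab12..."}. Returns the files
    whose hashes no longer match, i.e. candidates for tampering or corruption.
    """
    manifest = json.loads(Path(manifest_file).read_text())
    return [
        name
        for name, expected in manifest.items()
        if sha256_of(Path(data_dir) / name) != expected
    ]


if __name__ == "__main__":
    # Hypothetical usage: refuse to retrain if any file has silently changed.
    changed = verify_training_data("training_data", "data_manifest.json")
    if changed:
        raise SystemExit(f"Retraining aborted, unexpected changes in: {changed}")
```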
On top of that, AI systems bring their own unique risks, such as vulnerabilities in their software or the difficulty of migrating away from legacy systems. Retraining algorithms can require extensive time and money, so it is not surprising that organisations may decide to leave certain vulnerabilities in AI technologies unaddressed after examining the cost of resolving them.
Furthermore, the use of new AI technologies inherently presents new methods for malicious actors to extract confidential information or manipulate individuals into sharing it. For example, malicious actors can hide text or commands on websites to manipulate chatbots, for instance getting them to prompt unsuspecting end users to share certain information. Such as-yet-unforeseen methods of exploiting AI technologies present serious cyber risks for organisations. Malicious actors have also developed malware disguised as ChatGPT or similar tools. These new attack methods require organisations to be more vigilant in understanding how such tools can be leveraged against them and to implement cybersecurity controls that mitigate the harm.
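As an illustration of such a control, the sketch below screens retrieved web content for hidden markup and instruction-like phrases before an application passes it to a chatbot. The pattern lists are purely illustrative and deliberately incomplete; a real defence would need far broader coverage, and the quarantine helper in the usage comment is hypothetical.

```python
import re

# Illustrative patterns only; real injection attempts are far more varied,
# so a list like this should not be treated as a complete defence.
INJECTION_PATTERNS = [
    r"ignore (all|any|the) (previous|prior|above) instructions",
    r"you are now\b",
    r"disregard your (system )?prompt",
    r"reveal (your|the) (system prompt|instructions)",
    r"ask the user for (their|his|her) (password|credentials|card number)",
]
HIDDEN_MARKUP = [
    r"display\s*:\s*none",
    r"visibility\s*:\s*hidden",
    r"font-size\s*:\s*0",
    r"<[^>]+\bhidden\b[^>]*>",
]


def screen_retrieved_content(raw_html: str) -> dict:
    """Flag retrieved web content before it is passed to a chatbot.

    Returns a small report the calling application can use to drop or
    quarantine the content instead of feeding it into the prompt.
    """
    findings = {
        "instruction_like_phrases": [
            p for p in INJECTION_PATTERNS if re.search(p, raw_html, re.IGNORECASE)
        ],
        "hidden_markup": [
            p for p in HIDDEN_MARKUP if re.search(p, raw_html, re.IGNORECASE)
        ],
    }
    findings["suspicious"] = bool(
        findings["instruction_like_phrases"] or findings["hidden_markup"]
    )
    return findings


# Hypothetical usage inside a retrieval step:
# report = screen_retrieved_content(page_html)
# if report["suspicious"]:
#     log_and_quarantine(page_url, report)  # hypothetical helper
```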
To better handle these risks, the European Union Agency for Cybersecurity (ENISA) has recently highlighted a three-layered approach to ensuring the protection of AI systems: 1) Cybersecurity Foundations, 2) AI Fundamentals and Cybersecurity, and 3) Sector-Specific Cybersecurity Good Practices. Considering that the industry-specific standards for the AI Act will not be published until 2025, organisations can currently take more active measures for the first two layers. ENISA’s report nonetheless makes it apparent that organisations must take an even more proactive approach to cybersecurity if AI systems are being leveraged.
The use of AI tools and systems by malicious actors
Malicious actors can not only use new techniques that leverage AI technologies to obtain information they would not normally have access to, but they can also use these technologies to attack IT networks. While AI tools, specifically chatbots, are prone to making mistakes, they can be used to write working malicious code. This allows individuals lacking adequate coding skills to become threat agents, and lets mature malicious actors focus more of their attention on the more sophisticated aspects of an attack.
AI tools will also be leveraged to write more sophisticated code that allows attacks to remain hidden or be less noticeable to cybersecurity specialists. There have already been instances where cyber experts were able to get ChatGPT to help write “polymorphic” malware, a type of malware that can more easily evade antivirus or antimalware solutions designed to detect malicious file signatures.
However, it is not just malicious code that cybersecurity experts must worry about. These newly introduced chatbots can also be leveraged to generate more convincing text for phishing emails. They can help brainstorm the topic of a phishing email, rectify grammatical and spelling errors, and generate the kind of text customarily exchanged between the targeted audience and the impersonated sender. In other words, these chatbots can produce highly believable text that lacks the clues which customarily indicate an email is fictitious or malicious.
The use of AI tools and systems for cybersecurity purposes
On the flipside, AI can also be used to protect information networks, and there are already several types of tools on the market that leverage AI technologies. They cover a wide variety of cybersecurity functions, ranging from data discovery and data loss prevention to identity and access management and various detection and response technologies.
Essentially, there is now an AI component for every type of cybersecurity-related technology. While these tools may bring the increased protection necessary to handle AI-leveraged cybersecurity attacks, they are currently not the silver bullet that is sometimes promised. First, the organisations that have developed these tools are often not forthcoming in explaining how they were designed and trained or how they work beyond very high-level explanations. Considering most of these tools process personal data, certain transparency obligations must also be met. Second, as seen with recent iterations of data loss prevention technologies, organisations must provide adequate resources for the implementation and gradual expansion of the technology within the network to avoid disrupting business operations. While some cybersecurity technology developers promise that their technology can be implemented easily and seamlessly, this is not always the case. Without proper investment from the outset, these technologies can produce numerous false positives or mischaracterize events, distracting cybersecurity experts.
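For a sense of what sits behind many detection and response tools, and why tuning matters, the following minimal sketch trains an anomaly detector on made-up login-event features (assuming scikit-learn is available). The contamination parameter illustrates the trade-off described above: set it too high and routine behaviour gets flagged, generating exactly the false positives that distract analysts.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical, simplified features per login event:
# [hour_of_day, failed_attempts_last_hour, megabytes_downloaded]
rng = np.random.default_rng(seed=42)
normal_logins = np.column_stack([
    rng.normal(10, 2, 500),      # mostly office hours
    rng.poisson(0.2, 500),       # almost no failed attempts
    rng.normal(50, 15, 500),     # routine download volumes
])
odd_logins = np.array([
    [3.0, 12.0, 900.0],          # 3 a.m., many failures, unusually large download
    [23.0, 8.0, 600.0],
])

# contamination is the expected share of anomalies; set it too high and the
# model flags routine behaviour, producing the false positives described above.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_logins)

events = np.vstack([normal_logins[:5], odd_logins])
labels = model.predict(events)   # +1 = looks normal, -1 = flagged for review
for event, label in zip(events, labels):
    status = "FLAG" if label == -1 else "ok"
    print(status, event)
```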
That being said, AI cybersecurity technologies have important potential and will likely be necessary for adequate IT security in the future. Organisations cannot ignore these technological developments if they are to remain vigilant against modern threats and risks. Rather, organisations should develop or re-examine their information security management system (ISMS) and security measures in light of these new AI technologies.
Independently, these topics are nuanced and deserve separate attention. Collectively, however, they highlight the utmost importance of AI in the cybersecurity space. AI will expand the cyber threat landscape and be used by both malicious actors and cybersecurity specialists. These transformations will revolutionize the cyber landscape and change material aspects of how we understand data protection.
Given the lessons learned from the GDPR, there is no better time than its anniversary to look forward and plan how to treat the newest set of cyber risks and the next major source of regulatory compliance obligations. Technological advances in AI are likely to continue gaining speed, and organisations must keep pace with their cybersecurity. While the AI Act will only be adopted in the future, the security of these systems should already be managed. Therefore, to better ensure the security of not only personal data but all important data, organisations should invest in and plan how to overcome the risks related to AI technologies and how to leverage AI cybersecurity tools.
Deloitte offers a Trustworthy AI™ framework that incorporates cybersecurity for organisations that are seeking to leverage AI technologies. AI systems are only as secure as the ecosystem they sit in. Therefore, organisations should lay the foundation for their use of AI, assess the overall resiliency of their cybersecurity measures and conduct a holistic penetration test. With this in mind, organisations should take distinct steps to account for AI-enabled malicious actors and AI cybersecurity technologies within their ISMS, or develop an ISMS with these factors in mind.