The Board’s responsibility for the use of generative AI
Cornelia Diethelm is an expert in digital ethics and works on digital change at the interface between economics and society, identifying social expectations and advising on strategic trends. She is also an independent adviser, speaker, course leader and lecturer in digital ethics. Within the DACH region (Austria, Germany and Switzerland), Cornelia Diethelm is a pioneer in the responsible management of data and new technologies, including artificial intelligence (AI). She is Vice-President of the Board of Directors of Metron, a member of the Boards of Ethos and Sparkasse Schwyz, and co-owner of the legal tech company Datenschutzpartner AG. She studied politics, management and economics as a mature student before going on to complete an MAS in Digital Business.
swissVR Monitor: You are an expert in digital ethics. How can a company make ethical and responsible use of generative AI tools?
Cornelia Diethelm: AI-generated content is based solely on probability or chance, depending on the data on which the AI has been trained. This means that results can be out of date, misleading or even inaccurate. Using AI tools responsibly means equipping staff and management to ensure compliance with legislation and always treating outputs with a degree of caution. I recommend that companies create a governance framework for the use of AI and invest in training, so that they use AI tools where these tools genuinely add value, for example by providing inspiration, optimising text, or generating simple illustrations and automated subtitles.
swissVR Monitor: What responsibility do Boards have for the use of generative AI in the company?
Cornelia Diethelm: The Board’s responsibility is to ensure that the company complies with statutory and internal obligations, for example in relation to data protection or data security, and avoids financial and reputational risk. This is part of risk management and is not a responsibility the Board can delegate. It is also clear that over the medium to long term, generative AI is going to transform many business models, processes and job profiles. Ensuring that the Board has an adequate range of digital skills is therefore increasingly crucial to a company’s success and to its strategic management and oversight.
swissVR Monitor: What are the substantive and organisational requirements for tackling this issue within the Board?
Cornelia Diethelm: Substantively, it is important that the Board does not equate generative AI with AI more generally or with IT but recognises it as a crucial part of the puzzle in terms of digital transformation. The focus should not be on a specific technology but rather on establishing how a company can better meet the needs of its market. Or to put it another way, the fact that we have the ability to do something using generative AI does not mean that it is sensible to do so or that it is a good investment for the company. In organisational terms, the Board should discuss the opportunities and risks of AI – including generative AI – at least once a year, because AI now underpins pretty much everything the company does. It can be discussed as part of risk management or market analysis or within a dedicated Board meeting. And in some companies, it makes sense to have a Digitalisation Committee.
swissVR Monitor: Our survey findings show that around a quarter of Board members see ethical concerns as one of the major challenges of generative AI. What do you see as the major ethical challenges?
Cornelia Diethelm: One huge challenge is that AI-generated content sounds plausible but may be completely inaccurate or inappropriate in terms of content. Outputs therefore always need to be checked for quality, accuracy and possible bias. Images, audio and video content created by AI are open to being digitally altered or manipulated: for example, an image can be made to look like a real photo; voices and appearances can be manipulated to sound or look like a specific individual; and deepfakes can be posted without the consent of those involved. In addition, sub-standard working conditions and environmental concerns are not given enough consideration; providers have work to do here. Unfortunately, criminals are also making wide use of AI tools for fraud, personalised cyber-attacks and identity theft.
swissVR Monitor: Around two-thirds of the Swiss Board members we surveyed reported that their Board has not yet discussed its own values in relation to the use of generative AI. How can Boards start to tackle these issues in line with their ethical principles?
Cornelia Diethelm: I know of Boards that discuss this at a dedicated meeting, and I think that is a good way of addressing the importance of this issue. And the upside is that everyone benefits from some basic expertise in generative AI. Another way of tackling the issue would be for management to draft an overview to inform a strategic discussion within the Board. The outcomes of the Board’s discussion could then underpin changes to strategy and risk management or form the basis for an AI governance structure.
swissVR Monitor: What examples of best practice do you recommend when companies are implementing ethical governance of the use of generative AI?
Cornelia Diethelm: The first requirement is that the governance structure is as specific as possible and easy to understand: that is the best way of ensuring that everyone in the company understands it and will implement it. I also recommend identifying an individual or a team to be responsible for questions and suggestions. But the most important things are training, setting a good example within the company, and regular communication to ensure that companies take their staff with them, minimise concerns and build valuable knowledge within the company. Responsible use of AI tools is ultimately an investment in the company’s future – and in its staff!