WEBVTT

NOTE
{
    "characteristics": "public.accessibility.transcribes-spoken-dialog",
    "language": "en-US",
    "language-name": "English (US)"
}

00:00:07.080 --> 00:00:13.800
So we're facing an era where AI is dramatically present in everything we do, in everything we see,

00:00:13.800 --> 00:00:20.159
in everything we read. According to our initial research, our survey, 95% of executives are

00:00:20.160 --> 00:00:25.720
currently concerned about the accuracy of the data they gather on candidates’ skills and

00:00:25.720 --> 00:00:29.880
capabilities. So I would like to hear your perspective: what's your experience,

00:00:29.880 --> 00:00:34.960
given that we tend to trust data a lot, especially in the past? Stefano, that's an

00:00:35.200 --> 00:00:41.160
excellent question, and I absolutely agree with you that it's top of mind for all the business

00:00:41.160 --> 00:00:48.040
leaders I work with: the integrity of the data, especially when it comes to acquisition

00:00:48.040 --> 00:00:54.600
and also the long-term use of employee and skills data as well. Nearly half of executives believe

00:00:54.600 --> 00:01:00.760
that their use of AI, especially around employees and hiring, will be significantly impacted

00:01:00.760 --> 00:01:07.019
if they can't trust the basic underlying data and if there are biases and

00:01:07.019 --> 00:01:12.460
inaccuracies that creep in. And one of the things that we are encouraging our clients to

00:01:12.460 --> 00:01:19.459
think about is rethinking how data is sourced, how data is curated, and what is the

00:01:19.460 --> 00:01:24.299
overall trust and governance process around managing the data, not only coming in, but also

00:01:24.300 --> 00:01:28.940
maintaining the data on their employees and their third-party contractors. One of the elements that

00:01:28.940 --> 00:01:33.660
we underline in the Human Capital Trends chapter, the one dealing with

00:01:33.660 --> 00:01:38.540
staffing, is that we probably need to shift from cybersecurity to disinformation

00:01:38.540 --> 00:01:44.139
security, because it's essential to protect the authenticity of worker data. So I don't

00:01:44.140 --> 00:01:50.979
know if you have any tips or actions that we need to work on to ensure that

00:01:50.980 --> 00:01:56.779
our companies and the companies we work with are protected from what we see. I mean,

00:01:56.779 --> 00:02:03.019
cybersecurity is the foundation for trust for a business process. And I think there is a real

00:02:03.019 --> 00:02:09.720
concern, especially with the advent of AI, on how that trust can be

00:02:09.720 --> 00:02:16.640
undermined by disinformation. So things that we talk to organizations about are

00:02:16.640 --> 00:02:22.120
in two broad buckets. One is making sure there's governance around what you're

00:02:22.120 --> 00:02:27.839
going to use AI for, right? And the second element is making sure that we have a governance

00:02:27.840 --> 00:02:34.840
framework, specifically around trust, specifically around how data is being curated and managed

00:02:34.840 --> 00:02:39.640
and what the sources of data are, to make sure that the authenticity of that process and the

00:02:39.640 --> 00:02:44.840
basic trust in that process is present. And then lastly, I mean, one of the things

00:02:44.840 --> 00:02:51.440
that we say in our space is it's not about human-in-the-loop, it's human-on-the-loop, right? Because

00:02:51.440 --> 00:02:57.440
you still need someone to manage and kind of make sure that as you reap the benefits

00:02:57.440 --> 00:03:00.440
of AI, you’re also managing the risks around it.
