Stela Solar is an expert on artificial intelligence (AI) business model transformation. As Director of Australia’s National Artificial Intelligence Centre hosted by CSIRO, she helps put AI on the agenda for corporations and the broader AI ecosystem.
It’s hard to find a topic as polarising and heated as trust in AI. Even the word ‘artificial’ implies a separation from ethical human values, and clickbait media often talks of a bleak future for human workers. Unsurprisingly, the subject captures our imagination while stoking our fears.
So, it is such a gift to spend time with Stela, who not only sheds light on this fascinating topic but does so with an openness that in many ways could be the key to AI’s ultimate success.
But first, a definition of AI. Or better yet, three definitions, to help separate fact from fiction.
The first is ‘Narrow AI’, which is AI technology able to learn to do something specific, like how to forecast supply and demand, translate language or detect objects.
The second is ‘General AI’. This is AI on par with human intelligence, with the ability to transfer knowledge from one domain to another using inference and extrapolation.
The third is ‘Super AI’, where AI is more intelligent than us, with a higher capacity to assimilate knowledge, interconnect ideas and make deductions that people can’t. This is what we see in movies.
Despite the level of public debate, the reality is we are only at Narrow AI. General AI and Super AI are not only some time away; according to Stela, they may never happen. But this envisioning is affecting trust today.
"A lot of the objections or concerns that the community might have around AI is when they don’t know how an AI system is operating, what its capability is, or its intent. Trust becomes a real factor as the AI system can seem like a ‘black box’."
The epicentre of AI mistrust is the potential for mass job redundancy. To counter these fears, Stela turns not to a scientist, futurist or technologist but to a humorist: none other than comedian Jerry Seinfeld, who quipped, ‘The thing that's important in innovation is knowing what you're really sick of doing’. The comedian’s insight taps into the reality that AI is not about replacing people; it’s about replacing the dull, dirty, dangerous or difficult functions, known as the 4 Ds. Additionally, in a world of increasing complexity and scale, AI technology presents a way to navigate that landscape.
“I don't think many of us are leaving work on time and feeling like we've completed everything. So how can AI help us do some of those tasks or even address the skills and employee shortage?”
She explains that part of the confusion lies in how we talk about AI. We often read of ‘humans and AI working together’, as if AI were some kind of ‘being’. But it is actually a set of tools that people create and use. As such, businesses choose the tool, and how to design, implement and use it.
"Ed Santow, Professor at the University of Technology Sydney and Co-Director of the Human Technology Institute, describes AI as ‘not helping us do new things, it's helping us do the same things but differently.’"
The ‘differently’ becomes more different every day. The context, the data and interconnections are always changing, so the outcomes are never fixed. As such, governance models need to be agile and fluid, always sensing and assessing the outcome so that they can re-evaluate, reiterate and redesign.
Stela makes a critical point that accountability lies with those designing and implementing the AI systems, rather than the tools themselves. Organisations in conjunction with communities are the ultimate partners to guide how best to use technology to benefit customers, stakeholders and the communities they serve. When done well, AI can deliver great results in areas such as customer service, improved workplace productivity and better health outcomes.
Stela references an example in Western Australia, where a data-centric remote health monitoring service is leveraging AI.
“A community advisory group was established to examine the insights the AI model would infer and how the insights would be actioned. The AI system had an agile governance approach where changes in its function throughout the lifecycle would deliberately trigger a community consultation.”
One of those triggers was data drift. Models are built on the data available at training time, but as new data arrives and the insights change, the community advisory group would reconvene. This level of community consultation, empowerment and understanding creates a trusted environment for AI and removes barriers to adoption for business.
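To make the idea concrete, here is a minimal sketch of one way a data-drift trigger could work: compare a monitored feature in incoming data against the training baseline and flag a review when the shift is too large. The feature, threshold and statistics below are illustrative assumptions, not details of the WA system.

```python
def mean(values):
    return sum(values) / len(values)

def std(values):
    # Population standard deviation of the baseline sample.
    m = mean(values)
    return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5

def drift_detected(baseline, new_batch, z_threshold=2.0):
    """Flag drift when the new batch's mean sits more than
    z_threshold baseline standard deviations from the baseline mean."""
    shift = abs(mean(new_batch) - mean(baseline))
    return shift > z_threshold * std(baseline)

# Illustrative example: resting heart rates seen at training time
# versus two later batches of readings.
baseline = [62, 65, 63, 61, 64, 66, 62, 63]

print(drift_detected(baseline, [63, 64, 62, 65]))  # False: no review needed
print(drift_detected(baseline, [78, 80, 76, 79]))  # True: trigger a review
```

In practice a deployed system would use a more robust distributional test, but the governance pattern is the same: the check itself is simple and automatic, and what it triggers is a human conversation.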
WA’s success is a lesson for all. Robust results, regulatory compliance and a collaborative process of re-evaluation. Fear is replaced with knowledge, trepidation with trust. Innovation is encouraged not stifled. All for the purpose of providing positive patient outcomes.
Positive adoption by community is further strengthened with more perspectives. “We are all on the AI journey, there is no blueprint. But more eyes, diverse perspectives and multidisciplinary skillsets help us navigate the AI opportunity more effectively. Because it's never just the AI technology that we're talking about, it's part of a much bigger whole.”
Australia punches above its weight in leading-edge technology. Our list of achievements is impressive and, for many, quite surprising – Wi-Fi, the bionic eye, two decades of quantum AI development, world-leading remote operations and computer vision capability, a close second place in the DARPA world robotics challenge, and Queensland recently being named an Advanced Manufacturing and Robotics Hub by the World Economic Forum. Even in the creative fields, Australia won first place in the AI Eurovision Song Contest. Stela is both proud and frustrated.
"Australian business and community are not aware of the nation’s great capability and strengths. Or even that we were in the first wave of countries to launch an AI ethics framework to help organisations deliver positive outcomes with AI technology to create responsible and inclusive opportunities for every person, for every business and for the country."
This knowledge gap means a gap in business impact, a gap in perception versus reality. And importantly, a gap in how families see AI technology.
Why are families so important? This is where decisions about future careers are made – where young STEM students choose whether to go on to create tomorrow’s solutions. Stela believes that a lack of awareness of Australia’s success, combined with media fearmongering, is causing parents to think twice or even dissuade their children from pursuing AI careers.
Thankfully, Australia has people like Stela, who is focused on closing the knowledge gap: tackling fear with openness and transparency, and sharing the remarkable benefits of co-design and co-creation to enable positive adoption of AI across business and communities. She also clearly articulates where trust in AI lies, and how best to cultivate it through those developing and implementing the AI tools rather than deflecting it onto the technology itself.
“In Australia, our core value is that everyone deserves ‘a fair go’. Responsible and inclusive AI is the digital version of ‘a fair go’. It’s something we can all rally behind so that we shape AI technology based on visions of a better future, rather than replicating or scaling biases of the past.”
The field may be evolving at speed. However, when agile governance approaches are implemented together with clear intention, transparency and co-creation, red flags of concern soon become green shoots of trust.