

Can we trust our new digital colleagues? Opening up to Artificial Intelligence

The value of AI-driven decision-making is no longer in question, but how to exploit its potential most efficiently often is. Realising that potential depends largely on how much we trust artificial intelligence. According to Deloitte’s Tech Trends survey, trustworthy AI will be one of the main trends of 2023.

Deloitte has been analysing global developments in the technology industry for eleven years. Deloitte Tech Trends presents the current trends that will define the direction of development and shape the market in the coming years. One of the main themes of the 2023 publication is trust in systems built on artificial intelligence.

Computer technology has long moved beyond the rudimentary solutions that support decision-making by evaluating simple logical conditions. Given databases of appropriate quality, artificial intelligence can solve complex tasks that require cognitive abilities, such as image processing for self-driving cars or language processing for chatbots, by discovering correlations that are often incomprehensible even to the human brain.

 

One of the unavoidable drawbacks of AI-driven systems is that their outputs can often be traced back only with great effort, and may still prove wrong.

— said Dr. Barta Gergő, senior manager at Deloitte Hungary’s Risk Advisory.

 

Several hundred AI solutions are available on the market, and there is little difference in performance between the AI systems of the largest providers. The most advanced language applications are all already built on the GPT-3 model, which legitimately raises the question of who can exploit AI’s potential most efficiently. According to Deloitte’s research, satisfactory performance is no longer determined by the algorithm or model itself, but by whether end users understand the decisions made by AI and agree with them.

 

It is human nature not to accept what we do not understand, so even the most powerful tool is a simple hand axe in the hands of a distrustful employee. The biggest driver of successful AI in the future will be whether developers deliver it to users in a transparent and explainable way. If users are wary and do not believe that AI will support them in their daily tasks, they will not make use of it, and its introduction will not pay off.

— added Kiss Dániel, Technology Consulting partner at Deloitte Hungary.

 

Think of AI as a new colleague. Only a colleague who is open-minded, communicates honestly and is willing to give feedback in the interest of improving performance can support a team efficiently. If the new AI colleague is transparent, easy to understand, flexible and reliable, it can become a natural part of the workflow, just like a newly hired employee. According to Deloitte’s analysis, trustworthy AI rests on three distinct pillars.

 

Transparent data gathering

Transparent data gathering ensures that data subjects understand why certain data need to be used and how they are processed. As a result, users can make informed decisions, in an appropriately controlled environment, about whether AI can bring real value to them.

 

Explainability

One of the biggest challenges of AI is its black-box nature: many families of algorithms can only be interpreted with considerable effort, which naturally breeds distrust. These black boxes can be opened up, however, if end users are present at the problem-definition stage and are involved in system development, contributing their own domain expertise to each task, testing the solution and then interpreting its outputs.
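As a simple illustration (a minimal sketch only; the dataset and model below are placeholders, not part of Deloitte’s publication), one widely used way of opening up a black-box model is permutation importance: each input is scrambled in turn, and the drop in accuracy shows how heavily the model relies on it.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Placeholder data and model; any hard-to-interpret classifier could stand here.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure how much the test score drops:
    # the bigger the drop, the more the model relies on that feature.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

    ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
    for name, importance in ranked[:5]:
        print(f"{name}: {importance:.3f}")

Showing end users such a ranking alongside the model’s predictions is one practical way of involving them in interpreting the outputs.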

 

‘Perfect’ does not exist

In our day-to-day work, we have become accustomed to the applications we use working more or less as we expect them to. For instance, a new vendor entered in the CRM system will automatically appear the next time an invoice is prepared, and entering the right password gives you access to corporate resources. Since AI mainly deals with probabilities, the question is how accurately a given AI model can resolve a particular issue, and the answer will very rarely be 100%. With artificial intelligence there are no right or wrong decisions as such, only estimates based on historical data, which then require interpretation by the users. For now, AI supports decisions rather than makes them, and users need to understand this and use AI in that spirit.
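To put this in concrete terms (a hypothetical sketch; the fraud-scoring scenario and the threshold are invented for illustration), what an AI model hands over is typically a probability, while the cut-off and the final decision remain with the user.

    def review_invoice(fraud_probability: float, review_threshold: float = 0.7) -> str:
        # The model only estimates a likelihood; the threshold and the final call are human choices.
        if fraud_probability >= review_threshold:
            return "flag for manual review"
        return "process automatically"

    print(review_invoice(0.83))  # a strong signal, not a certainty -> flag for manual review
    print(review_invoice(0.12))  # -> process automatically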

AI-enabled applications and technologies have emerging capabilities that most users think are the exclusive domain of humans. As organisations implement these capabilities, it is of paramount importance that they carefully consider user interactions and assess, already at the design stage, the impact that intelligent applications will have on trust. For many companies, AI can lead to radically positive change and thus to revenue growth or cost reductions, but a lack of trust can be a barrier to ambitious projects.

— said Dr. Barta Gergő.

 
