With deep learning techniques headlining today’s news and commercial applications being powered by ever more complex models, organizations may be tempted to solve their use cases with state-of-the-art AI models. However, whether you should use such complex methods, or are likely to benefit more from simpler approaches, depends on a variety of factors. So which method do you pick to REASON with AI?
By Sjors Broersen, Daniël Rennings & Naser Bakhshi
As humans, we sense our environment, reason about it, use our conclusions to take action, and augment our interactions with our environment, whilst continuously learning to improve this process over time. Likewise, the Deloitte AI Loop (DAIL) provides a framework that mirrors this human approach in the space of artificial intelligence. Based on our experience in bringing cognitive solutions to our clients, we have laid out DAIL as a blueprint for all aspects that should be covered in a successful AI solution, as we explained in the introductory blog. Following a deep-dive on SENSE in our previous blog, this blog focuses on the REASON component: the algorithms that reason based on (tacit) knowledge and patterns in data.
Reasoning might be the most elusive human skill for computers or artificial intelligence to learn. The ability to take different pieces of information into account, weigh them against each other, envision different scenarios and finally come to a grounded decision seems like a typically human skill; it may very well be what sets us apart from computers.
Still, with new advancements in deep learning, AI is getting more intelligent and capable of making better decisions based on more types of data. Recent examples include the language-generating capabilities of GPT-3, the image-generating capabilities of DALL-E and the protein structure prediction of DeepMind’s AlphaFold. What these examples have in common is that they take enormous amounts of data and, through meticulous and lengthy training on vast computing resources, learn to do something that makes sense. In doing so, they are able to generate tremendous (potential) value for business and society.
Is AI really intelligent?
Even though the above-mentioned capabilities are mind-blowing and exceed what humans can do on these tasks, not many people would call the underlying algorithms actually intelligent. Gary Marcus argues, for example, that GPT-3 is not grounded in reality and does not actually think. While this may be true, for now it does not need to stop organizations from generating value with these models.
Due to scientific breakthroughs and ever-growing commercial applications over the last decade, deep learning techniques have become top of mind. However, if you want to obtain value from AI for your organization, deep learning is not a silver bullet. Rather, you should follow a structured process before you select your tool to reason.
Before you can start to REASON, there are two preconditions: data and knowledge. As discussed in our previous blog, having a proper data foundation in place is a strict precondition for enabling reasoning. In addition, knowledge in the form of domain expertise is required to properly reason about data and its context. Such knowledge can reside in the minds of business liaisons, but can also be expressed in machine-readable formats, such as ontologies, embeddings or knowledge graphs. If data and knowledge are available, we can effectively start a reasoning process.
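To make the idea of machine-readable knowledge concrete, here is a minimal sketch of how domain expertise could be captured as a toy knowledge graph of (subject, predicate, object) triples. All entities and relations are invented for illustration; real ontologies and knowledge graphs are of course far richer.

```python
# Toy knowledge graph: domain knowledge as (subject, predicate, object) triples.
# Entities and relations are hypothetical examples, not a prescribed schema.
triples = [
    ("espresso_beans", "is_a", "coffee_product"),
    ("coffee_product", "belongs_to", "beverages"),
    ("espresso_beans", "sold_in", "q1"),
    ("espresso_beans", "sold_in", "q2"),
]

def objects(subject: str, predicate: str) -> set:
    """Return all objects linked to `subject` via `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

def in_category(subject: str, target: str) -> bool:
    """Follow is_a / belongs_to edges to check category membership."""
    frontier = {subject}
    while frontier:
        if target in frontier:
            return True
        frontier = {o for node in frontier
                    for o in objects(node, "is_a") | objects(node, "belongs_to")}
    return False

print(in_category("espresso_beans", "beverages"))  # True
```

Even such a simple structure already lets a machine answer questions a human expert would otherwise have to be asked, which is exactly the role knowledge plays alongside data in the reasoning process.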
As illustrated by the toy examples below, reasoning comes in different flavors, based on deductive, inductive and abductive logic.
In deductive reasoning, one draws precise conclusions from a given situation. As you can imagine, AI can easily outperform humans in crisp logic (think of computers crushing humans in a variety of board and digital games). However, in real-world applications there is no generic rulebook with a section on “When does a player win?”. Instead, humans define what constitutes a win for them, for instance via concrete KPIs. In assortment optimization, for example, one could define KPIs based on product margin and customer reach. In addition, one could formulate rules of thumb on these KPIs, leading to so-called heuristics, such as “the margin of a product should be positive in each quarter”.
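A toy sketch of such deductive, rule-based reasoning is given below. The products, margins, reach figures and thresholds are all made up for illustration; the point is only that the rule is explicit and the conclusion follows deterministically from it.

```python
# Toy example of deductive, rule-based reasoning for assortment decisions.
# All product data and thresholds below are invented for illustration.
products = {
    "espresso_beans": {"margin_per_quarter": [0.12, 0.08, 0.10, 0.15], "customer_reach": 0.30},
    "novelty_mug":    {"margin_per_quarter": [0.05, -0.02, 0.01, 0.03], "customer_reach": 0.05},
}

def keep_in_assortment(product: dict) -> bool:
    """Heuristic: keep a product only if its margin is positive in every
    quarter and it reaches at least 10% of customers."""
    return (all(margin > 0 for margin in product["margin_per_quarter"])
            and product["customer_reach"] >= 0.10)

for name, data in products.items():
    print(name, "keep" if keep_in_assortment(data) else "drop")
# espresso_beans -> keep, novelty_mug -> drop
```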
With inductive logic this changes, as it concerns reasoning by induction from a set of examples, which comes very close to the definition of machine learning. Based on a training set that should represent all possible cases as well as possible, a machine can learn to predict the outcome for new examples by induction. It does so by looking at specific features, which are defined by humans based on their domain knowledge, thereby trying to ensure the computer has a proper view of the relevant parts of the real world.
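The sketch below illustrates this inductive step with scikit-learn: instead of writing the rule ourselves, we let a model generalize it from labelled examples described by human-defined features. The features, labels and data points are synthetic and purely illustrative.

```python
# Minimal sketch of inductive reasoning: a model generalizes from labelled
# examples with human-defined features. The data below is synthetic.
from sklearn.linear_model import LogisticRegression

# Features chosen by a domain expert: [average margin, customer reach]
X_train = [[0.12, 0.30], [0.02, 0.05], [0.10, 0.25], [-0.01, 0.08]]
y_train = [1, 0, 1, 0]  # 1 = keep in assortment, 0 = drop

model = LogisticRegression()
model.fit(X_train, y_train)

# Induction: predict the outcome for a previously unseen product.
print(model.predict([[0.09, 0.20]]))  # e.g. [1]
```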
With abduction, we take this one step further: we not only need to infer conclusions about new examples from given examples, but also have to come up with hypotheses ourselves. While this goes beyond classical machine learning, it can be found in deep learning, for example in the deep question answering techniques that enabled IBM Watson to win Jeopardy.
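The sketch below illustrates the underlying idea of abduction as “inference to the best explanation”: candidate hypotheses are ranked by how well they explain an observation. The hypotheses, priors and likelihoods are made up for illustration, and this is in no way how Watson’s question answering works internally.

```python
# Illustrative sketch of abductive reasoning: rank candidate hypotheses by
# how well they explain an observation. All numbers are invented.
observation = "sales dropped last quarter"

hypotheses = {
    "competitor launched a promotion": {"prior": 0.3, "likelihood": 0.7},
    "product was out of stock":        {"prior": 0.2, "likelihood": 0.9},
    "seasonal dip":                    {"prior": 0.5, "likelihood": 0.4},
}

def posterior_score(h: dict) -> float:
    # Unnormalized posterior: P(hypothesis | observation) ∝ P(obs | h) * P(h)
    return h["likelihood"] * h["prior"]

best = max(hypotheses, key=lambda name: posterior_score(hypotheses[name]))
print(f"Most plausible explanation for '{observation}': {best}")
```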
Now that we have familiarized ourselves with some background on reasoning, it is time to look at how to apply this theory in practice. Having connected the different types of reasoning to different AI techniques, we will now look at each of these techniques and when to apply them in organizations, following the framework below. Read further in the full version of the blog.
This blog is part of a series in which we deep-dive into the different components of DAIL and describe each of them in more detail. Next up, we will discuss AUGMENT, which will bring us from the models we employ to the value they deliver to organizations.