Welcome to our new Chief AI Officer blog series, which will offer a window into the mind of our Deloitte UK Chief AI Officer, Sulabh Soral. Join Sulabh as he delves into the world of AI, beginning with an exploration of how generative AI is driving the evolution of new device form factors.
It is hard to ignore the buzz surrounding Generative AI (GenAI) and its practical impact on our daily lives. Since the introduction of ChatGPT in 2022, GenAI has revolutionised various aspects of technology and communication. However, beyond the excitement around GenAI’s potential to enhance existing functionality, it is crucial to consider how GenAI will influence the devices we use and how it will disrupt our interactions with them.
Smartphones, laptops and PCs must adapt to fully leverage advancements such as GenAI and achieve product-market fit for the technology. Are we maximising the potential of our current devices? How will their form factors need to evolve with the advent of GenAI? By addressing these questions, we can better understand the future landscape of device technology and prepare for the transformative changes that GenAI will bring.
Since the invention of the Apple iPhone, device form factors have largely remained unchanged.1 However, the arrival of GenAI is poised to revolutionise how we interact with our devices. While GenAI’s first act came from the ‘technology-out’ direction, act two will come from the ‘customer-back’, focusing on multi-modal models that solve problems end to end.2
At the heart of this transformation is declarative computing, a primary use case for GenAI. Traditional computing systems require manual inputs, such as code, data and compute resources, to produce outputs with a given cost, runtime and latency. Declarative computing, on the other hand, takes the desired output as its input, signalling a shift in computing paradigms.3 This means users can simply describe what they want their computing infrastructure to do, and the compiler will determine how to achieve it and identify the necessary hardware requirements.2
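The imperative-versus-declarative contrast can be sketched in a few lines of Python. This is purely illustrative: the `plan` function and its goal names are hypothetical stand-ins for a real compiler or planner, not an actual API. The imperative caller spells out how the result is computed; the declarative caller states only the outcome it wants and leaves the method to the planner.

```python
# Hypothetical sketch of the declarative idea: the caller declares a goal,
# and a planner chooses the procedure that satisfies it.

def plan(goal):
    """Return a strategy that satisfies the declared goal (illustrative only)."""
    strategies = {
        "sorted_ascending": lambda data: sorted(data),
        "sorted_descending": lambda data: sorted(data, reverse=True),
        "deduplicated": lambda data: list(dict.fromkeys(data)),
    }
    return strategies[goal]

# Imperative style: the caller specifies *how* (dedupe, then sort).
data = [3, 1, 2, 1]
result_imperative = sorted(set(data))

# Declarative style: the caller specifies *what*; plan() decides the how.
result_declarative = plan("deduplicated")(data)
```

In the same spirit, a GenAI-era compiler would accept a plain-language description of the desired outcome and work backwards to the code, data and hardware needed to deliver it.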
Declarative computing's emphasis on specifying the desired outcome, rather than the procedure to achieve it, aligns with multi-modal GenAI, which leverages diverse data inputs to generate comprehensive solutions. Multi-modal GenAI represents a new standard in artificial intelligence, in which various modalities (image, text, speech, numerical data) are combined with multiple intelligence-processing algorithms to achieve superior performance. These multi-modal systems will redefine how we experience and use our devices, prompting us to reconsider traditional interfaces such as the keyboard and reducing our dependence on them.4 By integrating different data types and processing methods, multi-modal AI will enable more intuitive and seamless user experiences.
The combination of GenAI and multi-modal algorithms will be a catalyst for the reimagining and development of new device form factors, helping us make more effective use of AI capabilities and services. Future devices will adapt to individual users and offer personalised services tailored to their specific needs and preferences. AI-powered ‘agents’, or personalised assistants, will play a crucial role in this evolution.3 Bill Gates predicted as much, suggesting that rather than navigating between different apps, users will instead communicate with their devices in everyday language, served by AI agents that understand and respond to individual needs based on a holistic understanding of their lives.3 These digital companions will interact with users naturally and seamlessly, acting as skilled virtual co-workers.
Imagine a virtual assistant capable of planning and organising a complex daily schedule, managing logistics and tasks across multiple platforms. Such reimagined devices will not be confined to specific applications or platforms. Tim Russell, UK Chief Technologist at CDW, suggests we are moving towards an “application-independent” approach in which an overarching AI capability can navigate all of your applications simultaneously.5 This capability will enhance productivity, offering improvements and personalised advice across various platforms.4
The integration of GenAI and multi-modal algorithms will not only transform our interactions with technology but will also drive the creation of new device form factors. In the next decade, as we step into offices around the world, we may not interact with our laptops and screens in the same way as we do today. We may not even be using traditional laptops at all, as they may have given way to more innovative devices that harness GenAI to deliver a more efficient, intuitive and omnipresent computing experience.
These advancements are driving transformation in the tech industry, prompting a race among technology companies and venture capitalists to find the ideal form factor to deploy AI capabilities effectively. For example, Samsung is actively researching new form factors to better accommodate GenAI applications. According to Roh Tae-moon, President of Samsung's Mobile Experience unit, the next generation of "AI phones" will look “radically different” from today's smartphones.6 A significant portion of Samsung's research and development is now focused on these cutting-edge designs, signalling a shift from the conventional form that has dominated since the invention of the iPhone.5 Midjourney, a company best known for its AI image-generation tool, has also hinted at foraying into hardware and setting up a dedicated hardware team.7
Additionally, OpenAI’s Sam Altman and Jony Ive, the renowned ex-Apple designer, are developing a prototype for a new AI-powered device.8 Backed by up to $1 billion in funding from a major technology investment bank and venture capital funds, the device aims to transcend traditional smartphone and PC designs and create a disruptive impact on the market.9 Investment in inventive new form factors is also evident at companies like Humane, which recently launched an AI-driven lapel pin controlled by voice commands, touchpad taps or laser display projections, exemplifying the capability of multi-modal AI.10 Similarly, the startup Rabbit has introduced the Rabbit r1, a portable AI-powered device designed to use app-based operating systems and AI agents to interact with users via natural language and perform actions on their behalf.11 Funded by Khosla Ventures, Rabbit’s founder Jesse Lyu has explained that the device uses "large action models" to execute real-world tasks, showcasing another innovative form factor in this burgeoning market landscape.12
Despite the hype surrounding new AI devices, these innovations face scepticism. Critics have challenged the functionality and performance of these products, questioning whether they offer tangible utility. Additionally, there are risks posed by increasingly powerful agentic AIs.13 These risks include potential dependency on a network of AI agents, the creation of unavoidable feedback loops and the possibility of AI agents pursuing more menacing goals. These challenges underscore the need to mitigate and safeguard against risk during the development of AI devices, and remind us that we do not yet know exactly how this technology will behave.
Technology companies and venture capital firms continue to explore and invest in new form factors, making it clear that the future of technology will be shaped by devices that offer more natural, flexible and productive interactions, driven by advancements in GenAI. As the competition to build the ‘smartphone of the artificial intelligence era’ intensifies, the industry is poised for transformative changes that will redefine how we interact with devices.14 These new form factors will not only enhance the user experience but also integrate GenAI more deeply into our everyday routines.