
The edge of cloud

A discussion with Mahadev Satyanarayanan, Carnegie Group University Professor of Computer Science at Carnegie Mellon University

In this episode, Professor Mahadev Satyanarayanan talks to Tanya Ott about edge computing, AI, and how together they can solve myriad issues, from reducing air travel delays to helping the elderly through wearable tech.

“An ideal system, from a user’s and even a software developer’s point of view, would be one in which you didn’t see these tiers. Your work just happens where it has to happen. Your data gets transported where it has to get transported. Of course, it’s never that easy. There are real boundaries, there are real networks in between.” — Mahadev Satyanarayanan, Carnegie Group University Professor of Computer Science, Carnegie Mellon University

Tanya: Charles Babbage had the idea for a computing machine back in the mid-19th century. He drew it out in great detail. But it took 100 years before the technology caught up to deliver the vision. Today my guest is going to give us a look into a possible future. 

Tanya: This is the Press Room and I’m Tanya Ott. If you’ve got a fitness tracker on your wrist, there’s a good chance it is built on technology that was developed by today’s guest. Mahadev Satyanarayanan—he goes by Satya—is an experimental computer scientist and professor at Carnegie Mellon University. He is truly a pioneer in the field.

Mahadev Satyanarayanan (Satya): I’ve been working in the space of distributed systems, mobile computing, and the Internet of Things for almost four decades now. My work at Carnegie Mellon, in collaboration with IBM in the early 1980s, produced one of the earliest systems to combine the cloud with the edge, long before the word cloud had even been invented in the context of computing. In fact, that work led to the Andrew File System, which was the inspiration for Dropbox. Then about 10 years ago, in the 2008–2009 time frame, I did the work that led to what is now edge computing. That has been my focus for almost a decade now.

Tanya: I’m sure that most of our audience is very familiar with the concept of cloud. You mentioned Dropbox and there are other applications like that that they’ve used. But edge computing is one that some people don’t quite get yet. So when you talk about edge computing, what does that mean? 

Satya: We assume that in the hands of a mobile user is a smartphone or a wearable device. A drone is, by definition, a mobile device. An IoT device has a small amount of compute on board, but not a lot. In all these cases, you leverage the much more substantial compute resources of the cloud to do any kind of compute-intensive work that needs to be done. So this combination of the device in the hands of the user, or the IoT device at the edge, and the cloud is a very basic and well-understood paradigm. However, if my IoT device is a high-bandwidth sensor like a video camera, a 4K or an 8K video camera, and I have a hundred of these deployed in some area, the cumulative demand for bandwidth from these devices would overwhelm most ingress networks into the cloud.

Similarly, if you close the loop, with any kind of cyber-physical or cyber-human system that relies on this real-time data in very time-critical settings, the total round-trip time to the cloud and back would make it very difficult to meet real-time deadlines. In all these settings, it becomes important to have cloud-like computing located much closer to where the data is being generated. That is what we refer to as edge computing, and the compute resources that are cloud-like but located at the edge are referred to as a cloudlet, because it’s a small cloud close by.
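
To make that bandwidth-and-latency argument concrete, here is a back-of-the-envelope sketch in Python. The bit rates, link capacity, and latency figures are illustrative assumptions, not numbers from the conversation.

# Back-of-envelope check: why 100 high-bandwidth cameras strain a cloud uplink.
# All figures below are illustrative assumptions.

NUM_CAMERAS = 100
MBPS_PER_4K_CAMERA = 25          # assumed compressed 4K stream, ~25 Mbps
INGRESS_LINK_GBPS = 1            # assumed 1 Gbps ingress link into the cloud

demand_gbps = NUM_CAMERAS * MBPS_PER_4K_CAMERA / 1000
print(f"Cumulative demand: {demand_gbps:.1f} Gbps vs a {INGRESS_LINK_GBPS} Gbps ingress link")
# -> 2.5 Gbps of video against a 1 Gbps link: the ingress is overwhelmed.

# Latency side: a control loop with an assumed 50 ms end-to-end deadline.
DEADLINE_MS = 50
CLOUD_RTT_MS = 80                # assumed round trip to a distant cloud region
CLOUDLET_RTT_MS = 5              # assumed round trip to a nearby cloudlet
for name, rtt in [("cloud", CLOUD_RTT_MS), ("cloudlet", CLOUDLET_RTT_MS)]:
    verdict = "meets" if rtt < DEADLINE_MS else "misses"
    print(f"{name}: {rtt} ms round trip {verdict} the {DEADLINE_MS} ms deadline")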

Tanya: When you are talking about the edge, you’ve got several subcategories. You mentioned mobile. There’s also the intelligent edge and the cloud tier, as you just mentioned. Maybe you could explain edge architecture for us. What does it look like, and how have you seen those patterns change over time?

Satya: What we call a cloudlet and what we call a device can take many different form factors. So, for purposes of thinking about the different levels of computing, I evolved a tiered architecture model in which the cloud is tier one; a mobile device or an IoT device, which typically has limitations on its compute capabilities, is tier three; and close to tier three are the cloudlets I just mentioned, which are tier two. So we have the cloud at tier one, cloudlets at tier two, and devices at tier three. There also happens to be a tier four: battery-less devices, things like RFID tags, which have no onboard compute. They harvest energy and then reflect back information based on that harvested energy. So this is a four-tier architecture that turns out to be a very useful way to simplify very complex architectures, to think about them, and to compare product offerings from different vendors. In many other ways, it’s a useful intellectual tool to have.
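
For readers who like to see the model written down, here is a minimal Python rendering of the four tiers; the attribute names and example devices are my own shorthand for what Satya describes, not a formal taxonomy.

from dataclasses import dataclass

@dataclass
class Tier:
    number: int
    name: str
    examples: str
    onboard_compute: str

# The four-tier model described above, with illustrative examples.
TIERS = [
    Tier(1, "cloud", "public cloud data centers", "effectively unlimited"),
    Tier(2, "cloudlet", "small cloud-like cluster near the user", "substantial"),
    Tier(3, "device", "smartphone, drone, IoT sensor", "limited"),
    Tier(4, "battery-less", "RFID tags, energy-harvesting sensors", "none"),
]

for t in TIERS:
    print(f"Tier {t.number} ({t.name}): {t.examples}; compute: {t.onboard_compute}")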

Tanya: So the data flows then across these tiers, across these edges, back and forth. Does it flow well? 

Satya: An ideal system, from a user’s and even a software developer’s point of view, would be one in which you didn’t see these tiers. Your work just happens where it has to happen. Your data gets transported where it has to get transported. Of course, it’s never that easy. There are real boundaries, there are real networks in between. So software implementations that make things transparent, that dynamically pick the right spot at which to perform an operation at execution time, are the kinds of tools that are emerging to blur the boundaries between tiers and to simplify the conceptual use of this multitier model.
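
A toy version of that execution-time placement decision might look like the sketch below; the cost model, weighing round-trip time against compute capacity, is an assumption for illustration rather than any actual scheduler from Satya’s systems.

# Hypothetical run-time placement: pick the tier that can finish an operation
# soonest, given each tier's round-trip time and processing speed.

def pick_tier(work_gflop: float, tiers: dict) -> str:
    def finish_time_ms(tier_name):
        rtt_ms, gflops = tiers[tier_name]
        return rtt_ms + 1000 * work_gflop / gflops
    return min(tiers, key=finish_time_ms)

# Illustrative numbers: (round-trip ms, sustained GFLOP/s) per tier.
tiers = {"device": (0, 10), "cloudlet": (5, 1000), "cloud": (80, 10000)}

print(pick_tier(0.05, tiers))   # tiny job: cheapest to run on the device itself
print(pick_tier(50.0, tiers))   # heavy job: the nearby cloudlet wins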

Tanya: A little earlier, you mentioned critical uses where things have to move quickly or seamlessly. What would be a concrete example of that? 

Satya: Let me give you two examples, just to see how the same idea manifests itself in different forms. One very simple idea is a kind of augmented reality in which I have some kind of wearable device, maybe a heads-up display, Google Glass, Microsoft HoloLens, Magic Leap, or something like that. And on it are sensors: a video camera, microphone, accelerometer, gyroscope, et cetera. The signals from them are transmitted to a cloudlet nearby, which does compute-intensive processing, for example, scene analysis and object detection, and then returns, say, an audio signal that whispers in my ear important and helpful guidance for some task that I’m doing. A simple example: imagine I’m growing old and starting to forget the names of people. Imagine a system like this whispering in my ear the name of the person who’s in front of me, and maybe even a few cues as to how I know that person. We call that kind of real-time cognitive assistance wearable cognitive assistance, because it’s happening on a wearable device. But what you’re getting is AI at the edge, through the kind of seamless spanning of multiple tiers in an architecture like this. So that’s one example.
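
The shape of such an assistant’s control loop can be sketched in a few lines of Python. Every name here (the camera, cloudlet, and speaker stubs, and the recognize_face call) is hypothetical scaffolding; the point is simply that sensing stays on the wearable while the heavy inference is offloaded to the cloudlet.

from dataclasses import dataclass

# Hypothetical stand-ins for the wearable's camera and speaker and for the
# cloudlet's recognition service; a real system would use device and RPC APIs.

@dataclass
class Recognition:
    name: str
    context: str

class StubCamera:
    def capture(self):
        return b"frame-bytes"  # placeholder for a captured video frame

class StubCloudlet:
    def recognize_face(self, frame):
        # The compute-intensive inference runs here, on the cloudlet.
        return Recognition(name="Alice", context="last year's conference")

class StubSpeaker:
    def whisper(self, text):
        print(f"(whisper) {text}")

def assistance_step(camera, cloudlet, speaker):
    """One loop iteration: sense on the device, offload inference, whisper back."""
    frame = camera.capture()
    result = cloudlet.recognize_face(frame)
    if result is not None:
        speaker.whisper(f"This is {result.name}; you met at {result.context}")

assistance_step(StubCamera(), StubCloudlet(), StubSpeaker())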

As another example, consider a drone. Imagine an autonomous drone, a drone that’s flying by itself under AI guidance on board, continuously analyzing what its camera is seeing. And imagine that because the drone is so small and so light, it cannot carry on board all of the compute needed to do real-time video analytics. That’s very, very compute-intensive. So the way you handle this is to use wireless communication, 4G LTE today, possibly 5G soon, to communicate with a ground-based cloudlet. That cloudlet doesn’t have to fly. It’s on the ground, so it can be heavier and larger. It can have a large GPU (graphics processing unit). It’s plugged into the wall, so it has no battery limitations. And this system, the drone plus the cloudlet, is able to do analytics that would simply be impossible on the drone alone. Now, here’s where the beauty of low latency comes in. If the real-time video analytics reveals some important object or scene in the current view, and you get the results back fast enough, you can, for example, ask the drone to take a closer look by dropping down in altitude and getting a zoomed-in view of the same scene. Imagine doing all of this purely under software control, without human intervention. That is the kind of sensing and actuation in a cyber-physical system within very tight time bounds that is possible through an approach such as the one I described.
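
A compressed sketch of that closed loop, assuming a 100 ms actuation budget and hypothetical hooks for the cloudlet RPC and the flight controller: act on the analytics only if the result comes back inside the latency budget.

import time

LATENCY_BUDGET_S = 0.1  # assumed 100 ms actuation deadline for the control loop

def drone_control_step(frame, analyze_on_cloudlet, send_command):
    """One step of the loop: offload analytics, actuate only if the result is fresh.

    analyze_on_cloudlet and send_command are hypothetical hooks standing in for
    the cloudlet RPC and the drone's flight controller.
    """
    sent_at = time.monotonic()
    detections = analyze_on_cloudlet(frame)  # heavy GPU work happens off-board
    elapsed = time.monotonic() - sent_at
    if elapsed <= LATENCY_BUDGET_S and "object_of_interest" in detections:
        send_command("descend_and_zoom")     # close the loop purely in software
    # If the result came back too late, skip actuation and wait for the next frame.

# Minimal demo with stub hooks:
drone_control_step(
    b"frame",
    analyze_on_cloudlet=lambda frame: ["object_of_interest"],
    send_command=lambda cmd: print(f"drone <- {cmd}"),
)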

Tanya: Those are such cool examples, and I have to say that I could totally benefit from the eyeglasses, or whatever it is, that would whisper in my ear about who I’m talking to. It would cut down on the number of times I have to say, “Hey, you!” instead of someone’s name.

Satya: Indeed. This is not so far-fetched. Let me give you a real-world example. Every one of your podcast listeners uses something that works this way; they just don’t think about it that way. Not so long ago, if you wanted to find your way through a new city, or a new part of the country you had just moved to or were visiting, you needed paper maps. And they were somewhat annoying to use. You first had to figure out where you were, where your destination was, which way to go, and so on, and it was easy to get lost. Today, almost nobody uses paper maps. You have a GPS navigation app on your smartphone or in your car or both. It tells you what to do next. If you make a mistake, if you don’t take the exit it asks you to, it detects that quickly, recovers, and guides you to your destination. So what was a difficult task has been dramatically simplified. And the way it works is that your smartphone uses signals from the GPS satellites to determine your location. So a wearable device analyzing video and audio data in real time and guiding you is like GPS navigation on steroids. It’s guiding you through life in a task-specific manner.

Tanya: That gives us a sense of where edge computing stands today. What’s the uptake been like in terms of companies deploying it?

Satya: There are two halves to this story. One half is the deployment of infrastructure: the actual rolling out, by telcos and other companies, of edge infrastructure, meaning these cloudlets, possibly along with 5G wireless communication, because the two are quite well suited to each other. The lower latency and higher bandwidth of 5G is especially valuable, and it is amplified by the use of edge computing. So that’s half the story. The other half is translating this into new applications that deliver end-user value, like the ones I just described. Imagine being able to help your elderly parents stay at home six months longer, rather than having to move to a nursing home, because a cognitive assistant of this kind reminds them of daily tasks they need to do, helps when they’re forgetful, and in many ways helps them function in their current environment just a bit longer. That’s the kind of real-world, end-user value that is incredibly valuable. We call this class of applications edge-native applications. These are applications you could never create using cloud computing alone; they are critically dependent on the edge. The creation of edge-native applications is the other half of the equation. These two halves go hand in hand. You can’t have edge-native applications without edge infrastructure, and by itself the edge infrastructure isn’t very interesting to most people. Who cares where my compute is done? What I care about is what that compute can do for me. So the path forward is an interweaving of the rollout of the infrastructure and the creation of these new life-changing, game-changing edge-native applications.

Tanya: What advice do you have for organizations that are looking to bring AI to the edge, for instance, to take advantage of these technologies? What should they be thinking about?

Satya: There are a number of things, of course. One is to ask, from an enterprise-specific viewpoint: Where are the choke points in our daily workflows? What kind of functionality doesn’t exist today but would dramatically change matters? Let me give you one hypothetical example. This is not yet built, but it’s an example of how to think about this. In air travel, we’ve all been in this situation: you board a plane, it’s ready to take off, it’s about to push off from the gate, and then there’s some mechanical problem involving the jetway or something. And you sit there for half an hour because the right mechanic has to come from the far corner of the airfield because he or she is currently working on something else. That’s half an hour. It’s estimated that a typical airliner filled with people runs up $600 per minute when delayed like this.1 And that doesn’t include people’s time, the consequences of the delay, missed connections, all this other stuff. Imagine that, through the use of the tools I mentioned, wearable cognitive assistants, one of the mechanics who’s right there beside the aircraft can be given step-by-step guidance to troubleshoot and fix the problem in five minutes, rather than waiting half an hour for the other person to arrive. That’s an example of a dramatic change for that business, in this case the airline, which is huge. Similarly, you can think of any industrial setting where there’s troubleshooting. Expertise in any domain is by definition rare; that’s why the person is called an expert. Software and wearable hardware of this kind, combined with edge computing, can amplify the value of experts and thereby improve the productivity of the whole organization. So these are ways in which I believe companies can think about AI at the edge.
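
The arithmetic behind that hypothetical is worth spelling out: at the cited $600 per minute, cutting a 30-minute wait to a five-minute guided fix saves about $15,000 per incident. The wait times are the hypothetical ones from the example above.

# Worked version of the airline example; the $600/minute figure is the one
# cited in the conversation, the wait times are the hypothetical ones above.
COST_PER_MINUTE_USD = 600

baseline_minutes = 30   # wait for the distant specialist mechanic to arrive
assisted_minutes = 5    # nearby mechanic guided by a wearable cognitive assistant

savings = COST_PER_MINUTE_USD * (baseline_minutes - assisted_minutes)
print(f"Savings per incident: ${savings:,}")   # Savings per incident: $15,000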

Tanya: You’ve just published a new piece on AI at the edge. Give us a preview for those who haven’t had a chance to see it yet. 

Satya: It’s a paper I published just a couple of weeks ago, and it asks the following question. Today, specialized chips are being built for purposes such as machine learning, particularly the inferencing side of machine learning. For example, the Apple iPhone has a neural engine, and the Qualcomm Snapdragon in the latest Android devices has equivalent specialized chips. So on the one hand, one way to speed up lightweight devices such as smartphones is not just to have general-purpose processors in them, but to use very specialized chips.

A different way to accomplish the same task is edge computing. You offload, over wireless, to the edge, and then use resources that are much heavier, much more energy-hungry, and generate much more heat than you would ever want in a smartphone you put in your pocket or a wearable device you’re willing to put against your skin. These are two completely different approaches. One of the interesting questions facing technologists today is: which is the right path forward? Should we be betting on edge computing, or should we be betting on the emergence of specialized hardware? This work suggests that the way to think about this is not either/or, but both. They have complementary strengths. Edge computing lets you accomplish, in a very agile and rapid manner, changes that would take much longer and be much more expensive to achieve in specialized hardware. However, once a system has matured and the software being offloaded is stable, that function can be captured in hardware so that the next generation of mobile devices no longer has to offload it; it can do it on board, and it is free to use wireless and edge computing for new tasks that have not yet been addressed. This kind of dialectic between hardware specialization and edge computing is going to be a very exciting chapter in the evolution of computing.
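
One way to make that dialectic concrete is a placement rule: offload a function while it is still changing rapidly, and migrate it into specialized on-device silicon once it is stable. The threshold and function below are assumptions for illustration, not anything from the paper.

# Hypothetical placement rule capturing the dialectic described above:
# immature, fast-changing functions run on the cloudlet (agile to update);
# mature, stable functions migrate into specialized on-device hardware.

def place_function(updates_per_year: int, supported_by_npu: bool) -> str:
    STABILITY_THRESHOLD = 2   # assumed: <=2 updates/year counts as "stable"
    if supported_by_npu and updates_per_year <= STABILITY_THRESHOLD:
        return "on-device specialized hardware"
    return "offload to cloudlet"

print(place_function(updates_per_year=12, supported_by_npu=False))  # cloudlet
print(place_function(updates_per_year=1, supported_by_npu=True))    # on-device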

Tanya: You just said “exciting chapter.” My last question for you: of all the landscape you’re looking out over, things already deployed, things in development, or things you’re just imagining could come down the line, what are you most excited about?

Satya: Very interesting question. That’s a question that would take a long time to answer carefully. But there are many, many things. Certainly, the entire field of quantum computing is interesting. I can honestly say I don’t fully understand it, because quantum physics is very counterintuitive. But the notion that very difficult problems such as prime factorization, which are hard for classical computers, can be solved in much shorter periods of time with quantum computers, that’s game-changing. Actually making it work, actually extracting value from it, looks like a very difficult challenge. So that’s one whole area of work. A second area, and people have thought about this and speculated, though it’s not clear exactly when and where, is biological systems: DNA, RNA, all of what we have seen with COVID, with immune systems and immune responses. You see the battle being played out in the biological world just as it is in the cyber world. There are important parallels between the two. So the informing of biological research by computing, and in the other direction, the use of biological metaphors in computer science, are equally exciting areas. Those I see as the really, really exciting areas.

Tanya: If you had been able to see me just a moment ago, you’d have seen me smile when you said you don’t fully understand quantum either, because I’ve done so many interviews on quantum, and there have been some really good examples that start to capture it and help me understand what it is. So the fact that you said that makes me feel a lot better, sort of.

Satya: Well, we are in very good company, because Albert Einstein never believed in quantum mechanics.

Tanya: Really? 

Satya: Absolutely. The phrase that he used was, “God does not play dice.”

Tanya: OK. 

Satya: So if Albert Einstein didn’t quite get it, I don’t feel so bad. 

Tanya: This is funny. I’m looking over my shoulder in my space, which you can’t see since I turned the video off, but I have Einstein on the wall behind me in the office.

Satya: So do I. I have a similar picture in my home office with the caption, “Imagination is more important than knowledge.” That’s a famous quote of his. 

Tanya: That speaks really well to what we’ve been talking about today, which is imagining how you can apply these concepts in the future, because it’s not just about the technology, it’s about the vision. 

Satya: Absolutely. Here’s a number that is astounding; I didn’t realize this until I looked into it. It is estimated that if entry into a nursing home could be delayed by one month, on average, for each senior citizen who eventually enters one, the savings in the United States alone would be close to a billion dollars per year. That’s billion with a B. That’s an astounding number. The world as a whole is graying and people are living longer. So consider solutions like the one I described. When you can’t see very well, you get glasses. When you can’t hear very well, you get a hearing aid. But when you start being forgetful, when you start forgetting how to do basic things, today we have nothing to help you. So cognitive assistance of the kind I described is a very powerful tool for the challenges that the world as a whole is going to have to face.

Tanya: Mahadev Satyanarayanan, or simply Satya, is an experimental computer scientist and professor at Carnegie Mellon University. Many of the topics we covered today are explored in much more depth on our website. Go to Deloitte.com/insights.

We’re on Twitter at @DeloitteInsight, and I’m on Twitter at @tanyaott1.

And don’t forget to follow the podcast on your favorite podcatcher so new episodes are delivered directly to your device. That way you won’t miss anything. 

I’m Tanya Ott. Have a wonderful day! 

Deloitte Insights is an independent publication and has not been authorized, sponsored, or otherwise approved by Apple Inc.  iPhone is a trademark of Apple Inc., registered in the United States and other countries.

This podcast is produced by Deloitte. The views and opinions expressed by podcast speakers and guests are solely their own and do not reflect the opinions of Deloitte. This podcast provides general information only and is not intended to constitute advice or services of any kind. For additional information about Deloitte, go to Deloitte.com/about.


1. Andrew Cook, Graham Tanner, and Adrian Lawes, “The hidden cost of airline unpunctuality,” Journal of Transport Economics and Policy 46, no. 2 (2012): 157–173.


Cover image by: Jaime Austin
