
The Innovator's Manifesto

A problem of prediction
An excerpt from The Innovator’s Manifesto



By Michael Raynor > Illustrations by Anthony Freda

If the purpose of a theory is to inform our choices today, we must demand more than compelling explanations of the past. For a theory to have a legitimate claim on our allegiance there must be evidence that it improves our ability to predict future outcomes.

Creating and backing winning businesses is by all accounts a low-probability endeavor. Far more new businesses fail, or at least do little better than limp along mired in mediocrity, than actually break away from the pack and create real wealth. There is more to this statement than simply the necessary truth that only 10 percent of all businesses can be in the top 10 percent: the best businesses tend to do fabulously well, while most of the rest, if they survive at all, generate returns that are embarrassingly small in comparison.1 We have become collectively resigned, it seems, to the notion that successful innovation is unavoidably unpredictable. 

Despite the challenges and the long odds, there is no shortage of players in this great game. Hedge funds and venture capital partnerships channel capital into the businesses they feel will succeed. Many corporations maintain internal venture functions for strategic purposes: some seek to create ecosystems around a core business, others to stake a claim to possible new growth opportunities in adjacent markets, and still others to establish a line of defense against possible usurpers of a valuable entrenched position, to name only three possible objectives.

Take, for example, Intel Corporation, best known for its significant role over the last thirty years in the global microprocessor industry. In 1998 Intel launched the New Business Group (NBG) in order to coordinate and more effectively manage the company’s attempts to diversify beyond the microprocessor industry.2 Within NBG, approximately $20 million was earmarked for the New Business Initiatives (NBI) group, which had the remit to identify, fund, and develop new businesses that were especially far afield, such as Internet-based businesses and consumer products. NBI’s mandate covered new technologies, new products, new markets, and new distribution channels, with an investment horizon of five to ten years.

NBI operated as a largely autonomous unit within NBG. In contrast with the relatively formal and structured annual planning and budgeting processes that drove sustained success in the microprocessor segment, NBI typically committed only seed capital to new business ventures, ramping up its level of commitment as various strategic and financial mileposts were reached. In addition, leadership explicitly accepted the inherent unpredictability of incubating new businesses, along with an unavoidable implication of that uncertainty: that some, and perhaps many, of the ventures that were launched could fail.

Intel Optical Links (IOL) was one of NBI’s investments. Thomas Thurston, an attorney in his midtwenties with an MBA and law degree, joined IOL in 2005, excited at the prospect of helping launch a new venture inside an established company. Although successfully incubated, IOL was sold off following Intel’s broader divestiture of optical component and communications businesses. However, Thurston’s curiosity was piqued by this initial exposure to the internal venturing process: he wanted to understand better how Intel decided which initiatives to support and why.

NBI’s investment directors explore something in excess of seventy business proposals each year. They work with a range of people and sources, both inside and outside Intel, to determine the potential of a given idea. The constant challenge is to find the “diamonds in the rough” – the concepts that have within them the seeds of sustainable success and perhaps greatness. It is an inherently risky undertaking, and the only way to avoid failure entirely is to do nothing, which of course reduces one’s chance of success to zero as well.

It is this unavoidable uncertainty that leads many observers to prescribe an investment strategy based on “rapid failure”: the willingness to attempt as many different initiatives as possible with an eye to learning what does not work as the inevitable prerequisite to discovering what does. In Intel’s world, however, bona fide initiatives—the kinds of efforts that actually teach you something useful—can get very expensive very quickly. NBI executives are therefore forced to make difficult trade-offs between the need to husband their investment capital and the risk of overlooking the next blockbuster product or service.

For present purposes, the salient features of NBI’s investment process were the Seed Approval Meeting (SAM) and the Business Approval Meeting (BAM). Proposals approved at the SAM received funding that typically ranged from several hundred thousand dollars to just under $1 million and rarely exceeded $2 million. This allowed a team to get beyond the idea stage and flesh out a business plan, perhaps by developing a prototype, collaborating with potential customers, doing market research, and so on. BAM funding was contingent on having demonstrated an increased level of viability and brought with it investment capital that ranged from several million dollars to, in some cases, as much as $20 million. Ultimately, NBI’s goal was to transition or graduate one new business opportunity per year to an existing or new business unit within Intel. (Not every venture had to pass through both stages of approval: some ventures were graduated directly from SAM to an operating division in light of their strong performance.)

Intel takes a very rigorous approach to understanding competitors, technology, customers, market structure, and a host of other variables when analyzing opportunities for growth. Unfortunately for Intel, and everyone else who seeks to innovate in order to grow, there are no data about the future, and so there often remained many important but unanswered questions. Consequently, well-informed, experienced executives could look at the same opportunity and come to different conclusions about that venture’s challenges, financial potential, and so on. Worse, only when a venture was funded could the merits of the decision-making process employed be assessed, since if something was turned down, it rarely got funded via other channels, and so the opportunity cost of passing on what would have been a winner was almost always incalculable.

Thurston undertook a forced march through the popular management research into innovation in search of a more nearly rules-based approach in the belief that, given the importance of the subject and the wealth at stake, any framework holding even a scintilla of advantage over the others would be readily identified. Yet Thurston discovered that instead of a vibrant marketplace of ideas populated by challengers seeking to unseat the reigning champion, the agora where theoretical dominance is established is characterized by general disarray. There were a great many frameworks supported by compelling evidence, yet when they conflicted and counseled different courses of action, there was little basis in the evidence to guide someone in choosing one approach over the others. When different approaches did not conflict, it was difficult to treat them as cumulative and attempt to follow the sum total of their collective advice, since doing so resulted in a paralyzingly long to-do list.3

In light of this theoretical cacophony, in all likelihood, NBI executives made their choices in largely the same way most early-stage investors make their choices: do the best you can with the data you have available, while necessarily relying on your experience and your wits to fill in the sometimes significant gaps. The very best practitioners typically do all they can to create a solid fact base, but personal judgment generally figures prominently in making the final choice.4 It is simply the nature of the beast that evaluation criteria differ from person to person and project to project. Thurston recounts that at NBI, this meant that sometimes the emphasis was on technology, sometimes on management expertise, sometimes on the promise of the market opportunity, sometimes on the strength of linkages with Intel’s core business. It is a process that seems to have served Intel well, for there is no reason to think that its achievements are anything other than representative of the very best efforts in this space.

The prevalence of this sort of approach is an understandable consequence of the reliance of popular management research into innovation on post hoc case-study evidence to support its claims. What Thurston was looking for was evidence supporting predictive accuracy in addition to the requisite explanatory power. And no theory he could find provided both.

Close, but no cigar

Christensen’s first book, The Innovator’s Dilemma, introduced the world to the notion of “disruptive technology.” Christensen described how large, successful incumbent organizations in all types of industries were toppled by much smaller start-ups. Entrants typically succeeded by developing solutions for relatively small and unattractive markets that were of essentially no interest to successful incumbents. These constituted the entrants’ “foothold” markets. Sometimes customers in these foothold markets were quite happy with inferior but much less expensive solutions; sometimes they required solutions with a vastly different performance profile. Either way, entrenched players, focused on the needs of their established customers, proved systemically unable to devote investment funds to those markets. In contrast, driven by their desire to grow, the entrants were strongly motivated to improve their initial offerings in ways that would allow them to compete effectively for the larger, more lucrative mainstream markets. This was the entrants’ “upmarket march,” and entrants that marched upmarket successfully eventually captured the customers that had been the incumbents’ lifeblood.

Christensen observed that when entrants attacked successful incumbents by adopting the incumbents’ models and technological solutions, they tended to fail. They tended to succeed by combining a business model suitable for a relatively less attractive market—the entrants’ foothold—with an ability to improve their original solutions in ways that allowed them to provide superior performance in a manner incumbents were unable to replicate – the upmarket march. Christensen called the union of these two elements a disruptive strategy.

The archetypal illustration of this phenomenon is Christensen’s exhaustive study of innovation and competition in the U.S. disk drive industry from 1976 to 1994. In the midseventies, companies such as Storage Tech and Control Data were making fourteen-inch disk drives for mainframe computer makers. These customers, among them Amdahl and Unisys, wanted Storage Tech and Control Data to innovate along familiar lines: greater storage capacity, faster data-retrieval times, and lower costs per megabyte.

When minicomputers were first brought to market by companies such as Digital Equipment Corporation (DEC) and Hewlett-Packard, they required very different disk drives: smaller, more modular, and less expensive. To achieve these outcomes, disk-drive makers found they would have to reduce storage capacity, increase data-retrieval times, and accept higher costs per megabyte. The result, the eight-inch disk drive, was close to the antithesis of what Storage Tech and Control Data would countenance as an innovation; it was, if anything, a technological step backward in the interest of serving a small and highly uncertain new market. That opened the door for start-up drive makers such as Micropolis and Maxtor to develop something that was technologically trivial to Storage Tech and Control Data but strategically impossible for them to launch.

In the short run, no harm done: Storage Tech and Control Data went on printing money in the fourteen-inch disk-drive market while Micropolis and Maxtor eked out a living selling technically inferior eight-inch disk drives to small minicomputer makers. But then Kryder’s law—the disk-drive equivalent of Moore’s law in microprocessors—asserted itself: the areal density of disk-drive storage space was doubling annually thanks to improvements in recording media, software correction codes, and other key technologies. In addition, other dimensions of minicomputer performance were improving rapidly, fueled in large part by advances in microprocessor technology and software design. As minicomputers began to encroach on the mainframe market, and ultimately pushed mainframes into decline, the fourteen-inch disk drive makers cast about for new markets but found only the minicomputer makers buying, and they wanted eight-inch drives. Thanks to their relative unfamiliarity with the innovations first commercialized by the eight-inch disk drive makers (e.g., greater modularity and smaller size), the companies making fourteen-inch disk drives were at an insuperable disadvantage. Most went out of business, and none was able to maintain its market dominance in the disk-drive industry.

The start-up eight-inch disk drive makers found a foothold by first exploiting trade-offs among different dimensions of performance and appealing to the needs of an economically unattractive market. They Disrupted the fourteen-inch disk drive makers by ultimately breaking those trade-offs and remaining the primary disk drive suppliers to the newly dominant minicomputer companies. In other words, as the most lucrative and largest end customers for computers switched from mainframes to minis, the fourteen-inch disk drive makers ended up going down with their chip. (Sorry.)

Accept for the moment that Disruption is a good explanation for a specific phenomenon: the seemingly unlikely ability of entrants to topple well-resourced and well-managed incumbents on their home turf. Still more remarkably, however, Christensen observed that over the eighteen years of competition in disk drives that he documented, Disruptive strategies had a much higher frequency of success and, when successful, were much more successful than sustaining strategies.

On the strength of this, Thurston felt that Disruption was among the most promising of the frameworks he had studied. He was particularly encouraged by the fact that Disruption lent itself to fairly straightforward predictions of what would work and what would not. And then Thurston ran into a brick wall. There were no data to support any claims of predictive accuracy for Disruption. Christensen and others had developed a robust library of literally hundreds of cases across dozens of industries that were explained by Disruption – but the same was true of many other theories out there. Worse, for just about every case study explained by Disruption there were competing explanations that drew on entirely different sets of concepts. (Academic journals continue to debate whether Disruption is the best explanation of the disk-drive industry’s evolution.) And even if it were possible to win the battle for explanatory-power bragging rights, until there was some evidence in support of Disruption’s predictive power it could not claim to be the right theory to use for making decisions about the future. Thurston could have no more confidence in the prescriptions of Disruption than he could in any other theory.

Figure 1: The frequency of success of Disruptive and sustaining strategies

Everyone complains about the weather

Intel has worked with Christensen for some years, and the company has used Disruption theory in its own strategic planning processes. In fact, Christensen and former Intel CEO Andy Grove appeared together on the cover of Forbes magazine in January 1999 under the headline “Andy Grove’s Big Thinker.” Consequently, when Thurston approached NBI’s leadership about exploring whether or not Disruption might have predictive power when applied to NBI’s portfolio of investments, divisional leadership provided Thurston the latitude and support necessary to conduct some preliminary investigations.

Thurston began by stating Disruption’s predictions. Specifically, Disruptive innovations are defined as products or services that appeal to markets or market segments that are economically unattractive to incumbents, typically because the solution is “worse” from the perspective of mainstream, profitable markets or market segments. Disruption predicts that leading incumbents with so-called sustaining innovations—innovations targeted at their most important customers—typically succeed. New entrants with sustaining innovations typically fail.

Disruptions typically succeed, whether launched by incumbents or entrants, but only when the ventures launching them are highly autonomous, able to design their strategic planning processes, control systems, and financial metrics, among other things, independently of the systems built for the incumbent organization. This element is important and hardly unique to Disruption: established, successful businesses can and should be held to very different measures of performance, and expectations for future performance, than start-up organizations, for at least two reasons. First, a start-up typically has a trajectory of growth and profitability that is very different from that of an established business. Second, start-ups typically must change, sometimes dramatically, material elements of their strategy as they grapple with the unpredictable nature of customer reaction, competitive response, and the performance of key technologies. Consequently, start-ups must find their own way, and that is possible only when they enjoy the requisite autonomy to do so.

In short, Thurston inferred that Disruption predicts that success awaits sustaining initiatives launched by successful incumbent organizations and Disruptive initiatives launched by autonomous organizations. Everything else is predicted to fail. (See Figure 2 for a summary of Thurston’s hypothesis.)

Now Thurston needed data with which to test those predictions. Fortunately, NBI had retained a robust archive of the materials supporting many of its previous efforts. This allowed Thurston to compile a portfolio of forty-eight ventures that had received at least SAM-level funding over the ten-year period ending in 2007. SAM funding, recall, was very early-stage support, analogous perhaps to “angel” investing. Using the “pitch decks” prepared to explain each business to NBI executives as part of the funding process, Thurston assessed each SAM-approved business as an “incumbent” or an “entrant” based on the degree of Intel’s participation in the market targeted by the venture, and assessed its product or service as sustaining or Disruptive based on how it compared to existing solutions in that targeted market.

These decks were typically exemplars of business planning and communication. They began with a summary of the technology involved and the benefits to Intel of commercializing it. The most optimistic projections were usually for devices or services that were demonstrably superior to existing solutions offered by competitors. The growth opportunity was often argued to be greatest when Intel did not already compete in that market.

A review of the management team’s expertise then followed. It was not uncommon for ventures to be run by an impressive cross section of Intel veterans, new hires with experience in the target market, and others with deep expertise in functions such as marketing or design, depending on what was seen as critical to long-term success.

Then came a detailed description of the value proposition. This was the team “making good” on its claims of superiority, often including endorsements of prototypes by customers the team was targeting as early adopters. This was followed by an implementation plan: which market segments would be targeted in what sequence, with specific descriptions of how Intel would be successful in each, often accompanied by a multigenerational product road map. Finally, financial projections, complete with sensitivity analysis, described the anticipated economic value of the business to Intel, usually over three to five years.

Figure 2: Thurston’s hypothesis

To keep things as simple as possible, he defined “success” as survival—that is, the venture was still operating as a going concern, whether or not it was still controlled by Intel—and “failure” as “dead”—that is, no longer a commercial going concern. Without knowing the actual outcomes for these ventures, if Thurston could assess the relevant characteristics of the NBI-backed ventures and predict subsequent “success” and “failure” more accurately than chance alone, he would have solid evidence supporting Disruption’s predictive power.
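
To make the phrase “more accurately than chance alone” concrete, the sketch below (in Python, purely for illustration) compares a predictor’s hit rate on a forty-eight-venture portfolio against a coin-flip baseline; the number of correct calls shown is invented, not Thurston’s actual result.

  from math import comb

  def prob_at_least(hits, n, p=0.5):
      # Probability of getting at least `hits` predictions right out of `n`
      # if every prediction were just a coin flip with success rate p.
      return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(hits, n + 1))

  n_ventures = 48   # size of the SAM-funded portfolio Thurston assembled
  correct = 40      # hypothetical number of correct survive/fail calls
  print(f"chance of {correct}+ correct out of {n_ventures} by guessing: {prob_at_least(correct, n_ventures):.2e}")

Treating chance as a fifty-fifty guess is itself a simplification, since most new ventures fail; a tougher baseline would simply predict failure for every venture, and any claim of predictive power ultimately has to beat that base rate as well.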

Here is how it worked with Image Illusions, a disguised NBI-backed venture. Image-processing devices, such as printers and photocopiers, typically use a large number of application-specific integrated circuits (ASICs) to handle different elements of image manipulation, such as shrinking or rotating an image, prior to printing. ASICs are very efficient, but this efficiency brings with it two drawbacks. First, because each ASIC is highly customized, manufacturing economies of scale are limited, which keeps costs up. Second, ASICs are not programmable, so changing the features of a product typically requires designing and sourcing an entirely new chip, which is costly and lengthens development times.

Alternatives to ASICs, such as media processors, digital signal processors, and central processing units, provided vastly increased economies of scale and programmability but sacrificed performance to such an extent that they were rarely viable. In other words, there was a sharp trade-off among performance, flexibility, and cost. Manufacturers of image-processing technology—for example, the folks who make printers and photocopiers—would find it very valuable to break that trade-off, for then they could introduce a greater range of more powerful new products faster and at lower cost.

Intel was an incumbent in one of the three alternative technologies mentioned above. Image Illusions sought to leverage this position to create a new solution that provided both efficiency and flexibility. By competing with ASICs, Image Illusions would be leveraging one of Intel’s core competencies to expand into a “white space” opportunity and generate new, innovation-driven growth.

In collaboration with a key potential customer—a large, successful manufacturer of digital imaging technology—the Image Illusions team developed a highly sophisticated and demonstrably superior solution based on proprietary intellectual capital. It cost almost twice as much per unit as ASICs, but the team felt (and the customer corroborated) that the higher price was more than offset by the increased performance and flexibility. In other words, the team had broken the critical trade-off that was limiting the performance, cost, and pace of innovation in image-processing technology.

There were, of course, challenges. The largest companies that made image processors—including the one that Image Illusions had collaborated with and all of the targeted early adopters—had their own in-house ASIC design staffs. Many of these people were also on the internal committees that assessed new technologies. To adopt a non-ASIC solution was effectively to put themselves out of a job. That meant Image Illusions would likely have to be vastly superior before customers would switch in volume, since the in-house ASIC design teams would be strongly motivated to show that they could up their game and match the new technology.

The Image Illusions team had reason for optimism. The image-processing market was fiercely competitive, and the vast performance improvements Image Illusions could provide meant that all the team needed was one major player to adopt its solution and the rest would follow suit. The ability to leverage Intel’s strong brand and customer access made the odds of getting one domino to fall seem very favorable. The cash-flow projections for Image Illusions estimated a net present value (NPV) between $9 million and $100 million over five years, a range that reflected both the team’s confidence and the unavoidable uncertainty that comes with launching a new business.
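
For readers who want to see the arithmetic behind a figure like that, here is a minimal sketch of how a five-year NPV range is computed from projected annual cash flows. The discount rate and the pessimistic and optimistic cash-flow scenarios are assumptions chosen only to produce a range of roughly the reported size; the excerpt discloses nothing beyond the $9 million to $100 million figure itself.

  def npv(cash_flows, rate):
      # Net present value of year-1 through year-N cash flows at a given discount rate.
      return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows, start=1))

  # Hypothetical five-year scenarios in millions of dollars; these are NOT the
  # actual Image Illusions projections, which the excerpt does not disclose.
  pessimistic = [-5, 2, 5, 8, 11]
  optimistic = [-5, 12, 40, 60, 70]
  rate = 0.15  # assumed discount rate

  print(f"NPV range: ${npv(pessimistic, rate):.0f} million to ${npv(optimistic, rate):.0f} million")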

Assessing the prospects of such a venture is reasonably seen as a complex and challenging task. Is the technology really that much better? Is it “better enough” to overcome the entrenched interests of the customers’ in-house design functions? Is the management team at Image Illusions up to the challenge of overcoming the inevitable and unforeseeable twists and turns on the road to success? Is Intel sufficiently committed to this venture to support it for the one, two, or three years needed to make it to positive cash flow? It would appear that to predict with any confidence what will happen one must have deep experience and expertise in the relevant technologies and markets, strong familiarity with the management processes at Intel, and an intuitive but accurate take on the abilities of the leadership team.

Not if you are Thomas Thurston trying to test the predictive accuracy of Disruption. For him, the only questions that mattered were the following:

  1. Is Intel an incumbent in this market; that is, does Intel already sell this sort of product to this sort of customer?
  2. Is Intel’s innovation sustaining or Disruptive in nature? A Disruptive solution makes materially different trade-offs than the existing solutions purchased by mainstream customers; a sustaining solution is straightforwardly better.
  3. If the innovation is Disruptive, does the new business launching it enjoy operational and strategic autonomy from Intel’s established processes?

In the Image Illusions case the answers were pretty clear. Intel was a new entrant: it did not sell image processors. The Image Illusions technology was sustaining: it promised better performance than ASICs, as defined by the largest and most profitable customers. According to Disruption, an entrant with a sustaining innovation can expect to fail.
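
Stated as a bare decision rule, the test Thurston applied amounts to something like the following sketch (Python is used purely for illustration; the rule simply restates the two-by-two logic of Figure 2, and the function name is my own):

  def predict(is_incumbent, is_disruptive, is_autonomous=False):
      # Thurston's reading of Disruption, reduced to a rule of thumb:
      # sustaining innovations launched by incumbents are predicted to survive,
      # Disruptive innovations launched by sufficiently autonomous units are
      # predicted to survive, and everything else is predicted to fail.
      if not is_disruptive:
          return "survive" if is_incumbent else "fail"
      return "survive" if is_autonomous else "fail"

  # Image Illusions: Intel was an entrant (it did not sell image processors) and
  # the solution was sustaining (simply better than ASICs for mainstream customers).
  print(predict(is_incumbent=False, is_disruptive=False))  # -> fail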

So that is what Thurston predicted. DR

Michael Raynor is a director in the Strategy & Operations practice of Deloitte Consulting LLP.
Chapter reprinted by permission of Crown Business, an imprint of the Crown Publishing Group, New York.

 

Endnotes

1. See Michael E. Raynor, Mumtaz Ahmed, and James Guszcza, “Survival of the Fattest,” Deloitte Review, 2010. (Available at www.deloittereview.com.)

2. See Harvard Business School case “Intel NBI: Intel Corporation’s New Business Initiatives (A),” No. N9–609–043, product number 609043, October 6, 2008.

3. Thurston is not the only one frustrated by this state of affairs. Two books have of late criticized the general state of popular management science: see Jeffrey Pfeffer and Robert I. Sutton, Hard Facts, Dangerous Half-Truths, and Total Nonsense: Profiting from Evidence-Based Management (2006), and Philip M. Rosenzweig, The Halo Effect: . . . and the Eight Other Business Delusions That Deceive Managers (2007). In my conversations with other researchers I find that they invariably agree with these agents provocateurs, while invariably exempting their own work from such criticisms.

4. Atul Gawande, The Checklist Manifesto (New York: Metropolitan Books, 2009), pp. 162–70.
