The semiconductor industry is navigating a high-stakes paradox in 2026. Soaring artificial intelligence–driven demand is pushing revenues to unprecedented levels, but the boom carries concentration risk: the industry has effectively placed all its eggs in the AI basket. That may be fine if the AI boom continues, but companies should also plan for scenarios in which AI demand slows or shrinks.
The global semiconductor industry is expected to reach US$975 billion in annual sales in 2026, a historic peak fueled by an intensifying AI infrastructure boom (figure 1).1 Growth reached 22% in 2025 and is projected to accelerate to 26% in 2026, and even if growth moderates thereafter, annual sales of US$2 trillion seem likely by 2036. However, this record growth masks a stark structural divergence. While high-value AI chips now drive roughly half of total revenue, they represent less than 0.2% of total unit volume.2 A related divergence: while AI chips boom, chips for automotive, computer, smartphone, and non–data center communications applications are seeing much slower growth.3
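As a rough sanity check on the trajectory above, it's worth asking how much annual growth is actually needed to get from US$975 billion in 2026 to US$2 trillion by 2036. The two sales figures come from the text; the implied growth rate below is a back-of-envelope derivation, not a Deloitte forecast.

```python
# Implied compound annual growth rate (CAGR) to go from US$975B in 2026
# to US$2T in 2036, a 10-year horizon. Endpoint figures are from the text;
# the derived rate is illustrative only.
sales_2026_bn = 975.0
sales_2036_bn = 2000.0
years = 2036 - 2026

cagr = (sales_2036_bn / sales_2026_bn) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # about 7.4% per year
```

At roughly 7.4% per year, well below the 22% to 26% rates of 2025 and 2026, the US$2 trillion mark is reachable even under the "growth moderates" scenario the text describes.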
The stock market is often a leading indicator of industry performance. As of mid-December 2025, the combined market capitalization of the top 10 global chip companies was US$9.5 trillion, up 46% from US$6.5 trillion in mid-December 2024 and 181% from US$3.4 trillion in mid-December 2023.4 Further, the market cap is highly concentrated, with the top three chip stocks accounting for 80% of that total.
At the time of publication, Deloitte predicts that generative AI chips will approach US$500 billion in revenue in 2026, or roughly half of global chip sales.5 Further, AMD CEO Lisa Su has raised her estimate for the total addressable market of AI accelerator chips for data centers to US$1 trillion by 2030.6
In 2025, an estimated 1.05 trillion chips were sold at an average selling price of US$0.74 per chip.7 At a rough estimate, gen AI chips are likely to account for about 50% of industry revenues in 2026 yet number fewer than 20 million units, a tiny fraction of total volume.8 Even though global chip revenues in 2025 are expected to rise 22%, silicon-wafer shipments increased by only an estimated 5.4% for the year.9
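The value concentration described above can be cross-checked with simple arithmetic. The unit count, average selling price, AI revenue, and AI unit figures are taken from the text; the implied per-chip price for gen AI silicon is a back-of-envelope derivation for illustration.

```python
# Cross-checking the value concentration: overall industry revenue implied
# by units x ASP, and the implied average price of a gen AI chip.
# Input figures are from the text; derived values are illustrative.
total_units_2025 = 1.05e12      # chips sold in 2025
avg_price = 0.74                # industry-wide average selling price (US$)
ai_revenue_2026 = 500e9         # projected gen AI chip revenue (US$)
ai_units_2026 = 20e6            # projected gen AI chip units (upper bound)

industry_revenue_2025 = total_units_2025 * avg_price
ai_avg_price = ai_revenue_2026 / ai_units_2026

print(f"Implied 2025 industry revenue: US${industry_revenue_2025 / 1e9:.0f}B")
print(f"Implied average gen AI chip price: US${ai_avg_price:,.0f}")
```

The implied average of US$25,000 per gen AI chip, versus an industry-wide ASP of US$0.74, is the "high-margin, low-volume paradigm" in a single ratio: each AI chip carries the revenue of tens of thousands of ordinary chips.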
In terms of key end markets, personal computing device and smartphone sales, which were anticipated to grow in 2025,10 are now expected to decline in 2026 due to rising memory prices.11
Deloitte’s 2026 global semiconductor industry outlook seeks to identify the strategic issues and opportunities for semiconductor companies and other parts of the semiconductor supply chain to consider in the coming year, including their impacts, key actions to consider, and critical questions to ask. The goal is to help equip companies across the semiconductor ecosystem with information and foresight to better position themselves for a robust and resilient future.
Revenues for memory in 2026 are likely to be about US$200 billion, or 25% of total semiconductor revenues for the year.12 Memory is notoriously cyclical, and makers appear cautious about overbuilding. As a result, they are increasing capital expenditures only modestly, with much of that going to research and development for new products rather than massively ramping capacity.13 Against this backdrop of capacity discipline, surging demand for HBM3 (high bandwidth memory 3), HBM4, and GDDR7 memory for AI training and inference solutions has caused shortages of consumer memory, such as DDR4 and DDR5; prices for these products rose about 4x between September and November 2025.14 Predicting memory supply, demand, and pricing is hard, but some suggest that the current tightness in consumer memory could last a decade.15 Further price increases are likely in the first and second quarters of 2026, perhaps as much as another 50%; one popular memory configuration, for example, is projected to reach US$700 by March 2026, up from US$250 in October 2025.16
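To put the memory price trajectory above in perspective, the cited example of one configuration rising from US$250 in October 2025 to a projected US$700 by March 2026 can be expressed as an average monthly rate of increase. The two prices are from the text; the monthly rate is derived for illustration, not a forecast.

```python
# Back-of-envelope view of the cited memory price example: US$250 in
# October 2025 to a projected US$700 by March 2026 (five months).
# Prices are from the text; the derived monthly rate is illustrative.
price_oct_2025 = 250.0
price_mar_2026 = 700.0
months = 5  # October 2025 to March 2026

overall_multiple = price_mar_2026 / price_oct_2025
monthly_rate = overall_multiple ** (1 / months) - 1

print(f"Overall increase: {overall_multiple:.1f}x")
print(f"Implied average monthly increase: {monthly_rate:.0%}")
```

A 2.8x rise over five months works out to roughly 23% compounded per month, which underscores why memory buyers are treating this as a supply shock rather than ordinary cyclical tightening.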
This concentration of value appears to have contributed to a shift in market dynamics. As manufacturers prioritize the specialized hardware required for AI training and inference, the resulting “zero-sum” competition for wafer and packaging capacity is already disrupting downstream sectors. For leadership, the 2026 mandate moves beyond simply capturing AI demand to managing the systemic risks of a high-margin, low-volume paradigm, where severe shortages in essential components such as memory are projected to drive 50% price spikes by mid-year and redraw the global supply chain map.
The chip market is heavily exposed to AI chips for data centers, with up to roughly half of industry revenues expected to come from that market in 2026.17 But what might prevent that from happening? And what could that mean for the semiconductor industry, especially if non–data center markets such as the PC, smartphone, and automotive sectors remain weak?
First, those expectations are unlikely to change in 2026. The chips have already been ordered and are in backlog, data centers are under construction, and the numbers for the next 12 months are likely solid. But 2027 and 2028 could diverge sharply from current expectations for reasons noted below:
What could some or all of the above mean for the chip industry over the next one to three years?
Money and market impact: Chip designers and manufacturers that currently benefit from AI tailwinds could face headwinds. Revenue growth could decrease or turn negative. Earnings could be lower. Price-to-earnings and price-to-sales multiples could fall, and market caps could decline.
Fabs, manufacturing tools, design tools, and more: Because AI chips are high in value but low in volume, a decline in AI-chip revenues would likely have relatively little impact on companies that manufacture chips or the tools used to make them. Even if AI-chip volumes fall, AI chips occupy a small share of manufacturing capacity, so fabs would be unlikely to go idle. That said, companies producing certain types of packaging, memory, power, and communications semiconductors could be affected.
With AI data center workloads forecast to triple or quadruple annually between 2026 and 2030,22 chip- and system-level integration will be required to enable system performance in hyperscale data centers. As Deloitte has predicted, chiplets are addressing chip-level performance needs in AI data centers, delivering yield, bandwidth, and energy-efficiency benefits.23 In 2026, chip manufacturers are likely to integrate HBM ever closer to logic chiplets, either on silicon interposers or in 3D stacks. This lets data move between processors (graphics processing units, or GPUs, and neural processing units, or NPUs) and HBM stacks at multiple terabytes per second, while improving energy efficiency (lower joules per bit and lower watts per token).24 Additionally, co-packaged optics (CPO) will likely gain traction in data center switches, enabling higher aggregate bandwidth per rack with a smaller Ethernet/InfiniBand switch footprint.25 High-bandwidth flash, which can support faster scale-up (within a server rack) and scale-out (across multiple racks and systems), will likely see more demand in 2026, especially as AI workloads shift from training to inference.26
However, because traditional copper Ethernet network designs cannot keep pace with AI workloads, which generate massive east-west traffic between GPUs, optical interconnects (both CPO and linear pluggable optics, or LPO) are likely to see greater adoption in 2026.27 AI network fabric spending is expected to grow at a compound annual growth rate of 38% between 2024 and 2029 (figure 2).28 As AI data center networks scale to switching capacities of 51.2 terabits per second and above, within and across racks and clusters, it is critical not only to integrate the various components (memory stacks, compute systems, and rack-scale networks) but also to reassess the use of copper or traditional pluggables, which can hurt power consumption and bandwidth or take up too much space. CPO and LPO can address those gaps in 2026: they shorten electrical paths, reduce power consumption by 30% to 50%, and offer higher bandwidth and better total cost of ownership.29
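The 38% CAGR figure above is easier to grasp in cumulative terms. The growth rate and the 2024 to 2029 window come from the text; the end-to-end spending multiple below is derived for illustration.

```python
# Cumulative implication of the 38% CAGR for AI network fabric spending
# between 2024 and 2029 cited above. The CAGR is from the text; the
# end-to-end multiple is derived for illustration.
cagr = 0.38
years = 2029 - 2024

spend_multiple = (1 + cagr) ** years
print(f"Implied growth in annual spend over {years} years: {spend_multiple:.1f}x")
```

Compounding 38% for five years implies annual AI network fabric spend roughly quintupling over the forecast window, which helps explain the urgency around optical interconnect adoption.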
Some hyperscalers are using advanced network chips from merchant silicon vendors and disaggregated hardware models to develop their own custom topologies on top of those solutions.30 However, in 2026, the industry could increasingly pivot to software-defined network fabrics that integrate compute and networking into a single, vertically integrated solution, given the benefits of superior performance, better orchestration, and lower total cost of ownership.
Even as cloud hyperscalers, AI network companies, foundries, and outsourced semiconductor assembly and test (OSAT) facilities race to address complex heterogeneous system integration challenges, they need to contend with the difficulties involved in next-generation back-end assembly and test processes. For instance, every chip product needs to pass through specific process steps such as modeling, simulation, thermal management, and bumping. These steps require specialized packaging expertise and statistical process control skills that are scarce in the United States and Europe.31 As a result, talent constraints in advanced packaging may continue to hinder regional goals of achieving greater semiconductor autonomy, even as volume-based back-end capacity expands further in Asia.32
Strategic alliances among AI, semiconductor, and cloud infrastructure providers have ushered in a new AI computing capital cycle. Investments made in 2025 will likely continue or accelerate in 2026, creating a funding and demand ecosystem where capital and computing resources flow back and forth among companies involved in AI model development, AI accelerator design, production, packaging, and data center infrastructure.33 For instance, an investing company (typically a chip hardware, platform, or cloud infrastructure provider) may invest billions of dollars in an AI startup to accelerate the development of solutions. In return, the AI startup rapidly incubates new products and buys the investing company's computing resources and infrastructure offerings. These moves have become a way for chip companies to achieve vertical integration across the AI data center stack.
Besides AI training and inference workloads,34 another factor driving the surge in the semiconductor industry's investment activity is geopolitics, as governments and businesses seek to influence regional technology infrastructure.35 Many governments consider AI models, chip design intellectual property, and leading AI accelerators critical to national security, supply chain resilience, and tech sovereignty.36 Increasingly, governments are using export controls and related measures to secure these capabilities and bolster local and regional availability of leading-edge AI chip manufacturing,37 so that homegrown chipmakers can expand their market presence. Concurrently, they're trying to balance restricting exports of strategic AI and technology products with allowing some advanced chips to be sold abroad. For example, in December 2025, the US government approved NVIDIA to sell H200 AI chips to a set of approved customers in China, in return for a 25% share of those chip sales.38 Amid these developments, Europe appears caught between US export controls (restricting advanced chip sales to China) and China's countermeasures.
As tech and chip majors continue to pursue this new form of vertical integration (referred to as circular financing by some industry analysts), the semiconductor industry's capital allocation strategies may need to shift from capacity- to capability-driven models, with an emphasis on achieving AI system-level differentiation. In 2026 and beyond, chip companies should consider not only expanding the breadth and scope of their operations by establishing more AI fabs or developing new AI chip platforms, but also fostering strategic partnerships and making direct investments to build an ecosystem around their fab or chip platforms.
Traditional volume-based foundries may want to integrate advanced packaging capabilities. OSATs could codesign chiplets with integrated device manufacturers and design players, while electronic design automation companies and foundries could benefit from collaborating closely with wafer-fab front-end equipment providers. As chip industry executives look for ways to deploy their cash strategically, they should consider assessing talent needs and skill availability, core competencies, and partner models that are more region- or country-specific. This assessment should also include non-AI market opportunities by focusing on mature chip nodes to address automotive and electric vehicle, aerospace and defense, manufacturing, and power infrastructure markets—many of which could be specific to the countries in which they operate.
For 2026, semiconductor industry executives should be mindful of the following signposts: