
Technology Stocks: The end of Moore's law - Poet Technologies


From: toccodolce, 5/16/2024 2:23:48 PM


From: toccodolce, 5/16/2024 2:29:22 PM
As of yesterday, here is the current share structure of POET:

POET share count from the SEC



From: toccodolce, 5/16/2024 2:35:30 PM
Surging demand in the rapidly growing optical transceiver market.

How Many Optical Transceivers Does ChatGPT Require?

FiberMall has extrapolated the AI infrastructure demand, including optical transceivers, that ChatGPT brings to the table.

The difference from a traditional data center is that with the InfiniBand fat tree structure common to AI, more switches are used and the number of ports upstream and downstream at each node is identical.



One of the basic units corresponding to the AI clustering model used by NVIDIA is the SuperPOD.

A standard SuperPOD is built with 140 DGX A100 GPU servers, HDR InfiniBand 200G NICs, and 170 NVIDIA Quantum QM8790 switches, each with 40 ports at 200G.

Based on the NVIDIA solution, a SuperPOD has 170 switches with 40 ports each. Since each cable occupies a port on two devices, the simplest interconnection requires 40×170/2 = 3,400 cables, or up to 4,000 cables considering the actual deployment situation. Among them, the assumed ratio of copper cable to AOC to optical module is 4:4:2, so the number of optical transceivers required is 4,000 × 0.2 × 2 = 1,600. That is, for a SuperPOD, the ratio of servers to switches to optical modules is 140:170:1,600, or roughly 1:1.2:11.4.
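The arithmetic above can be sketched directly. This is a minimal illustration: the device counts and the 4:4:2 media mix come from the text, while the variable names are my own.

```python
# SuperPOD cabling and transceiver arithmetic from the FiberMall extrapolation.
servers = 140            # DGX A100 servers per SuperPOD
switches = 170           # NVIDIA Quantum QM8790 switches
ports_per_switch = 40    # 200G ports on each switch

# Every cable consumes one port on each of two devices.
min_cables = switches * ports_per_switch // 2   # 3400
deployed_cables = 4000                          # allowance for real deployments

# Assumed media mix copper : AOC : optical module = 4 : 4 : 2,
# so optical modules carry 20% of links; each optical link needs two transceivers.
transceivers = int(deployed_cables * 0.2 * 2)   # 1600

# Per-SuperPOD ratio of servers : switches : transceivers ≈ 1 : 1.2 : 11.4
print(min_cables, transceivers,
      round(switches / servers, 1), round(transceivers / servers, 1))
```

The division by two when counting cables is the key step: summing ports over all switches counts each link twice.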

A deployment similar to GPT-4's entry-level requirements calls for approximately 3,750 NVIDIA DGX A100 servers. The optical transceiver requirements under this condition are listed in the following table.
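Applying the per-SuperPOD ratio to that server count gives a rough sense of scale. A sketch under the assumption that the 1:1.2:11.4 ratio holds at this size; only the 3,750-server figure and the ratio come from the text.

```python
# Rough scaling of the SuperPOD ratio to a GPT-4-entry-level deployment.
servers = 3750                     # estimate from the text
switch_ratio = 1.2                 # switches per server (per-SuperPOD ratio)
transceiver_ratio = 11.4           # optical transceivers per server

switches = round(servers * switch_ratio)            # ≈ 4500 switches
transceivers = round(servers * transceiver_ratio)   # ≈ 42750 transceivers
print(switches, transceivers)
```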



According to IDC, the global AI server market was $15.6 billion in 2021 and is expected to reach $35.5 billion by 2026; China's AI server market was $6.4 billion in 2021. IDC also expects 200/400G port shipments to grow rapidly in data center scenarios, at a compound annual growth rate of 62% from 2022 to 2026. Global switch port shipments are expected to exceed 870 million in 2026, with a market size of over $44 billion.

FiberMall extrapolates the demand for servers, switches, and optical transceivers from AI data center architecture, using the 4:4:2 media ratio. The use of optical modules in the data center is ultimately driven by traffic demand, and this ratio likely holds only at full capacity; how service traffic within AI data centers actually behaves today still merits deeper study.



From: toccodolce, 5/20/2024 6:51:35 AM
Photonic Integrated Circuit Market Size & Share Analysis - Growth Trends & Forecasts (2024 - 2029) Source: mordorintelligence.com



To: toccodolce who wrote (852), 5/20/2024 7:03:54 AM
From: toccodolce
Celestial AI News from CEO



From: toccodolce, 5/27/2024 12:06:09 PM
How Nvidia, TSMC, Broadcom and Qualcomm will lead a trillion-dollar silicon boom

We believe the artificial intelligence wave will bring profound changes, not only to the technology industry but to society as a whole. These changes will perhaps be as significant to the world as the agricultural and industrial revolutions, both of which had drastic economic consequences.

Although the exact progression and timing of these changes are unpredictable, one thing is clear: The AI wave will not be possible without advancements in – and a stable supply of – hardware and software generally, and silicon specifically. The complexity of semiconductor design and manufacturing combined with rapid innovation and the vulnerability of the supply chain creates unique and challenging dynamics that in our view are reshaping leadership in the semiconductor industry.

Our forecast shows that the combined revenues of: 1) companies that supply manufacturing equipment, components and software to build fabrication facilities; 2) chip manufacturers; and 3) chip and AI software designers will approach $1T this decade. Our research suggests that four companies, Nvidia Corp., Taiwan Semiconductor Manufacturing Corp., Broadcom Inc. and Qualcomm Inc., will account for almost half of that trillion-dollar opportunity.

In this Breaking Analysis, we bring in theCUBE Research analyst emeritus David Floyer to quantify and forecast the dynamic semiconductor ecosystem. We compare market shares from 2010 with those of 2023 and provide a five-year outlook for more than a dozen of the top players in the industry. We also provide a view of where we see the overall market headed, our assumptions for the market and the top players, which firms we see winning and losing and why, with a bit of survey data from Enterprise Technology Research.

We’ll also address the following five items:

  1. How sustainable is Nvidia’s moat?
  2. What’s the impact of competition on Nvidia, including from hyperscalers, Intel Corp., Advanced Micro Devices Inc. and others?
  3. The challenges faced by two companies that both design and manufacture semiconductors – Intel and Samsung Electronics Co. Ltd.
  4. What the opportunities at the edge mean for firms and competition.
  5. Risks to our scenarios including geopolitical, technological and energy risks.


AI capturing budget momentum

Let's start with the macro impact that the generative AI awakening has had on information technology spending in the last two years. The data below from ETR shows the 19 sectors the company tracks each quarter. The vertical axis is spending velocity, or Net Score, and the horizontal axis is Pervasion, or penetration of the sector within the survey.

We've shown this many times before, but note where AI was one month prior to ChatGPT's launch, in the October 2022 survey. It dipped just below the 40% magic line that month and has since moved up and to the right. Consequently, other sectors have been suppressed. As we've reported, 42% of customers indicate they're funding AI by stealing from other budgets. And we know that enterprise AI return on investment is generally coming in the form of small productivity wins at this time, and for most organizations it is not yet self-funding.



The point is that AI is consuming not only the conversation but also the spending momentum.

We believe there are three significant impacts for all organizations.

  • The first is across-the-board productivity gains. Our expectation is that significant adopters of AI will initially see 20% and eventually up to 50% improvements over the next few years.
  • Second is quality of service. For example, a contact center representative should be able to answer any customer's or prospect's question correctly, accurately and immediately. Or the customer can self-serve the answer on the company's website by voice, in the language of their choice.
  • Perhaps the most important value of AI to organizations is the potential automation of business processes. Specifically, the elimination of people from business processes, which simplifies both the processes and the company as a whole.
As such, our guidance to clients is that a combination of all three is ideal. If you are not planning for a tenfold productivity improvement over the next five to 10 years, there are startups and competitors that will, and you risk losing your business to them.

Nvidia's moat is wide and deep

OK, let's cut right to the chase. Nvidia's momentum is simply remarkable and has caught the attention of everyone in the industry. The pace of innovation coming out of the AI ecosystem generally and Nvidia specifically is astounding. Here's a diagram that underscores the new era in computing that we're in, catalyzed by large language models and the AI breakthroughs.



This chart shows the teraflop progression Nvidia has made since 2016. We’ve overlaid a depiction of the Moore’s Law progression. The comparison is remarkable with Nvidia demonstrating a 1000-times improvement in parallel/matrix computing (what Nvidia calls accelerated computing) in eight years versus a 100-times improvement from Moore’s Law in a decade.
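The comparison can be made concrete by converting both figures to implied annual improvement rates. A quick sketch; only the 1,000x-in-8-years and 100x-in-10-years numbers come from the text.

```python
# Implied annual improvement factors from the cumulative gains cited above.
nvidia_annual = 1000 ** (1 / 8)   # 1000x over 8 years -> ~2.37x per year
moore_annual = 100 ** (1 / 10)    # 100x over 10 years -> ~1.58x per year
print(round(nvidia_annual, 2), round(moore_annual, 2))
```

In other words, the cited accelerated-computing trajectory improves roughly 2.4x per year versus the roughly 1.6x per year of the Moore's Law curve shown.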

It's important to understand that in this episode, where we're forecasting the semiconductor industry ecosystem, we're taking liberties with the scope. By that we mean we're modeling Nvidia as a full platform solution and a company that is building end-to-end AI data centers – what it calls the AI Factory. And it's selling that capability through partners.

One of the key aspects of Nvidia's moat is that it builds entire AI systems and then disaggregates and sells them in pieces. As such, when it sells AI offerings, be they chips, networking, software and the like, it knows where the bottlenecks are and can assist customers in fine-tuning their architectures.

Nvidia's moat is deep and wide in our view. It has an ecosystem and is driving innovation hard. Chief Executive Jensen Huang has announced that there's another big chip behind Blackwell – no surprise – on a one-year "cadence" for systems and networking, and the new systems will run CUDA. Nvidia claims to be "all-in" on Ethernet, the company will continue to extend NVLink for homogeneous AI Factories, and InfiniBand's roadmap continues.

Huang’s claim and bet is the more you spend with Nvidia, the more you save and the more revenue you can drive.

In addition:

  • We think Nvidia's performance gains will continue. That means we'll see 1 million teraflops in five years' time.
  • But the important thing is Nvidia is not just a chip company; it's an entire AI platform. It has specialized graphics processing units, central processing units, networking, cooling and software – a complete system plus software. CUDA is by far the best software in the industry and the key AI software platform.
  • Nvidia can deliver an entire AI data center. Nothing has been introduced that’s this revolutionary since IBM introduced the System 360 in 1964, which changed the computer industry.
In addition, the goal is to crank it up and introduce a new system every year. In our opinion, the value to the users, to hyperscalers and to anyone using these technologies is so high that, combined with the cost of creating alternatives, it means to us that for at least the next five years, Nvidia will be the dominant supplier in the AI data center.

A trillion-dollar semiconductor ecosystem

Let's get to the meat of this research and our five-year outlook for the ecosystem. The table below lays out how we see the semiconductor industry evolving. In the first column we show the players in the ecosystem comprising the chip designers such as Qualcomm, the chip manufacturers such as TSMC, three leading firms that do both — Intel, Samsung and Micron Technology Inc., equipment manufacturers such as ASML Holding NV and Applied Materials Inc., and software providers such as Cadence Design Systems Inc., which is in the "other" category.

Of course we’re also including Nvidia, which we believe has become and will continue to be the most important player in the market. Again we’ve pushed the envelope a bit in terms of the forecast and are forecasting Nvidia’s entire revenue stream beyond just chips.



For each company we’re showing their related revenue in 2010, 2023 and our forecast for each firm in 2028 with a CAGR for the relevant time period.

Methodology

We ingested a series of relevant financial data for each firm and combined this data with our fundamental assumptions to create a top-down model of the industry. We tested this data against three dimensions: 1) company strategic forecasts based on their long-term financial frameworks; 2) inputs from various financial analysts that have made long-term projections for these companies; and 3) our own assumptions about how we see the market playing out.

We'll share that our assumptions and the resulting forecasts deviate quite widely from generally accepted market narratives. In particular, the broad consensus, when you take into account the publicly available data, essentially says that everyone wins and the disruption to existing firms will be modest. We don't see it this way. Rather, we forecast a dramatic shift to matrix computing, or so-called accelerated computing, and we see meaningful spending shifts causing market dislocation, particularly in traditional x86 markets.

Key findings

The high-level findings in our market assessment are as follows:

  • The global semiconductor ecosystem, with our expanded scope, surpasses $900 billion by 2028 and will approach $1 trillion by 2030.
  • We forecast a 10% compound annual growth rate from 2023 to 2028.
  • There is a massive market shift away from general-purpose x86 toward parallel AI computing architectures or matrix computing to support AI.
  • Four companies will account for approximately 40% of the revenue in this forecast by 2028: Nvidia, TSMC, Broadcom and Qualcomm.
  • Samsung and Intel are bucking the trend by vertically integrating design and manufacturing and are facing similar challenges.
  • AI PCs will cause PC lifecycles to shorten. They will go mainstream and not only participate in a Windows refresh but will change the dynamics of the useful life of PCs.
  • Arm-based designs dominate the market volume and will confer significant cost advantages to those firms up the Arm curve.
  • High bandwidth memory or HBM drives unprecedented demand for memory suppliers and creates a tailwind for those companies that can produce them.
Let’s look at the data in more detail by company, sorted by our 2028 projections in descending order. We’ll show the company, our projected CAGR and our revenue forecast for 2028.

Nvidia: 25% CAGR, $160B

In our view, Nvidia essentially has a monopoly somewhat similar to Wintel's duopoly of the 1990s, with the core GPU dominance and the AI operating system all within the same company. We believe Nvidia's growth rate will actually accelerate as it penetrates new markets and will surpass $160 billion in revenue by 2028.

Importantly, we're including more than just chips in this forecast. Specifically, we're assuming Nvidia's full platform and portfolio revenue, and that Nvidia continues to execute across its portfolio on a rapid cadence.

Nvidia has executed brilliantly. It has bet on very large chips and invested in GPUs, CPUs, networking and software, offering a complete solution and a complete data center that can be disaggregated. Our assumption and belief is Nvidia will sustain this cadence for at least the next five years.

TSMC: 14% CAGR, $135B

TSMC has become the go-to manufacturer for advanced chips. We have TSMC almost doubling in size over the next five years. Our core assumption is that volume economics will confer major strategic advantage to TSMC and it will remain the world's No. 1 foundry by far.

It's important to note how TSMC is investing. The company just announced its A16, 1.6nm process targeted for 2026. We believe this will be a significant milestone in its manufacturing, with nanosheet transistors and backside power delivery, which TSMC calls Super Power Rail. These innovations in our view are industry-leading, and the company's track record of executing and delivering volume at high yields will allow it to maintain leadership.

Broadcom Semiconductor: 10% CAGR, $58B

Next on the list is Broadcom, and we're only including its semiconductor revenue. Though the company's CAGR slows to 10%, that's largely because its 2010 base was very small, which inflated its historical growth rate. Much of Broadcom's 2023 revenue was dispersed in our model under the "other" category.

Broadcom has done a remarkable job through acquisitions and engineering. It doesn’t compete head-on with Nvidia in GPUs, although it is a major provider of silicon and AI chip IP for Google LLC, Meta Platforms Inc. and we think ByteDance Ltd. via its custom silicon group. We see Broadcom’s semiconductor business growing at a CAGR of 10% over the next five years, taking the division to 1.6 times its current size. It solves really difficult problems to connect all the GPUs, CPUs, NPUs, accelerators and high-bandwidth memories together. It is uniquely positioned to continue to win in the market. Broadcom plays in virtually all sectors, consumer, enterprise, mobile, cloud, edge.

Broadcom in our view is a very well-positioned and well-run company. Its focus particularly on networking is vital. High-speed networking of all types is going to be absolutely essential for AI processing and it’s entrenched in this market. In particular, it’s very well set up with major internet players that will be AI leaders. As such, Broadcom has early visibility on the most critical market trends.

Qualcomm: 9% CAGR, $55B

Qualcomm is very well-positioned both in mobile and now in AI PCs. We see it getting a huge tailwind from the recently introduced Windows AI PC stack from Microsoft. We have Qualcomm on a similar trajectory as Broadcom in terms of its growth. Essentially, Microsoft, with its Windows Copilot in release 11, is following Apple Inc.'s moves from several years ago and that will be a big benefit for Qualcomm, which provides core silicon for AI PCs. This is more bad news for x86-based PCs.

Microsoft announced full support for Arm-based PCs based on Qualcomm. Now Dell, Lenovo and others have announced Arm-based PCs and suddenly you’ve got a whole plethora of these initiatives and they’re selling them on the basis of improved performance and a 24-hour battery life going directly after Intel’s PC installed base.

So you can see our forecast indicates that the four companies at the top, Nvidia, TSMC, Broadcom and Qualcomm, comprise around 45% of a $900 billion-plus market by 2028.

Intel: 2% CAGR, $54B

We're forecasting Intel's Foundry revenue to comprise about $22 billion of a $54 billion business in 2028. So unlike many, we're forecasting no growth for Intel over the time period. We see the rise in foundry revenue unable to offset the decline in x86. Combined with our assumption that AMD continues to gain share in x86 markets, we have Intel data center and client revenue dropping from $45 billion in 2023 to $26 billion in 2028.

Intel is late with support for AI PCs and we're projecting a 12- to 18-month delay in its 14A process, which is its big bet. It combines gate-all-around technology, which it calls RibbonFET, and backside power delivery, which Intel refers to as PowerVia. The company hopes to be the first to use High NA EUV technology, which combined with these other innovations is extremely bold, but also likely to be delayed. Hence our assumption that 14A gets pushed.

Intel needs all three of these innovations to be successful and differentiate from the rest of the industry. We believe Intel has a very good chance of executing on two of the three simultaneously, but even that is risky. Our assumption is Intel’s 14A gets to volume production and high yields in 2028 (best case) or 2029 (likely case) but perhaps even 2030 (worst case).

The key to understanding Intel, in our view, is that it has lost the volume lead. Apple and TSMC have taken the lead, and Apple's Arm-based phones and PCs have given the pair a significant learning-curve moat, to Intel's detriment.

If, however, Intel is able to succeed and deliver 14A in volume production as it plans in 2026, and can follow with its 10A 1nm node in late 2027, then our forecasts will be incorrect and Intel will be in a much better position than we project.

ASML: 7% CAGR, $41B

ASML has unique differentiation that is going to remain unmatched. Essentially we see ASML as a monopoly that's going to continue and it will be able to command whatever pricing it wants.

SK Hynix: 10% CAGR, $40B

High-bandwidth memory has become a new enabler for AI. It's in high demand and short supply and that is going to propel SK Hynix. We have SK Hynix growth actually accelerating with revenue growing from $25 billion today to $40 billion by 2028. High-speed memory is incredibly important and the company has multiple options in this space.

Samsung Semiconductor: -1% CAGR, $38B

We think Samsung is going to struggle to get its advanced process working. We think it's going to continue to face challenges that constrict volumes and put it in a cost dilemma. We've got Samsung basically flat from its $40 billion today.

Intel has said that it intends to be the No. 2 foundry by 2030. Given Samsung's struggles, we think Samsung is the right target for Intel; it's just a matter of whether Intel can get there.

AMD: 10% CAGR, $37B

CEO Lisa Su has done an amazing job with this company. A key turning point was when AMD shed its fabs, despite co-founder Jerry Sanders once famously remarking, "Real men have fabs." That didn't really prove out for AMD in the long run. It took several years for the company to get back on track, but its persistence has paid off.

AMD is still very much tied to x86. By 2028, the end of our forecast period, we still have 45% of AMD's revenue coming from x86, which puts downward pressure on a big part of the company's total available market. The good news is our assumptions call for AMD to continue to take share from Intel and at the same time make progress in AI hardware.

Of course, Intel's going to fight like crazy for its x86 data center share, but we're more sanguine on AMD's outlook as a chip designer. It's not saddled by foundry, and though that x86 pressure is a negative, we believe AMD will continue to take share. It is simply faster to market and has a quality product. For example, Oracle Corp. has just gone all-in on AMD-based chips for its new Exadata systems, which was a big win for AMD.

Applied Materials: 6% CAGR, $35B

We think Applied Materials continues to execute. It's in a really good position. It has more competition than does ASML, but we've got it doing pretty well here, growing from $27 billion in 2023 to $35 billion with a 6% CAGR. We're basically forecasting ASML, SK Hynix, Samsung, AMD and AMAT all around that $35 billion to $40 billion range.

Apple semiconductor value: 12% CAGR, $33B

Essentially what we've done here is model the value contribution within Apple's hardware to the silicon and made some assumptions around its value contribution in the chain. We saw that Apple, based on our assumptions, grew at a 15% CAGR from 2010 to 2023 and we've got it at 12% from 2023 to 2028. We're assuming a $33 billion contribution from silicon.

There have been ongoing reports that Apple's going to sell silicon as a merchant supplier. We do not make that assumption in our figures. Nonetheless, Apple getting into the business of designing its own chips was profound. It started with the A series in smartphones and now of course the M series in its newest laptops and iPads. Apple was the first to ship neural processing units in both iPhones and PCs. Now it has to make a major step-up as the AI PC competition heats up.

Apple quietly led the AI PC wave. It introduced large chips many years ago on iPhones and integrated the CPUs, NPUs and GPUs on the same chip. It has a large shared SRAM, which architecturally is a leading example and well-positioned for AI. Apple has a proven track record in silicon, for example evolving its M series: M1, M2, M3 and now M4.

We believe Apple is a leader in designing silicon architectures required to go into AI and we assume it will quickly respond to the Qualcomm AI PC trend. In our view, Apple is a main reason why Microsoft is pushing support for Arm-based designs, because it was under pressure from Apple.

Micron: 14% CAGR, $31B

We believe Micron can accelerate its growth rate, propelled by high-bandwidth memory. As with SK Hynix, demand is far outstripping supply for Micron's HBM. Micron has executed very well. We see its CAGR accelerating to 14%, with revenue nearly doubling from $16 billion in 2023 to $31 billion by 2028. Micron not only designs chips; it has been a successful manufacturer for years.
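As a sanity check, the CAGR figures in these sections can be recomputed from the revenue endpoints. A sketch using the standard compound-annual-growth-rate formula; the revenue numbers come from the Micron and SK Hynix sections of the text.

```python
def cagr(start, end, years):
    """Compound annual growth rate between two revenue points."""
    return (end / start) ** (1 / years) - 1

micron = cagr(16, 31, 5)    # $16B (2023) -> $31B (2028): ~14%
sk_hynix = cagr(25, 40, 5)  # $25B -> $40B over five years: ~10%
print(round(micron * 100), round(sk_hynix * 100))
```

Both recomputed rates match the 14% and 10% CAGRs stated in the forecast.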

Hyperscaler silicon — AWS, Google, Microsoft, Meta, Alibaba, ByteDance: 15% CAGR, $12B

We grouped hyperscaler cloud providers into a single category. Hyperscalers design their own silicon and partner with merchant suppliers such as Broadcom and others. Our forecast for hyperscalers excludes Broadcom's contribution of custom chips, for example. We're not double counting here.

We think hyperscaler general-purpose, training and inference chips will be used for cost-sensitive applications such as inferencing at the edge. We assume they’re not going to keep pace with Nvidia at the high end, but they will get their fair share. We assume AWS Graviton accounted for about 20% of AWS workloads in 2023. Inferentia and Trainium were a smaller portion of AI workloads in 2023, as were the counterparts at Google and Microsoft. We assume a healthy contribution from hyperscalers, but they will not be a dominant factor in terms of disrupting Nvidia in our view.

Hyperscalers are also incorporating Nvidia IP. They really have to buy from Nvidia because they can't build a comparable platform themselves. We assume buying from Nvidia will remain cheaper than building alternatives for the next five years, and as such hyperscalers will continue to be large customers of Nvidia.

Other silicon ecosystem players: 4% CAGR, $175B

Other includes a long tail of suppliers across the value chain. You have Texas Instruments, GlobalFoundries, Chinese players such as Yangtze and CXMT, startups such as Cerebras, and many more.

Our forecast assumes that China doesn't invade Taiwan and that hot wars don't completely disrupt the market. It also assumes that the AI PC market generally follows Apple's shift from x86 to Arm. We show x86 at about 13% of market revenue in 2010, dropping to 11% in 2023 and projected to be 5% in 2028.

Visualizing the leaders – 2010 to 2028

Here's a visual of what we just went through. In the interest of time, we'll just say that the two companies bucking the trend among the leaders are Intel and Samsung. Micron is in a different business and has uniquely figured out the combined model. And AI is bringing new investment to a market that was always considered risky by investors. News flash: It still is.



AI PCs will shorten lifecycles

Here's our forecast for PCs going back to 2009. When PC volumes peaked in 2011, that was the beginning of Intel's descent from the mountaintop, even though most people didn't see it. David Floyer made the call in 2013. And the key points here are:

  • Consumer volumes from the iPhone are what enabled innovation in AI PCs.
  • Specifically the first true inference came out in 2017 with Apple using neural processing units – NPUs – to do facial recognition and that innovation led to the first NPUs in laptops and the early example of AI PCs.
  • While PC volumes picked up during COVID, they’ve been constricting. But we believe AI PCs are a game-changer.
  • Microsoft just reset the Windows stack for AI around Arm – WinArm – following Apple's moves. PC makers such as Dell Technologies Inc. and HP Inc. are adopting it, and Qualcomm is seizing the day. You can see in the green below what we think it means for AI PCs powered by Arm, and what happens to x86 PCs: they follow the Apple path. Not as severe, but pretty much a managed-decline market.




While we forecast a surge in PC volumes, it is important to understand that this does not signal a return to the dominance once enjoyed by PC chip manufacturers like Intel. The landscape is now heavily influenced by Arm-based chips, whose wafer volumes are ten times that of x86. Companies such as Nvidia, Apple and Tesla recognized this shift early and have leveraged Wright’s Law to gain significant cost and time-to-market advantages in Arm-based chip design and manufacturing.

This shift underscores the increasing value of Arm technology in reducing design costs and highlights the challenges faced by x86. The market dynamics have fundamentally changed, and Arm’s advancements have made it a dominant force, fundamentally altering the competitive landscape.

Final thoughts on key topics

Let's close on some of the key issues we haven't hit.



The future of AI and its market dynamics are evolving rapidly, with significant implications for key players and emerging technologies. Our analysis highlights the pivotal trends and forecasts that will shape the AI landscape over the next decade, focusing on AI inference at the edge, energy needs, geopolitical risks and the potential shifts in semiconductor manufacturing.

Key points
  • AI inference at the edge:
    • By 2034, 80% of AI spending is projected to be on inference at the edge.
    • This workload is expected to dominate the AI market.
    • While Nvidia currently holds a strong position in AI inference, driven largely by ChatGPT, the competitive landscape for high-volume, low-cost, low-power inference at the edge remains wide open and will challenge Nvidia’s dominance.
  • Energy needs:
    • Future energy requirements will drive the adoption of nuclear, solar, wind and large local batteries.
    • Innovative energy solutions will be critical to support the growing AI infrastructure.
    • Local power generation will be an emerging trend.
  • Geopolitical risks:
    • Potential disruptions from geopolitical tensions, particularly involving China and Taiwan, pose significant risks.
    • Our forecast assumes a frictionless environment, but there’s a 35% to 40% probability of supply chain disruptions affecting the market within our forecast period.
  • Intel foundry and semiconductor innovations:
    • Intel's future positioning depends on the successful implementation of gate-all-around, backside power delivery and High NA EUV technologies by 2026-2027.
    • Achieving high yields in these areas could significantly enhance Intel’s competitiveness by the 2030s and alter our scenario for the company.
  • Market disruptors:
    • Well-funded startups and major mergers and acquisitions by hyperscalers could introduce alternative approaches.
    • Despite potential disruptions, Nvidia and the other three silicon giants with momentum – TSM, Broadcom and Qualcomm – are expected to maintain their velocity.
Bottom line

The AI market is set for significant transformation, with AI inference at the edge poised to become the dominant workload. Energy innovations and geopolitical stability are crucial for sustaining this growth. Though Nvidia currently leads, the competitive landscape remains fluid, with potential shifts driven by technological advancements and market disruptors. Our analysis underscores the need to monitor these developments closely as they will shape the future of AI.

As always, we’ll be watching.

How do you see the market playing out over the next five years? What do you think of our assumptions and forecasts? Let us know your thoughts and thanks for being part of the community.

Keep in touch

Thanks to Alex Myerson and Ken Shifman on production, podcasts and media workflows for Breaking Analysis. Special thanks to Kristen Martin and Cheryl Knight, who help us keep our community informed and get the word out, and to Rob Hof, our editor in chief at SiliconANGLE.

Remember we publish each week on theCUBE Research and SiliconANGLE. These episodes are all available as podcasts wherever you listen.

Email david.vellante@siliconangle.com, DM @dvellante on Twitter and comment on our LinkedIn posts.

Also, check out this ETR Tutorial we created, which explains the spending methodology in more detail.

Note: ETR is a separate company from theCUBE Research and SiliconANGLE. If you would like to cite or republish any of the company’s data, or inquire about its services, please contact ETR at legal@etr.ai or research@siliconangle.com.

All statements made regarding companies or securities are strictly beliefs, points of view and opinions held by SiliconANGLE Media, Enterprise Technology Research, other guests on theCUBE and guest writers. Such statements are not recommendations by these individuals to buy, sell or hold any security. The content presented does not constitute investment advice and should not be used as the basis for any investment decision. You and only you are responsible for your investment decisions.

Disclosure: Many of the companies cited in Breaking Analysis are sponsors of theCUBE and/or clients of theCUBE Research. None of these firms or other companies have any editorial control over or advanced viewing of what’s published in Breaking Analysis.

Image: theCUBE Research/DALL-E 3



From: longz, 5/27/2024 12:11:30 PM
 
These guys have their own optical device?... wonder if it's POET's?... wonder if it's as good as or better than POET's optical device?

Hyper Photonix Unveils General Availability of 800G DR8 Optical Transceivers, Addresses Shortage of Next-Generation Interconnects



From: longz, 5/28/2024 11:25:33 PM
 
FIRST THIS ---->>> POET Announces Design Win and Collaboration with Foxconn Interconnect Technology for High-speed AI Systems. May 14, 2024



To: longz who wrote (856), 5/28/2024 11:28:00 PM
From: longz
 
and now this---->>>> hmmmmmmm====>>> Foxconn to flex AI muscle at Computex, with Nvidia-powered next-gen servers to begin shipping in 3Q24 (digitimes.com)




From: longz, 5/29/2024 9:33:59 PM
 
Poet Technologies Announces the Amendment and Acceleration of Warrants (ceo.ca)
