Nvidia Can Go to $250 on All the Data Center Opportunities, Says Needham
Nvidia's business in data centers has several avenues to tens of billions in revenue, including "inferencing," an emerging area of machine learning, but also selling chips to Uber and other "transportation as a service" companies, according to Rajvindra Gill of Needham & Co.
By Tiernan Ray
Oct. 13, 2017 11:19 a.m. ET
Another day, another Nvidia ( NVDA) price target increase, this one from Needham & Co.’s Rajvindra Gill, who reiterates a Buy rating, and raises his price target to $250 from $200, after attending the company’s “GTC” conference in Munich, Germany, and coming away upbeat about the prospects for the company’s data center market.
Gill’s new target beats the $220 that RBC Capital’s Mitch Steves offered yesterday on his own enthusiasm for Nvidia’s markets.
Gill talked with Nvidia CEO Jen-Hsun Huang at the event, along with other attendees, and the discussion mostly “centered around the growth drivers in data center,” he writes.
The market could be worth $21 billion to $35 billion over five years, writes Gill, in three buckets.
One big area is the current “training” market in machine learning:
Nearly all the hyperscalers, cloud and server vendors (Google, Alibaba, Cisco, Huawei, AWS, Microsoft Azure, IBM, Lenovo, Tencent) along with several A.I. startups will train on GPUs in the cloud — both internally and for their customers.
Inference, acting on the results of training, is another, though "we are waiting to see evidence" of GPU uptake there, he writes:
The second major growth driver is inference. We estimate there are 20 million CPU nodes that will be accelerated over the next five years to support AI applications (live video on Internet, video surveillance cameras). At $500-$1,000 ASPs, we forecast the inference TAM at $10 billion to $20 billion.
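Gill's inference figures can be checked with simple arithmetic. A minimal sketch, using only the node count and ASP range from his note above:

```python
# Sketch of Needham's inference TAM arithmetic (figures from Gill's note).
CPU_NODES = 20_000_000          # CPU nodes expected to be accelerated over five years
ASP_LOW, ASP_HIGH = 500, 1_000  # assumed average selling price per node, in dollars

tam_low = CPU_NODES * ASP_LOW    # low end of the TAM estimate
tam_high = CPU_NODES * ASP_HIGH  # high end of the TAM estimate

print(f"Inference TAM: ${tam_low / 1e9:.0f}B to ${tam_high / 1e9:.0f}B")
```

The multiplication reproduces the $10 billion to $20 billion range Gill forecasts.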
And yet another part is spreading GPUs to new areas, including the "transportation as a service" companies such as Uber:
For example, Lyft or Uber could possibly deploy supercomputing GPUs to process the innumerable driving decisions needed to support AVs along with SQL databases being accelerated with AI-GPUs. Moreover, 15 of the top 500 supercomputers have GPUs. We believe over the next five years, 100% of those supercomputers will be accelerated. In a typical supercomputer node, we estimate NVDA receives $64k (8 GPUs X $8k each). This would translate to an HPC GPU TAM of ~$10BN.
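The per-node math above can be laid out the same way. The implied node count is a derived figure, not one stated in the note, and is included only to show the scale of deployment the ~$10 billion TAM assumes:

```python
# Sketch of Needham's HPC per-node arithmetic (figures from Gill's note).
GPUS_PER_NODE = 8
GPU_ASP = 8_000                          # assumed dollars per GPU

node_revenue = GPUS_PER_NODE * GPU_ASP   # revenue per supercomputer node
HPC_TAM = 10_000_000_000                 # Gill's ~$10B HPC GPU TAM

# Derived: how many accelerated nodes that TAM implies (not stated in the note).
implied_nodes = HPC_TAM // node_revenue

print(f"Per-node revenue: ${node_revenue:,}; implied nodes: {implied_nodes:,}")
```

The per-node figure matches the $64k Gill cites; the implied count of roughly 156,000 nodes illustrates how much accelerated capacity the TAM presumes.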