Technology Stocks: Investing in Exponential Growth

From: Paul H. Christiansen, 11/13/2017 2:03:04 PM
Nvidia Breaks $2 Billion Datacenter Run Rate

If GPU acceleration had not been conceived of by academics and researchers at companies like Nvidia more than a decade ago, how much richer would Intel be today? How many more datacenters would have had to be expanded or built? Would HPC have stretched to try to reach exascale, and would machine learning have fulfilled the long-sought promise of artificial intelligence, or at least something that looks like it?

These are big questions, and relevant ones, now that Nvidia’s datacenter business has broken through the $2 billion run rate barrier. GPUs deliver something on the order of a 10X speedup across a wide variety of parallel applications, and given how the latest “Skylake” Xeon SP processors are priced, particularly the top-end Platinum models with fat memory, rough price parity per teraflops of computing oomph is a reasonable assumption. On that assumption, Nvidia’s $2 billion of GPU sales implies roughly $20 billion worth of CPU compute per year that might otherwise have been consumed but was not. That is roughly the size of Intel’s Data Center Group revenue for the trailing four quarters, and its current run rate.
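The back-of-envelope math above can be sketched in a few lines. The figures come straight from the article; the price-parity-per-teraflops assumption is the article's own, and the function name is ours:

```python
# Back-of-envelope estimate of CPU spend displaced by GPU acceleration,
# using the article's figures and its stated assumption of rough price
# parity per teraflops between CPU and GPU compute.

def displaced_cpu_spend(gpu_run_rate: float, speedup: float) -> float:
    """Dollars of CPU compute that would have been needed to do the same
    work as the GPU spend, assuming price parity per unit of flops."""
    return gpu_run_rate * speedup

nvidia_dc_run_rate = 2e9   # Nvidia datacenter business: $2 billion annual run rate
gpu_vs_cpu_speedup = 10    # ~10X speedup cited across parallel workloads

implied = displaced_cpu_spend(nvidia_dc_run_rate, gpu_vs_cpu_speedup)
print(f"${implied / 1e9:.0f} billion of CPU compute not consumed")  # $20 billion
```

At a 10X speedup, every dollar of GPU run rate stands in for roughly ten dollars of CPU compute, which is how $2 billion of GPU sales maps onto the ~$20 billion size of Intel's Data Center Group.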

To put that another way, GPUs have taken a tremendous bite out of CPU computing – maybe something on the order of $40 billion since the Great Recession ended, a span of time in which Intel’s Data Center Group raked in about $115 billion and took about half of that money as gross profits. The GPU bite is getting deeper with each passing quarter and year – this is a much bigger impact than we were scratching out on the back of an envelope back in March – and that is because the speedup gap between GPUs and CPUs is widening, and new use cases are leaning towards GPUs from the get-go. CUDA is now a safe bet for distributed, parallel computing.

. . .

But the secret to Nvidia’s success is that it has one GPU architecture and one CUDA programming environment that binds all of these parallel workloads together. Jensen Huang, co-founder and chief executive officer at Nvidia, reminded Wall Street of this repeatedly in a call going over the numbers.

“We have one architecture,” Huang said, “and people know that our commitment to our GPUs, our commitment to CUDA, our commitment to all of the software stacks that run on top of our GPUs, every single one of the 500 applications, every numerical solver, every CUDA compiler, every tool chain across every single operating system in every single computing platform – we are completely dedicated to it. We support the software for as long as we shall live. And as a result of that, the benefits of their investment in CUDA just continue to accrue.”

“When you have four or five different architectures, you ask customers to pick the one that they like the best, and you are essentially saying that you are not sure which one is the best. And we all know that nobody is going to be able to support five architectures forever. And as a result, something has to give and it would be really unfortunate for a customer to have chosen the wrong one. And if there are five architectures, surely, over time, 80 percent of them will be wrong. And so, I think that our advantage is that we are singularly focused.”

To read the entire article, select the following URL:

