The Datacenter Has an Appetite for GPU Compute
It is not inconceivable, but probably also not very likely, that the datacenter business at GPU juggernaut Nvidia could at some point in the next one, two, or three years equal that of its core and foundational gaming business. It is hard to tell from current trends, and it all depends on how you extrapolate the two revenue streams from their current points and slopes and reconcile that against the longer-term data for the past six years.
The datacenter business at Nvidia was much smaller than the company's OEM and IP businesses only a few years ago, and on par with its automotive segment until 2017, when GPU-accelerated HPC first really took off after a decade of heavy investment by the company, and when various kinds of machine learning had matured enough for it to go into production at many of the hyperscalers and to be deployed as a compute engine on the large public clouds. Only four years ago, HPC represented about two-thirds of the accelerated compute sales for Nvidia's datacenter products, with the remainder largely dominated by early AI systems, mostly for machine learning training but also for some inference and for experimentation with hybrid HPC-AI workloads. Now, as fiscal 2020 comes to a close in January, we infer from what Nvidia is saying about hyperscalers that AI probably represents well north of half of the datacenter revenue stream.
Read More – The Next Platform