Q1 2018 for deep learning: Xavier with Volta's Tensor cores
Earlier today, at a keynote presentation for their GPU Technology Conference (GTC) China 2017, NVIDIA CEO Jen-Hsun Huang disclosed a few updated details of the upcoming Xavier ARM SoC. Xavier, as you may or may not recall from NVIDIA's current round of codename bingo, is the company's next-generation SoC.
Essentially the successor to the Tegra family, Xavier is planned to serve several markets. Chief among these is of course automotive, where NVIDIA has seen increasing success as of late. However, similar to their presentation at the main GTC 2017 event, at GTC China NVIDIA is pitching Xavier as an “autonomous machine processor,” identifying markets beyond automotive such as drones and industrial robots, a concept in line with NVIDIA’s Jetson endeavors. As a Volta-based successor to the Pascal-based Parker, Xavier does include Volta’s Tensor cores, something that we noted earlier this year, and is thus more suitable than previous Tegras for the deep learning requirements of autonomous machines.
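For context on why the Tensor cores matter here: each Volta Tensor core executes a fused 4×4 matrix multiply-accumulate (D = A×B + C) per clock, the operation that dominates neural network inference. The following is a conceptual pure-Python sketch of that single operation, not actual hardware or CUDA code; the function name is our own.

```python
# Conceptual sketch only: a Volta Tensor core performs a fused 4x4 matrix
# multiply-accumulate, D = A*B + C (FP16 inputs, FP32 accumulation).
# This pure-Python function illustrates the math of that one operation;
# the name tensor_core_mma is hypothetical, not an NVIDIA API.

def tensor_core_mma(A, B, C):
    """Compute D = A @ B + C for 4x4 matrices given as nested lists."""
    n = 4
    return [[sum(A[i][k] * B[k][j] for k in range(n)) + C[i][j]
             for j in range(n)] for i in range(n)]

# Quick check: identity times identity plus zero is the identity.
I = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
Z = [[0.0] * 4 for _ in range(4)]
assert tensor_core_mma(I, I, Z) == I
```

In real deep learning workloads, many such small tiles are computed in parallel and accumulated to form large matrix multiplications, which is why dedicating silicon to this one fused operation pays off.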
In the keynote, Jen-Hsun additionally revealed updated sampling dates for the new SoC, stating that sampling would begin in Q1 2018 for select early development partners, followed by Q3/Q4 2018 for a second wave of partners. This timeline actually represents a delay from the originally announced Q4 2017 sampling schedule, and in turn suggests that volume shipments are likely to occur in 2019 rather than 2018.
Meanwhile, on the software side of matters, Jen-Hsun also announced TensorRT 3, with a release candidate version immediately available as a free download for NVIDIA developer program members. Introduced under its current branding in 2016 and a critical part of NVIDIA's neural networking software stack, TensorRT is programmable AI inference software that takes computational graphs created by traditional frameworks (e.g. Caffe, TensorFlow) and then compiles and optimizes them for NVIDIA CUDA hardware. At the moment, this includes the Tesla P4, Jetson TX2, Drive PX 2, NVIDIA DLA, and Tesla V100. During the keynote, NVIDIA also formally disclosed that a number of large Chinese companies (Alibaba, Baidu, Tencent, JD.com, and Hikvision) are now utilizing TensorRT.
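To make the "compiles and optimizes" step concrete: an inference compiler like TensorRT rewrites a framework's computational graph into a cheaper form before running it, for example by fusing adjacent layers into one operation. The toy optimizer below is purely illustrative, in plain Python rather than TensorRT's actual C++/CUDA implementation; all names (Node, fuse_scale_bias, run) are our own invention.

```python
# Toy illustration of graph-level layer fusion, the kind of rewrite an
# inference compiler such as TensorRT applies to a framework-produced graph.
# Not real TensorRT code; every name here is hypothetical.

from dataclasses import dataclass

@dataclass
class Node:
    op: str      # "scale", "bias", or the fused "affine"
    value: object  # float parameter, or (scale, bias) tuple for "affine"

def fuse_scale_bias(graph):
    """Fuse each adjacent scale->bias pair into one affine node:
    y = (x * s) + b becomes a single fused op."""
    out, i = [], 0
    while i < len(graph):
        if (i + 1 < len(graph)
                and graph[i].op == "scale" and graph[i + 1].op == "bias"):
            out.append(Node("affine", (graph[i].value, graph[i + 1].value)))
            i += 2
        else:
            out.append(graph[i])
            i += 1
    return out

def run(graph, x):
    """Interpret the (possibly optimized) graph on a scalar input."""
    for node in graph:
        if node.op == "scale":
            x *= node.value
        elif node.op == "bias":
            x += node.value
        elif node.op == "affine":
            s, b = node.value
            x = x * s + b
    return x

graph = [Node("scale", 2.0), Node("bias", 1.0), Node("scale", 3.0)]
optimized = fuse_scale_bias(graph)
# The fused graph has fewer nodes but computes the same result.
assert len(optimized) == 2
assert run(graph, 5.0) == run(optimized, 5.0)
```

The real tool performs far more aggressive transformations (precision calibration, kernel selection, memory planning) targeted at specific hardware, which is why the supported-device list above matters.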