NVDA CC: TensorRT 3 inference acceleration, 100X network speedup
The third segment, and this is the segment that you just mentioned, has to do with inference, which is what happens when you're done developing a network and you deploy it to hyperscale datacenters to support the billions and billions of queries that consumers make to the Internet every day. And this is a brand-new market for us: 100% of the world's inference is done on CPUs today. We announced very recently, this last quarter in fact, the TensorRT 3 inference acceleration platform, and in combination with our Tensor Core GPU instruction set architecture, we are able to speed up networks by a factor of 100.
Now, the way to think about that is: imagine whatever amount of inference workload you have today, and if you can speed it up by a factor of 100 using our platform, think about how much you could save.
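The savings argument above can be sketched as simple arithmetic. A minimal illustration, using purely hypothetical server counts and costs (only the 100x speedup factor comes from the remarks; everything else is an assumption for illustration):

```python
# Back-of-the-envelope savings from a 100x inference speedup.
# All dollar figures and server counts below are hypothetical assumptions,
# not numbers from the call; only the speedup factor (100x) is quoted.
cpu_servers = 1000        # assumed number of CPU servers serving the inference load
speedup = 100             # claimed speedup from TensorRT 3 + Tensor Core GPUs
cost_per_server = 5000    # assumed annual cost per server, in USD

# With a 100x speedup, the same workload needs ~1/100th of the servers.
accelerated_servers = -(-cpu_servers // speedup)  # ceiling division
annual_savings = (cpu_servers - accelerated_servers) * cost_per_server

print(accelerated_servers)  # 10
print(annual_savings)       # 4950000
```

The point of the sketch is the scaling: the datacenter footprint, and hence the cost, shrinks roughly in proportion to the speedup factor.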