Nvidia Corp. is bringing artificial intelligence to the edge of the network with the launch early Monday of its new Nvidia EGX platform that can perceive, understand and act on data in real time without sending it to the cloud or a data center first.
Delivering AI to edge devices such as smartphones, sensors and factory machines is the next step in the technology’s evolution. The earliest AI algorithms were so complex that they could be processed only on powerful machines running in cloud data centers, which meant sending lots of information across the network. That approach is undesirable because it consumes lots of bandwidth and adds latency, making “real-time” AI something less than that.
What companies really want is for AI to be performed where the data itself is created, be it at manufacturing facilities, retail stores or warehouses. It’s a problem that several tech firms have attempted to address, most recently Intel Corp. with today’s launch of its first 10-nanometer “Ice Lake” chips, but also dozens of startups.
But Nvidia’s entry into edge AI is notable because the company’s graphics processing units are widely regarded as some of the best AI-processing hardware around. They include its Tesla V100 for deep learning and its Quadro GV100, which enables ray tracing, the process of rendering realistic images, to be performed in real time.
The new Nvidia EGX platform scales from a light server based on the Jetson Nano processor, which performs 0.5 trillion operations per second on just a few watts, up to a micro data center with a rack of Nvidia T4-based edge servers that can deliver 10,000 trillion operations per second. That energy efficiency is important for AI, since traditional hardware is a massive power hog when running such workloads.
In a media briefing, Justin Boitano, senior director of enterprise and edge computing at Nvidia, said there will be huge demand for a platform such as EGX because there will be something like 150 billion machine sensors and “internet of things” devices in the world by 2025. He said many of these sensors will be used for initiatives such as “smart cities” and will pump out data that needs to be processed onsite, for reasons such as demand for lower latency, real-time response, data sovereignty rules or privacy concerns.
“AI is really the killer application in all industries both in vision and in speech,” Boitano said.
Partnerships are important as well if people are actually going to put those chips to good use. For that reason, Nvidia is integrating the Nvidia Edge Stack software that runs on EGX with Red Hat Inc.’s OpenShift Kubernetes container orchestration platform to make it compatible with modern software applications.
The platform also integrates security, storage and networking technologies from Mellanox Technologies Ltd., a company that Nvidia intends to acquire by the end of the year for a cool $6.9 billion.
“Mellanox Smart NICs and switches provide the ideal I/O connectivity for data access that scale from the edge to hyperscale data centers,” said Mellanox Chief Technology Officer Michael Kagan.
Nvidia is teaming up with no fewer than 13 server makers to sell the EGX platform, including big-name manufacturers such as Cisco Systems Inc., Dell EMC, Hewlett Packard Enterprise Co. and Lenovo Group Ltd.
EGX is also compatible with AI applications running on major cloud infrastructure services such as Amazon Web Services and Microsoft Azure, and it can connect to IoT services such as AWS IoT Greengrass and Azure IoT Edge.