Intel’s EyeQ5 vs. Nvidia’s Xavier: Wrong Debate
12/6/2017 00:01 AM EST
MADISON, Wis. — Does comparing Intel’s EyeQ5 with Nvidia’s Xavier make sense? That is the question.
Nvidia and Intel are engaged in a specsmanship battle over AI chips for autonomous vehicles that reached a new high — or, more accurately, a new low — when Intel CEO Brian Krzanich recently spoke at an auto show in Los Angeles. Krzanich claimed that EyeQ5 — designed by Intel subsidiary Mobileye — “can deliver more than twice the deep-learning performance efficiency” of Nvidia’s Xavier SoC.
At AutoMobility LA, Intel claimed that the Mobileye EyeQ5 SoC delivers 2.4 TOPS per watt — 2.4 times the deep-learning performance efficiency of Nvidia’s Xavier. (Source: Intel)
After the Intel CEO’s keynote, Danny Shapiro, Nvidia's senior director of automotive, called EE Times from L.A. and cried foul.
First, Shapiro explained that comparing the two chips with two different rollout dates on two different process nodes (Xavier on 16nm vs. EyeQ5 on 7nm) isn’t kosher.
According to Shapiro, Xavier, which is “already in bring up now,” will be in volume production in 2019. Intel, in contrast, said that EyeQ5 is “sampling in 2018, production/volume in 2020, and first customer car in 2021 (BMW iNext).”
Second, Shapiro pointed out that the 30 watts of power consumption at 30 trillion operations per second (TOPS) that Intel quoted for Nvidia’s Drive PX Xavier is “for the entire system, CPU, GPU and memory, as opposed to just deep learning cores as in the EyeQ 5.”
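Shapiro’s second objection is easy to see in a quick back-of-envelope calculation. The sketch below uses only the figures quoted above — Intel’s 2.4 TOPS/W claim for EyeQ5 and the 30 TOPS at 30 W quoted for Xavier — not any independently measured data:

```python
# Back-of-envelope check of the "2.4x" headline, using only the figures quoted
# in the article. The point of contention is the denominator: Intel's 2.4 TOPS/W
# covers EyeQ5's deep-learning cores alone, while Xavier's 30 TOPS at 30 W is a
# whole-system figure (CPU, GPU and memory).

def tops_per_watt(tops: float, watts: float) -> float:
    """Performance efficiency: trillion operations per second per watt."""
    return tops / watts

xavier_system = tops_per_watt(30.0, 30.0)  # 1.0 TOPS/W, whole Drive PX Xavier system
eyeq5_dl_cores = 2.4                       # Intel's quoted figure, deep-learning cores only

print(eyeq5_dl_cores / xavier_system)      # 2.4 -- Intel's claimed efficiency advantage
```

Dividing a cores-only efficiency by a system-level one does reproduce the 2.4x figure — which is precisely the apples-to-oranges comparison Nvidia objects to.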
Out of line
So, was it out of line for Intel to compare EyeQ5 to Xavier?
“Of course it was,” said Jim McGregor, founder and principal analyst at Tirias Research. But he sees an even bigger issue: in autonomous vehicle solutions, “nobody is comparing a platform to a platform today.”
Indeed, comparing the specs of the two SoCs alone seems almost silly without discussing what other chips — beyond the SoCs themselves — are needed to complete a Level 4 or Level 5 autonomous vehicle platform.
Much more at eetimes.com.