|Watch Out, Intel. New Types of Chips Are Gaining Ground|
Revolution in chip design may upend old guard, including Intel and Nvidia.
By Tiernan Ray - July 1, 2017 2:05 a.m. ET
The chip revolution starts now. Today’s general-purpose computer chips are losing ground to domain-specific chips—customized parts dedicated to more specific tasks. These chips are tailored to the needs of mobile devices, servers running machine-learning tasks in artificial intelligence, and the vast constellation of connected devices known as the Internet of Things.
The implications for Intel (ticker: INTC) and Nvidia (NVDA) and other established chip vendors are stark. Companies that were never involved in semiconductors, such as Alphabet’s (GOOGL) Google, can become their own chip houses. A whole new wave of chip startups can be funded with less money, bringing fresh competition.
Leading the charge is David Patterson, a computer scientist at the University of California, Berkeley. Starting in the 1970s, Patterson proposed a simplified vocabulary for programmers to control chips that would be more efficient than the verbose set of controls Intel offered. Industry embraced Patterson’s “reduced-instruction set computer,” or RISC, as it came to be known.
The personal computing era was dominated by Intel’s microprocessors, but that changed with Apple’s (AAPL) iPhone, which went on sale 10 years ago last week. The chips that run the iPhone, and other RISC-based chips like it, use technology from ARM Holdings. ARM, owned by Japan’s SoftBank Group (9984.Japan), was more aggressive in embracing Patterson’s RISC innovations than was Intel. ARM-based parts sell in the billions every year, versus Intel’s market for PC and server chips in the hundreds of millions.
Patterson sees an equal if not greater challenge coming to Intel, Nvidia, and even ARM, prompted by the crumbling of Moore’s Law. Formulated by Intel co-founder Gordon Moore in 1965, Moore’s Law says that the number of transistors on a chip doubles every 18 to 24 months, powering ever-faster, ever-cheaper computers. But Patterson says plainly that Moore’s Law is dead, finished, kaput. “If I look at the latest generation of microprocessors, this year, performance only went up by 3%,” he told Barron’s. At that rate, it will take two decades for chips to double in performance.
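Patterson’s two-decade figure follows from simple compounding: at 3% annual improvement, performance doubles once 1.03 raised to the number of years reaches 2. A quick back-of-the-envelope check (illustrative, not from the article):

```python
import math

# At 3% annual performance improvement, the doubling time n
# satisfies 1.03 ** n == 2, so n = log(2) / log(1.03).
annual_gain = 0.03
doubling_years = math.log(2) / math.log(1 + annual_gain)
print(round(doubling_years, 1))  # about 23 years, i.e. roughly two decades
```

Compare that with the 18-to-24-month doubling cadence Moore described in 1965.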
The next big reduction in transistor size, to 7 billionths of a meter, or 7 nanometers, “won’t make general-purpose microprocessors that much faster,” says Patterson. Moreover, costs are skyrocketing to eke out meager gains. Data from Gartner show the upfront cost to develop a 7-nanometer chip is $271 million, up from $30 million a couple of generations ago.
The solution, as Henry David Thoreau once wrote, is to “simplify, simplify.” Last week, researchers at Google presented a paper co-authored by Patterson at an academic conference, describing a novel chip called the Tensor Processing Unit, or TPU. Developed by Google, the TPU vastly outperformed comparable chips from Intel and Nvidia for tasks like machine learning.
While Intel’s microprocessor is broadly useful, running everything from scientific computing to spreadsheets, the TPU focuses on a specific problem, such as speech recognition, so its resources go where they count. It has 3.5 times as much memory as a comparable Intel part on a chip half the size. “We threw out a lot of stuff that was not needed,” says Patterson, who serves as distinguished engineer at Google in its Google Brain unit, which focuses on machine learning. “Instead of the Honda for everyone, we are making these Formula One race cars for some things.”
Moreover, the TPU went from sketch to finished chip in just 15 months, he says, whereas the latest Intel processors take years to develop. “We are at a paradigm shift in computing architecture,” he says, and some longtime observers agree. “This is a big revolution in terms of the technology approach,” says Linley Gwennap, editor with chip newsletter Microprocessor Report, referring to domain-specific chips. “Intel is working for two years to squeeze out 10% improvements in performance, and this can get you 10 times the performance,” while being less expensive than Intel’s most complex parts, he says.
To enable the revolution, Patterson and others have created what is now the fifth version of RISC, an instruction set that is open-source, meaning anyone can modify it, just like the freely available Linux operating system. As with Linux, anyone who grabs the specification can make designs tailored to a problem. That promotes rapid development and improvement, versus the monolithic, years-long process behind Intel’s generic chips. “RISC-V shows things can be done by smaller teams much more cheaply,” says Patterson.
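What an open instruction set means in practice is that the published bit layouts are free for anyone to build tools against. As a minimal sketch, assuming only the publicly documented RISC-V base format, here is how the I-type `addi` instruction packs its fields into a fixed 32-bit word:

```python
def encode_addi(rd: int, rs1: int, imm: int) -> int:
    """Encode a RISC-V I-type ADDI instruction (opcode 0x13, funct3 0).

    Field layout, per the public RISC-V base ISA specification:
    imm[31:20] | rs1[19:15] | funct3[14:12] | rd[11:7] | opcode[6:0]
    """
    return ((imm & 0xFFF) << 20) | ((rs1 & 0x1F) << 15) \
        | (0 << 12) | ((rd & 0x1F) << 7) | 0x13

# addi x1, x0, 5  (load the constant 5 into register x1)
print(hex(encode_addi(1, 0, 5)))  # 0x500093
```

Every base instruction is the same fixed 32-bit width, which is part of what keeps RISC decoders small and lets tiny teams build working toolchains quickly.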
ONE OF THOSE STARTUPS is San Francisco–based SiFive, founded by Berkeley alums who built RISC-V, and for whom Patterson is a technical advisor. Using RISC-V, SiFive aims to be the “Amazon of chip development,” says Jack Kang, head of business development, likening it to Amazon’s Web Services cloud computing operation. SiFive uses the open collaboration of RISC-V to automate the design of chips. A company can use SiFive’s automation service to obtain a part at 10% to 20% of the cost it would normally take.
For now, Patterson’s vision faces plenty of skeptics. Some doubt the economic benefits of RISC-V; others argue the narrower focus of domain-specific chips makes them a niche. Having propelled one major revolution, Patterson is undaunted. The death of Moore’s Law means domain-specific chips are not a philosophical stance but a necessity. “We have no other way to build a more energy-efficient processor,” he says.
The market will decide. “It’s not like you’re debating how many angels can dance on the head of a pin,” says Patterson. “We will know in the next five years because the markets are going to tell us who wins.”
TIERNAN RAY can be reached at: firstname.lastname@example.org