
   Technology Stocks : Artificial Intelligence, Robotics, Chat bots - ChatGPT


From: Frank Sully 12/28/2020 1:11:49 PM
   of 4988
 
Google ASIC: Tensor Processing Units: History and hardware




From: Frank Sully 12/28/2020 1:26:30 PM
 
What are CPUs, GPUs and TPUs? Understanding these 3 processing units using artificial neural networks.




From: Frank Sully 12/28/2020 2:32:35 PM
 
Nvidia Jetson System-on-Module (SoM), with CPU, GPU, power management, RAM and flash storage on a single board, powers a weed-eating ag robot

zdnet.com



From: Frank Sully 12/28/2020 2:45:50 PM
 
New AI chips top key benchmark tests: Nvidia

November 6, 2019



NVIDIA logo shown at SIGGRAPH 2017
(Reuters) - Nvidia Corp said on Wednesday its chips used to power artificial intelligence topped a series of benchmark tests that measure a key performance metric in the industry.

The results are a boost to the company's efforts to make headway into one of the biggest growth areas in the field called inference, which is the process of using an AI algorithm for tasks such as translating audio into text-based requests.

Intel Corp's processors currently dominate the market for machine learning inference, which analysts at Morningstar estimate to be worth $11.8 billion by 2021.

However, Nvidia dominates the AI training chip market, where huge amounts of data help algorithms "learn" a task such as how to recognize a human voice.
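The training/inference split the article draws can be sketched with a toy model. This is purely illustrative, plain Python with made-up data, and has nothing to do with Nvidia's actual hardware or software stack:

```python
# Training vs. inference, shown with a one-weight linear model.
# Training repeatedly adjusts the weight (compute-heavy, done once);
# inference is a single forward pass with the learned, now-fixed weight.

def train(samples, lr=0.01, epochs=200):
    """Training: iteratively reduce squared error via gradient steps."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x
            w -= lr * (pred - y) * x   # gradient step for squared error
    return w

def infer(w, x):
    """Inference: apply the trained weight to new data."""
    return w * x

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # toy pairs following y = 2x
w = train(data)
print(round(infer(w, 5.0), 2))   # learned w converges to ~2.0
```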

The company said its inference chips have now achieved the fastest results on five benchmarks in a new independent suite of AI tests, called MLPerf Inference 0.5.

The company on Wednesday also unveiled Jetson Xavier NX, a new credit-card-sized module designed for AI computing.

The chip can be used in devices, including small commercial robots, drones and portable medical devices, that require more computing power without sacrificing size or weight, Nvidia said.

Priced at $399, the new chip will be available in March.

(Reporting by Munsif Vengattil and Stephen Nellis)



From: Frank Sully 12/29/2020 5:40:31 PM
 
AI chipmaker Graphcore raises $222m as it takes on Nvidia

Latest funding round values four-year-old UK start-up at $2.5bn
Co-founders Simon Knowles, left, and Nigel Toon at Graphcore’s office in Bristol © Gareth Iwan Jones/FT

ft.com

Graphcore, the UK-based maker of artificial intelligence chips, has raised $222m in new funding as it braces itself for tougher competition from US rival Nvidia.

The latest round values Graphcore at $2.5bn (without including the new capital raised), up from $1.5bn two years ago, making it one of the UK’s most valuable private tech companies.

Graphcore’s chips — called “intelligence processing units” — are tailored to AI’s specialised data processing requirements. Its IPUs are sold, often through partners such as Microsoft, Dell and Atos, to researchers at Imperial College London and the University of Oxford, as well as financial, healthcare and telecoms institutions.

Nigel Toon, Graphcore’s co-founder and chief executive, said a stronger balance sheet would build confidence among customers and partners after revenue growth was hit by customer purchasing delays this year.

Researchers at Facebook and Google have published papers comparing the performance of IPUs favourably with the graphics processing units most commonly used in AI today. Nvidia has become the leader in GPUs and overtook Intel to become the most valuable US chipmaker this year.

Mr Toon slammed Nvidia’s planned $40bn acquisition of UK-based chip designer Arm from SoftBank as “bad for competition”, “bad for the market overall” and “bad for Britain”.

Chipmakers such as Graphcore would be less willing to work with Arm knowing that it was owned by a competitor, he said, limiting access to its “world-class” processor designs.

Graphcore’s latest financing is led by Ontario Teachers’ Pension Plan Board, along with funds managed by Fidelity International and Schroders, all of which are new investors in the Bristol-based company. Some existing investors, including Draper Esprit and Baillie Gifford, also participated.

The new funding comes less than a year after Graphcore closed a $150m extension to its last round and leaves the four-year-old company with about $440m in cash.

“We don’t need to raise any more money anytime soon,” Mr Toon said. “The next step for us probably would be an IPO [initial public offering] of the business at some point, when things are much more predictable in our business.”

There would be no stock market listing in 2021, he added; the year would instead be focused on “growing the company”. Despite disruption from the coronavirus pandemic, Graphcore launched its second-generation IPU processor on schedule this year, Mr Toon said, with volume production beginning in the fourth quarter.

Graphcore invested $41.8m in research and development in 2019 and since then its headcount has increased further to 450 people. Its annual report for 2019 shows a pre-tax loss of $95.9m, up from $60.3m in 2018, with revenues of $10.1m.

Mr Toon said sales growth this year had fallen short of expectations. “Revenues have not been at the level we had hoped because things have taken longer to come to fruition and some projects have been delayed,” he said.

But Graphcore has built a “phenomenal” pipeline of new customers, he added, which helped win over the new investors. “We’re very hopeful that 2021 is going to be a very strong revenue year.”



From: Frank Sully 12/29/2020 6:06:40 PM
 
Hyundai Commits to NVIDIA AI Technology

designnews.com



From: Frank Sully 12/29/2020 7:05:36 PM
 
Chip Giants Intel and Nvidia Face New Threats From Amazon to Google to Apple

Custom chip making has exploded at large tech companies, driven by expanding balance sheets and the growth of AI



Google was an early mover among the tech giants in chip making.

Photo: Monica M Davey/Shutterstock

By Asa Fitch
Dec. 20, 2020 5:30 am ET

The world’s largest semiconductor companies face a growing competitive threat: their biggest customers making their own chips tailored to the supercharged areas of cloud-computing and artificial intelligence.

Chip making has long been ruled by big manufacturers and design houses such as Intel Corp., Advanced Micro Devices Inc. and graphics-chip maker Nvidia Corp. Now Amazon.com Inc., Microsoft Corp. and Google are getting into the game in the hunt for improved performance and lower costs, shifting the balance of power in the industry and pushing traditional chip makers to respond by building more specialized chips for major customers.

Amazon this month unveiled a new chip that, it says, promises to speed up how algorithms that use artificial intelligence learn from data. The company has already designed other processors for its cloud-computing arm, called Amazon Web Services, including the brains of computers known as central processing units.

The pandemic has accelerated the rise of cloud computing as companies broadly have embraced digital tools that run on those remote servers. Amazon, Microsoft, Google and others have enjoyed strong growth in the cloud during the remote-work period.

Business customers also are showing an increased appetite for analyzing the data they gather on their products and customers, fueling demand for artificial intelligence tools to make sense of all that information.

Google was an early mover among the tech giants, releasing an AI processor in 2016, and has updated the hardware several times since. Software giant Microsoft, the No. 2 in the cloud behind Amazon, has also invested in chip designs, including a programmable chip to handle AI and another that enhances security. It’s now also working on a central processor, according to a person familiar with its plans. Bloomberg News previously reported Microsoft’s CPU effort.

Driving the tech giants’ moves are changes in how the semiconductor world operates and a growing sense that Moore’s Law—the sector’s fundamental assumption about the steady improvement in chip performance—is losing relevance. As a result, companies are searching for new ways to eke out better performance, not always measured in speed, but sometimes lower power consumption or heat generation.

“Moore’s Law has been around for 55 years, and this is the first time it’s slowed down very materially,” said Partha Ranganathan, a vice president and engineering fellow in Google’s cloud unit, which has been pursuing specialized chips.

The sheer size of the cloud giants presents a challenge for traditional chip producers. In the past, the semiconductor makers tended to design their high-performance semiconductors for generic applications, leaving it to customers to adapt and get the most out of the chips. Now the biggest customers have the financial muscle to push for more optimized designs.

“Whereas Intel in the 1990s was an order of magnitude larger than all their customers, now the customer has superior scale over the supplier,” said James Wang, an analyst at New York money-manager ARK Investment Management. “As a result, they have more capital and more expertise to take components in-house.”

Nvidia, now the largest U.S. chip maker by market cap, has a value of $330 billion, and Intel is at $207 billion. The cloud behemoths, Amazon, Microsoft and Google-parent Alphabet Inc., each top $1 trillion in market valuation.

The bespoke efforts are partly made possible by the rise of contract chip makers, which make semiconductors designed by other companies. This arrangement helps tech giants avoid the multibillion-dollar cost of building their own chip factories. Taiwan Semiconductor Manufacturing Co., in particular, has jumped to the forefront of chip production technology.

The changes have benefited chip-design firm Arm Holdings Ltd., which sells circuit designs that anyone can use after paying a licensing fee. Apple is a big Arm customer, as are all of the big tech companies that make their own chips.

Amazon, Google and Microsoft each are estimated to operate millions of servers in globe-spanning networks of data centers for their own use and to rent out to their millions of cloud-computing customers. Even small improvements in performance and minute reductions in the cost of powering and cooling chips become worth the effort when spread across those vast technology empires. Facebook Inc. also has explored working on its own chips.
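The scale argument can be made concrete with rough arithmetic. Every number below is an assumption for illustration only, none comes from the article: a few watts saved per server, a 2x cooling overhead, about a million servers, and an assumed industrial electricity rate.

```python
# Why "minute reductions" matter at cloud scale: a small per-server
# power saving, multiplied across an entire fleet for a year.

watts_saved_per_server = 5        # assumed per-chip saving
cooling_overhead = 2.0            # assumed PUE-style multiplier for cooling
servers = 1_000_000               # assumed fleet size
hours_per_year = 24 * 365
price_per_kwh = 0.07              # assumed industrial rate, USD

kwh_saved = (watts_saved_per_server * cooling_overhead
             * servers * hours_per_year) / 1000
annual_savings = kwh_saved * price_per_kwh
print(round(annual_savings))      # roughly $6M per year from 5 W per server
```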

David Brown, a vice president at AWS, said making its own processors was an obvious choice for Amazon given the performance gains it could achieve by dropping compatibility with older software and other standard features of Intel chips that big data center operators don’t need.

“We were able to build a [chip] that’s optimized for the cloud, so we were able to remove a lot of stuff that’s just not needed,” he said. Amazon’s chip-making efforts took off largely with its acquisition of an Israeli company called Annapurna Labs about five years ago.

Custom chips are gaining favor also in consumer products. Apple this year started using its own processors in Macs after 15 years of sourcing them from Intel. Google has incorporated its AI chip in its Pixel smartphones.

So far, the lost business for traditional chip makers has been modest, said Linley Gwennap, a chip-industry analyst. The market share of all custom-made, Arm-based central processors is less than 1%, he said. Google’s AI chips are by far the highest-volume tech-company-designed processors, he said, comprising at least 10% of all AI chips. Intel still supplies the vast majority of CPUs that go into data centers.

The incumbents also aren’t sitting idle in the race for cloud and AI chip supremacy. Nvidia this year agreed to buy Arm in what would be the chip industry’s biggest acquisition. And Ian Buck, who oversees Nvidia’s data center business, said the company is working closely with its largest customers to optimize the use of its chips in their hardware setups.

Intel said around 60% of its server central processors sold to large data center operators are customized to customers’ needs, often by switching off features of the chip that they don’t need.

And Intel has invested in AI processors and other specialized hardware of its own, including buying Israel-based Habana Labs last year for about $2 billion. AWS recently agreed to put Habana’s AI training chips in its data centers, as Amazon develops its rival chips that it thinks will perform better when they appear next year.

Remi El-Ouazzane, chief strategy officer for Intel’s Data Platforms Group, said Habana’s chips at AWS could challenge Nvidia, long the dominant player in an AI training market that he said would be worth more than $25 billion by 2024. “It’s a big net new opportunity for Intel, and it’s a very large market,” he said.

—For more WSJ Technology analysis, reviews, advice and headlines, sign up for our weekly newsletter.

Write to Asa Fitch at asa.fitch@wsj.com

Copyright ©2020 Dow Jones & Company, Inc. All Rights Reserved.

Appeared in the December 21, 2020, print edition as 'Big-Tech Challenge Jolts Chip Industry.'



From: Frank Sully 12/29/2020 7:24:20 PM
 
Do These 5 AI Chip Startups Pose a Threat to Nvidia?

They say that a parent who claims to love all of her children equally is either a liar or doesn’t know her rug rats very well. While we can truthfully say we’re equally annoyed by all kids, we can relate to the above sentiment when it comes to some stocks. In other words, we have our favorites. One of the star performers highlighted in the Nanalyze Disruptive Tech Portfolio is Nvidia (NVDA), the first AI stock we ever covered back in 2017. The company turned its hardware, originally designed for computing-intensive gaming software, into the premier AI chips for most applications in artificial intelligence. Can it retain that dominance with the rise of well-funded AI chip startups?

AI Chip Market Heats Up The AI chip market is perhaps one of the most dynamic and competitive industries going. In fact, we recently wrote about how AI chips are rapidly changing the semiconductor industry. (It’s a good read for those who don’t know an AI chip from a potato chip.) While we still believe that Nvidia and its graphic processing unit (GPU) technology is a good way to invest in AI chips, as investors we want to keep tabs on threats to the bottom line.

There are the obvious competitors like Intel (INTC), which has been making big investments in AI chips, autonomy, and chip design. Google (GOOG) introduced its own AI chips just a few years ago, while Apple (AAPL) is designing AI chips for the company’s iconic smartphones. Chinese AI chip startups have made little attempt to hide their ambitions to unseat Nvidia, though like many upstarts, they mostly focus on AI edge computing applications. These chips are designed to provide enough computing power in IoT devices like smart video cameras or in smartphones without the need for remote (i.e., cloud) servers to do all of the heavy thinking.

Nvidia has returned more than +2,400% over the last five years compared to about +113% from one of the leading tech index funds and a measly +30% for Intel. Credit: Yahoo! Finance

Nvidia’s announcement in September that it would acquire UK-based Arm Holdings, a semiconductor design company, for $40 billion touted the merger as “creating the world’s premier computing company for the age of AI.” The acquisition buys Nvidia a leader in chip design for the mobile space. Arm has also been rapidly expanding its AI-specific hardware offerings, including a miniaturized neural processing unit (NPU) that brings to edge devices capabilities normally found in the data centers where Nvidia has dominated. In fact, nearly 70% of the world’s fastest supercomputers now incorporate Nvidia hardware.

Credit: McKinsey & Co.

Souped-up data centers and edge computing are where McKinsey & Company predicts most of the growth in AI-related semiconductors will come over the next five years. The research firm projects annual growth of about 18% between now and 2025, when AI chips could account for almost 20% of all demand, as shown above. That translates into about $67 billion in revenue. For reference, Nvidia, the market-dominant player, recorded more than $11.7 billion in revenue in 2019 – and half of all sales still come from gaming.
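The quoted McKinsey figures can be sanity-checked with compound-growth arithmetic. The ~$29 billion 2020 base below is implied by the two published numbers (18% annual growth, ~$67 billion by 2025); it is our back-calculation, not a figure stated in the article.

```python
# Sanity check: does ~18%/year growth land near $67B in 2025,
# and what starting value does that imply for 2020?

def compound(start, rate, years):
    """Project revenue forward at a constant annual growth rate."""
    return start * (1 + rate) ** years

start_2020_bn = 67 / 1.18 ** 5                 # implied 2020 base, ~$29B
proj_2025_bn = compound(start_2020_bn, 0.18, 5)
print(round(start_2020_bn, 1), round(proj_2025_bn, 1))
```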

AI Chips Startups for Data Centers All that means there’s a lot of market share to grab in the next few years. Not only must Nvidia contend with a host of billion-dollar public companies, but plenty of AI chip startups have emerged in recent years. In this article, we want to take a look at five of the most highly touted private companies poised to commercialize their technology in the data center market. While we’ve covered all but one of these startups in previous articles, most were in stealth mode, have raised substantial funding since our last coverage, or are simply making headlines with technological breakthroughs.

World’s Most Well-Funded AI Chip Startups Two of the startups on our list have raised more than $900 million between the two of them, with a combined value of more than $4 billion, though UK-based Graphcore and Silicon Valley-based SambaNova take very different approaches to AI chip solutions.



Founded in 2016, Graphcore has raised $460 million from more than two dozen investors, with a valuation of $1.95 billion. The company took in $150 million in a Series D back in February and there are rumors just this month that it’s looking to add another $200 million to the war chest. Graphcore has created an AI chip it calls an intelligence processing unit (IPU) that, as we explained before, sacrifices a certain amount of number-crunching precision to allow the machine to tackle more math more quickly with less energy. This year it threw down the gauntlet to Nvidia when it released its latest IPU, the Colossus MK2, and packaged four of them into a machine called the IPU-M2000:

Credit: Graphcore

About the size of a DVD player (do they still sell those?), the IPU-M2000 packs one petaflop of computing power. One petaflop is a quadrillion calculations per second; the world’s fastest supercomputer, Japan’s Fugaku, is rated at a world-record 442 petaflops (using Arm’s chip architecture, incidentally). Fortune reported earlier this year how Graphcore could compete with Nvidia in the supercomputer chip race. In a test using a state-of-the-art image classification benchmark, eight of Graphcore’s new IPU-M2000s clustered together could train an algorithm at a cost of $259,000, compared to $3 million for 16 of Nvidia’s DGX clusters, each of which contains eight of the company’s top-of-the-line chips.
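The arithmetic behind that comparison, using only the figures quoted above (prices and petaflop ratings are taken from the article as-is):

```python
# Cost ratio of the two training setups in the Fortune benchmark,
# plus the scale gap between one IPU-M2000 and Fugaku.

PFLOP = 10 ** 15                  # one petaflop = 1e15 calculations/second

graphcore_cost = 259_000          # 8 x IPU-M2000, per the article
nvidia_cost = 3_000_000           # 16 x Nvidia DGX clusters, per the article

cost_ratio = nvidia_cost / graphcore_cost
fugaku_vs_one_m2000 = 442 / 1     # Fugaku (442 PFLOPs) vs. one 1-PFLOP box

print(round(cost_ratio, 1))       # the Nvidia setup cost ~11.6x more
```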



Founded in 2017, SambaNova has raised about $465 million, including a $250 million Series C in February that catapulted the company to a valuation of $2.5 billion. It has attracted funding from some big names like Google, investment management company BlackRock (BLK), and Intel, among others. SambaNova has taken what’s been called a hybrid approach to AI chip design by giving equal emphasis to both the hardware and software. An article last month in The Next Platform, which specializes in coverage of high-end computing like supercomputers and data centers, reported on how SambaNova is generating interest from national laboratories that host supercomputers. Specifically, the startup’s latest DataScale computers were hooked up to the Corona supercomputer and performed five times better than Nvidia’s V100 GPUs.

World’s Biggest AI Chip

Another AI chip startup, Cerebras Systems, has also made its way into the nation’s supercomputer laboratories. Founded in 2016, the Silicon Valley-based company has raised $112 million from a handful of well-known Silicon Valley/San Francisco venture capital firms, including Sequoia Capital, Benchmark, and Foundation Capital. The company only emerged from stealth mode last year, boasting the world’s largest chip, the Wafer Scale Engine (WSE), which is 56x the size of the largest GPUs on the market:

Credit: Cerebras Systems

We’re finally getting a look at why Cerebras thinks bigger is better. Just one WSE powers the company’s new CS-1 computer, which is designed to accelerate deep learning in the data center. The hardware used today to train neural networks to do things like drive autonomously takes weeks, if not months, and tons of power. The company claimed in an article by IEEE Spectrum that just one CS-1 computer could easily outmuscle a cluster of Google AI computers that consumes five times as much power, takes up 30 times as much space, and delivers just one-third of the performance. More recently, Cerebras demonstrated its AI brawn when the CS-1 beat the world’s 69th fastest supercomputer in a simulation of combustion in a coal-fired power plant. In addition to national laboratories, the company’s biggest known customer is the drugmaker GlaxoSmithKline (GSK).

AI Chips for Fast Inference While we’ve mostly been talking about AI chip applications for training and simulation, startups like Groq out of Silicon Valley and Tenstorrent from Toronto are working to accelerate the actual execution of AI tasks by neural networks, known as inference. This is when an AI-trained system makes predictions from new data.



Founded in 2016, Groq has raised about $62 million in disclosed funding, though it remained mum on how much money it took in from its latest fundraising back in August. The startup’s founder, Jonathan Ross, was the brains behind Google’s in-house AI chip, the Tensor Processing Unit (TPU). The Groq (a play on the sci-fi author Robert Heinlein’s term grok, which means to understand intuitively) Tensor Streaming Processor (TSP) reputedly more than doubles the performance of today’s GPU-based systems.

We grok the new Groq AI chip. Credit: Groq

How? Well, we’re MBAs and not semiconductor manufacturers, but basically, the company has redesigned the chip architecture. Instead of creating a small programmable core and replicating it dozens or hundreds of times, the TSP houses a single enormous processor with hundreds of functional units. And, like SambaNova, the software is integral to creating the sort of efficiencies in computing power that could make Nvidia investors a wee bit nervous. The company started shipping its Groq node, capable of six petaflops per machine, in September, targeting cloud computing for industries like autonomous driving and fintech.



Yet another AI chip startup founded in 2016, Tenstorrent has raised $34.5 million. The company has dubbed its AI chip Grayskull, and the design mirrors those of others in our list by doing things like drastically increasing the amount of on-chip memory versus Nvidia, which relies on fast off-chip memory. Another strategy that Tenstorrent employs – and sounds similar to Graphcore’s, at least in principle – is to build a chip that works like the human brain in that it can draw quick conclusions without processing massive amounts of data. (You just have to look around you to see how well that works for humans.) Anyway, the idea is to compress or eliminate extraneous data that would otherwise take more energy to process. An analysis by the Linley Group, for example, found that Grayskull uses only 75 watts of power to perform 368 trillion operations per second against about 300 watts consumed by Nvidia for the same performance.
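The efficiency claim works out as follows, using only the figures quoted from the Linley Group analysis above:

```python
# Performance per watt for Grayskull vs. an Nvidia part delivering
# the same throughput, per the quoted Linley Group figures.

grayskull_tops, grayskull_watts = 368, 75    # 368 TOPS at 75 W
nvidia_tops, nvidia_watts = 368, 300         # same TOPS at ~300 W

grayskull_eff = grayskull_tops / grayskull_watts   # ~4.9 TOPS/W
nvidia_eff = nvidia_tops / nvidia_watts            # ~1.2 TOPS/W

print(round(grayskull_eff / nvidia_eff, 1))        # ~4x the efficiency
```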

Conclusion There is no doubt that Nvidia is the leading AI chipmaker on the planet today, and that dynamic isn’t likely to change anytime soon. However, the five startups featured here are all starting to commercialize their hardware solutions after three or four years of development. They promise better performance, using less power and space, and at reduced cost. Several also claim their AI chip design can scale from data center down to edge devices, meaning less R&D to introduce new applications.

Nvidia has been criticized at times for lagging in innovation, and its hardware largely remains expensive and power hungry. Nvidia’s pending acquisition of Arm is probably more important to the company’s long-term success as a traditional semiconductor manufacturer than its supremacy in AI chips. Investors would like to see Nvidia maintain its high-growth potential, and that could mean adopting (i.e., acquiring) some of the AI chip innovations being championed by these five nimble startups.



From: Frank Sully 12/29/2020 9:03:15 PM
 
“Boston Dynamics Will Continue to Be Boston Dynamics,” Company Says

spectrum.ieee.org



From: Frank Sully 12/29/2020 9:12:11 PM
 
AlphaFold Proves That AI Can Crack Fundamental Scientific Problems

spectrum.ieee.org
