
Technology Stocks : NVIDIA Corporation (NVDA)


From: Frank Sully 7/18/2021 8:39:44 PM
   of 2319
 
Data Center Accelerator Market by Processor Type (CPU, GPU, FPGA, ASIC), Type (HPC Accelerator, Cloud Accelerator), Application (Deep Learning Training, Public Cloud Interface, Enterprise Interface), and Geography - Global Forecast to 2026

The global data center accelerator market size is projected to grow from USD 13.7 billion in 2021 to USD 65.3 billion by 2026; it is expected to grow at a CAGR of 36.7% from 2021 to 2026. Factors such as the growing demand for deep learning and the surge in demand for cloud-based services are driving market growth during the forecast period.
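
As a sanity check, the headline CAGR follows directly from the two endpoint figures; a minimal Python sketch (the report's own numbers, rounding ours):

    # Verify the report's CAGR from its 2021 and 2026 market-size figures.
    start, end, years = 13.7, 65.3, 5  # USD billions, 2021 -> 2026
    cagr = (end / start) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.1%}")  # ~36.7%, matching the report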

Driver: Growth of cloud-based services

Deep learning services made available over the cloud are reducing the initial costs associated with executing business operations and curtailing server maintenance tasks. A growing number of tech giants and startups have begun offering machine learning as a cloud service due to the burgeoning demand for AI-based computation. Most companies and startups do not develop their own specialized hardware or software to apply deep learning to their specific business needs. Cloud-based solutions are ideal for small and midsized businesses that find on-premises solutions too costly. Thus, the increasing adoption of cloud-based technology is fueling the demand for deep learning.

Big data analytics has also played a pivotal role in the growth of cloud services. Big data analytics is the process of scrutinizing large datasets to uncover hidden patterns, unknown correlations, market trends, customer preferences, and other actionable insights. Big data has become important to many public and private organizations wherein massive amounts of domain-specific data are generated, which can contain useful information on national intelligence, cybersecurity, fraud detection, marketing, and medical informatics. The deep learning technique is used to extract high-level, complex abstractions from data through a hierarchical learning process. It is an important technique for analyzing massive amounts of unsupervised data, making it a valuable tool for big data analytics wherein the raw data is largely unstructured. Deep learning is also used for extracting complex patterns from massive volumes of data, semantic indexing, data tagging, fast information retrieval, and simplifying discriminative tasks.

The evolution of technologies such as machine learning and artificial intelligence (AI) has generated demand for cognitive computing across verticals such as automotive, industrial, and consumer. Rapid developments in the video analytics domain and increasing adoption of advanced technologies in the security and surveillance industry have resulted in the development of high-performance AI-capable processors such as GPUs and TPUs, which offer higher memory bandwidth and computational capability than traditional central processing units (CPUs). Creative professionals, gamers, designers, and video enthusiasts require deep learning accelerators with parallel processing capabilities that can facilitate the provisioning of on-demand machine learning for augmented reality, virtual reality, and several other application areas.

Restraint: Limited availability of AI hardware experts

AI is a complex system, and companies require personnel with specific skill sets to develop, manage, and implement AI systems. For instance, people dealing with AI systems should be familiar with technologies such as cognitive computing, ML and machine intelligence, deep learning, and image recognition. In addition, integrating AI solutions with existing systems is a difficult task that requires well-funded in-house R&D and patent filing. Even minor errors can translate into system failure or malfunction of a solution, which can drastically affect the outcome and desired result.

Professional services of data scientists and developers are needed to customize existing ML-enabled AI processors. AI is still a growing and emerging technology, and hence the workforce with in-depth knowledge of it is limited. The impact of this restraining factor will likely remain high during the initial years of the forecast period.

Opportunity: Growing demand for FPGA-based accelerators

An FPGA is an integrated circuit that can be configured by a customer or designer after it is manufactured (hence "field-programmable"). FPGAs are programmed using hardware description languages such as VHSIC Hardware Description Language (VHDL) or Verilog. FPGAs offer advantages such as rapid prototyping, short time to market, the ability to be reprogrammed in the field for debugging, and long product life cycles. They contain individual programmable logic blocks known as configurable logic blocks (CLBs), interconnected in such a manner that a user can reconfigure the computing system multiple times. FPGAs also contain large resources of logic gates and RAM for complex digital computation.
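
To give a feel for what "configurable" means here: a CLB is built around look-up tables (LUTs) that can realize any Boolean function of their inputs simply by rewriting the stored truth table. A toy Python model of a LUT (purely illustrative, not vendor tooling):

    # Toy model of a k-input FPGA look-up table (LUT): the 2^k-entry truth
    # table IS the configuration; rewriting it "reprograms" the logic.
    def make_lut(truth_table):
        def lut(*bits):
            index = 0
            for b in bits:               # input bits index the truth table
                index = (index << 1) | b
            return truth_table[index]
        return lut

    xor_gate = make_lut([0, 1, 1, 0])  # configure a 2-input LUT as XOR...
    and_gate = make_lut([0, 0, 0, 1])  # ...or reconfigure the same cell as AND
    print(xor_gate(1, 0), and_gate(1, 0))  # 1 0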

In 2015, Intel (US) acquired field-programmable gate array (FPGA) chip designer Altera (US). With this, Intel is expected to further leverage FPGA accelerators in its primary data center server business. In May 2020, Aldec, Inc., a pioneer in mixed HDL language simulation and hardware-assisted verification for FPGA and ASIC designs, launched a new FPGA accelerator board for high-performance computing (HPC), high-frequency trading (HFT) applications, and high-speed FPGA prototyping. The HES-XCKU11P-DDR4 is a 1U form factor board featuring a Xilinx Kintex® UltraScale+™ FPGA, a PCIe interface, and two QSFP-DD connectors (providing a total of up to 400 Gbit/s bandwidth), hitting a sweet spot between speed, logic cells, low power draw, and price.

Challenge: Unreliability of AI algorithms

AI is implemented through machine learning, using a computer to run specific software that can be trained. Machine learning can help systems process data with the help of algorithms and identify certain features from that dataset. However, a concern associated with such systems is that it is unclear what is going on inside the algorithms: their internal workings remain inaccessible, and unlike humans, the answers provided by these systems are uncontextualized. Researchers at the Facebook AI Research (FAIR) lab found that the chatbots they created had deviated from their predefined script and were communicating in a language created by themselves, which humans could not understand. While one of the important goals of current research is to improve AI-to-human communication, the possibility that an AI system can create its own unique language that humans cannot understand could be a setback. Moreover, several scientists and tech influencers, such as Stephen Hawking, Elon Musk, Bill Gates, and Steve Wozniak, have already warned that future AI technology could lead to unintended consequences.

APAC is projected to hold the largest share of the data center accelerator market by 2026, owing to growing demand for data center accelerators in China

As multinational and domestic enterprises increasingly transition to cloud service providers (CSPs) and colocation solutions, the data center market in China continues to evolve. The demand for data centers in the country now exceeds the available supply as organizations seek enhanced connectivity and scalable solutions for their growing businesses. Investments by the Chinese government to stimulate technological development have led to an increase in the adoption of cloud-based services such as big data analytics and the Internet of Things (IoT). Various government reforms, such as the establishment of the free-trade zone in Shanghai, are attracting international investors. The growing demand for high-density, redundant facilities is triggering a shift in the design and development of the country’s data centers.

For instance, in June 2017, AMD (US) collaborated with Baidu (China) to create a comprehensive and open ecosystem to address the growing demand for data center workloads and provide enhanced human-computer interaction. Similarly, in August 2019, Intel and Lenovo (China) announced a multi-year collaboration focused on the rapidly growing opportunity in the convergence of high-performance computing (HPC) and artificial intelligence (AI) to help accelerate solutions for the world’s most challenging problems. Building on the companies’ long-standing partnership in data centers, the multi-year global collaboration will accelerate the convergence of HPC and AI, creating solutions for organizations of all sizes. Also, in December 2019, NVIDIA and Didi Chuxing (DiDi) (China), the world’s leading mobile transportation platform, announced that DiDi would leverage NVIDIA GPUs and AI technology to develop autonomous driving and cloud computing solutions. DiDi will use NVIDIA GPUs in data centers for training machine learning algorithms and NVIDIA DRIVE for inference on its Level 4 autonomous driving vehicles. Such key developments are driving demand in China’s data center accelerator market.



From: Frank Sully 7/18/2021 9:23:20 PM
   of 2319
 
Why NVIDIA's Data Center Move Should Give AMD and Intel Sleepless Nights

NVIDIA’s existing data center ecosystem and the Grace chip could pose a threat to its rivals.

google.com



From: Frank Sully 7/18/2021 9:55:11 PM
   of 2319
 
NVIDIA On Sale!

Despite phenomenal growth prospects, NVIDIA went on sale last week, falling 12.3% from $827.94 on July 6th to $726.44 on July 16th.
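
The quoted pullback checks out against the two closes; a quick Python check:

    # Percent decline between the July 6th and July 16th closes quoted above.
    high, low = 827.94, 726.44
    drop = (high - low) / high
    print(f"Decline: {drop:.1%}")  # ~12.3%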



From: Frank Sully 7/20/2021 9:23:12 AM
   of 2319
 
NVIDIA Inference Breakthrough Makes Conversational AI Smarter, More Interactive From Cloud to Edge

nvidianews.nvidia.com



From: Glenn Petersen 7/20/2021 12:22:37 PM
1 Recommendation   of 2319
 
NVDA completed a four-for-one stock split today.
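
Mechanically, a four-for-one split quadruples the share count and quarters the per-share price, leaving position value unchanged; a minimal Python sketch with hypothetical pre-split numbers:

    # Four-for-one split: 4x the shares at 1/4 the price; value is unchanged.
    shares, price = 100, 750.00           # hypothetical pre-split position
    new_shares, new_price = shares * 4, price / 4
    assert new_shares * new_price == shares * price
    print(new_shares, new_price)          # 400 shares at $187.50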



To: Frank Sully who wrote (1924) 7/21/2021 11:55:11 AM
From: Halfdave
   of 2319
 
Thanks for the update, I want to increase my position in NVDA.



From: Frank Sully 7/22/2021 1:24:54 PM
   of 2319
 
Nvidia’s Speedy New Inference Engine Keeps BERT Latency Within a Millisecond

July 21, 2021 by Alex Woodie

enterpriseai.news



From: Frank Sully 7/22/2021 1:27:32 PM
   of 2319
 
The Global Artificial Intelligence (AI) Chips Market is expected to grow by $73.49 billion during 2021-2025, progressing at a CAGR of over 51% during the forecast period

globenewswire.com



From: Frank Sully 8/1/2021 4:46:53 AM
   of 2319
 

The HPC market is forecast to have a 20%+ CAGR over the next 4 years, through 2025

Intersect360 Report: HPC Market Rebounding and on Track to Reach $60B in 2025 (hpcwire.com)

The HPC cloud segment is forecast to grow 78%. I guess that's what Microsoft is after with Azure, per the article posted here earlier.






From: Frank Sully 8/1/2021 9:29:11 PM
   of 2319
 
Will Nvidia’s huge bet on artificial-intelligence chips pay off?

The unassuming chipmaking giant was early to the AI revolution—and remains ahead of rivals



Aug 1st 2021

“WE’RE ALWAYS 30 days away from going out of business,” is a mantra of Jen-Hsun Huang, co-founder of Nvidia, a semiconductor company. That may be a little hyperbolic coming from the boss of a company whose market value has increased from $31bn to $486bn in five years and which has eclipsed Intel, once the world’s mightiest chipmaker, by selling high-performance chips for gaming and artificial intelligence (AI). But only a little. As Mr Huang observes, Nvidia is surrounded by “giant companies pursuing the same giant opportunity”. To borrow a phrase from Intel’s co-founder, Andy Grove, in this fast-moving market “only the paranoid survive”.

Constant vigilance has served Nvidia well. Between 2016 and 2021 its revenues grew by 233%. In the three months to May sales expanded by a dizzying 84%, year on year, and gross margin reached 64%. Although Intel’s revenues are four times as large and the older firm fabricates chips as well as designing them, investors value Nvidia’s design-only business more highly (twice as much in terms of market capitalisation). Its hardware and accompanying software are used in all data centres that make up the computing clouds operated by Amazon, Google, Microsoft and China’s Alibaba. Nvidia’s systems have been adopted by every big information-technology (IT) firm, as well as by countless scientific research teams in fields from drug discovery to climate modelling. It has created a broad, deep “moat” that protects its competitive advantage.

Now Mr Huang wants to make it broader and deeper still. In September Nvidia confirmed rumours that it was buying Arm, a Britain-based firm that designs zippy and energy-efficient chips for most of the world’s smartphones, for $40bn. The idea is to use Arm’s design prowess to engineer central processing units (CPUs) for data centres and AI uses that would complement Nvidia’s existing strength in specialised chips known as graphics-processing units (GPUs). Given the global reach of Arm and Nvidia, regulators in America, Britain, China and the European Union must all approve the deal. If they do—a considerable “if”, given both firms’ market power in their respective domains—Nvidia’s position in one of computing’s hottest fields would look near-unassailable.

Game time

Mr Huang, whose family immigrated to America from Taiwan when he was a child, founded Nvidia in 1993. For its first 20 years or so the company made GPUs that made video games look lifelike. In the past decade, however, it turned out that GPUs also excel in another futuristic, but less frivolous, area of computing: they dramatically speed up how fast machine-learning algorithms can be trained to perform tasks by feeding them oodles of data. Four years ago Mr Huang, who goes by Jensen, startled Wall Street with a blunt assessment of his company’s prospects in what has become known as accelerated computing. It could “work out great”, he said, “or terribly”. Regardless, the company was “all in”.

Around half of Nvidia’s annual revenues of $17bn still comes from gaming chips. They have also proved excellent at solving the mathematical puzzles that underpin ethereum, a popular cryptocurrency. This has at times injected crypto-like volatility into GPU sales, which contributed to a near-50% fall in Nvidia’s share price in late 2018. Another slug of sales comes from selling chips that accelerate features other than graphics or AI to computer-makers and car companies.

But the AI business is growing fast. It includes specialised chips as well as advanced software that lets programmers fine-tune them—itself enabled by an earlier bet by Mr Huang, which some investors criticised at the time as an expensive distraction. In 2004 Mr Huang started investing in “Cuda”, a base software layer that enables just such fine-tuning, and implanting it in all of Nvidia’s chips.

A lot of these systems end up in servers, the powerful computers that undergird data centres’ processing oomph. Sales to data centres have increased from 25% of total revenues in early 2019 to 36%, contributing nearly as much to the total as gaming GPUs. As companies across various industries adopt AI, the share of Nvidia’s data-centre sales going to big cloud providers such as Amazon and Google has declined from 100% to half that.

Today its AI hardware-software combo is designed to work seamlessly with the machine-learning algorithms collected in libraries such as TensorFlow (which is maintained by Google) and PyTorch (run by Facebook), boosting the algorithms’ number-crunching power. Nvidia has created programs to hook its hardware and software up to the IT systems of big business customers with AI projects of their own. All this makes AI developers’ job immeasurably easier, says a former Nvidia executive. Nvidia is also expanding into AI “inference”: running AI models, hitherto the preserve of CPUs, rather than merely training them. Real-time, huge AI models like those used for speech recognition or content-recommendation systems increasingly need the specialised GPUs to perform well, says Ian Buck, head of Nvidia’s accelerated-computing business.
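
To make that hardware-software combo concrete, here is a minimal PyTorch sketch (standard public API; the linear model and random batch are placeholders, not Nvidia's stack) showing how a training step lands on an NVIDIA GPU via CUDA when one is available:

    import torch
    import torch.nn as nn

    # Run on an NVIDIA GPU via CUDA when available; fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = nn.Linear(128, 10).to(device)           # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(64, 128, device=device)         # placeholder batch
    y = torch.randint(0, 10, (64,), device=device)  # placeholder labels

    optimizer.zero_grad()
    loss = loss_fn(model(x), y)  # forward pass runs on the selected device
    loss.backward()              # training is dominated by these backward passes
    optimizer.step()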

This is also where Arm comes in. Owning it would give Nvidia the CPU chops to complement its historic strength in GPUs and more recently acquired abilities in network-interface cards needed to run server farms (in 2019 Nvidia acquired Mellanox, a specialist in such interconnecting technology). In April the company unveiled plans for its first data-centre CPU, Grace, a high-performance chip based on an Arm design. Arm’s energy-efficient chips would help Nvidia supply AI products for “edge computing”—in self-driving cars, factory robots and other places away from data centres, where power-hungry GPUs may not be ideal.

Transistors in microprocessors are already just a few atoms across, so they have little room left to shrink, and tricks such as outsourcing computing to the cloud, or using software to split a physical computer into several virtual machines, may soon run their course. So businesses are expected to turn to accelerated computing as a way to gain processing power without spending through the roof on ever more CPUs. Over the next five to ten years, as AI becomes more common, up to half of the $80bn-90bn that is spent annually on servers could shift to Nvidia’s accelerated-computing model, estimates Stacy Rasgon of Bernstein, a broker. Of that, half could go on accelerated chips, a market which Nvidia’s GPUs dominate, he says. Nvidia thinks the global market for accelerated computing, including data centres and the edge, will be more than $100bn a year.
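
Working Mr Rasgon's estimate through in a short Python sketch (the dollar ranges are implied by his percentages, not stated outright):

    # Implied dollar ranges from the Bernstein estimate cited above.
    server_spend = (80e9, 90e9)                     # annual server spend, USD
    shifted = tuple(s * 0.5 for s in server_spend)  # half shifts to accelerated computing
    chips = tuple(s * 0.5 for s in shifted)         # half of that on accelerated chips
    print(shifted, chips)  # $40bn-45bn shifted; $20bn-22.5bn on accelerated chips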

Nvidia is not the only one to have spotted the opportunity. Competitors are proliferating, from startups to other chipmakers and the tech giants. Venture capitalists have backed companies such as Tenstorrent, Untether AI, Cerebras and Groq, all of which are trying to make semiconductors even better suited to AI than Nvidia’s GPUs, which for all their virtues can be power-hungry and fiddly to program. Graphcore, a British firm, is touting its “intelligence-processing unit”.

In 2019 Intel bought an Israeli AI-chip startup called Habana Labs and ceased work on the neural-network processors it had acquired as part of an earlier purchase of Nervana Systems, another startup. Amazon Web Services (AWS), the e-commerce giant’s cloud division, will soon start offering Habana’s Gaudi accelerators to its cloud customers, claiming that the Gaudi chips, which are slower than Nvidia’s GPUs, are nevertheless 40% cheaper relative to performance. Advanced Micro Devices (AMD), a veteran chipmaker that is Nvidia’s main rival in the gaming market and Intel’s in the CPU business, is in the process of finalising a $35bn deal to acquire Xilinx, which makes another kind of accelerator chip called field programmable gate arrays (FPGAs).

A bigger threat comes from Nvidia’s biggest customers. The cloud giants are all designing their own custom silicon. Google was the first to come up with its “tensor-processing unit”. Microsoft’s Azure cloud division opted for FPGAs. Baidu, China’s search giant, has its “Kunlun” chips for AI and Alibaba, its e-commerce titan, has Hanguang 800. AWS already has a chip designed for inference, called Inferentia, and has one coming for training. “The risk is that in ten years’ time AWS will offer a cheap AI box with all AWS-made components,” says the former Nvidia executive. Mark Lipacis at Jefferies, an investment bank, notes that since mid-2020 AWS has put Inferentia into an ever-greater share of its offering to customers, potentially at the expense of Nvidia.

As for the Arm acquisition, it is far from a done deal. Arm’s customers include all of the world’s chipmakers as well as AWS and Apple, which uses Arm chips in its iPhones. Some have complained that Nvidia could restrict access to the chip designer’s blueprints. The Graviton2, AWS’s tailor-made server chip, is based on an Arm design. Nvidia says it has no plans to change Arm’s business model. Western regulators are due to decide whether to approve the deal; Britain’s competition authority, which had until July 30th to scrutinise the transaction, is expected to be among the first to do so. China, for its part, is unlikely to welcome an American takeover of an important supplier to its own tech firms; Arm is currently owned by SoftBank, a Japanese technology conglomerate.

Even if one of the antitrust watchdogs puts paid to the acquisition, however, Nvidia’s prospects look bright. Venture capitalists have become markedly less enthusiastic over time about backing startups taking on Nvidia and the tech giants investing in accelerated computing, says Paul Teich of Equinix, an American data-centre operator. Intel has overpromised many things, including accelerated computing, for years, and mostly underdelivered. AWS and the rest of big tech have plenty of other things on their plates and lack Nvidia’s clear focus on accelerated computing. Nvidia says that, measured by actual utilisation by businesses, it has not ceded market share to AWS’s Inferentia.

Mr Huang says that it is the expense of training and running AI applications that matters, not the cost of hardware components. On that measure, he says, “we are unrivalled on price-for-performance.” None of Nvidia’s rivals possess its software ecosystem. And it has a proven ability to switch gears and capitalise on good luck. “They’re always looking around at what’s out there,” enthuses another former executive. And with an entrenched position, Mr Lipacis says, it also benefits from inertia.

Investors have not forgotten the near-halving of Nvidia’s share price in 2018. It may still be partly tied to the fortunes of the crypto market. Holding Nvidia stock requires a strong stomach, says Mr Rasgon of Bernstein. Nvidia may present itself as a pillar of the industry, but it remains an aggressive, founder-led firm that behaves like a startup. Sprinkle in some paranoia, and it will be hard to disrupt.

google.com
