
Technology Stocks : NVIDIA Corporation (NVDA)

From: Frank Sully, 8/1/2021 4:46:53 AM

The HPC market is forecast to grow at a 20%+ CAGR over the next four years, to 2025.

Intersect360 Report: HPC Market Rebounding and on Track to Reach $60B in 2025

The HPC cloud segment is forecast to grow 78%. I guess that's what Microsoft is after with Azure, per the article posted here earlier.
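As a rough sanity check on those headline figures (my own arithmetic, not numbers from the Intersect360 report), a market reaching $60B in 2025 at a 20% CAGR over four years implies a 2021 base of just under $29B:

```python
# Back out the implied 2021 market size from the $60B/2025 forecast,
# assuming a 20% CAGR over four years (my arithmetic, not Intersect360's).

def implied_base(final_value: float, cagr: float, years: int) -> float:
    """Back out the starting value from an ending value and a CAGR."""
    return final_value / ((1 + cagr) ** years)

base_2021 = implied_base(60e9, 0.20, 4)
print(f"Implied 2021 HPC market: ${base_2021 / 1e9:.1f}B")  # roughly $28.9B
```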


From: Frank Sully, 8/1/2021 9:29:11 PM
Will Nvidia’s huge bet on artificial-intelligence chips pay off?

The unassuming chipmaking giant was early to the AI revolution—and remains ahead of rivals

Aug 1st 2021

“WE’RE ALWAYS 30 days away from going out of business,” is a mantra of Jen-Hsun Huang, co-founder of Nvidia, a semiconductor company. That may be a little hyperbolic coming from the boss of a company whose market value has increased from $31bn to $486bn in five years and which has eclipsed Intel, once the world’s mightiest chipmaker, by selling high-performance chips for gaming and artificial intelligence (AI). But only a little. As Mr Huang observes, Nvidia is surrounded by “giant companies pursuing the same giant opportunity”. To borrow a phrase from Intel’s co-founder, Andy Grove, in this fast-moving market “only the paranoid survive”.

Constant vigilance has served Nvidia well. Between 2016 and 2021 its revenues grew by 233%. In the three months to May sales expanded by a dizzying 84%, year on year, and gross margin reached 64%. Although Intel’s revenues are four times as large and the older firm fabricates chips as well as designing them, investors value Nvidia’s design-only business more highly (twice as much in terms of market capitalisation). Its hardware and accompanying software are used in all data centres that make up the computing clouds operated by Amazon, Google, Microsoft and China’s Alibaba. Nvidia’s systems have been adopted by every big information-technology (IT) firm, as well as by countless scientific research teams in fields from drug discovery to climate modelling. It has created a broad, deep “moat” that protects its competitive advantage.

Now Mr Huang wants to make it broader and deeper still. In September Nvidia confirmed rumours that it was buying Arm, a Britain-based firm that designs zippy and energy-efficient chips for most of the world’s smartphones, for $40bn. The idea is to use Arm’s design prowess to engineer central processing units (CPUs) for data centres and AI uses that would complement Nvidia’s existing strength in specialised chips known as graphics-processing units (GPUs). Given the global reach of Arm and Nvidia, regulators in America, Britain, China and the European Union must all approve the deal. If they do—a considerable “if”, given both firms’ market power in their respective domains—Nvidia’s position in one of computing’s hottest fields would look near-unassailable.

Game time

Mr Huang, whose family immigrated to America from Taiwan when he was a child, founded Nvidia in 1993. For its first 20 years or so the company made GPUs that made video games look lifelike. In the past decade, however, it turned out that GPUs also excel in another futuristic, but less frivolous, area of computing: they dramatically speed up how fast machine-learning algorithms can be trained to perform tasks by feeding them oodles of data. Four years ago Mr Huang, who goes by Jensen, startled Wall Street with a blunt assessment of his company’s prospects in what has become known as accelerated computing. It could “work out great”, he said, “or terribly”. Regardless, the company was “all in”.

Around half of Nvidia’s annual revenues of $17bn still comes from gaming chips. These have also proved excellent at solving the mathematical puzzles that underpin ethereum, a popular cryptocurrency. This has at times injected crypto-like volatility into GPU sales, which contributed to a near-50% fall in Nvidia’s share price in late 2018. Another slug of revenue comes from selling chips that accelerate features other than graphics or AI to computer-makers and car companies.

But the AI business is growing fast. It includes specialised chips as well as advanced software that lets programmers fine-tune them—itself enabled by an earlier bet by Mr Huang, which some investors criticised at the time as an expensive distraction. In 2004 Mr Huang started investing in “Cuda”, a base software layer that enables just such fine-tuning, and implanting it in all of Nvidia’s chips.

A lot of these systems end up in servers, the powerful computers that undergird data centres’ processing oomph. Sales to data centres have increased from 25% of total revenues in early 2019 to 36%, contributing nearly as much to the total as gaming GPUs. As companies across various industries adopt AI, the share of Nvidia’s data-centre sales going to big cloud providers such as Amazon and Google has declined from 100% to half that.

Today its AI hardware-software combo is designed to work seamlessly with the machine-learning algorithms collected in libraries such as TensorFlow (which is maintained by Google) and PyTorch (run by Facebook), boosting the algorithms’ number-crunching power. Nvidia has created programs to hook its hardware and software up to the IT systems of big business customers with AI projects of their own. All this makes AI developers’ job immeasurably easier, says a former Nvidia executive. Nvidia is also expanding into AI “inference”: running AI models, hitherto the preserve of CPUs, rather than merely training them. Real-time, huge AI models like those used for speech recognition or content-recommendation systems increasingly need the specialised GPUs to perform well, says Ian Buck, head of Nvidia’s accelerated-computing business.
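The training-versus-inference distinction the article draws can be sketched with a toy example (plain Python, no GPU or Nvidia software involved; the model and data are invented purely for illustration): training repeatedly adjusts a model's parameters against data, whereas inference is a single cheap forward pass with the parameters fixed.

```python
# Toy illustration of training vs. inference (illustrative only):
# fit y = w * x to data with gradient descent, then run inference.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs with true w = 2

w = 0.0                # model parameter, initially untrained
lr = 0.05              # learning rate
for _ in range(200):   # "training": many passes adjusting the parameter
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

prediction = w * 5.0   # "inference": one forward pass with fixed w
print(round(w, 3), round(prediction, 2))  # converges to w ≈ 2.0, prediction ≈ 10.0
```

Training dominates the compute cost because the loop over the data runs many times; inference is a single multiply here, which is why it was long left to CPUs and only large real-time models now demand GPUs.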

This is also where Arm comes in. Owning it would give Nvidia the CPU chops to complement its historic strength in GPUs and more recently acquired abilities in network-interface cards needed to run server farms (in 2019 Nvidia acquired Mellanox, a specialist in such interconnecting technology). In April the company unveiled plans for its first data-centre CPU, Grace, a high-performance chip based on an Arm design. Arm’s energy-efficient chips would help Nvidia supply AI products for “edge computing”—in self-driving cars, factory robots and other places away from data centres, where power-hungry GPUs may not be ideal.

Transistors in microprocessors are already the size of a few atoms, so they have little room to shrink, and tricks such as outsourcing computing to the cloud, or using software to split a physical computer into several virtual machines, may run their course. So businesses are expected to turn to accelerated computing as a way to gain processing power without spending through the roof on ever more CPUs. Over the next five to ten years, as AI becomes more common, up to half of the $80bn-90bn that is spent annually on servers could shift to Nvidia’s accelerated-computing model, estimates Stacy Rasgon of Bernstein, a broker. Of that, half could go on accelerated chips, a market which Nvidia’s GPUs dominate, he says. Nvidia thinks the global market for accelerated computing, including data centres and the edge, will be more than $100bn a year.
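Rasgon's estimate can be turned into a back-of-the-envelope range (my own arithmetic on the quoted figures, not Bernstein's model):

```python
# Back-of-the-envelope version of the Bernstein estimate quoted above
# (my arithmetic on the article's figures, not Bernstein's model).

server_spend = (80e9, 90e9)   # annual server spend, low/high estimate
shift_to_accel = 0.5          # up to half could shift to accelerated computing
share_on_chips = 0.5          # of that, half could go on accelerated chips

accel_market = tuple(s * shift_to_accel for s in server_spend)
chip_market = tuple(s * share_on_chips for s in accel_market)
print(chip_market)  # ~$20B-22.5B a year for accelerated chips
```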

Nvidia is not the only one to have spotted the opportunity. Competitors are proliferating, from startups to other chipmakers and the tech giants. Venture capitalists have backed companies such as Tenstorrent, Untether AI, Cerebras and Groq, all of which are trying to make semiconductors even better suited to AI than Nvidia’s GPUs, which for all their virtues can be power-hungry and fiddly to program. Graphcore, a British firm, is touting its “intelligence-processing unit”.

In 2019 Intel bought an Israeli AI-chip startup called Habana Labs and ceased work on the neural-network processors it had acquired as part of an earlier purchase of Nervana Systems, another startup. Amazon Web Services (AWS), the e-commerce giant’s cloud division, will soon start offering Habana’s Gaudi accelerators to its cloud customers, claiming that the Gaudi chips, which are slower than Nvidia’s GPUs, are nevertheless 40% cheaper relative to performance. Advanced Micro Devices (AMD), a veteran chipmaker that is Nvidia’s main rival in the gaming market and Intel’s in the CPU business, is in the process of finalising a $35bn deal to acquire Xilinx, which makes another kind of accelerator chip called field programmable gate arrays (FPGAs).

A bigger threat comes from Nvidia’s biggest customers. The cloud giants are all designing their own custom silicon. Google was the first to come up with its “tensor-processing unit”. Microsoft’s Azure cloud division opted for FPGAs. Baidu, China’s search giant, has its “Kunlun” chips for AI and Alibaba, its e-commerce titan, has Hanguang 800. AWS already has a chip designed for inference, called Inferentia, and has one coming for training. “The risk is that in ten years’ time AWS will offer a cheap AI box with all AWS-made components,” says the former Nvidia executive. Mark Lipacis at Jefferies, an investment bank, notes that since mid-2020 AWS has put Inferentia into an ever-greater share of its offering to customers, potentially at the expense of Nvidia.

As for the Arm acquisition, it is far from a done deal. Arm’s customers include all of the world’s chipmakers as well as AWS and Apple, which uses Arm chips in its iPhones. Some have complained that Nvidia could restrict access to the chip designer’s blueprints. The Graviton2, AWS’s tailor-made server chip, is based on an Arm design. Nvidia says it has no plans to change Arm’s business model. Western regulators are due to decide whether to approve the deal; Britain’s competition authority, which had until July 30th to scrutinise the transaction, is expected to be among the first to do so. China, for its part, is unlikely to welcome an American takeover of an important supplier to its own tech firms; Arm is currently owned by SoftBank, a Japanese technology conglomerate.

Even if one of the antitrust watchdogs puts paid to the acquisition, however, Nvidia’s prospects look bright. Venture capitalists have become markedly less enthusiastic over time about backing startups taking on Nvidia and the tech giants investing in accelerated computing, says Paul Teich of Equinix, an American data-centre operator. Intel has overpromised many things, including accelerated computing, for years, and mostly underdelivered. AWS and the rest of big tech have plenty of other things on their plates and lack Nvidia’s clear focus on accelerated computing. Nvidia says that, measured by actual utilisation by businesses, it has not ceded market share to AWS’s Inferentia.

Mr Huang says that it is the expense of training and running AI applications that matters, not the cost of hardware components. On that measure, he says, “we are unrivalled on price-for-performance.” None of Nvidia’s rivals possess its software ecosystem. And it has a proven ability to switch gears and capitalise on good luck. “They’re always looking around at what’s out there,” enthuses another former executive. And with an entrenched position, Mr Lipacis says, it also benefits from inertia.

Investors have not forgotten the near-halving of Nvidia’s share price in 2018. It may still be partly tied to the fortunes of the crypto market. Holding Nvidia stock requires a strong stomach, says Mr Rasgon of Bernstein. Nvidia may present itself as a pillar of the industry, but it remains an aggressive, founder-led firm that behaves like a startup. Sprinkle in some paranoia, and it will be hard to disrupt.


From: Frank Sully, 8/2/2021 2:14:24 AM
NVIDIA and King’s College London Use Cambridge-1 to Build AI Models That Generate Synthetic Brain Images

Sanskriti Dalmia
August 1, 2021

NVIDIA and King’s College London have revealed new information about one of the first projects to be run on Cambridge-1, the UK’s most powerful supercomputer. Announced in October last year, Cambridge-1 cost $100 million to build.

King’s College London is using Cambridge-1 to create AI models that can generate synthetic brain images, learning from tens of thousands of MRI (magnetic resonance imaging) brain scans from patients of a wide range of ages and disorders.

The company’s early collaborations with AstraZeneca, GSK, Guy’s and St Thomas’ NHS Foundation Trust, King’s College London, and Oxford Nanopore Technologies include:
  • Developing a deeper understanding of brain diseases similar to dementia.
  • Using AI to design new drugs.
  • Improving the accuracy of finding disease-causing variations in human genomes.
This new research will enable scientists to distinguish healthy brains from diseased brains, giving them a more sophisticated understanding of how diseases appear and potentially allowing for earlier and more accurate diagnoses. Jorge Cardoso, a senior lecturer in artificial medical intelligence at King’s College London, noted that Cambridge-1 enables accelerated generation of synthetic data, giving researchers at King’s a better understanding of how different factors affect the brain’s anatomy and pathology. Cardoso added that the model can be asked to generate an almost infinite amount of data with prescribed ages and diseases. With this, researchers can start tackling problems such as how diseases affect the brain and when abnormalities first appear.
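As a loose illustration of conditional synthetic-data generation (this is a toy, not the deep generative models King's and NVIDIA actually train on MRI scans), the idea is: fit a model to real samples per condition, then draw as many synthetic samples as you like for a prescribed condition. All data and labels below are invented.

```python
# Toy sketch of conditional synthetic-data generation (illustrative only;
# the real King's/NVIDIA work trains deep generative models on MRI scans).
# Here the "generator" just samples from per-condition Gaussians fitted to data.
import random
from statistics import mean, stdev

# Hypothetical training data: (condition label, scalar measurement)
training = [("young", 1.00), ("young", 1.10), ("young", 0.95),
            ("old", 0.70), ("old", 0.65), ("old", 0.80)]

def fit(data):
    """Fit a (mean, stdev) pair for each condition label."""
    params = {}
    for label in {lbl for lbl, _ in data}:
        vals = [v for lbl, v in data if lbl == label]
        params[label] = (mean(vals), stdev(vals))
    return params

def generate(params, label, n, seed=0):
    """Sample n synthetic values for a prescribed condition."""
    rng = random.Random(seed)
    mu, sigma = params[label]
    return [rng.gauss(mu, sigma) for _ in range(n)]

params = fit(training)
synthetic_old = generate(params, "old", 1000)
print(round(mean(synthetic_old), 2))  # close to the fitted "old" mean (~0.72)
```

The privacy benefit mentioned below follows from the same structure: only the fitted parameters, not the individual training records, are needed to generate new samples.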

AI for healthcare is proliferating in the UK, with a range of startups and larger pharmaceutical companies turning to mining the vast quantities of available data to discover potential drugs, better understand certain diseases, and thereby improve and personalize patient care.

The use of synthetic data has the extra benefit of ensuring patient privacy, since the images are AI-generated. This also allows King’s to open the research to the broader UK healthcare community.

The AI model was created by data scientists and engineers from King’s and NVIDIA. It’s one of the numerous ongoing initiatives on Cambridge-1. Drug discovery and genome sequencing are among the digital biology projects proposed by other top UK healthcare organizations.

With 80 NVIDIA DGX™ A100 systems integrating NVIDIA A100 GPUs, BlueField®-2 DPUs, and NVIDIA HDR InfiniBand networking, Cambridge-1 is the UK’s most powerful supercomputer.

The synthetic data model developed by King’s College London will be shared with the broader research and startup community.


From: Frank Sully, 8/2/2021 3:16:53 PM
Nvidia AI development hub now available to North American customers

Monthly subscription pricing to the Nvidia Base Command Platform starts at $90,000, with a three-month minimum.

By Jonathan Greig | August 2, 2021 -- 13:00 GMT (06:00 PDT)

Nvidia announced on Monday that its new hosted AI development hub -- the Nvidia Base Command Platform -- is now available to North American customers after debuting in May.

Nvidia said in a statement that the platform "provides enterprises with instant access to powerful computing infrastructure wherever their data resides."

The tool can be rented for a monthly subscription price of $90,000, with a three-month minimum on all subscriptions, Nvidia explained.
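Put together, the quoted pricing implies a minimum commitment (simple arithmetic on the article's figures, not an Nvidia quote):

```python
# Minimum commitment implied by the quoted Base Command Platform pricing
# (straightforward arithmetic on the figures in the article).
monthly_price = 90_000   # USD per month
minimum_months = 3       # three-month minimum on all subscriptions
print(f"Minimum spend: ${monthly_price * minimum_months:,}")  # $270,000
```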

Manuvir Das, head of Enterprise Computing at Nvidia, said the Base Command Platform makes it easy for enterprises to instantly access the power of an Nvidia DGX SuperPOD to "accelerate the AI and data science development lifecycle."

The platform gives companies access to Nvidia DGX SuperPOD™ supercomputers through optimized AI workflow software, and the tool is hosted remotely by Equinix. According to a statement from the company, the Base Command Platform is the first Nvidia-powered hybrid cloud offering available through the Nvidia AI LaunchPad partner program.

The tool is tailored for organizations with large-scale, multi-user, multi-team AI workflows that are looking to push AI projects into production.

Nvidia announced that Adobe was already using the tool to help researchers and data scientists work "simultaneously on shared accelerated computing resources to speed up the development of new AI-powered software features and applications."

Abhay Parasnis, CTO and chief product officer at Adobe, said the platform requires little effort to onboard AI developers.

"Our team is exploring the potential of Base Command Platform to simplify the machine learning development workflow," Parasnis said.

The tool is supported by a number of Nvidia partner organizations, such as NetApp, Equinix, and Weights & Biases, which offers MLOps software for the Base Command Platform.

In addition to a cloud-based user interface, the tool comes with a command-line API and integrated monitoring and reporting dashboards to accelerate the AI development lifecycle, and it incorporates a "broad range of AI and data science tools," including the Nvidia NGC™ catalog of AI and analytics software.

Equinix vice president Steve Steinhilber added that businesses often struggle to provide the simple yet powerful digital infrastructure that researchers and scientists can share efficiently when it comes to AI.

The Base Command Platform is "the fastest and most cost-effective way to tap into the leading performance of an Nvidia DGX SuperPOD to accelerate AI development, seamlessly access distributed data lakes wherever they may be located via Equinix Fabric, and quickly deploy developed and tested algorithms to inference engines all over the world," Steinhilber explained.

Kim Stevenson, senior vice president and general manager of the foundational data services group at NetApp, noted that the tool was a cloud-hosted solution for end-to-end AI development with fully managed AI infrastructure.

"Enterprises want to simplify AI experimentation and streamline workflow management across teams of users and jobs," Stevenson said.


From: Frank Sully, 8/2/2021 3:40:15 PM
Nvidia is tracking more than 8,500 AI startups with $60B in funding

Dean Takahashi @deantak

August 2, 2021 8:00 AM

Nvidia is tracking more than 8,500 AI startups through its Inception AI startup program. Those companies have raised more than $60 billion in funding and come from 90 countries, Nvidia said.

Based on estimates from market researcher Pitchbook, the Nvidia numbers represent roughly two-thirds of all AI startups. Overall, Nvidia believes there are about 12,000 AI startups in the world.

“It’s a good picture of the landscape,” said Serge Lemonde, global head of Nvidia Inception, in an interview with VentureBeat.

Across these startups, the definition of an AI company is changing, as companies in every industry adopt AI. New uses of AI are emerging as companies adopt deep learning neural networks. Inception members now extend well beyond the familiar high-performance computing and graphics startups.

“The fastest growing segments or verticals in the healthcare itself are around pharma and AI biology,” Lemonde said. “We launched the program in 2016. And every year, it’s been growing faster. In 2020, we had a plus 26% growth in the number of members joining Inception, and just this first half of this year is already plus 17%. So AI adoption is impacting every industry.”

The Inception program provides assistance and software for AI startups, and it’s Nvidia’s way of introducing AI companies to its hardware products such as its AI chips. The data from the ecosystem gives the companies a lot of insights into the AI economy.

Regional strengths

Above: Nvidia’s Inception program tracks AI startups.
Image Credit: Nvidia

The U.S. leads the world with nearly 27% of the Inception AI startups. Those U.S. companies have raised more than $27 billion. And of the U.S. startups, 42% are based in California. That means more than one in 10 AI startups are based in California, with 29% in the San Francisco Bay Area. This underscores the draw of Silicon Valley for startup founders and VC funding, Lemonde said.
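The "more than one in 10" claim follows directly from the quoted shares (my arithmetic on the article's percentages):

```python
# Checking the "more than one in 10 AI startups are based in California"
# claim from the article's quoted shares (my arithmetic, not Nvidia's).
us_share = 0.27          # share of Inception startups based in the U.S.
california_of_us = 0.42  # share of those U.S. startups based in California
california_share = us_share * california_of_us
print(round(california_share, 3))  # ~0.113, i.e. more than 1 in 10
```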

Following the U.S. is China, in terms of both funding and company stage, with 12% of Nvidia Inception members based there. India comes in third at 7%, with the United Kingdom right behind at 6%.

Taken together, AI startups based in the U.S., China, India and the U.K. account for just over half of all startups in Nvidia Inception. Following in order after these are Germany, Russia, France, Sweden, Netherlands, Korea and Japan.

Industry focus

In terms of industries, healthcare, information technology services (IT), intelligent video analytics (IVA), media and entertainment (M&E) and robotics are the top five in Nvidia Inception. AI startups in healthcare account for 16% of Inception members, followed by those in IT services at 15%.

AI startups in IVA make up 8%, with M&E and robotics AI startups tied at 7%.

Recent growth

Above: Nvidia’s Inception AI startups are from the green countries.
Image Credit: Nvidia

More than 3,000 AI startups have joined Nvidia Inception since 2020. Similar to data across Inception as a whole, AI startups from the U.S. account for the largest segment (27%), followed by China (12%), and India and the U.K. (tied at 6%).

“Some countries are accelerating their ecosystem of AI startups by investing money and encouraging the local players to create more companies,” Lemonde said. “We saw India growing these last couple of months, and so India is definitely now the third country with 7% of the AI startups in the world.”

Additionally, startups that have joined since 2020 are concentrated in the same top five industries, though in slightly different order. IT services leads the way at 17%, followed by healthcare at 16%, M&E at 9%, IVA at 8% and robotics at 5%.

Within the top two industries — healthcare and IT services — there’s more detail among AI startups who have joined since 2020. The dominant segment within IT services is computer vision at 27%, with predictive analytics in second place at 9%. The top two segments in healthcare are medical analytics at 38% and medical imaging at 36%, though the fastest growth is among AI startups in the pharma and AI biology industries at 15%.

Virtual and augmented reality startup companies are far outpacing any other segment within M&E, mostly due to the pandemic. These startups are coming to Nvidia Inception with a shared vision of building an ecosystem for the metaverse, the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One.

Healthcare AI startups skyrocketed during the pandemic as well, with growth in medical imaging and more.

“Now it’s about biology, pharma, DNA, and more,” Lemonde said. “I think there is a lot of growth there as well. We saw during COVID new verticals grow fast like virtual reality and augmented reality. We saw the usage of AI go up but this metaverse shared vision in many countries grow up.”

Growing regional hubs

Above: Regional Advantage by Annalee Saxenian studied the rise of Silicon Valley over Boston.
Image Credit: Annalee Saxenian

Since Inception’s launch in 2016, it has grown more than tenfold. This growth has accelerated year over year, with membership growing 26% in 2020 and a further 17% in the first half of 2021.

To grow a big AI hub in a region, Lemonde believes it’s most important to have good universities and educational infrastructure in a region.

“If you look at the top countries, the governments push technology, invest in science and AI, invest computing infrastructures in their countries, and push for investments,” he said.

Nvidia Inception is a program built to accommodate and nurture every startup that is accelerating computing, at every stage in their journey. All program benefits are free of charge. And unlike other accelerators or incubators, startups never have to give up equity to join. After the startups graduate from Inception, Nvidia hands them off to its developer relations and sales departments.

“In our program, what we are looking at is to help them all,” Lemonde said. “The lesson here is really having this window on the landscape and helping the startups all around the world is helping us understand at the new trends. We can help more startups by developing our software and platforms for the upcoming trends.”


From: Frank Sully, 8/3/2021 7:46:39 AM

I have become moderator of the Baidu (BIDU) board. Read the Introduction header as I have updated it. Also, I posted a representative sample of the important news in the last six months, starting with message 1814. If you have any concerns, suggestions or questions please PM me.

Subject 55838

Frank Sully


From: Glenn Petersen, 8/4/2021 6:25:28 AM
Why Nvidia’s $40 billion bid for Arm could be in jeopardy

Sam Shead @SAM_L_SHEAD


-- The deal, one of the biggest semiconductor takeovers ever seen, was announced last September to much fanfare, although competition regulators around the world soon announced plans to investigate the acquisition.

-- Probes were launched in the U.S., the U.K., China and Europe after companies like Qualcomm, Microsoft, Google and Huawei complained that the deal was bad for the semiconductor industry.

-- The U.K. is reportedly considering blocking the deal on national security grounds, while China and Europe’s probes are reportedly subject to delays.

LONDON — Nvidia’s $40 billion bid to buy U.K.-based chip designer Arm from Japan’s SoftBank has started to look increasingly uncertain in recent weeks.

The deal, one of the biggest semiconductor takeovers ever seen, was announced last September to much fanfare, although competition regulators around the world soon announced plans to investigate the acquisition. Probes were launched in the U.S., the U.K., China and Europe after companies like Qualcomm, Microsoft, Google and Huawei complained that the deal was bad for the semiconductor industry.

The U.K. investigation, being led by the Competition and Markets Authority, is also taking national security concerns into account. The CMA submitted its initial report to U.K. Culture Secretary Oliver Dowden on July 20.

The assessment contains worrying implications for national security and the U.K. is currently inclined to reject the takeover, according to a report from Bloomberg on Tuesday, citing an unnamed source familiar with the matter. A separate unnamed source said the U.K. was likely to conduct a deeper review into the merger as a result of national security concerns, Bloomberg reported. CNBC was unable to independently verify the report.

It’s unclear how U.K. national security will be impacted if Arm goes from being Japanese-owned to U.S.-owned, but governments have come to view semiconductor technology as a vital asset amid the global chip shortage.

An Nvidia spokesperson told CNBC: “We continue to work through the regulatory process with the U.K. government. We look forward to their questions and expect to resolve any issues they may have.” Arm and the U.K. government did not immediately respond to CNBC’s request for comment.

The deal, which was initially expected to close by March 2022, also risks being held up elsewhere. In June, Chinese antitrust lawyers reportedly told The Financial Times that China’s investigation could take the deal beyond the 18-month window given by Nvidia in Sept. 2020.

Meanwhile, European regulators are thought to be reluctant to consider the case until after the summer holidays, according to a Reuters report published in June that cites people familiar with the matter, who say this could make it difficult for Nvidia to close the deal by March next year.

The purchase agreement gives the two companies the option to extend the deadline to September 2022. But, at that point, either company can walk away if the deal does not receive government approval.

What is Arm?

Cambridge-based Arm sells its chip blueprints and licenses to chip manufacturers around the world; it is viewed as a “neutral player” and is sometimes referred to as the “Switzerland of the chip industry.”

Some of these manufacturers, which compete with Nvidia, are concerned that the Santa Clara-headquartered chip giant could make it harder for them to access Arm’s technology.

Nvidia has repeatedly insisted that it won’t change Arm’s business model and that it will invest heavily in the company to help it meet increasing demand.

Nvidia’s share price does not seem to have been affected following the Bloomberg report. It closed at $198.15 on Tuesday, up almost 1% for the day.

Elsewhere, another semiconductor acquisition is also being scrutinized. U.K. Prime Minister Boris Johnson has ordered the national security adviser, Stephen Lovegrove, to review the takeover of Newport Wafer Fab, the U.K.’s largest semiconductor wafer manufacturing facility. The company is being acquired by Chinese-owned Nexperia for £63 million ($88 million).



From: Frank Sully, 8/4/2021 11:12:43 PM
Nvidia stock gains after Rosenblatt price target boost

Aug. 04, 2021 2:32 PM ET
NVIDIA Corporation (NVDA)
by Brandy Betz, SA News Editor

  • Calling the company a best-in-class artificial intelligence play, Rosenblatt Securities reiterates a Buy rating on Nvidia (NASDAQ: NVDA) and raises the price target from $200 to $250.
  • Analyst Hans Mosesmann makes the move the day after a fireside chat with company management.
  • The analyst notes that Nvidia (NVDA) also has "growth vectors into next generation networking/DPU adoption and early-days of autonomous driving."
  • Mosesmann thinks the $40B acquisition of SoftBank's Arm chip unit is unlikely to happen, which the Street is slowly realizing, but says the stock "will work nonetheless."
  • NVIDIA (NVDA) shares are up 2.2% to $202.41.


From: Frank Sully, 8/5/2021 1:46:22 AM
AutoX Robotaxis Now Using NVIDIA DRIVE, NVIDIA Acquiring DeepMap, & DiDi Booming On NVIDIA DRIVE’s Back

Zachary Shahan

Five years ago, in a blogging competition about “what will be the most important technological development over the next 10 years that will have the greatest impact in reducing climate change risks,” I concluded that the answer was robotaxis. If true robotaxis, broadly available and deployed in cities around the world, come to fruition, the potential reduction in emissions is immense. This is assuming they are electrically powered, but that seems most sensible for several reasons — especially by the middle of the decade.

Naturally, many of us think that Tesla is quite far ahead in the development of broadly applicable, cost-competitive robotaxi hardware and firmware. However, it certainly isn’t the only name in town, and there are also many who think that Tesla’s approach cannot lead to true robotaxis. One other tech company you have to keep on the table of possibilities is NVIDIA. Aside from being a tech giant in other realms, one of the advantages NVIDIA has is that it supplies hardware — and increasingly software services — for a bunch of automakers. Also, as the industry has evolved, NVIDIA has looked more seriously at providing integrated, robust technology partnerships with these automakers — not just as a supplier, but as a team working with automakers’ driver-assist or self-driving teams.

With all of that in mind, NVIDIA has rolled out a series of six news stories in the past two months related to autonomous driving. In this article, I'm going through the three that relate to the tech giant's NVIDIA DRIVE solutions. Let's catch up and check those out.

AutoX Robotaxis in Service Now

Probably the biggest story of the batch is that AutoX, a self-driving vehicle startup out of China, has launched its 5th-generation robotaxi platform, and the platform uses NVIDIA DRIVE. The system uses automotive-grade GPUs to reach "up to 2,200 trillion operations per second (TOPS) of AI compute performance."

We did cover the rollout of AutoX robotaxis in January, when they launched to the public in Shenzhen, the 5th largest city in China (population over 12 million). It’s a solid testament to NVIDIA that a company with robotaxis on the road just upgraded to the new NVIDIA DRIVE platform. “Safety is key,” said Jianxiong Xiao, founder and CEO of AutoX. “We need higher processing performance for safe and scalable robotaxi operations. With NVIDIA DRIVE, we now have power for more redundancy in a form factor that is automotive grade and more compact.”

Even more impressive is that this service is in place on the high-traffic, highly complex streets of Shenzhen. NVIDIA notes, “Safely navigating such chaotic streets requires sensors that can detect obstacles and other road users with the highest levels of accuracy. The Gen5 system relies on 28 automotive-grade camera sensors generating more than 200 million pixels per frame 360-degrees around the car. (For comparison, a single high-definition video frame contains about 2 million pixels.)” Mind blowing. “In addition to cameras, the robotaxi system includes six high-resolution lidar sensors that produce 15 million points per second and surround 4D radar.”
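As a rough sanity check on those camera numbers, the arithmetic works out if each sensor is around 8 megapixels (the per-camera resolution here is my assumption for illustration, not a figure published by AutoX or NVIDIA):

```python
# Rough arithmetic sketch for the Gen5 sensor figures quoted above.
# The ~8 MP per-camera resolution is an assumption, not a published spec.
num_cameras = 28
pixels_per_camera = 8_000_000               # assumed ~8 MP automotive sensor
total_pixels = num_cameras * pixels_per_camera
print(f"Total pixels per frame: {total_pixels:,}")   # 224,000,000 — over 200M

hd_frame_pixels = 1920 * 1080               # a single full-HD frame, ~2M pixels
print(f"Equivalent HD frames: {total_pixels / hd_frame_pixels:.0f}")
```

That lines up with the article's "more than 200 million pixels per frame" claim, roughly a hundred full-HD frames' worth of data every frame.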

Now, Tesla fans will quickly point out that Tesla recently ditched radar because it basically just got in the way, and that Tesla is working to solve broad, general AI challenges. Nonetheless, let’s not miss the fact that NVIDIA DRIVE is being used in robotaxis that are in service right this moment in one of the largest and most traffic-heavy cities on Earth.

“At the center of the Gen5 system are two NVIDIA Ampere architecture GPUs that deliver 900 TOPS each for a truly level 4 autonomous, production platform. With this unprecedented level of AI compute at the core, Gen5 has enough performance to power ultra complex self-driving DNNs while maintaining the compute headroom for more advanced upgrades.

“This capability makes it possible for the vehicles to react to high-traffic situations — like dozens of motorcycles and scooters cutting in or riding the opposite way at the same time — in real time, and continually improving, learning how to manage new scenarios as they arise.”

See — other systems can learn, too.

AutoX is just getting started, with plans to roll out robotaxis in cities around the world and with large automotive partners like Honda and Stellantis. And NVIDIA is just getting started, as well.

NVIDIA Acquires DeepMap

To further improve its mapping solutions for the aforementioned autonomous driving systems, NVIDIA announced in June that it was acquiring DeepMap. Clearly, there’s an implication of deep learning in that name — it’s all AI all the time these days. The summary highlight from that announcement: “DeepMap expected to extend NVIDIA mapping products, scale worldwide map operations and expand NVIDIA’s full-self-driving expertise.”

“NVIDIA is an amazing, world-changing company that shares our vision to accelerate safe autonomy,” said James Wu, co-founder and CEO of DeepMap. “Joining forces with NVIDIA will allow our technology to scale more quickly and benefit more people sooner. We look forward to continuing our journey as part of the NVIDIA team.” DeepMap co-founders James Wu and Mark Wheeler previously worked at Google, Apple, and Baidu, so rejoining a tech giant must feel a little bit like going home after getting DeepMap off the ground and seeing it acquired by NVIDIA.

What’s so special about DeepMap? Well, we don’t have insight into the code (and seeing it wouldn’t help me much anyway), but the key appears to be crowdsourced data collection from a broad fleet of vehicles, which “lets DeepMap build a high-definition map that’s continuously updated as the car drives.” Naturally, the code must be good, too. Once integrated into NVIDIA DRIVE, it will certainly collect a lot more data and benefit from fast-growing deployment.

The acquisition hasn’t closed yet — going through all of the paperwork and lawyers necessary, it’s expected to close this quarter.

DiDi Goes Public, Also Benefiting From NVIDIA DRIVE
DiDi robotaxis, courtesy of DiDi & NVIDIA.

Gigantic Chinese ride-hailing company DiDi just went public about a month ago, raising a ginormous $4.4 billion. Not too shabby, but note that DiDi has nearly 500 million active users across 71 countries and 10,000 cities. NVIDIA took the moment to note that DiDi “is developing its upcoming robotaxi platform on NVIDIA DRIVE AGX Pegasus.”

The question is, who isn’t using NVIDIA DRIVE?


From: Frank Sully 8/5/2021 11:04:57 AM
   of 2616
Interview With Murali Gopalakrishna, GM, Robotics @ NVIDIA


NVIDIA created the Isaac robotics platform, including the Isaac Sim application on the NVIDIA Omniverse platform for simulation and training of robots in virtual environments

For this week’s practitioners series, Analytics India Magazine (AIM) got in touch with Murali Gopalakrishna, Head of Product Management, Autonomous Machines and General Manager for Robotics. He also leads the business development team focusing on robots, drones, industrial IoT and enterprise collaboration products at NVIDIA. In this interview, we discuss in detail the robotics solutions developed by NVIDIA and their significance.

AIM: Can you tell us about how NVIDIA is building robotics solutions to be used at scale?

Murali: Robotics algorithms can be mainly classified into (1) sensing/perception, (2) mobility (motion/path planning), and (3) robot control. All these fields have seen significant innovation in the recent past, with AI/deep learning playing an important role. With NVIDIA GPU-accelerated AI-at-the-edge computing platforms, manufacturers will be able to develop complex algorithms and deploy robots at scale.

Robots have to sense, plan and act. To develop robots that are autonomous and efficient, developers have to accelerate algorithms for the complete stack. Algorithms such as object detection, pose estimation and depth estimation are used to perceive the environment, create a map of it and localise the robot within it. Algorithms such as free-space segmentation are used to plan an efficient path for the robot, while control algorithms determine the commands the robot needs to follow that path. Advances in AI and GPU-accelerated computing are making all these algorithms more accurate and faster, creating robots that are more capable and safer.

Ease of use and deployment have made the NVIDIA Jetson platform a logical choice for over half a million developers, researchers, and manufacturers building and deploying robots worldwide. We provide a full suite of tools and SDKs for developers and companies scaling robotics and automation applications:
  • Open-source packages for ROS/ROS2 (Human Pose Estimation, Accelerated AprilTags), Docker containers, CUDA library support and more.
  • For training: The NVIDIA Transfer Learning Toolkit (TLT) helps reduce the costs associated with large-scale data collection and labeling, and eliminates the burden of training AI/ML models from the ground up. This enables developers to build and scale production-quality models from pre-trained models faster, with no code. Auto Mixed Precision allows developers to train with half precision while maintaining the network accuracy achieved with single precision, enabling significantly faster training times.
  • For real-time inference: NVIDIA TensorRT is a high-performance SDK for deep learning, including a DL inference optimizer and runtime that delivers low latency and high throughput for inference applications. NVIDIA Triton Inference Server simplifies the deployment of AI models at scale in production. It is an open source inference serving software that lets teams deploy trained AI models from any framework on any GPU or CPU-based infrastructure (cloud, data center, or edge).
  • For perception: NVIDIA DeepStream SDK helps developers build and scale AI-powered Intelligent Video Analytics apps and services. DeepStream offers a multi-platform scalable framework with TLS security to deploy on the edge and connect to any cloud.
  • NVIDIA Fleet Command is a hybrid-cloud platform for managing and scaling AI at the edge. From one control plane, anyone with a browser and internet connection can deploy applications, update software over the air, and monitor location health.
  • NVIDIA Jarvis is an application framework for multimodal conversational AI services that delivers real-time performance on GPUs.
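To make the mixed-precision trade-off mentioned in the training bullet concrete, here is a minimal, library-free illustration using Python's built-in IEEE 754 half-precision packing (this is just the numeric format, not NVIDIA's Auto Mixed Precision tooling):

```python
import struct

def to_fp16(x: float) -> float:
    """Round a Python float (fp64) to IEEE 754 half precision and back,
    using the struct module's 'e' (binary16) format."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

# fp16 has a 10-bit mantissa, so values near 1.0 are spaced ~0.001 apart:
print(to_fp16(1.0001))   # 1.0 — the small increment is rounded away
print(to_fp16(0.1))      # ~0.0999755859375 — close, but not exact
```

Half precision halves memory traffic and doubles throughput on hardware with fp16 math units, at the cost of precision like the above; mixed-precision schemes keep accuracy by retaining single-precision master weights for the updates.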

AIM: What is the scope of these solutions?

Murali: Powerful GPU-based AI-at-the-edge computing, along with a full spectrum of sensors, is widely implemented in the field today. Fueled by AI and DL, sensor technologies that power perception for real-time decision-making have revolutionised several areas of robotics, including navigation, visual recognition and object manipulation.

Today’s AI-enabled robots perform myriad tasks and functions, allowing them to work as “cobots” in close collaboration with humans in complex environments including warehouses, retail stores, hospitals and industrial environments as well as in our homes. AI and DL continue to play a significant role in the programming of robots, speeding development time for roboticists and helping advance these systems from single functionality to multi functionality.

And there’s no arguing the pandemic accelerated the need and urgency for robotics deployment, especially in healthcare, logistics, manufacturing and retail.
  • Healthcare: To minimise contact and support shortage of staff and resources, robots have found invaluable use in the delivery of medicine/supplies, patient monitoring, medical procedures, temperature detection, and UV disinfectant applications in public and private spaces.
  • Logistics: From pick-n-place to last mile delivery, robots have clearly become indispensable with the ever-increasing need for efficiencies across the supply chain and e-commerce.
  • Manufacturing: Using AI/DL to create the factory of the future, leveraging robots and cobots for no-touch manufacturing as well as enabling zero downtime to increase productivity and efficiency.
  • Retail: From cleaning, inventory and safety (temperature detection, mask detection, social distancing) to shelf-scanning and self-checkout, robots are transforming the shopping experience.

We have a large customer base in a diverse set of industries like agriculture, manufacturing, healthcare and logistics (e.g., John Deere in agriculture and Komatsu in construction). Most of the last-mile delivery robots use NVIDIA technology (Postmates, JD-X, Cainiao, etc.).

AIM: Tell us about NVIDIA Isaac Sim.

Murali: NVIDIA created the Isaac robotics platform, including the Isaac Sim application on the NVIDIA Omniverse platform, for simulation and training of robots in virtual environments before deploying them in the real world. NVIDIA Omniverse is the underlying foundation for all our simulators, including the Isaac platform. We’ve added many features in our latest Isaac Sim open beta release, including ROS/ROS2 compatibility and multi-camera support, as well as enhanced synthetic data generation and domain randomization capabilities, which are important for generating datasets to train perception models for AI-based robots.

Simulation technology like Isaac Sim on Omniverse can be used for every aspect: from design and development of the mechanical robot, to training the robot in navigation and behavior, to deployment in a “digital twin,” in which the robot is simulated and tested in an accurate and photorealistic environment before being deployed in the real world.
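Domain randomization, mentioned above, boils down to sampling scene parameters from wide distributions so that perception models trained on synthetic data generalize to the real world. A toy, engine-free sketch of the idea (the parameter names and ranges here are made up for illustration and are not Isaac Sim's API):

```python
import random

# Hypothetical randomization ranges — illustrative only, not Isaac Sim's API.
RANDOMIZATION = {
    "light_intensity": (0.2, 2.0),   # relative scene brightness
    "object_hue":      (0.0, 1.0),   # normalized hue shift on textures
    "camera_height_m": (0.4, 1.6),   # sensor mounting height
}

def sample_scene(rng: random.Random) -> dict:
    """Draw one randomized scene configuration for synthetic data generation."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANDOMIZATION.items()}

rng = random.Random(42)              # seeded for reproducible dataset builds
scenes = [sample_scene(rng) for _ in range(3)]
for scene in scenes:
    print(scene)
```

Each sampled configuration would drive one synthetic render; training on many such variations teaches the model to ignore nuisance factors like lighting and color.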

AIM: What are the current challenges and what does the future hold for robotics?

Murali: One of the most interesting areas of development is cobots, which can be deployed in areas where robots have not been used thus far. Traditionally, robots on factory floors posed safety risks and were deemed too dangerous to work alongside humans, so these machines were typically placed in isolated environments or caged. Enter cobots. Though designed to work in close proximity with humans, cobots faced several challenges, like limited capabilities and an inability to think, putting a damper on their widespread adoption.

But now, thanks to advancements in AI, which bring intelligence to cobots, we’re seeing these systems make real-time decisions that ensure safety in the factory of the future while maintaining and optimizing productivity. This includes training a cobot to perceive the environment around it and adapt accordingly — allowing it to reduce its speed, adjust its force/strength, detect changing working conditions, or even shut down safely before it interferes with a human in its proximity. By leveraging the power of AI, coupled with changes in cobot design (softer materials, new types of joints, removal of sharp edges, etc.), we’re seeing the emergence of applications and use cases that were not previously feasible (e.g., robots in commercial kitchens).

Robots are being taught what to do, and how to improve upon complex tasks, as quickly as within a few hours or overnight (versus what used to take weeks or even months)! AI techniques such as one-shot learning, transfer learning, imitation learning, reinforcement learning, etc. are no longer confined to research papers; many of these methods are in practical use today for real-world robotics deployments.
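Of the techniques listed above, reinforcement learning is the easiest to show in miniature. Here is a toy tabular Q-learning loop on a one-dimensional "reach the goal" task; this is entirely illustrative (real robot training runs in simulators like Isaac Sim against high-dimensional observations, not tiny lookup tables):

```python
import random

N_STATES = 5            # positions 0..4, goal at state 4
ACTIONS = (-1, +1)      # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
rng = random.Random(0)

for _ in range(500):                          # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy action selection
        if rng.random() < EPS:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)  # clamp to the track
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        # standard Q-learning update
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy should always move right, toward the goal.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

The agent starts knowing nothing, stumbles to the goal by exploration, and within a few hundred episodes its learned values point every state toward the goal, the same trial-and-improve loop that, at vastly larger scale, trains robot behaviors overnight.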

AIM: How do you see the Robotics landscape evolving in India?

Murali: Manufacturing is increasingly reliant on robotic production; the automotive industry is a prime example. Our collaboration with BMW, for instance, begins with creating a digital twin of a future factory in Omniverse and laying out the entire robot-managed production line digitally before committing to physical construction. Other sectors benefiting from robotics include industrial operations and nuclear power: warehouse and inventory management, materials transportation, quality inspection and predictive maintenance in the former, and internal reactor inspection and emergency response in the latter.
