
Technology Stocks : NVIDIA Corporation (NVDA)


From: Frank Sully 8/7/2021 5:57:23 PM
1 Recommendation   of 2627
 
Accelerating Deep Learning Research with NVIDIA DGX Station A100

Aug 3, 2021




From: Frank Sully 8/8/2021 3:19:15 PM
   of 2627
 
These 2 Things Make NVIDIA the Best Semiconductor Stock For the 2020s

Nvidia’s chip licensing and software make it the best chip stock for the next decade.

Key Points
  • Nvidia is trying to acquire leading chip licensor ARM, but has its own extensive chip licensing business already established.
  • A number of in-house cloud-based software services have been developed on Nvidia's hardware platform.
  • More than just a semiconductor company, Nvidia is increasingly looking like a tech platform.
Demand for high-end computing is booming in the wake of the pandemic, and showing no signs of letting up. And yet at the same time, the semiconductor industry -- the provider of all technology's basic building blocks -- is consolidating to address the economy's tech needs.

There are good reasons for this, and NVIDIA's ( NASDAQ:NVDA) bid for chip architecture licensing leader ARM Holdings embodies the issue. At the same time, Nvidia is pouring vast resources into research and development, and coming up with an expanding suite of cloud-based software as a result. The rulebook is changing for semiconductor industry success, and Nvidia's combo of tech hardware licensing and software makes it the best bet for the 2020s.

A new battle looming with tech giants

Cloud computing is reshaping the economy. The pandemic and the remote work movement it spawned have shoved the world further down the cloud path, cementing the data center (from which cloud services are built and delivered to users) as a critical computing unit for the decades ahead.

This has of course been a boon for semiconductor companies, but it's also presented a potential problem: chip companies' biggest customers could eventually become their biggest competitors. Massive secular growth trends have turned multiple companies into tech titans with vast resources at their disposal. And several of them -- including Apple ( NASDAQ:AAPL), Alphabet's ( NASDAQ:GOOGL)( NASDAQ:GOOG) Google, and Amazon ( NASDAQ:AMZN) -- have started developing their own semiconductors to best suit their needs. All three have licensed ARM's extensive portfolio of chip designs to help get the ball rolling.

To be fair, tech giants represent a tiny share of the silicon market at this point. Even with the help of ARM's advanced blueprints, it takes incredible scale to engineer circuitry in-house and then partner with a fab (like Taiwan Semiconductor Manufacturing ( NYSE:TSM)) to start production. But that's the point. Companies like Apple, Google, and Amazon are large enough and have enough spare cash that it's beginning to make financial sense for them to journey down this path. The potential of this is concerning for the semiconductor industry.

That's why Nvidia's bid for ARM is such an incredible move. Granted, Nvidia has promised to keep ARM independent and won't deny anyone access to its designs if the merger is approved (there are still lots of question marks on whether regulators in the U.K. where ARM is based, as well as those in Europe and China, will sign off on the deal). Nevertheless, if Nvidia does get ARM, it says it will devote more research dollars to the firm and add its own extensive tech licensing know-how -- especially in the artificial intelligence department. Rather than diminish the competitive landscape, this could give purpose-built semiconductor firms a fighting chance to continue developing best-in-class components for a world that is increasingly reliant on digital systems.

And if Nvidia doesn't get to acquire ARM? There's nothing stopping it from accessing ARM's portfolio and adding its own flair to the design. In fact, even ahead of the merger decision, Nvidia has announced a slew of new products aimed at the data center market. And if it can't redirect some of its research budget to ARM, it can continue to develop on its own. Nvidia is already one of the top spenders on research and development as a percentage of revenue, doling out nearly one-quarter of its $19.3 billion in sales on R&D over the trailing 12 months.

With or without ARM, Nvidia is in prime position to dominate the tech hardware market in the decade ahead as data centers and AI grow in importance in the global economy.

Becoming a partner on the cloud software front

Of course, when it comes to designing semiconductors, the real end goal is to build a killer product or software service. Once chip companies do their job, that process has historically been out of their hands, and in the realm of engineers and software developers.

Historically, Nvidia has followed the same playbook -- but that's changed in recent years. The company has been planting all sorts of seeds for its future cloud software and service library. It has its own video game streaming platform, GeForce Now; Nvidia DRIVE has partnered with dozens of automakers and start-ups to advance autonomous vehicle software and system technology; and the creative collaboration tool Omniverse, which builds on Nvidia's digital communications capabilities, is in beta testing.

New cloud services like AI Enterprise and the Base Command Platform demonstrate the power of Nvidia's hardware, as well as the scale that lets it build business tools it can take directly to market. While public cloud computing firms like Amazon, Microsoft ( NASDAQ:MSFT), and Google get all the attention, don't ignore Nvidia. It's going after the massive and still fast-expanding software world as secular trends like the cloud and AI force the transformation of the tech world.

Between its top-notch tech hardware licensing business and newfound software prowess, it's clear Nvidia is no normal semiconductor company. It may not be the most timely purchase ever -- shares currently value the company at 95 times trailing 12-month free cash flow, partially reflecting the massive growth this year from elevated demand for its chips. The stock price could also get volatile later this year and next, especially as a more definitive answer on the ARM acquisition emerges. However, if you're looking for a top semiconductor investment for the next decade, look no further than Nvidia, as it's poised to rival the scale of the biggest of the tech giants.



From: Frank Sully 8/9/2021 12:40:28 AM
   of 2627
 
Soar into the Hybrid-Cloud: Project Monterey Early Access Program Now Available to Enterprises

Dell Technologies, VMware and NVIDIA partner to help organizations boost data center performance, manageability and security.

August 3, 2021 by MOTTI BECK

Modern workloads such as AI and machine learning are putting tremendous pressure on traditional IT infrastructure.

Enterprises that want to stay ahead of these changes can now register to participate in an early access program for Project Monterey, an initiative to dramatically improve the performance, manageability and security of their environments.

VMware, Dell Technologies and NVIDIA are collaborating on this project to evolve the architecture for the data center, cloud and edge to one that is software-defined and hardware-accelerated to address the changing application requirements.

AI and other compute-intensive workloads require real-time data streaming analysis, which, along with growing security threats, puts a heavy load on server CPUs. The increased load significantly increases the percentage of processing power required to run tasks that aren’t an integral part of application workloads. This reduces data center efficiency and can prevent IT from meeting its service-level agreements.

Project Monterey is leading the shift to advanced hybrid-cloud data center architectures, which benefit from hypervisor and accelerated software-defined networking, security and storage.



Project Monterey – Next-Generation VMware Cloud Foundation Architecture

With access to Project Monterey’s preconfigured clusters, enterprises can explore the evolution of VMware Cloud Foundation and take advantage of the disruptive hardware capabilities of the Dell EMC PowerEdge R750 server equipped with NVIDIA BlueField-2 DPU (data processing unit).
Selected functions that used to run on the core CPU are offloaded, isolated and accelerated on the DPU to support new possibilities, including:
  • Improved performance for application and infrastructure services
  • Enhanced visibility, application security and observability
  • Offloaded firewall capabilities
  • Improved data center efficiency and cost for enterprise, edge and cloud.
Interested organizations can register for the NVIDIA Project Monterey early access program. Learn more about NVIDIA and VMware’s collaboration to modernize the data center.

blogs.nvidia.com



From: Frank Sully 8/9/2021 10:29:23 AM
   of 2627
 
Can You Trust The Solutions That AI Technologies Deliver? With Mathematical Optimization, You Can



Edward Rothberg
Forbes Councils Member


Edward Rothberg is CEO and Co-Founder of Gurobi Optimization, which produces the world’s fastest mathematical optimization solver.

Every day, more and more enterprises are using AI technologies — such as machine learning, mathematical optimization and heuristics — to make high-stakes decisions in industries like healthcare, electric power, logistics, financial services and in public sector areas like the military, infrastructure and criminal justice.

But as our reliance on these AI technologies to make critical decisions has increased, concerns over the reliability of the solutions delivered by these technologies have grown.

Numerous high-profile incidents — like self-driving cars failing to recognize slightly modified stop signs, machine learning-based scoring systems demonstrating racial bias when predicting the likelihood of criminals committing future crimes, Google Trends wrongly predicting flu outbreaks based on search data and the algorithms used by Apple to determine credit-worthiness apparently discriminating against women — have shone a spotlight on some of the inherent shortcomings and unintended biases of AI technologies, shaking our confidence in the accuracy of these solutions.

Indeed, these and other incidents have left many wondering: Can we really trust the solutions delivered by AI technologies?

The Importance Of Interpretability

The root of the problem is that many of these AI tools are black boxes – meaning that users have little or no understanding of their inner workings and how they arrive at a given solution or make a particular automated decision.

The opaqueness of many AI technologies — which are based on sophisticated algorithms and complex mathematical models — has fueled concerns that AI may be producing inaccurate solutions and perpetuating bias in decision-making — and sowed distrust in AI-based solutions and decisions. This has spurred demands for greater transparency and accountability (with some formally calling for a “right to explanation” for decisions made by algorithms) and illuminated the importance of interpretability in AI.

Interpretability — the capability to understand how an AI system works and explain why it generated a solution or made a decision — is a hot topic in the business world and an area of active research, with developers across the AI software space striving to make technologies such as machine learning more interpretable. There has been significant progress with the introduction of new approaches that help improve the interpretability of machine learning and other AI tools.
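
A minimal sketch of one such approach (an illustration of the general idea, not a technique named in this article): permutation feature importance, a model-agnostic way to ask how much a trained model relies on each input, here using scikit-learn and one of its bundled datasets.

# Illustrative sketch: permutation feature importance with scikit-learn.
# Shuffling one feature at a time and measuring the drop in test accuracy
# gives a rough, model-agnostic view of what the model depends on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")  # the five features the model leans on most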

There are, however, some AI technologies that are inherently interpretable — and mathematical optimization is, without a doubt, one of these technologies.

Indeed, interpretability is (and always has been) an essential characteristic and a key strength of mathematical optimization. As the CEO of a mathematical optimization software firm, I witness every day how organizations across the business world depend on this prescriptive analytics technology to deliver solutions they can understand, trust and use to make pivotal decisions.

Assessing Interpretability

How can we gauge the interpretability of AI technologies?

The US National Institute of Standards and Technology (NIST) has developed four principles that encompass the “core concepts of explainable AI” — and this framework provides a useful lens through which we can explore and evaluate the interpretability of AI technologies.

Let’s take a look at how mathematical optimization stacks up against these four NIST principles:

1. Knowledge Limits

AI systems should only operate within the “limits” and “under conditions” they were designed for.

Mathematical optimization systems consist of two components: a mathematical optimization solver (an algorithmic engine) and a mathematical model (a detailed, customized representation of your business problem, which encapsulates all of your decision-making processes, business objectives and constraints). Users of mathematical optimization can design their models and thereby define what the “limits” and “conditions” of their systems are.
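
As a concrete (and deliberately tiny) sketch of those two components, here is a hypothetical production-planning model written with gurobipy, Gurobi's Python API. The products, margins and resource limits are made-up assumptions for illustration, not anything from this article; the point is that the model itself spells out the objective and the limits the system is allowed to operate within.

# Illustrative sketch only: a toy production-planning model in gurobipy.
# The model (variables, objective, constraints) encodes the business problem;
# the solver is the algorithmic engine that searches it.
import gurobipy as gp
from gurobipy import GRB
m = gp.Model("production_plan")
# Decision variables: whole units of two hypothetical products to build.
x = m.addVar(vtype=GRB.INTEGER, lb=0, name="widgets")
y = m.addVar(vtype=GRB.INTEGER, lb=0, name="gadgets")
# Objective: maximize profit (assumed per-unit margins).
m.setObjective(30 * x + 45 * y, GRB.MAXIMIZE)
# Constraints are the explicit "limits" and "conditions" of the system
# (assumed weekly machine hours and raw material).
m.addConstr(2 * x + 4 * y <= 800, name="machine_hours")
m.addConstr(5 * x + 3 * y <= 1200, name="raw_material")
m.optimize()
if m.Status == GRB.OPTIMAL:
    print({v.VarName: v.X for v in m.getVars()}, "profit:", m.ObjVal)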

2. Explanation

AI systems should “supply evidence, support or reasoning” for each output or solution.

A mathematical optimization model is essentially a digital twin of your business environment, and the constraints (or business rules) that must be satisfied are embedded into that model. Any solution generated by your mathematical optimization system can easily be checked against those constraints.
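
Continuing the toy model sketched above, checking a proposed plan against those embedded business rules takes only a few lines, and each rule can be inspected and explained on its own. The candidate values below are illustrative.

# Illustrative sketch: verify a candidate plan against the same business rules
# written into the model, so every acceptance or rejection is explainable.
def check_plan(widgets, gadgets, tol=1e-6):
    rules = {
        "machine_hours": 2 * widgets + 4 * gadgets <= 800 + tol,
        "raw_material":  5 * widgets + 3 * gadgets <= 1200 + tol,
        "non_negative":  widgets >= -tol and gadgets >= -tol,
    }
    for name, ok in rules.items():
        print(f"{name}: {'satisfied' if ok else 'VIOLATED'}")
    return all(rules.values())
check_plan(widgets=170, gadgets=115)  # an illustrative candidate plan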

3. Explanation Accuracy

The explanations provided by AI systems should “accurately describe how the system came to its conclusion.”

Mathematical optimization is a problem-solving technology that can rapidly and comprehensively comb through trillions or more possible solutions to incredibly complex business problems and find the optimal one. Since mathematical optimization conducts this search in a systematic fashion, the solutions delivered by this AI technology come with a mathematically backed guarantee of quality — and this fact can be audited and validated.
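
In solver terms, that guarantee shows up as a proven bound reported alongside the best solution found; the relative gap between the two is an auditable certificate of quality. A short sketch, assuming the toy gurobipy model above has just been optimized:

# Illustrative sketch (assumes the toy model `m` above was optimized).
incumbent = m.ObjVal     # objective of the best solution found
bound = m.ObjBound       # proven bound on the best objective achievable
gap = abs(bound - incumbent) / max(abs(incumbent), 1e-10)
print(f"best solution {incumbent:.2f}, proven bound {bound:.2f}, gap {gap:.2%}")
# For mixed-integer models gurobipy also reports this directly as m.MIPGap;
# a gap of 0% is a proof of optimality that can be audited after the fact.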

4. Meaningful

AI systems should “provide explanations that are understandable to individual users.”

Most AI tools — like neural networks and random forests — run on black box models. You feed them data, they work their magic and automatically spit out a solution. It’s essentially impossible (even for many developers) to gain insight into how these systems actually work or why they are making specific predictions or decisions. Mathematical optimization models, in contrast, are transparent and interpretable and are meaningful by design (as they capture the fundamental features of your real-world operating environment). The models themselves (and the solutions they deliver) reflect reality — and are thus understandable for users.

As you can see, mathematical optimization fulfills all four NIST principles and excels in the area of interpretability. With mathematical optimization, you can attain a deep understanding of how and why the AI system makes certain decisions — and thereby gain trust in those decisions.

It’s important to note that mathematical optimization is not the only AI tool that can deliver interpretable solutions — there are a number of other AI technologies that have this capability (and other technologies that are developing in this area).

When you’re deciding whether or not to invest in one of these AI technologies, one critical factor to consider is their interpretability — and the NIST principles provide a good framework through which to assess this.

Understanding “The Why” Of AI

The issue of interpretability in AI continues to captivate the business world, with “explainable AI” trends and technological breakthroughs grabbing news headlines and “explainable AI” initiatives topping the agendas of IT executives. While many companies in the AI space grapple with questions of how to make their technologies more transparent and trustworthy, there are already AI tools out there — like mathematical optimization — that are innately equipped to deliver interpretable, reliable and optimal solutions to today’s problems.



From: Frank Sully 8/9/2021 11:14:38 AM
   of 2627
 
Synopsys, Cadence, Google And NVIDIA All Agree: Use AI To Help Design Chips



Karl Freund
Contributor
Enterprise Tech

Synopsys created a buzz in 2020, and now Google, NVIDIA, and Cadence Design have joined the party. What lies ahead?

Introduction

Designing modern semiconductors can take years and scores of engineers armed with state-of-the-art EDA design tools. But the semiconductor landscape and the world around us are being revolutionized by hundreds of new chips, primarily driven by AI. Some entrepreneurial thought leaders believe that the expensive and lengthy chip design process could shrink from 2-3 years to 2-3 months if hardware development were to become more agile, more autonomous. And chief among a new breed of agile design tools is AI itself.

The Semiconductor Design Landscape

This discussion began in earnest when the EDA leader Synopsys announced DSO.ai (Design Space Optimization AI), a software product that could more autonomously identify optimal ways to arrange silicon components (layouts) on a chip to reduce area and power consumption, even while increasing performance. Using reinforcement learning, DSO.ai could evaluate billions of alternatives against design goals and produce a design that was significantly better than one produced by talented engineers. The size of the problem/solution space DSO.ai addresses is staggering: there are something like 10^90,000 possible ways to place components on a chip. That compares to roughly 10^360 possible moves in the game of Go, which was mastered by Google AI in 2016. Since reinforcement learning can play Go better than the world champion, one could conceivably design a better chip if one is willing to spend the compute time to do it.
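
To make that search-space intuition concrete, here is a deliberately tiny stand-in. It is not DSO.ai and not reinforcement learning, just a simulated-annealing search that shuffles a handful of made-up cells on a small grid to shorten total wiring; real tools explore astronomically larger spaces with far smarter policies. The sketch only shows what "evaluating alternative placements against a cost" means.

# Toy stand-in only (NOT DSO.ai): simulated-annealing placement of a few
# made-up "cells" on an 8x8 grid, minimizing total Manhattan wirelength.
import math, random
random.seed(0)
GRID, CELLS = 8, 12
NETS = [(random.randrange(CELLS), random.randrange(CELLS)) for _ in range(20)]
def wirelength(place):
    # Sum of Manhattan distances between connected cells.
    return sum(abs(place[a][0] - place[b][0]) + abs(place[a][1] - place[b][1])
               for a, b in NETS)
# Start from a random legal placement: one cell per grid site.
place = random.sample([(x, y) for x in range(GRID) for y in range(GRID)], CELLS)
cost, best, temp = wirelength(place), wirelength(place), 5.0
for _ in range(20000):
    i, j = random.sample(range(CELLS), 2)
    place[i], place[j] = place[j], place[i]          # propose swapping two cells
    new = wirelength(place)
    if new <= cost or random.random() < math.exp((cost - new) / temp):
        cost, best = new, min(best, new)             # accept the move
    else:
        place[i], place[j] = place[j], place[i]      # undo it
    temp *= 0.9997                                   # cool down slowly
print("final wirelength:", cost, "best seen:", best)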

Results are quite impressive, realizing 18% faster operating frequency at 21% lower power, while reducing engineering time from six months to as little as one. In a recent interview, Synopsys’ Founder and Co-CEO Aart de Geus disclosed that Samsung has a working chip in-house today that was designed with DSO.ai. This would indeed be the world’s first use of AI to create a chip layout in production – from RTL to tapeout.

Recently Google published results of doing something similar, as has NVIDIA. And Cadence Design Systems just announced an AI-based optimization platform similar to Synopsys DSO.ai. Before we take a look at these efforts, let’s back up a little and look at the entire semiconductor design space. A good place to start is the Gajski-Kuhn Chart, which outlines all the steps of chip design along three axes: the Behavioral level, where architects define what the chip is supposed to do; the Structural level, where they determine how the chip is organized; and the Geometry level, where engineers define how the chip is laid out.

Based on this model, each step toward the center (which is when the team “tapes out” the chip to the fabrication partner) feeds the work in the next phase in a clockwise direction. To date, all applications of AI have been in the geometry space, or physical design, to address the waning of Moore’s Law.

Synopsys DSO.ai

As I covered at launch, Synopsys DSO.ai was the first entrant to apply AI to the physical design process, producing floor plans that consumed less power, ran at higher frequencies, and occupied less space than the best an experienced designer could produce. What really attracted my attention was the profound effect of AI on productivity: DSO.ai users were able to achieve in a few days what used to take teams of experts many weeks.

Google Research and NVIDIA Research

Both companies have produced research papers that describe the use of reinforcement learning to assist in the physical design of the floor plan. In Google’s case, AI is being used to lay out the floor plan of the next-generation TPU chip, and the company is investigating additional uses of AI, such as architectural optimization.

NVIDIA similarly has focused on that same low-hanging fruit: floorplanning. And with all the compute capacity it has in-house, I’d expect NVIDIA to continue to eat its own dogfood and use AI to design better AI chips.

google.com



From: Frank Sully 8/9/2021 4:54:21 PM
   of 2627
 
NVIDIA Graphics Research

One Minute Video




From: Frank Sully 8/9/2021 7:51:12 PM
   of 2627
 
NVIDIA maintains tight grip on market for AI processors in cloud and data centers

OMDIA’s AI Processors for Cloud and Data Center Forecast Report has declared NVIDIA as one of the frontrunners in terms of how it uses artificial intelligence (AI) in the cloud and data centers.

According to the report, NVIDIA swept up an impressive 80.6% share of all global revenue - a total of $3.2 billion in 2020, up from $1.8 billion in 2019.

The report labels NVIDIA’s market dominance as ‘supremacy in the market for GPU-derived chips’ - the likes of which are commonly deployed in servers, workstations, and expansion cards across cloud and data center equipment.

Omdia principal analyst for advanced computing, Jonathan Cassell, says NVIDIA employs key strategies to maintain its growth.

“With their capability to accelerate deep-learning applications, GPU-based semiconductors became the first type of AI processor widely employed for AI acceleration. And as the leading supplier of GPU-derived chips, NVIDIA has established itself and bolstered its position as the AI processor market leader for the key cloud and data center market,” explains Cassell.

NVIDIA’s dominance is also leading to intense market competition as suppliers battle it out and claim their share of the total $4 billion market revenue for cloud and data center AI processors. Total market revenue could reach $37.6 billion by 2026.

“Despite the onslaught of new competitors and new types of chips, NVIDIA’s GPU-based devices have remained the default choice for cloud hyperscalers and on-premises data centers, partly because of their familiarity to users,” says Cassell.

Cassell points to the NVIDIA Compute Unified Device Architecture (CUDA) Toolkit, which is used extensively in the AI software development community. This, by default, provides a boost for NVIDIA’s associated products such as GPU chips.

But market competition will only increase, particularly as the market looks towards other, non NVIDIA-based GPU chips and other AI processors in the future.

According to Omdia’s research, other major market players in the cloud and data center AI processor market include Xilinx, Google, Intel, and AMD.

Xilinx ranked second behind NVIDIA. Xilinx provides field-programmable gate array (FPGA) products commonly used for AI inferencing in cloud and data center servers.

Google ranked third. Its Tensor Processing Unit (TPU) AI ASIC is employed extensively in its own hyperscale cloud operations.

Intel ranked fourth. Its Habana AI proprietary-core AI ASSPs and its FPGA products are designed for AI cloud and data center servers.

AMD ranked fifth for its GPU-derived AI ASSPs for cloud and data center servers.

channellife.com.au



From: Frank Sully 8/10/2021 2:06:46 PM
   of 2627
 
Nvidia expands Omniverse with a new GPU, new collaborations

The new RTX A2000 is a powerful, low-profile, dual-slot GPU for professionals, with 6GB of ECC graphics memory in a compact form factor. Nvidia says it will expand access to Omniverse, its 3D collaboration platform.



By Stephanie Condon for Between the Lines | August 10, 2021 -- 16:00 GMT (09:00 PDT) | Topic: Processors

Nvidia on Tuesday announced a series of ways it plans to bring the Omniverse design and collaboration platform to a vastly larger audience. Those plans include new integrations with Blender and Adobe, companies that will extend the potential reach of Omniverse by millions. Nvidia is also introducing the new RTX A2000 GPU, bringing the RTX technology that powers Omniverse to a wide range of mainstream computers.

Nvidia rolled out Omniverse in open beta back in December, giving 3D designers a shared virtual world from which they can collaborate across different software applications and from different geographic locations. Earlier this year, the company introduced Omniverse Enterprise, bringing the platform to the enterprise community via a familiar licensing model.

"We are building Omniverse to be the connector of the physical and virtual worlds," Richard Kerris, VP of Omniverse for Nvidia, said to reporters last week. "We believe that there will be more content and experiences shared in virtual worlds than in physical worlds. And we believe that there will be amazing exchange markets and economic situations that will be first built in the virtual world... Omniverse is an exchange of these vital worlds. We connect everything and everyone, through a baseline architecture that is familiar to existing tools that are out there and existing workflows."

Nvidia unveiled the RTX A2000 GPU to bring RTX technology to mainstream workstations.

In a blog post, Nvidia VP Bob Pette wrote that the new A2000 GPU "would serve as a portal" to Omniverse "for millions of designers." The A2000 is Nvidia's most compact, power-efficient GPU for standard and small-form-factor workstations.

The GPU has 6GB of memory capacity with an error correction code (ECC) to maintain data integrity -- a feature especially important for industries such as healthcare and financial services.

Based on the Nvidia Ampere architecture, it features 2nd Gen RT Cores, enabling real-time ray tracing for professional workflows. It offers up to 5x the rendering performance from the previous generation with RTX on. It also features 3rd Gen tensor cores to enable AI-augmented tools and applications, as well as CUDA cores with up to 2x the FP32 throughput of the previous generation.

Speaking to reporters, Pette said the A2000 would enable RTX in millions of additional mainstream computers. More designers will have access to the real-time ray tracing and AI acceleration capabilities that RTX offers. "This is the first foray of RTX into what is the largest volume segment of GPUs for Nvidia," Pette said.

Among the first customers using the RTX A2000 are Avid, Cuhaci & Peterson and Gilbane Building Company.

The A2000 desktop GPU will be available in workstations from manufacturers including ASUS, BOXX Technologies, Dell Technologies, HP and Lenovo, as well as Nvidia's global distribution partners, starting in October.

Meanwhile, Nvidia is encouraging the adoption of Omniverse by supporting Universal Scene Description (USD), an interchange framework invented by Pixar in 2012. USD was released as open-source software in 2016, providing a common language for defining, packaging, assembling and editing 3D data.

Omniverse is built on the USD framework, giving other software makers different ways to connect to the platform. Nvidia announced Tuesday that it's collaborating with Blender, the world's leading open-source 3D animation tool, to provide USD support in the upcoming release of Blender 3.0. This will give Blender's millions of users access to Omniverse production pipelines. Nvidia is contributing USD and materials support to the Blender 3.0 alpha, which will be available soon.

Nvidia has also collaborated with Pixar and Apple to define a common approach for expressing physically accurate models in USD. More specifically, they've developed a new schema for rigid-body physics, the math that describes how solids behave in the real world (for example, how marbles would roll down a ramp). This will help developers create and share realistic simulations in a standard way.

Nvidia also announced a new collaboration with Adobe on a Substance 3D plugin that will bring Substance Material support to Omniverse. This will give Omniverse and Substance 3D users new material editing capabilities.

EXPANDING OMNIVERSE ACCESS

Nvidia on Tuesday also announced that Omniverse Enterprise, currently in limited early access, will be available later this year on a subscription basis from its partner network. That includes ASUS, BOXX Technologies, Dell Technologies, HP, Lenovo, PNY and Supermicro.

The company is also extending its Developer Program to include Omniverse. This means the developer community for Nvidia will have access to Omniverse with custom extensions, microservices, source code, examples, resources and training.

zdnet.com



From: Frank Sully 8/10/2021 2:15:40 PM
1 Recommendation   of 2627
 
Nvidia Has Turned $10,000 Into $250,000. Here's Why It Can Do It Again

The graphics specialist can zoom higher thanks to a solid set of catalysts.



Harsh Chauhan
(TMFTechJunk13)

Aug 10, 2021 at 6:30AM

Key Points

  • Nvidia can grow at a faster pace in the coming years as compared to the last five years.
  • The company's domination of the graphics card market opens up a huge opportunity that could supercharge sales growth.
  • Additional catalysts in the form of the data center and automotive markets strengthen the bullish case.

If you had $10,000 to invest at the beginning of 2016 and bought shares of Nvidia ( NASDAQ:NVDA) using that money, your initial investment would be worth just about $250,000 right now.



[Chart: NVDA data by YCharts]

Nvidia has beaten the broader market handsomely over the years thanks to its strong suite of products, which has helped it attract millions of customers and dominate a fast-growing space. It is now offering products that deliver more bang for their buck to existing customers, allowing it to increase average selling prices (ASPs) and bring new customers into the fold. As such, Nvidia can repeat -- or improve upon -- its terrific stock market performance once again in the coming years.

Let's look at the reasons why.

Nvidia's outstanding growth can last for a long time

Nvidia's dominance of the GPU (graphics processing unit) market has accelerated its revenue and earnings growth over the years. In fiscal 2016, the company reported just $5 billion in annual revenue and $929 million in adjusted net income. In fiscal 2021, which ended in January this year, Nvidia's annual revenue had ballooned to $16.7 billion and adjusted net income had jumped to $6.2 billion.

This translates into a compound annual revenue growth rate of 27% over the five-year period, while the non-GAAP net income grew 46% per year. Of late, Nvidia has been outpacing its historical growth. Its revenue for the first quarter of fiscal 2022 surged 84% year over year to $5.66 billion, while adjusted net income was up 107% to $2.3 billion.
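
Those growth rates follow directly from the fiscal 2016 and fiscal 2021 figures quoted above; a quick back-of-the-envelope check:

# Back-of-the-envelope check of the five-year growth rates cited above,
# using the revenue and adjusted net income figures from the article ($B).
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1
print(f"revenue CAGR:    {cagr(5.0, 16.7, 5):.0%}")    # roughly 27%
print(f"net income CAGR: {cagr(0.929, 6.2, 5):.0%}")   # roughly 46%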

Analysts forecast that Nvidia's surge will continue this fiscal year, with revenue expected to jump 49% over last year and earnings per share anticipated to increase to the tune of 58%. I think that Nvidia can sustain its outstanding growth beyond this year because of a few simple reasons.

First, Nvidia controls 80% of the discrete GPU market, according to Jon Peddie Research. The discrete GPU market is expected to generate $54 billion in annual revenue by 2025 as compared to $23.6 billion last year. Nvidia's market share puts it in pole position to corner a major chunk of the additional revenue opportunity, and it is unlikely to yield ground to smaller rival Advanced Micro Devices ( NASDAQ:AMD) because of its technological advantage.

According to a survey by game distribution service Steam, Nvidia's flagship RTX 3090 card is outselling AMD's latest RX 6000 series cards by a ratio of 11:1. What's surprising is that the RTX 3090 is outperforming AMD's entire lineup despite its high starting price of $1,499, which makes it significantly more expensive than AMD's flagship RX 6900 XT that starts at $999.

This may be because Nvidia's latest RTX 30 series cards outperform AMD's offerings in independent benchmarks and are also priced competitively. For instance, the RTX 3080, which starts at $699, reportedly outperforms the pricier RX 6900 XT. Meanwhile, Nvidia takes a substantial lead when it comes to identically priced offerings: the RTX 3070 Ti, which starts at $599, is reportedly 21% faster than AMD's RX 6800, which starts at $579, indicating that Nvidia offers more bang for the buck.

As a result, Nvidia is commanding a higher price from customers for its RTX 30 series cards. The average selling price of Nvidia's RTX 30 series cards hit $360 in the first six months of their launch, up 20% from the previous generation Turing cards.

With the gaming business producing 49% of Nvidia's total revenue in Q1, the bright prospects of this segment will play a critical role in helping the company deliver solid upside to investors in the future. This, however, isn't the only catalyst to look out for.

Two more reasons to go long

While Nvidia's gaming business will provide the base for its long-term growth, its data center and automotive businesses will bring additional catalysts into play.

The data center segment, for instance, generated just $339 million in revenue in fiscal 2016. The segment's fiscal 2021 revenue jumped to $6.7 billion and accounted for 40% of the total revenue. Nvidia investors can expect the data center business to keep booming as the company is attacking a substantial opportunity in this space through multiple chip platforms.

Nvidia's data center GPUs are already a hit among major cloud service providers thanks to their ability to tackle artificial intelligence and machine learning workloads. The company is now doubling down on new opportunities such as data processing units (DPUs) and CPUs (central processing units), strengthening its position in the data center accelerator space.

According to a third-party estimate, the data center accelerator market was worth $4.2 billion last year. That number is expected to jump to $53 billion by 2025, opening another huge opportunity for the chipmaker. And finally, the automotive segment is expected to turn into another happy hunting ground for Nvidia.

The company generated just $536 million in revenue from the automotive segment last year. However, Nvidia says that it has built an automotive design win pipeline worth over $8 billion through fiscal 2027, partnering with several well-known OEMs (original equipment manufacturers) such as Mercedes-Benz, Audi, Hyundai, Volvo, Navistar, and others. So, Nvidia's automotive business seems ready to step on the gas.

In the end, it can be said that Nvidia now has bigger growth drivers compared to 2016. More importantly, it is in a strong position to tap into the multibillion-dollar end-market opportunities, making it a top growth stock investors can comfortably buy and hold for at least the next five years.

fool.com



From: Frank Sully 8/10/2021 9:46:53 PM
   of 2627
 
A Code for the Code: Simulations Obey Laws of Physics with USD

Universal Scene Description now sports an extension so objects can be simulated to act just like they would in the real world thanks to the collaboration of Apple, NVIDIA and Pixar Animation Studios.

August 10, 2021 by Richard Kerris



Life in the metaverse is getting more real.

Starting today, developers can create and share realistic simulations in a standard way. Apple, NVIDIA and Pixar Animation Studios have defined a common approach for expressing physically accurate models in Universal Scene Description (USD), the common language of virtual 3D worlds. Pixar released USD and described it in 2016 at SIGGRAPH. It was originally designed so artists could work together, creating virtual characters and environments in a movie with the tools of their choice.

Fast forward, and USD is now pervasive in animation and special effects. USD is spreading to other professions like architects who can benefit from their tools to design and test everything from skyscrapers to sports cars and smart cities.

Playing on the Big Screen

To serve the needs of this expanding community, USD needs to stretch in many directions. The good news is Pixar designed USD to be open and flexible.

So, it’s fitting the SIGGRAPH 2021 keynote provides a stage to describe USD’s latest extension. In technical terms, it’s a new schema for rigid-body physics, the math that describes how solids behave in the real world.

For example, when you’re simulating a game where marbles roll down ramps, you want them to respond just as you would expect when they hit each other. To do that, developers need physical details like the weight of the marbles and the smoothness of the ramp. That’s what this new extension supplies.
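
For developers who want to see what that looks like in a USD file, here is a minimal sketch using the open-source USD Python bindings, assuming a build that includes the UsdPhysics schema (USD 21.05 or later). The prim paths, sizes and masses are illustrative choices, not values from this post.

# Minimal illustrative sketch (assumes a USD build with the UsdPhysics schema,
# e.g. 21.05+): a "marble" rigid body that can roll down a static "ramp" collider.
from pxr import Usd, UsdGeom, UsdPhysics, Gf
stage = Usd.Stage.CreateNew("marble_ramp.usda")
UsdPhysics.Scene.Define(stage, "/World/physicsScene")    # gravity and solver scope
marble = UsdGeom.Sphere.Define(stage, "/World/marble")   # the marble itself
marble.CreateRadiusAttr(0.01)                            # 1 cm radius (assumed)
marble.AddTranslateOp().Set(Gf.Vec3d(0.0, 0.5, 0.0))     # start above the ramp
UsdPhysics.RigidBodyAPI.Apply(marble.GetPrim())          # it moves under physics
UsdPhysics.CollisionAPI.Apply(marble.GetPrim())          # and collides
UsdPhysics.MassAPI.Apply(marble.GetPrim()).CreateMassAttr(0.005)  # 5 g (assumed)
ramp = UsdGeom.Cube.Define(stage, "/World/ramp")         # a static, tilted ramp
ramp.AddRotateXYZOp().Set(Gf.Vec3f(0.0, 0.0, -15.0))     # tilt by 15 degrees
UsdPhysics.CollisionAPI.Apply(ramp.GetPrim())            # collider only, stays put
stage.GetRootLayer().Save()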



USD Keeps Getting Better

The initial HTML 1.0 standard, circa 1993, defined how web pages used text and graphics. Fifteen years later HTML5 extended the definition to include video so any user on any device could watch videos and movies.

Apple and NVIDIA were both independently working on ways to describe physics in simulations. As members of the SIGGRAPH community, we came together with Pixar to define a single approach as a new addition to USD.

In the spirit of flexibility, the extension lets developers choose whatever solvers they prefer, as they can all be driven from the same set of USD data. This presents a unified set of data suitable for everything from off-line simulation for film, to games, to augmented reality.

That’s important because solvers for real-time uses like gaming prioritize speed over accuracy, while architects, for example, want solvers that put accuracy ahead of speed.

An Advance That Benefits All

Together the three companies wrote a white paper describing their combined proposal and shared it with the USD community. The reviews are in and it’s a hit. Now the extension is part of the standard USD distribution, freely available for all developers.

The list of companies that stand to benefit reads like credits for an epic movie. It includes architects, building managers, product designers and manufacturers of all sorts, companies that design games — even cellular providers optimizing layouts of next-generation networks. And, of course, all the vendors that provide the digital tools to do the work.

“USD is a major force in our industry because it allows for a powerful and consistent representation of complex, 3D scene data across workflows,” said Steve May, Chief Technology Officer at Pixar.

“Working with NVIDIA and Apple, we have developed a new physics extension that makes USD even more expressive and will have major implications for entertainment and other industries,” he added.

Making a Metaverse Together

It’s a big community we aim to serve with NVIDIA Omniverse, a collaboration environment that’s been described as an operating system for creatives or “like Google Docs for 3D graphics.”

We want to make it easy for any company to create lifelike simulations with the tools of their choice. It’s a goal shared by dozens of organizations now evaluating Omniverse Enterprise, and close to 400 companies and tens of thousands of individual creators who have downloaded Omniverse open beta since its release in December 2020.

We envision a world of interconnected virtual worlds — a metaverse — where someday anyone can share their life’s work.

Making that virtual universe real will take a lot of hard work. USD will need to be extended in many dimensions to accommodate the community’s diverse needs.

A Virtual Invitation

To get a taste of what’s possible, watch a panel discussion from GTC (free with registration), where 3D experts from nine companies including Pixar, BMW Group, Bentley Systems, Adobe and Foster + Partners talked about the opportunities and challenges ahead.

We’re happy we could collaborate with engineers and designers at Apple and Pixar on this latest USD extension. We’re already thinking about a sequel for soft-body physics and so much more.

Together we can build a metaverse where every tool is available for every job.

For more details, watch a talk on the USD physics extension from NVIDIA’s Adam Moravanszky and attend a USD birds-of-a-feather session hosted by Pixar.

blogs.nvidia.com

