
Hoover’s Description:

NVIDIA makes 3-D graphics processors that are built into products made by PC original equipment manufacturers (OEMs) and add-in board makers. The company's RIVA128 graphics processor combines 3-D and 2-D graphics on a single chip and is designed to provide a lower-cost alternative to multi-chip or multi-board graphics systems. Customers include STB Systems (63% of sales) and Diamond Multimedia Systems (31%). These two companies incorporate NVIDIA's processors into add-in boards that are then sold to OEMs such as Compaq, Dell, Gateway, and Micron Technology. NVIDIA is fighting patent-infringement suits filed by 3Dfx Interactive, Silicon Graphics and S3 that seek to block the sale of its RIVA processors.

Web Site: nvidia.com

Expected IPO date: Week of Jan. 18, 1999

***********************************************************************************************

Update November 18, 2021

Congratulations on finding your way here. NVIDIA is a major player in AI & Robotics chips and software and will continue its exponential growth for many years to come. Its future growth will be driven by Data Center AI chips and software, and by AI chips and software for the Omniverse, Digital Twins and Digital Avatars. Its share price is up 140% year-to-date, and it operates in AI & Robotics chip and software markets projected to grow at a 38% CAGR over the next five years (a five-fold increase over that time). From 2011 to 2020 its share price grew 22-fold, a CAGR of 36%, as NVIDIA transformed itself from a gaming graphics chip company into an AI company, following the vision of CEO Jensen Huang, who realized the applicability of its GPU chips (which NVIDIA invented) to deep-neural-network AI computing, thanks to their ability to do parallel processing.

Despite its exponential growth over the past decade and a seemingly rich valuation, I predict that it will continue to grow at a 38% CAGR over the next five years, a five-fold increase. I feel like we've found the pot of gold at the end of the rainbow.



Concerning the summary below, you can skim the extensive history of NVIDIA's legacy product, gaming graphics chips, and focus on the discussion of AI & Robotics chips and software platforms, including Data Centers, Autonomous Vehicles, the Omniverse (NVIDIA's version of the Metaverse), Digital Twins, Digital Avatars and other deep-neural-network AI initiatives. There is also a primer on the "nuts and bolts" of machine learning, viz. deep neural networks and training the AI models with second-year calculus: gradient descent (a first-order relative of multi-dimensional Newton-Raphson iteration), and, for really big data sets and models, stochastic gradient descent.

**********************************************************************************************
Update May 24, 2024
  • Jensen Huang talks Q1 Earnings - 12 minute video

youtu.be
  • NVIDIA just started a new era of Supercomputing - 6 1/2 minute excerpt from GTC 2024
youtu.be

  • GTC 2024 Complete

youtube.com

**********************************************************************************************
Update March 5, 2023



CNBC published this video item, entitled “How Nvidia Grew From Gaming To A.I. Giant, Now Powering ChatGPT” – below is their description.

Thirty years ago, Taiwanese immigrant Jensen Huang founded Nvidia with the dream of revolutionizing PCs and gaming with 3D graphics. In 1999, after laying off the majority of workers and nearly going bankrupt, the company succeeded when it launched what it claims was the world's first Graphics Processing Unit (GPU). Then Jensen bet the company on something entirely different: AI. Now, that bet is paying off in a big way as Nvidia's A100 chips quickly become the coveted training engines for ChatGPT and other generative AI. But as the chip shortage eases, other chip giants like Intel are struggling. And with all its chips made by TSMC in Taiwan, Nvidia remains vulnerable to mounting U.S.-China trade tensions. We went to Nvidia's Silicon Valley, California, headquarters to talk with Huang and get a behind-the-scenes look at the chips powering gaming and the AI boom.

Chapters:

02:04 — Chapter 1: Popularizing the GPU

07:02 — Chapter 2: From graphics to AI and ChatGPT

11:52 — Chapter 3: Geopolitics and other concerns

14:31 — Chapter 4: Amazon, autonomous cars and beyond

Produced and shot by: Katie Tarasov

**********************************************************************************************

Update August 12, 2021

NVIDIA still makes its bread and butter with graphics chips (graphics processing units, or GPUs), and it is dominant: NVIDIA's share of the graphics chip market grew from 75% in Q1 2020 to 81% in Q1 2021, far ahead of closest competitor AMD. Nvidia's Graphics segment includes the GeForce GPUs for gaming and PCs, the GeForce NOW game-streaming service and related infrastructure, and solutions for gaming platforms. It also includes the Quadro/NVIDIA RTX GPUs for enterprise workstation graphics, vGPU software for cloud-based visual and virtual computing, and automotive platforms for infotainment systems. In 2020, the Graphics segment generated $9.8 billion, or about 59%, of Nvidia's total revenue, up 28.7% from the previous year. The segment's operating income grew 41.2% to $4.6 billion, about 64% of the total.

The Compute and Networking segment includes Nvidia's Data Center platforms as well as systems for AI, high-performance computing, and accelerated computing. It also includes Mellanox networking and interconnect solutions, the automotive AI Cockpit, autonomous driving development agreements, autonomous vehicle solutions, and Jetson for robotics and other embedded platforms. The Compute and Networking segment delivered revenue of $6.8 billion in 2020, up 108.6% from the previous year, about 41% of Nvidia's total revenue. Operating income grew 239.3% to $2.5 billion, about 36% of the company's total operating income.

AI is considered by management and observers alike to be the future, and NVIDIA even incorporates AI into its RTX graphics chips with DLSS (deep learning super sampling), a video rendering technique that boosts frame rates by rendering frames at a lower resolution than displayed and then using deep learning, a type of AI, to upscale the frames so that they look as sharp as they would at native resolution. Data centers are a fast-growing area, and one of Nvidia's newer concepts in AI hardware for data centers is the BlueField DPU (data processing unit), first revealed at GTC in October 2020. In April 2021 the company unveiled BlueField-3, a DPU it said was designed specifically for "AI and accelerated computing." Like Nvidia GPUs, its DPUs are accelerators, meaning they are meant to offload compute-heavy tasks from a system's CPU, leaving the latter with more capacity to tackle other workloads. DPUs are powered by Arm chips. Nvidia DPUs, based on the BlueField SmartNICs from Mellanox (whose acquisition Nvidia announced in 2019 and completed in 2020), take on things like software-defined networking, storage management, and security workloads. They're also eventually expected to offload server virtualization, via a partnership with VMware as part of VMware's Project Monterey.
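To make the DLSS idea concrete, here is a minimal Python sketch of my own (NVIDIA's actual DLSS network is proprietary; learned_upscale below is a naive stand-in for it): render cheap at low resolution, then let a learned model fill in the detail at native resolution.

import numpy as np

def render_frame(width, height):
    # Stand-in for an expensive 3-D renderer: returns an RGB frame.
    return np.random.rand(height, width, 3)

def learned_upscale(frame, scale):
    # Stand-in for the trained super-sampling network. Here we just do a
    # nearest-neighbor repeat; real DLSS runs a deep net trained against
    # high-resolution ground-truth frames to sharpen the upscaled image.
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

native_w, native_h, scale = 3840, 2160, 2
low_res = render_frame(native_w // scale, native_h // scale)  # render at 1080p
displayed = learned_upscale(low_res, scale)                   # display at 4K
assert displayed.shape == (native_h, native_w, 3)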

4 Reasons to Invest in Nvidia's AI in 2022

Many of the world's leading companies are using Nvidia's GPUs to power their AI systems.



Danny Vena

January 13, 2022
A good eight-minute discussion of NVIDIA's history in graphics chips (GPUs) for gaming, and how their ability to do parallel processing (performing multiple computations simultaneously) led to NVIDIA's AI world dominance.


NVIDIA is actively involved in AI supercomputers: NVIDIA technologies power 342 systems on the TOP500 list released at the ISC High Performance event in June 2021, including 70 percent of all new systems and eight of the top 10. The latest ranking of the world's most powerful systems shows that high-performance computing centers are increasingly adopting AI, and that users continue to embrace the combination of NVIDIA AI, accelerated computing and networking technologies to run their scientific and commercial workloads. NVIDIA is also at the forefront of developing autonomous vehicles, and of virtual reality and AI applications with its Omniverse, its version of the Metaverse.

Here is a one-and-a-half-hour video of the November 2021 GTC keynote by NVIDIA CEO Jensen Huang. He focused on the Omniverse and on digital and physical robots, including autonomous vehicles. Scroll to 13:00 for the start.



Here is a one-and-a-half-hour video of the May 2021 GTC keynote by NVIDIA CEO Jensen Huang, discussing the latest developments in graphics chips, data centers, supercomputers, autonomous vehicles and the Omniverse. Just this week it was revealed that part of the keynote, including Jensen Huang and his kitchen, was simulated in Omniverse. Scroll to 13:00 for the start.



Graphics Chips

Top 10 Most Important Nvidia GPUs of All Time
(Ten-minute video summary)



AI And GPUs

Shall we play a game? How video games transformed AI Message 33443736



After 40 years in the wilderness, two huge breakthroughs are fueling an AI renaissance. The internet handed us a near-unlimited amount of data. A recent IBM paper found that 90% of the world's data has been created in just the last two years. From the 290+ billion photos shared on Facebook to millions of e-books and billions of online articles and images, we now have endless fodder for neural networks.

The breathtaking jump in computing power is the other half of the equation. RiskHedge readers know computer chips are the "brains" of electronics like your phone and laptop. Chips contain billions of "brain cells" called transistors; the more transistors on a chip, the faster it is. And in the past decade, a special type of computer chip emerged as the perfect fit for neural networks. Do you remember the blocky graphics on video games like Mario and Sonic from the '90s? If you have kids who are gamers, you'll know graphics have gotten far more realistic since then.

This incredible jump is due to chips called graphics processing units (GPUs). GPUs can perform thousands of calculations all at once, which helps create these movie-like graphics. That's different from how traditional chips work, which calculate one by one. Around 2006, Stanford researchers discovered that GPUs' "parallel processing" abilities were perfect for AI training. For example, do you remember Google's Brain project? The machine taught itself to recognize cats and people by watching YouTube videos. It was powered by one of Google's giant data centers, running on 2,000 traditional computer chips, and the project cost a hefty $5 billion. Stanford researchers then built the same machine with GPUs instead: a dozen GPUs delivered the same data-crunching performance as 2,000 traditional chips, slashing costs from $5 billion to $33,000! The huge leap in computing power and explosion of data means we finally have the "lifeblood" of AI.
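To make "thousands of calculations all at once" concrete, here is a small Python sketch of my own (not from the article): the same arithmetic written as a one-at-a-time loop, the traditional-chip style, versus a single vectorized call, the style GPU frameworks execute across thousands of cores in parallel.

import numpy as np

x = np.random.rand(1_000_000)

# Traditional-chip style: one multiply-add at a time.
out_loop = np.empty_like(x)
for i in range(x.size):
    out_loop[i] = 2.0 * x[i] + 1.0

# GPU style: one call over all million elements at once. With a GPU array
# library (e.g., CuPy as a drop-in for NumPy), this identical expression
# is executed by thousands of cores in parallel.
out_vec = 2.0 * x + 1.0

assert np.allclose(out_loop, out_vec)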

The one company with a booming AI business is NVIDIA (NVDA). NVIDIA invented graphics processing units back in the 1990s. It’s solely responsible for the realistic video game graphics we have today. And then we discovered these gaming chips were perfect for training neural networks. NVIDIA stumbled into AI by accident, but early on, it realized it was a huge opportunity. Soon after, NVIDIA started building chips specifically optimized for machine learning. And in the first half of 2020, AI-related sales topped $2.8 billion.

In fact, more than 90% of neural network training runs on NVIDIA GPUs today. Its AI chips are light years ahead of the competition. Its newest system, the A100, is described as an "AI supercomputer in a box." With more than 54 billion transistors, it's the most powerful chip system ever created. In fact, just one A100 packs the same computing power as 300 data center servers, and it does so for one-tenth the cost, in one-sixtieth the space, and at one-twentieth the power consumption of a typical server room. A single A100 reduces a whole room of servers to one rack. NVIDIA has a virtual monopoly on neural network training, and every breakthrough worth mentioning has been powered by its GPUs. Computer vision is one of the world's most important disruptions, and graphics chips are perfect for helping computers "see." NVIDIA crafted its DRIVE chips specially for self-driving cars. These chips power several robocar startups, including Zoox, which Amazon just snapped up for $1.2 billion. With NVIDIA's backing, vision disruptor Trigo is transforming grocery stores into giant supercomputers.
  • Nvidia’s Chips Have Powered Nearly Every Major AI Breakthrough etftrends.com
  • Will Nvidia’s huge bet on artificial-intelligence chips pay off? Message 33422292
  • NVIDIA's Battle For The Future Of AI Chips - Ten minute video


AI And Data Centers
  • Why NVIDIA's Data Center Move Should Give AMD and Intel Sleepless Nights fool.com
  • NVIDIA maintains tight grip on market for AI processors in cloud and data centers Message 33433417
  • Inside the DPU: Talk Describes an Engine Powering Data Center Networks Message 33449379
AI And Supercomputers
  • America to get world's 'most powerful' AI supercomputer to create the most detailed 3D map of the universe yet theregister.com
  • Tesla Unveils Top AV Training Supercomputer Powered by NVIDIA A100 GPUs blogs.nvidia.com
  • Buckle Up for the Industrial HPC Revolution. AI paired with accelerated and high-performance computing has forged a digital flywheel that's propelling super-exponential advances. June 25, 2021, by PARESH KHARYA. Jensen Huang Teratec keynote (18-minute video).

  • Jensen Huang gave the two-hour Computex keynote speech in May of 2023. I watched it twice to let it all sink in. My favorite sections were the comments on data center performance increasing a million-fold in the next decade, starting with Grace Hopper; the amazing interaction of Digital Avatars with LLMs; and applications of the Omniverse, including Digital Twins of factories and Isaac Sim robotics, including Autonomous Vehicles. Watching a second time reinforced my optimism. Poster Sir Liberte responded: I agree. My shocker was three points. First, Jensen mentioned that 60% of the largest companies in the world are already using Nvidia Omniverse. Yikes! Second, Nvidia now has 3 million third-party developers, while 5 years ago it had only around 6,000. Third, Nvidia's Inception program for early-stage startups now has 50,000 companies in it, and VCs have been brought into the Inception ecosystem, providing funding and guidance along with Nvidia. The 50,000 startups are developing with Nvidia AI hardware and AI CUDA software in new emerging markets. Another gigantic moat. Good luck in your investing.


  • Karl Freund on Good, Bad and Ugly from Supercomputer Conference, Fall 2023. forbes.com
AI And Autonomous Vehicles
  • A Plus for Autonomous Trucking: Startup to Build Next-Gen Self-Driving Platform with NVIDIA DRIVE Orin blogs.nvidia.com
  • AutoX Robotaxis Now Using NVIDIA DRIVE, NVIDIA Acquiring DeepMap, & DiDi Booming On NVIDIA DRIVE’s Back Message 33427043
  • Ride in NVIDIA's Self-Driving Car (seven-and-a-half-minute video)


AI And Robotics
  • NVIDIA’s Liila Torabi Talks the New Era of Robotics Through Isaac Sim blogs.nvidia.com
  • Robot Autonomy with the Digital Twin in Isaac Sim (6 minute video)

  • Can We Simulate A Real Robot? (21 minute video)

  • Learning To Walk In Minutes Using Massively Parallel Deep Reinforcement Learning
Another example of the fascinating work NVIDIA is doing in AI and Robotics (two-and-a-half-minute video). They use computer simulation to create virtual robots, then train the AI to walk on rugged terrain using deep neural networks. The trained AI is then downloaded to the real-life robot, which can walk autonomously on rugged terrain. Pretty amazing stuff.

Message 33523761
  • NVIDIA Is Doubling Down On Robotics
Message 33558931
  • NVIDIA's new AI brain for robots is six times more powerful than its predecessor
Message 33567012

AI And The Omniverse
  • What Is the Metaverse? With NVIDIA Omniverse we can (finally) connect to it to do real work - here’s how. Message 33435116
  • From Our Kitchen to Yours: NVIDIA Omniverse Changes the Way Industries Collaborate Message 33437426
  • Nvidia wants to fill the virtual and physical worlds with AI avatars Message 33567029
  • More comments from SA poster grxbstrd: Gotta make a shout-out to Josh Brown. This is the first talking head that gets the Nvidia story. Cramer, for example, is impressed, but he doesn't understand the why, just the results. Josh is putting all the pieces together and really understands the vision. (He appears to have had the same reaction I did when you truly understand how Omniverse is a marriage of every bit of their technology: they have an enormous lead over everyone, and now they're going to sit back and sell hardware and licenses for it with companies lined up beyond the horizon. Just print money; like the scene from Jaws, we're gonna need a bigger . . . printing press.) "It's an expensive stock for a reason, and it just got more expensive" - or words to that effect.


The Future of AI Chips

The future of AI chips is the application-specific integrated circuit (ASIC), e.g., Google's Tensor Processing Unit (TPU).
  • Google ASIC: Tensor Processing Units: History and hardware (5 minute video)


The following video discusses the advantages of GPUs over x86 CPUs, and the advantages of ASICs over GPUs.
  • What is CPU, GPU and TPU? Understanding these 3 processing units using artificial neural networks.


The Future - Competition

Besides the obvious competition from Intel, AMD and Google's TPU, there are start-ups in both China and the West that want to dethrone NVIDIA as the Emperor of AI Chips.
  • Besieging GPU: Who Will Be the Next Overlord of AI Computing After NVIDIA Message 33435880
Briefing: This article introduces several AI chip companies with unique technologies. They either have advanced computing concepts or have top architects. These new-architecture AI chips are wiping out half of the GPU's world, like the snap of Thanos's gauntlet. In the post-Moore era, process technology is gradually approaching its physical limit and the pace of progress is slowing; the computation model of semiconductor chips is also shifting from general-purpose toward special-purpose.
AI in general

For a comprehensive discussion of AI and AI companies in general, see the Artificial Intelligence, Robotics and Automation board moderated by my friend Glenn Petersen. Subject 59856


Deep Learning

Modern AI is based on deep learning algorithms. Deep learning is a subset of machine learning; it is essentially a neural network with three or more layers. These neural networks attempt to simulate the behavior of the human brain (albeit far from matching its ability), allowing them to "learn" from large amounts of data.
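As a concrete, heavily simplified sketch of "a neural network with three or more layers," here is a minimal NumPy forward pass of my own (the layer sizes and ReLU activation are illustrative choices, not anyone's production architecture):

import numpy as np

def relu(z):
    return np.maximum(0.0, z)

rng = np.random.default_rng(0)

# Three layers of weights: 4 inputs -> 8 hidden -> 8 hidden -> 1 output.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    h1 = relu(x @ W1 + b1)    # layer 1
    h2 = relu(h1 @ W2 + b2)   # layer 2
    return h2 @ W3 + b3       # layer 3 (linear output)

x = rng.normal(size=(1, 4))   # one example with 4 features
print(forward(x))             # the untrained network's prediction

Training consists of adjusting the W and b arrays to shrink a loss function, which is where the gradient descent discussed below comes in.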

Deep Learning Algorithms for AI

The first, one-hour video explains how this works. Amazingly, it is just least-squares minimization of the neural network's loss function using gradient descent, a first-order relative of multi-dimensional Newton-Raphson iteration. See the second, half-hour video. Who thought calculus would come in handy?
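To make that concrete before the videos, here is a minimal gradient-descent sketch of my own (not from the videos): fit a line y = w*x + b by repeatedly stepping downhill on the mean-squared-error loss.

import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + 2.0 + rng.normal(0, 0.1, 100)   # noisy data from y = 3x + 2

w, b, lr = 0.0, 0.0, 0.1                      # start at zero, step size 0.1
for step in range(500):
    y_hat = w * x + b
    grad_w = 2 * np.mean((y_hat - y) * x)     # d(loss)/dw for squared error
    grad_b = 2 * np.mean(y_hat - y)           # d(loss)/db
    w -= lr * grad_w                          # step downhill
    b -= lr * grad_b

print(w, b)                                   # close to 3.0 and 2.0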

1. MIT Introduction to Deep Learning



2. Gradient Descent, Step-by-Step



Math Issues: Optimizing With Multiple Peaks Or Valleys




A problem with gradient-descent optimization is that it finds stationary points, which can be minima or maxima of a function. Worse, there can be multiple peaks and valleys, so more properly, gradient descent finds local extrema. In machine learning one is interested in global minima, which makes the problem considerably more difficult. This is particularly true since the loss functions of deep neural networks can have millions or even billions of parameters.
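A tiny illustration of the local-extrema problem (my own, using an arbitrarily chosen quartic): the same gradient-descent loop started from two different points on f(x) = x^4 - 3x^2 + x settles into two different valleys, and only one of them is the global minimum.

def grad(x):                        # derivative of f(x) = x**4 - 3*x**2 + x
    return 4 * x**3 - 6 * x + 1

def descend(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * grad(x)           # always steps downhill from where it is
    return x

print(descend(-2.0))   # ~ -1.30: the global minimum's valley
print(descend(+2.0))   # ~ +1.13: a higher, merely local minimum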

Another problem has to do with the size of the data sets used to train deep neural networks, which can be huge. Since gradient descent is an iterative process, it becomes prohibitively time-consuming to evaluate the loss function over each and every data point at every step, even with high-performance AI chips. This leads to stochastic gradient descent: at each iterative step, the loss and its gradient are evaluated on a relatively small random sample of the data, as sketched below.
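Here is the line-fitting example again, reworked as a minimal stochastic-gradient-descent sketch (again my own illustration; the batch size and learning rate are arbitrary choices): each step estimates the gradient from a random minibatch instead of all 100,000 points.

import numpy as np

rng = np.random.default_rng(2)
n = 100_000
x = rng.uniform(-1, 1, n)
y = 3.0 * x + 2.0 + rng.normal(0, 0.1, n)

w, b, lr, batch = 0.0, 0.0, 0.1, 64
for step in range(2000):
    idx = rng.integers(0, n, size=batch)   # small random sample of the data
    err = w * x[idx] + b - y[idx]
    w -= lr * 2 * np.mean(err * x[idx])    # noisy but cheap gradient estimate
    b -= lr * 2 * np.mean(err)

print(w, b)                                # still close to 3.0 and 2.0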

3. Stochastic Gradient Descent:



Exponential Growth Of NVIDIA


Some Comments on Moore’s Law and Super Exponential Growth in Supercomputer Performance

Some Comments on Moore’s Law

Moore’s Law originated in 1965 with Gordon Moore (then at Fairchild Semiconductor; he later co-founded Intel and served as its CEO), who forecast that the number of transistors on a computer chip would double roughly every two years. I fitted an exponential curve (linear regression on the logs of the transistor counts) to the historical counts for Intel CPUs, which grew from 2,300 transistors per chip in 1971 to 8 billion per chip in 2017: over that 46-year period the count grew at a CAGR of 39.1%, doubling every 2.1 years. (See the graph below.)
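For anyone who wants to reproduce that kind of fit, here is a minimal sketch (the handful of sample data points are my own selections from public Intel transistor counts; the full series gives the 39.1% figure quoted above):

import numpy as np

# (year, transistor count) for a few Intel CPUs -- illustrative sample points.
data = [(1971, 2_300), (1978, 29_000), (1989, 1_200_000),
        (2000, 42_000_000), (2017, 8_000_000_000)]
years = np.array([d[0] for d in data], dtype=float)
counts = np.array([d[1] for d in data], dtype=float)

# Linear regression on log(count) vs. year: the slope is the log growth rate.
slope, intercept = np.polyfit(years, np.log(counts), 1)
cagr = np.exp(slope) - 1.0
print(f"CAGR ~ {cagr:.1%}, doubling every {np.log(2) / slope:.1f} years")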



Super Exponential Growth in Supercomputer Performance

However, NVIDIA CEO Jensen Huang, at a supercomputer conference earlier this year (Teratec), pointed out that supercomputer performance has been growing at a super-exponential rate. He noted that in the fifteen prior years supercomputer performance had grown 10 trillion-fold, a CAGR of 635%: (10^13)^(1/15) ≈ 7.4, so at this rate supercomputer performance grows over seven-fold every year! This is really amazing. Huang announced at GTC that NVIDIA plans to use this super-exponential growth in supercomputer performance to create a new supercomputer to study and forecast climate change by building an Omniverse Digital Twin of the entire Earth. These are very interesting times!

Cheers,
Sully

Earnings History
Why did Nvidia shares climb more than 8% today?
seekingalpha.com
Nvidia posts record revenues, boosts operating income 275%

Message 33447356
Nvidia posts another record-beating quarter, guides for revenue upside

seekingalpha.com

Nvidia climbs after fourth quarter smashes revenue records

seekingalpha.com
Nvidia Q3 beat driven by record gaming, data center sales; guidance tops estimates

seekingalpha.com
Nvidia reports upside Q2 on gaming strength, record data center sales

seekingalpha.com
Nvidia posts strong FQ1 after closing Mellanox deal

seekingalpha.com
Nvidia beats estimates on data center strength; shares +6%

seekingalpha.com