Hoover’s Description: NVIDIA makes 3-D graphics processors that are built into products made by PC original equipment manufacturers (OEMs) and add-in board makers. The company's RIVA128 graphics processor combines 3-D and 2-D graphics on a single chip and is designed to provide a lower-cost alternative to multi-chip or multi-board graphics systems. Customers include STB Systems (63% of sales) and Diamond Multimedia Systems (31%). These two companies incorporate NVIDIA's processors into add-in boards that are then sold to OEMs such as Compaq, Dell, Gateway, and Micron Technology. NVIDIA is fighting patent-infringement suits filed by 3DFX Interactive, Silicon Graphics and S3 that seek to block the sale of its RIVA processors. Web Site: nvidia.com Expected IPO date: Week of Jan. 18, 1999

***********************************************************************************************

Update November 18, 2021

Congratulations on finding your way here. NVIDIA is a major player in AI & Robotics chips and software and will continue its exponential growth for many years to come. Its future growth will be driven by Data Center AI chips and software, and by AI chips and software for the Omniverse, Digital Twins and Digital Avatars.

Its share price is up 140% year-to-date, and it operates in AI & Robotics chip and software markets projected to grow at a 38% CAGR over the next five years, a five-fold increase over that period. From 2011 to 2020, its share price grew 22-fold, a CAGR of 36%, as NVIDIA transformed itself from a gaming graphics chip company into an AI company, following the vision of its CEO Jensen Huang, who realized the applicability of its GPU chips (which NVIDIA invented) to deep neural network AI computing because of their ability to do parallel processing. Despite its exponential growth over the past decade and seemingly rich valuation, I predict that it will continue to grow at a 38% CAGR over the next five years, growing five-fold. I feel like we've found that pot of gold at the end of the rainbow.

Concerning the summary below, you can skim over the extensive history of NVIDIA’s legacy product, gaming graphics chips, and focus on the discussion of AI & Robotics chips and software platforms, including Data Centers, Autonomous Vehicles, the Omniverse (NVIDIA’s version of the Metaverse), Digital Twins, Digital Avatars and other deep neural network AI initiatives. There is also a primer on the “nuts and bolts” of machine learning, viz. deep neural networks, and on training the AI models using second-year calculus, viz. gradient descent (a multi-dimensional relative of Newton-Raphson iteration) and, for really big data and models, stochastic gradient descent.

**********************************************************************************************

Update May 24, 2024
youtu.be
youtube.com

**********************************************************************************************

Update March 5, 2023

CNBC published this video item, entitled “How Nvidia Grew From Gaming To A.I. Giant, Now Powering ChatGPT” – below is their description.

Thirty years ago, Taiwanese immigrant Jensen Huang founded Nvidia with the dream of revolutionizing PCs and gaming with 3D graphics. In 1999, after laying off the majority of its workers and nearly going bankrupt, the company succeeded when it launched what it claims was the world’s first Graphics Processing Unit (GPU). Then Jensen bet the company on something entirely different: AI. Now, that bet is paying off in a big way as Nvidia’s A100 chips quickly become the coveted training engines for ChatGPT and other generative AI. But as the chip shortage eases, other chip giants like Intel are struggling. And with all its chips made by TSMC in Taiwan, Nvidia remains vulnerable to mounting U.S.-China trade tensions. We went to Nvidia’s Silicon Valley, California, headquarters to talk with Huang and get a behind-the-scenes look at the chips powering gaming and the AI boom.

Chapters:
02:04 — Chapter 1: Popularizing the GPU
07:02 — Chapter 2: From graphics to AI and ChatGPT
11:52 — Chapter 3: Geopolitics and other concerns
14:31 — Chapter 4: Amazon, autonomous cars and beyond
Produced and shot by: Katie Tarasov

**********************************************************************************************

Update August 12, 2021

NVIDIA still makes its bread and butter with graphics chips (graphics processing units, or GPUs), and it is dominant: NVIDIA's share of the graphics chip market grew from 75% in Q1 2020 to 81% in Q1 2021, far ahead of closest competitor AMD. Nvidia's Graphics segment includes the GeForce GPUs for gaming and PCs, the GeForce NOW game-streaming service and related infrastructure, and solutions for gaming platforms. It also includes the Quadro/NVIDIA RTX GPUs for enterprise workstation graphics, vGPU software for cloud-based visual and virtual computing, and automotive platforms for infotainment systems. In 2020, the Graphics segment generated $9.8 billion, or about 59%, of Nvidia's total revenue. This was up 28.7% compared to the previous year. The segment's operating income grew 41.2% to $4.6 billion, comprising about 64% of the total.

The Compute and Networking segment includes Nvidia's Data Center platforms as well as systems for AI, high-performance computing, and accelerated computing. It also includes Mellanox networking and interconnect solutions, the automotive AI Cockpit, autonomous driving development agreements, autonomous vehicle solutions, and Jetson for robotics and other embedded platforms. The Compute and Networking segment delivered revenue of $6.8 billion in 2020, up 108.6% from the previous year. The segment accounts for about 41% of Nvidia's total revenue. Operating income grew 239.3% to $2.5 billion. Compute & Networking accounts for about 36% of the company's total operating income.

AI is considered by management and observers to be the future, and NVIDIA even incorporates AI into its RTX graphics chips with DLSS. (DLSS stands for deep learning super sampling. It is a video rendering technique that boosts frame rates by rendering frames at a lower resolution than displayed and using deep learning, a type of AI, to upscale the frames so that they look as sharp as expected at the native resolution.) Data centers are a fast-growing area, and last year the company introduced the data processing unit (DPU).
One of Nvidia’s newer concepts in AI hardware for data centers is the BlueField DPU (data processing unit), first revealed at GTC in October 2020. In April 2021 the company unveiled BlueField-3, a DPU it said was designed specifically for “AI and accelerated computing.” Like Nvidia GPUs, its DPUs are accelerators, meaning they are meant to offload compute-heavy tasks from a system’s CPU, leaving the latter with more capacity to tackle other workloads. DPUs are powered by Arm chips. Nvidia DPUs, based on the BlueField SmartNICs by Mellanox (acquired by Nvidia in 2019), take on things like software-defined networking, storage management, and security workloads. They are also eventually expected to offload server virtualization, via a partnership with VMware as part of VMware’s Project Monterey.

4 Reasons to Invest in Nvidia's AI in 2022
Many of the world's leading companies are using Nvidia's GPUs to power their AI systems.
Danny Vena
January 13, 2022

Good eight-minute discussion of NVIDIA’s history in graphics chips (GPUs) for gaming and how their ability to do parallel processing (performing multiple computations simultaneously) led to NVIDIA’s AI world dominance.

NVIDIA is actively involved in AI supercomputers. NVIDIA technologies power 342 systems on the TOP500 list released at the ISC High Performance event in June 2021, including 70 percent of all new systems and eight of the top 10. The latest ranking of the world’s most powerful systems shows high-performance computing centers are increasingly adopting AI. It also demonstrates that users continue to embrace the combination of NVIDIA AI, accelerated computing and networking technologies to run their scientific and commercial workloads.

NVIDIA is at the forefront of developing autonomous vehicles.

NVIDIA is at the forefront of virtual reality and AI applications with its Omniverse, its version of the Metaverse.

Here is a one-and-a-half-hour video of the November 2021 GTC keynote by NVIDIA CEO Jensen Huang. He focused on the Omniverse and digital and physical robots, including autonomous vehicles. Scroll to 13:00 minutes for the start.

Here is a one-and-a-half-hour video of the May 2021 GTC keynote by NVIDIA CEO Jensen Huang, discussing the latest developments in graphics chips, data centers, supercomputers, autonomous vehicles and the Omniverse. Just this week it was revealed that part of the GTC keynote, including Jensen Huang and his kitchen, was simulated in Omniverse. Scroll to 13:00 minutes for the start.

Graphics Chips
Top 10 Most Important Nvidia GPUs of All Time (ten-minute video summary)

AI And GPUs
Shall we play a game? How video games transformed AI
Message 33443736

After 40 years in the wilderness, two huge breakthroughs are fueling an AI renaissance. The internet handed us a near-unlimited amount of data. A recent IBM paper found 90% of the world’s data has been created in just the last two years. From the 290+ billion photos shared on Facebook, to millions of e-books, billions of online articles and images, we now have endless fodder for neural networks.

The breathtaking jump in computing power is the other half of the equation. RiskHedge readers know computer chips are the “brains” of electronics like your phone and laptop. Chips contain billions of “brain cells” called transistors. The more transistors on a chip, the faster it is. And in the past decade, a special type of computer chip emerged as the perfect fit for neural networks.
Do you remember the blocky graphics on video games like Mario and Sonic from the ‘90s? If you have kids who are gamers, you’ll know graphics have gotten far more realistic since then. This incredible jump is due to chips called graphics processing units (GPUs). GPUs can perform thousands of calculations all at once, which helps create these movie-like graphics. That’s different from how traditional chips work, which calculate one at a time. (A short code sketch after this excerpt illustrates the difference.)

Around 2006, Stanford researchers discovered GPUs’ “parallel processing” abilities were perfect for AI training. For example, do you remember Google’s Brain project? The machine taught itself to recognize cats and people by watching YouTube videos. It was powered by one of Google’s giant data centers, running on 2,000 traditional computer chips. In fact, the project cost a hefty $5 billion. Stanford researchers then built the same machine with GPUs instead. A dozen GPUs delivered the same data-crunching performance as 2,000 traditional chips. And it slashed costs from $5 billion to $33,000!

The huge leap in computing power and explosion of data means we finally have the “lifeblood” of AI. The one company with a booming AI business is NVIDIA (NVDA). NVIDIA invented graphics processing units back in the 1990s. It’s solely responsible for the realistic video game graphics we have today. And then we discovered these gaming chips were perfect for training neural networks.

NVIDIA stumbled into AI by accident, but early on, it realized it was a huge opportunity. Soon after, NVIDIA started building chips specifically optimized for machine learning. And in the first half of 2020, AI-related sales topped $2.8 billion. In fact, more than 90% of neural network training runs on NVIDIA GPUs today. Its AI chips are light years ahead of the competition.

Its newest system, the A100, is described as an “AI supercomputer in a box.” With more than 54 billion transistors, it’s the most powerful chip system ever created. In fact, just one A100 packs the same computing power as 300 data center servers. And it does it for one-tenth the cost, takes up one-sixtieth the space, and runs on one-twentieth the power consumption of a typical server room. A single A100 reduces a whole room of servers to one rack.

NVIDIA has a virtual monopoly on neural network training. And every breakthrough worth mentioning has been powered by its GPUs. Computer vision is one of the world’s most important disruptions. And graphics chips are perfect for helping computers to “see.” NVIDIA crafted its DRIVE chips specially for self-driving cars. These chips power several robocar startups including Zoox, which Amazon just snapped up for $1.2 billion. With NVIDIA’s backing, vision disruptor Trigo is transforming grocery stores into giant supercomputers.
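To make the contrast between one-at-a-time and parallel computation concrete, here is a minimal Python sketch. It is only an illustration under assumed details: NumPy on a CPU stands in for data-parallel hardware, and the "brighten a million pixels" task is made up. The vectorized version expresses the whole computation as a single array operation, which is the same programming pattern GPU libraries (for example CuPy or PyTorch) run across thousands of hardware threads at once.

import numpy as np

# A toy workload: brighten one million pixel values.
pixels = np.random.rand(1_000_000)
GAIN = 1.5

# Traditional, sequential style: one multiplication per loop iteration.
def brighten_sequential(values, gain=GAIN):
    result = []
    for v in values:
        result.append(v * gain)
    return result

# Data-parallel style: one operation applied to the whole array at once.
# On a GPU, this same pattern is executed by thousands of threads in parallel.
def brighten_parallel(values, gain=GAIN):
    return values * gain

seq = brighten_sequential(pixels)
par = brighten_parallel(pixels)
assert np.allclose(seq, par)  # identical results, very different execution styles

The sequential version visits the data one element at a time, which is how a general-purpose CPU core naturally works; the parallel version is the shape of workload that GPUs, with thousands of simple cores, chew through.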
AI And Data Centers
AI And Robotics
Message 33523761
AI And The Omniverse
The Future of AI Chips
The future of AI chips is the application-specific integrated circuit (ASIC), e.g., Google's Tensor Processing Unit (TPU).
The following video discusses the advantages of GPUs over x86 CPUs, and the advantages of ASICs over GPUs.
The Future - Competition
Besides the obvious competition from Intel, AMD and Google's TPU, there are start-ups in both China and the West that want to dethrone NVIDIA as the Emperor of AI Chips.
For a comprehensive discussion of AI and AI companies in general, see the Artificial Intelligence, Robotics and Automation board moderated by my friend Glenn Petersen. Subject 59856
Modern AI is based on deep learning algorithms. Deep learning is a subset of machine learning; it is essentially a neural network with three or more layers. These neural networks attempt to simulate the behavior of the human brain (albeit far from matching its ability), allowing them to “learn” from large amounts of data.

Deep Learning Algorithms for AI

The first, one-hour video explains how this works. Amazingly, it is just least-squares minimization of the neural network loss function using multi-dimensional gradient descent, a first-order relative of Newton-Raphson iteration. See the second, half-hour video. Who thought calculus would come in handy?

1. MIT Introduction to Deep Learning
2. Gradient Descent, Step-by-Step

Math Issues: Optimizing With Multiple Peaks Or Valleys

A problem with gradient descent optimization is that it can find minima of functions as well as maxima of functions. Worse, there can be multiple peaks and valleys, so more properly gradient descent finds local extrema, whereas in machine learning one is interested in global minima. This makes the problem considerably more difficult, particularly since loss functions for deep learning neural networks can have millions or even billions of parameters.

Another problem has to do with the size of the data sets used to train deep learning neural networks, which can be huge. Since gradient descent is an iterative process, it becomes prohibitively time-consuming to evaluate the loss function at each and every data point, even with high-performance AI chips. This leads to stochastic gradient descent: the loss function is evaluated on a relatively small random sample of the data at each iterative step. (A minimal code sketch of both plain and stochastic gradient descent appears at the end of this section.)

3. Stochastic Gradient Descent

Exponential Growth Of NVIDIA

Some Comments on Moore’s Law and Super-Exponential Growth in Supercomputer Performance

Some Comments on Moore’s Law

Moore’s Law is named for Gordon Moore, co-founder of Intel, who in 1965 forecast that the number of transistors in a computer chip would double every year or two. An exponential curve fit which I did (linear regression on the logs of the counts) to the historical transistor counts of Intel CPUs, which grew from 2,300 transistors per chip in 1971 to 8 billion transistors per chip in 2017, gives a CAGR of 39.1% over that 46-year period, a doubling every 2.1 years.

Super-Exponential Growth in Supercomputer Performance

However, NVIDIA CEO Jensen Huang pointed out at a supercomputer conference earlier this year that supercomputer performance has been growing at a super-exponential rate. He noted that in the fifteen prior years supercomputer performance had grown 10 trillion-fold, a CAGR of 635%! At this CAGR, supercomputer performance grows over seven-fold every year! This is really amazing! (The arithmetic behind both growth rates is reproduced in a second short sketch at the end of this section.)

Huang announced at GTC that NVIDIA plans to use this super-exponential growth in supercomputer performance to create a new supercomputer to study and forecast climate change by building an Omniverse Digital Twin of the entire Earth. These are very interesting times!

Cheers, Sully
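To make the calculus above concrete, here is a minimal sketch of gradient descent and stochastic (mini-batch) gradient descent in Python with NumPy. It fits a toy least-squares model y ≈ w*x + b; the data, learning rate, batch size and iteration counts are made-up illustration values, not anything specific to deep networks or NVIDIA hardware.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 3*x + 2 plus noise. In a real network, the "weights" would
# be millions or billions of parameters and the model far more complex.
x = rng.uniform(-1, 1, size=1000)
y = 3.0 * x + 2.0 + 0.1 * rng.normal(size=x.size)

def loss_gradient(w, b, xs, ys):
    """Gradient of the mean squared error loss for y_hat = w*x + b."""
    err = (w * xs + b) - ys
    return 2 * np.mean(err * xs), 2 * np.mean(err)

# Plain gradient descent: the full data set is used at every step.
w, b, lr = 0.0, 0.0, 0.1
for step in range(500):
    gw, gb = loss_gradient(w, b, x, y)
    w -= lr * gw
    b -= lr * gb
print("gradient descent:        w=%.3f b=%.3f" % (w, b))  # close to 3 and 2

# Stochastic (mini-batch) gradient descent: each step uses only a small
# random sample, so the cost per step stays constant even if the data set
# is enormous.
w, b, batch = 0.0, 0.0, 32
for step in range(2000):
    idx = rng.integers(0, x.size, size=batch)
    gw, gb = loss_gradient(w, b, x[idx], y[idx])
    w -= lr * gw
    b -= lr * gb
print("stochastic grad descent: w=%.3f b=%.3f" % (w, b))  # also close to 3 and 2

The toy loss here is convex, so the local-minima issue discussed above does not arise, but the mechanics are the same: each step is a batch of multiplies and adds to compute a gradient, which is exactly the parallel arithmetic that GPUs accelerate.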
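The growth-rate arithmetic quoted above is also easy to reproduce. Here is a small sketch using only the endpoint figures in this post (2,300 transistors in 1971, 8 billion in 2017, and the claimed 10-trillion-fold supercomputer gain over 15 years); note that the 39.1% figure above came from a regression over all the intermediate data points, so this endpoint calculation lands slightly lower.

import math

def cagr(start, end, years):
    """Compound annual growth rate implied by two endpoints."""
    return (end / start) ** (1.0 / years) - 1.0

def doubling_time(rate):
    """Years needed to double at a given annual growth rate."""
    return math.log(2) / math.log(1.0 + rate)

# Intel CPU transistor counts quoted above: 2,300 (1971) to 8 billion (2017).
moore = cagr(2_300, 8e9, 2017 - 1971)
print("Transistor CAGR %.1f%%, doubling every %.1f years"
      % (100 * moore, doubling_time(moore)))
# About 38.7% per year, doubling roughly every 2.1 years from the endpoints alone.

# Claimed super-exponential supercomputer gain: 10 trillion-fold in 15 years.
superc = cagr(1.0, 10e12, 15)
print("Supercomputer CAGR %.0f%%, i.e. about %.1fx per year"
      % (100 * superc, 1 + superc))
# About 635% per year, a bit over 7x per year, matching the figures quoted above.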
Earnings History

Why did Nvidia shares climb more than 8% today?
seekingalpha.com

Nvidia posts record revenues, boosts operating income 275%
Message 33447356

Nvidia posts another record-beating quarter, guides for revenue upside
seekingalpha.com

Nvidia Q3 beat driven by record gaming, data center sales; guidance tops estimates
seekingalpha.com

Nvidia reports upside Q2 on gaming strength, record data center sales
seekingalpha.com

Nvidia posts strong FQ1 after closing Mellanox deal
seekingalpha.com

Nvidia beats estimates on data center strength; shares +6%
seekingalpha.com