
Technology Stocks: NVIDIA Corporation (NVDA)


From: Frank Sully 8/10/2021 2:06:46 PM
 
Nvidia expands Omniverse with a new GPU, new collaborations

The new RTX A2000 is a powerful, low-profile, dual-slot GPU for professionals, with 6GB of ECC graphics memory in a compact form factor. Nvidia says it will expand access to Omniverse, its 3D collaboration platform.



By Stephanie Condon for Between the Lines | August 10, 2021 -- 16:00 GMT (09:00 PDT) | Topic: Processors

Nvidia on Tuesday announced a series of ways it plans to bring the Omniverse design and collaboration platform to a vastly larger audience. Those plans include new integrations with Blender and Adobe, companies that will extend the potential reach of Omniverse by millions. Nvidia is also introducing the new RTX A2000 GPU, bringing the RTX technology that powers Omniverse to a wide range of mainstream computers.

Nvidia rolled out Omniverse in open beta back in December, giving 3D designers a shared virtual world from which they can collaborate across different software applications and from different geographic locations. Earlier this year, the company introduced Omniverse Enterprise, bringing the platform to the enterprise community via a familiar licensing model.

"We are building Omniverse to be the connector of the physical and virtual worlds," Richard Kerris, VP of Omniverse for Nvidia, said to reporters last week. "We believe that there will be more content and experiences shared in virtual worlds than in physical worlds. And we believe that there will be amazing exchange markets and economic situations that will be first built in the virtual world... Omniverse is an exchange of these vital worlds. We connect everything and everyone, through a baseline architecture that is familiar to existing tools that are out there and existing workflows."

Nvidia unveiled the RTX A2000 GPU to bring RTX technology to mainstream workstations.

In a blog post, Nvidia VP Bob Pette wrote that the new A2000 GPU "would serve as a portal" to Omniverse "for millions of designers." The A2000 is Nvidia's most compact, power-efficient GPU for standard and small-form-factor workstations.

The GPU has 6GB of memory with error correction code (ECC) support to maintain data integrity -- a feature especially important for industries such as healthcare and financial services.

Based on the Nvidia Ampere architecture, it features second-generation RT Cores, enabling real-time ray tracing for professional workflows, and delivers up to 5x the rendering performance of the previous generation with RTX on. It also features third-generation Tensor Cores to enable AI-augmented tools and applications, as well as CUDA cores with up to 2x the FP32 throughput of the previous generation.

Speaking to reporters, Pette said the A2000 would enable RTX in millions of additional mainstream computers. More designers will have access to the real-time ray tracing and AI acceleration capabilities that RTX offers. "This is the first foray of RTX into what is the largest volume segment of GPUs for Nvidia," Pette said.

Among the first customers using the RTX A2000 are Avid, Cuhaci & Peterson and Gilbane Building Company.

The A2000 desktop GPU will be available in workstations from manufacturers including ASUS, BOXX Technologies, Dell Technologies, HP and Lenovo, as well as Nvidia's global distribution partners, starting in October.

Meanwhile, Nvidia is encouraging the adoption of Omniverse by supporting Universal Scene Description (USD), an interchange framework invented by Pixar in 2012. USD was released as open-source software in 2016, providing a common language for defining, packaging, assembling and editing 3D data.
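To make the "common language" idea concrete, here is a minimal sketch of defining and saving a scene with USD's open-source Python bindings (the pxr module that ships with the USD distribution). The file name and prim paths are illustrative, not from the article.

```python
# Minimal USD authoring sketch; assumes a local build of Pixar's USD.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("scene.usda")          # a new text-based layer
UsdGeom.Xform.Define(stage, "/World")              # a transformable root
cube = UsdGeom.Cube.Define(stage, "/World/Cube")   # geometry under the root
cube.GetSizeAttr().Set(2.0)

stage.GetRootLayer().Save()
# The result is plain text that any USD-aware tool can open.
print(stage.GetRootLayer().ExportToString())
```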

Omniverse is built on the USD framework, giving other software makers different ways to connect to the platform. Nvidia announced Tuesday that it's collaborating with Blender, the world's leading open-source 3D animation tool, to provide USD support in the upcoming Blender 3.0. This will give Blender's millions of users access to Omniverse production pipelines. Nvidia is contributing USD and materials support to the Blender 3.0 alpha, which will be available soon.

Nvidia has also collaborated with Pixar and Apple to define a common approach for expressing physically accurate models in USD. More specifically, they've developed a new schema for rigid-body physics, the math that describes how solids behave in the real world (for example, how marbles would roll down a ramp). This will help developers create and share realistic simulations in a standard way.
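For a sense of what the new schema looks like in practice, here is a hedged sketch using the UsdPhysics Python API that ships with the standard USD distribution. The marble-and-ramp setup and all values are illustrative, not taken from the companies' white paper.

```python
# Rigid-body physics in USD: a sketch, assuming a USD build with UsdPhysics.
from pxr import Usd, UsdGeom, UsdPhysics

stage = Usd.Stage.CreateNew("marbles.usda")

# A physics scene prim carries global settings such as gravity.
scene = UsdPhysics.Scene.Define(stage, "/World/PhysicsScene")
scene.CreateGravityMagnitudeAttr().Set(9.81)

# A marble: geometry plus rigid-body and collision behavior.
marble = UsdGeom.Sphere.Define(stage, "/World/Marble")
marble.GetRadiusAttr().Set(0.01)
UsdPhysics.RigidBodyAPI.Apply(marble.GetPrim())  # moves under simulation
UsdPhysics.CollisionAPI.Apply(marble.GetPrim())  # collides with other prims

# A static ramp: collision only, no RigidBodyAPI, so it stays put.
ramp = UsdGeom.Cube.Define(stage, "/World/Ramp")
UsdPhysics.CollisionAPI.Apply(ramp.GetPrim())

stage.GetRootLayer().Save()
```

Because the physics lives in the USD data rather than in any one application, any solver that understands the schema can simulate the same scene.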

Nvidia also announced a new collaboration with Adobe on a Substance 3D plugin that will bring Substance Material support to Omniverse. This will give Omniverse and Substance 3D users new material editing capabilities.

EXPANDING OMNIVERSE ACCESS

Nvidia on Tuesday also announced that Omniverse Enterprise, currently in limited early access, will be available later this year on a subscription basis from its partner network. That includes ASUS, BOXX Technologies, Dell Technologies, HP, Lenovo, PNY and Supermicro.

The company is also extending its Developer Program to include Omniverse. This means the developer community for Nvidia will have access to Omniverse with custom extensions, microservices, source code, examples, resources and training.

zdnet.com



From: Frank Sully 8/10/2021 2:15:40 PM
 
Nvidia Has Turned $10,000 Into $250,000. Here's Why It Can Do It Again

The graphics specialist can zoom higher thanks to a solid set of catalysts.



Harsh Chauhan
(TMFTechJunk13)

Aug 10, 2021 at 6:30AM

Key Points

Nvidia can grow at a faster pace in the coming years than it did over the last five.

The company's domination of the graphics card market opens up a huge opportunity that could supercharge sales growth.

Additional catalysts in the form of the data center and automotive markets strengthen the bullish case.

If you had $10,000 to invest at the beginning of 2016 and bought shares of Nvidia (NASDAQ: NVDA) using that money, your initial investment would be worth just about $250,000 right now.



[Chart: NVDA data by YCharts]

Nvidia has beaten the broader market handsomely over the years thanks to its strong suite of products, which has helped it attract millions of customers and dominate a fast-growing space. It is now offering products that deliver more bang for their buck to existing customers, allowing it to increase average selling prices (ASPs) and bring new customers into the fold. As such, Nvidia can repeat -- or improve upon -- its terrific stock market performance once again in the coming years.

Let's look at the reasons why.

Nvidia's outstanding growth can last for a long time

Nvidia's dominance of the GPU (graphics processing unit) market has accelerated its revenue and earnings growth over the years. In fiscal 2016, the company reported just $5 billion in annual revenue and $929 million in adjusted net income. In fiscal 2021, which ended in January this year, Nvidia's annual revenue ballooned to $16.7 billion and its adjusted net income jumped to $6.2 billion.

This translates into a compound annual revenue growth rate of 27% over the five-year period, while non-GAAP net income grew 46% per year. Of late, Nvidia has been outpacing its historical growth. Its revenue for the first quarter of fiscal 2022 surged 84% year over year to $5.66 billion, while adjusted net income was up 107% to $2.3 billion.
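A quick back-of-envelope check of those growth rates (a minimal Python sketch; the revenue and income figures, in billions of dollars, come from the paragraphs above):

```python
# Verify the quoted compound annual growth rates (CAGR).
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

print(f"Revenue CAGR (FY16-FY21):    {cagr(5.0, 16.7, 5):.0%}")   # ~27%
print(f"Net income CAGR (FY16-FY21): {cagr(0.929, 6.2, 5):.0%}")  # ~46%
```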

Analysts forecast that Nvidia's surge will continue this fiscal year, with revenue expected to jump 49% over last year and earnings per share anticipated to increase to the tune of 58%. I think that Nvidia can sustain its outstanding growth beyond this year because of a few simple reasons.

First, Nvidia controls 80% of the discrete GPU market, according to Jon Peddie Research. The discrete GPU market is expected to generate $54 billion in annual revenue by 2025, up from $23.6 billion last year. Nvidia's market share puts it in pole position to corner a major chunk of that additional revenue opportunity, and it is unlikely to yield ground to smaller rival Advanced Micro Devices (NASDAQ: AMD) because of its technological advantage.

According to a survey by game distribution service Steam, Nvidia's flagship RTX 3090 card is outselling AMD's latest RX 6000 series cards by a ratio of 11:1. What's surprising is that the RTX 3090 is outselling AMD's entire lineup despite its high starting price of $1,499, which makes it significantly more expensive than AMD's flagship RX 6900 XT that starts at $999.

The reason why this may be the case is that Nvidia's latest RTX 30 series cards outperform AMD's offerings as per independent benchmarks and are also priced competitively. For instance, the RTX 3080 that starts at $699 reportedly outperforms the pricier RX 6900 XT. Meanwhile, Nvidia takes a substantial lead when it comes to identically priced offerings. The RTX 3070 Ti that starts at $599 is reportedly 21% faster than AMD's RX 6800 that starts at $579, indicating that Nvidia offers more bang for the buck.

As a result, Nvidia is commanding a higher price from customers for its RTX 30 series cards. The average selling price of Nvidia's RTX 30 series cards hit $360 in the first six months of their launch, up 20% from the previous generation Turing cards.

With the gaming business producing 49% of Nvidia's total revenue in Q1, the bright prospects of this segment will play a critical role in helping the company deliver solid upside to investors in the future. This, however, isn't the only catalyst to look out for.

Two more reasons to go long

While Nvidia's gaming business will provide the base for its long-term growth, its data center and automotive businesses will bring additional catalysts into play.

The data center segment, for instance, generated just $339 million in revenue in fiscal 2016. The segment's fiscal 2021 revenue jumped to $6.7 billion and accounted for 40% of the total revenue. Nvidia investors can expect the data center business to keep booming as the company is attacking a substantial opportunity in this space through multiple chip platforms.

Nvidia's data center GPUs are already a hit among major cloud service providers thanks to their ability to tackle artificial intelligence and machine learning workloads. The company is now doubling down on new opportunities such as data processing units (DPUs) and CPUs (central processing units), strengthening its position in the data center accelerator space.

According to a third-party estimate, the data center accelerator market was worth $4.2 billion last year. That number is expected to jump to $53 billion by 2025, opening another huge opportunity for the chipmaker. And finally, the automotive segment is expected to turn into another happy hunting ground for Nvidia.

The company generated just $536 million in revenue from the automotive segment last year. However, Nvidia says that it has built an automotive design win pipeline worth over $8 billion through fiscal 2027, partnering with several well-known OEMs (original equipment manufacturers) such as Mercedes-Benz, Audi, Hyundai, Volvo, Navistar, and others. So, Nvidia's automotive business seems ready to step on the gas.

In the end, it can be said that Nvidia now has bigger growth drivers compared to 2016. More importantly, it is in a strong position to tap into the multibillion-dollar end-market opportunities, making it a top growth stock investors can comfortably buy and hold for at least the next five years.

fool.com



From: Frank Sully 8/10/2021 9:46:53 PM
 
A Code for the Code: Simulations Obey Laws of Physics with USD

Universal Scene Description now sports an extension so objects can be simulated to act just like they would in the real world thanks to the collaboration of Apple, NVIDIA and Pixar Animation Studios.

August 10, 2021 by Richard Kerris



Life in the metaverse is getting more real.

Starting today, developers can create and share realistic simulations in a standard way. Apple, NVIDIA and Pixar Animation Studios have defined a common approach for expressing physically accurate models in Universal Scene Description (USD), the common language of virtual 3D worlds. Pixar released USD and described it in 2016 at SIGGRAPH. It was originally designed so artists could work together, creating virtual characters and environments in a movie with the tools of their choice.

Fast forward, and USD is now pervasive in animation and special effects. USD is spreading to other professions, like architects, who can benefit from its tools to design and test everything from skyscrapers to sports cars and smart cities.

Playing on the Big Screen

To serve the needs of this expanding community, USD needs to stretch in many directions. The good news is Pixar designed USD to be open and flexible.

So, it’s fitting the SIGGRAPH 2021 keynote provides a stage to describe USD’s latest extension. In technical terms, it’s a new schema for rigid-body physics, the math that describes how solids behave in the real world.

For example, when you’re simulating a game where marbles roll down ramps, you want them to respond just as you would expect when they hit each other. To do that, developers need physical details like the weight of the marbles and the smoothness of the ramp. That’s what this new extension supplies.
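A hedged sketch of how those details might be authored with the UsdPhysics schema: the marble's mass via MassAPI and the ramp's smoothness via a physics material with friction coefficients. Paths and values are illustrative, and the stage is assumed to already contain the marble and ramp prims from a setup like the one sketched earlier.

```python
# Attaching physical details with UsdPhysics; a sketch, not NVIDIA's code.
from pxr import Usd, UsdShade, UsdPhysics

stage = Usd.Stage.Open("marbles.usda")  # assumed: /World/Marble, /World/Ramp

# Weight of the marble: an explicit mass in kilograms.
marble = stage.GetPrimAtPath("/World/Marble")
UsdPhysics.MassAPI.Apply(marble).CreateMassAttr().Set(0.005)

# Smoothness of the ramp: a physics material with friction coefficients.
mat = UsdShade.Material.Define(stage, "/World/Materials/SlickRamp")
physMat = UsdPhysics.MaterialAPI.Apply(mat.GetPrim())
physMat.CreateStaticFrictionAttr().Set(0.2)
physMat.CreateDynamicFrictionAttr().Set(0.1)

# Bind the material to the ramp's collider for the "physics" purpose.
ramp = stage.GetPrimAtPath("/World/Ramp")
UsdShade.MaterialBindingAPI.Apply(ramp).Bind(
    mat, UsdShade.Tokens.weakerThanDescendants, "physics")

stage.Save()
```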



USD Keeps Getting Better

The initial HTML 1.0 standard, circa 1993, defined how web pages used text and graphics. Fifteen years later HTML5 extended the definition to include video so any user on any device could watch videos and movies.

Apple and NVIDIA were both independently working on ways to describe physics in simulations. As members of the SIGGRAPH community, we came together with Pixar to define a single approach as a new addition to USD.

In the spirit of flexibility, the extension lets developers choose whatever solvers they prefer, since they can all be driven from the same set of USD data. This provides a unified set of data suitable for everything from offline simulation for film, to games, to augmented reality.

That’s important because solvers for real-time uses like gaming prioritize speed over accuracy, while architects, for example, want solvers that put accuracy ahead of speed.

An Advance That Benefits All

Together, the three companies wrote a white paper describing their combined proposal and shared it with the USD community. The reviews are in and it’s a hit. Now the extension is part of the standard USD distribution, freely available for all developers.

The list of companies that stand to benefit reads like credits for an epic movie. It includes architects, building managers, product designers and manufacturers of all sorts, companies that design games — even cellular providers optimizing layouts of next-generation networks. And, of course, all the vendors that provide the digital tools to do the work.

“USD is a major force in our industry because it allows for a powerful and consistent representation of complex, 3D scene data across workflows,” said Steve May, Chief Technology Officer at Pixar.

“Working with NVIDIA and Apple, we have developed a new physics extension that makes USD even more expressive and will have major implications for entertainment and other industries,” he added.

Making a Metaverse Together

It’s a big community we aim to serve with NVIDIA Omniverse, a collaboration environment that’s been described as an operating system for creatives or “like Google Docs for 3D graphics.”

We want to make it easy for any company to create lifelike simulations with the tools of their choice. It’s a goal shared by dozens of organizations now evaluating Omniverse Enterprise, and close to 400 companies and tens of thousands of individual creators who have downloaded Omniverse open beta since its release in December 2020.

We envision a world of interconnected virtual worlds — a metaverse — where someday anyone can share their life’s work.

Making that virtual universe real will take a lot of hard work. USD will need to be extended in many dimensions to accommodate the community’s diverse needs.

A Virtual Invitation

To get a taste of what’s possible, watch a panel discussion from GTC (free with registration), where 3D experts from nine companies including Pixar, BMW Group, Bentley Systems, Adobe and Foster + Partners talked about the opportunities and challenges ahead.

We’re happy we could collaborate with engineers and designers at Apple and Pixar on this latest USD extension. We’re already thinking about a sequel for soft-body physics and so much more.

Together we can build a metaverse where every tool is available for every job.

For more details, watch a talk on the USD physics extension from NVIDIA’s Adam Moravanszky and attend a USD birds-of-a-feather session hosted by Pixar.

blogs.nvidia.com




From: Frank Sully 8/10/2021 10:21:15 PM
 
What Is the Metaverse?

With NVIDIA Omniverse we can (finally) connect to it to do real work - here’s how.

August 10, 2021 by Brian Caulfield



What is the metaverse? The metaverse is a shared virtual 3D world, or worlds, that are interactive, immersive, and collaborative.

Just as the physical universe is a collection of worlds that are connected in space, the metaverse can be thought of as a bunch of worlds, too.

Massive online social games, like battle royale juggernaut Fortnite and user-created virtual worlds like Minecraft and Roblox, reflect some elements of the idea.

Video-conferencing tools, which link far-flung colleagues together amidst the global COVID pandemic, are another hint at what’s to come.

But the vision laid out by Neal Stephenson’s 1992 classic novel “Snow Crash” goes well beyond any single game or video-conferencing app.

The metaverse will become a platform that’s not tied to any one app or any single place — digital or real, explains Rev Lebaredian, vice president of simulation technology at NVIDIA.

And just as virtual places will be persistent, so will the objects and identities of those moving through them, allowing digital goods and identities to move from one virtual world to another, and even into our world, with augmented reality.



“Ultimately we’re talking about creating another reality, another world, that’s as rich as the real world,” Lebaredian says.

Those ideas are already being put to work with NVIDIA Omniverse, which, simply put, is a platform for connecting 3D worlds into a shared virtual universe.

Omniverse is in use across a growing number of industries for projects such as design collaboration and creating “digital twins,” simulations of real-world buildings and factories.



BMW Group uses NVIDIA Omniverse to create a future factory, a perfect “digital twin” designed entirely in digital and simulated from beginning to end in NVIDIA Omniverse.

How NVIDIA Omniverse Creates, Connects Worlds Within the Metaverse

So how does Omniverse work? We can break it down into three big parts.




NVIDIA Omniverse weaves together the Universal Scene Description interchange framework invented by Pixar with technologies for modeling physics, materials, and real-time path tracing.

The first is Omniverse Nucleus. It’s a database engine that connects users and enables the interchange of 3D assets and scene descriptions.

Once connected, designers doing modeling, layout, shading, animation, lighting, special effects or rendering can collaborate to create a scene.

Omniverse Nucleus relies on USD, or Universal Scene Description, an interchange framework invented by Pixar in 2012.

Released as open-source software in 2016, USD provides a rich, common language for defining, packaging, assembling and editing 3D data for a growing array of industries and applications.

Lebaredian and others say USD is to the emerging metaverse what hypertext markup language, or HTML, was to the web — a common language that can be used, and advanced, to support the metaverse.

Multiple users can connect to Nucleus, transmitting and receiving changes to their world as USD snippets.
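The "changes as USD snippets" idea can be illustrated with plain USD composition, no Nucleus required: edits authored into a stronger layer are recorded as sparse "over" opinions, exactly the kind of compact delta a service can relay between collaborators. A minimal sketch, with all names invented for the example (it does not talk to Nucleus):

```python
# Sparse USD overrides as collaboration deltas; illustrative only.
from pxr import Usd, UsdGeom

# Base scene authored by one user.
stage = Usd.Stage.CreateInMemory("base.usda")
cube = UsdGeom.Cube.Define(stage, "/World/Cube")
cube.GetSizeAttr().Set(1.0)

# A second user's edits go to the session layer, which sits above the
# root layer, so they override without touching the base scene.
session = stage.GetSessionLayer()
stage.SetEditTarget(Usd.EditTarget(session))
cube.GetSizeAttr().Set(2.0)

# Only the sparse override (the "snippet") lives in the session layer.
print(session.ExportToString())
print(cube.GetSizeAttr().Get())  # 2.0: the override wins on this stage
```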

The second part of Omniverse is the composition, rendering and animation engine — the simulation of the virtual world.




Simulation of virtual worlds in NVIDIA DRIVE Sim on Omniverse.

Omniverse is a platform built from the ground up to be physically based. Thanks to NVIDIA RTX graphics technologies, it is fully path traced, simulating how each ray of light bounces around a virtual world in real time.

Omniverse simulates physics with NVIDIA PhysX. It simulates materials with NVIDIA MDL, or material definition language.



Marbles at Night, built in NVIDIA Omniverse, is a physics-based demo created with dynamic, ray-traced lights and over 100 million polygons.

And Omniverse is fully integrated with NVIDIA AI (which is key to advancing robotics, more on that later).

Omniverse is cloud-native, scales across multiple GPUs, runs on any RTX platform and streams remotely to any device.

The third part is NVIDIA CloudXR, which includes client and server software for streaming extended reality content from OpenVR applications to Android and Windows devices, allowing users to portal into and out of Omniverse.



NVIDIA Omniverse promises to blend real and virtual realities. You can teleport into Omniverse with virtual reality, and AIs can teleport out of Omniverse with augmented reality.

Metaverses Made Real

NVIDIA released Omniverse to open beta in December, and NVIDIA Omniverse Enterprise in April. Professionals in a wide variety of industries quickly put it to work.

At Foster + Partners, the legendary design and architecture firm that designed Apple’s headquarters and London’s famed 30 St Mary Axe office tower — better known as “the Gherkin” — designers in 14 countries create buildings together in a shared virtual space in Omniverse.

Visual effects pioneer Industrial Light & Magic is testing Omniverse to bring together internal and external tool pipelines from multiple studios. Omniverse lets them collaborate, render final shots in real-time and create massive virtual sets like holodecks.



Multinational networking and telecommunications company Ericsson uses Omniverse to simulate 5G wave propagation in real-time, minimizing multi-path interference in dense city environments.



Infrastructure engineering software company Bentley Systems is using Omniverse to create a suite of applications on the platform. Bentley’s iTwin platform creates a 4D infrastructure digital twin to simulate an infrastructure asset’s construction, then monitor and optimize its performance throughout its lifecycle.

The Metaverse Can Help Humans and Robots Collaborate

These virtual worlds are ideal for training robots.

One of the essential features of NVIDIA Omniverse is that it obeys the laws of physics. Omniverse can simulate particles and fluids, materials and even machines, right down to their springs and cables.



Modeling the natural world in a virtual one is a fundamental capability for robotics.

It allows users to create a virtual world where robots — powered by AI brains that can learn from their real or digital environments — can train.

Once the minds of these robots are trained in Omniverse, roboticists can load those brains onto an NVIDIA Jetson and connect it to a real robot.

Those robots will come in all sizes and shapes — box movers, pick-and-place arms, forklifts, cars, trucks and even buildings.



In the future, a factory will be a robot, orchestrating many robots inside, building cars that are robots themselves.

How the Metaverse, and NVIDIA Omniverse, Enable Digital Twins

NVIDIA Omniverse provides a description for these shared worlds that people and robots can connect to — and collaborate in — to better work together.

It’s an idea that automaker BMW Group is already putting to work.

The automaker produces more than 2 million cars a year. In its most advanced factory, the company makes a car every minute. And each vehicle is customized differently.

BMW Group is using NVIDIA Omniverse to create a future factory, a perfect “digital twin.” It’s designed entirely in digital and simulated from beginning to end in Omniverse.



The Omniverse-enabled factory can connect to enterprise resource planning systems, simulating the factory’s throughput. It can simulate new plant layouts. It can even become the dashboard for factory employees, who can uplink into a robot to teleoperate it.

The AI and software that run the virtual factory are the same as what will run the physical one. In other words, the virtual and physical factories and their robots will operate in a loop. They’re twins.

No Longer Science Fiction

Omniverse is the “plumbing” on which metaverses can be built.

It’s an open platform with USD universal 3D interchange, connecting tools and users into a large network. NVIDIA has 12 Omniverse Connectors to major design tools already, with another 40 on the way. The Omniverse Connector SDK sample code, for developers to write their own Connectors, is available for download now.

The most important design tool platforms are signed up. NVIDIA has already enlisted partners from the world’s largest industries — media and entertainment; gaming; architecture, engineering and construction; manufacturing; telecommunications; infrastructure; and automotive.

And the hardware needed to run it is here now.

Computer makers worldwide are building NVIDIA-Certified workstations, notebooks and servers, which all have been validated for running GPU-accelerated workloads with optimum performance, reliability and scale. And starting later this year, Omniverse Enterprise will be available for enterprise license via subscription from the NVIDIA Partner Network.



With NVIDIA Omniverse, teams are able to collaborate in real time, from different places, using different tools, on the same project.

Thanks to NVIDIA Omniverse, the metaverse is no longer science fiction.

Back to the Future

So what’s next?

Humans have been exploiting how we perceive the world for thousands of years, NVIDIA’s Lebaredian points out. We’ve been hacking our senses to construct virtual realities through music, art and literature for millennia.

Next, add interactivity and the ability to collaborate, he says. Better screens, head-mounted displays like the Oculus Quest, and mixed-reality devices like Microsoft’s HoloLens are all steps toward fuller immersion.

All these pieces will evolve. But the most important one is here already: a high-fidelity simulation of our virtual world to feed the display. That’s NVIDIA Omniverse.

To steal a line from science-fiction master William Gibson: the future is already here; it’s just not very evenly distributed.

The metaverse is the means through which we can distribute those experiences more evenly. Brought to life by NVIDIA Omniverse, the metaverse promises to weave humans, AI and robots together in fantastic new worlds.



From: Frank Sully 8/10/2021 11:20:28 PM
 
All AI Do Is Win: NVIDIA Research Nabs ‘Best in Show’ with Digital Avatars at SIGGRAPH

August 10, 2021 by Isha Salian

The video at the end, although it's 30 minutes long, is well worth watching; it shows the state of the art in graphics, AI and Omniverse.



In a turducken of a demo, NVIDIA researchers stuffed four AI models into a serving of digital avatar technology for SIGGRAPH 2021’s Real-Time Live showcase — winning the Best in Show award.

The showcase, one of the most anticipated events at the world’s largest computer graphics conference, held virtually this year, celebrates cutting-edge real-time projects spanning game technology, augmented reality and scientific visualization. It featured a lineup of jury-reviewed interactive projects, with presenters hailing from Unity Technologies, Rensselaer Polytechnic Institute, the NYU Future Reality Lab and more.

Broadcasting live from our Silicon Valley headquarters, the NVIDIA Research team presented a collection of AI models that can create lifelike virtual characters for projects such as bandwidth-efficient video conferencing and storytelling.

The demo featured tools to generate digital avatars from a single photo, animate avatars with natural 3D facial motion and convert text to speech.

“Making digital avatars is a notoriously difficult, tedious and expensive process,” said Bryan Catanzaro, vice president of applied deep learning research at NVIDIA, in the presentation. But with AI tools, “there is an easy way to create digital avatars for real people as well as cartoon characters. It can be used for video conferencing, storytelling, virtual assistants and many other applications.”

AI Aces the Interview

In the demo, two NVIDIA research scientists played the parts of an interviewer and a prospective hire speaking over video conference. Over the course of the call, the interviewee showed off the capabilities of AI-driven digital avatar technology to communicate with the interviewer.

The researcher playing the part of interviewee relied on an NVIDIA RTX laptop throughout, while the other used a desktop workstation powered by RTX A6000 GPUs. The entire pipeline can also be run on GPUs in the cloud.

While sitting in a campus coffee shop, wearing a baseball cap and a face mask, the interviewee used the Vid2Vid Cameo model to appear clean-shaven in a collared shirt on the video call. The AI model creates realistic digital avatars from a single photo of the subject — no 3D scan or specialized training images required.

“The digital avatar creation is instantaneous, so I can quickly create a different avatar by using a different photo,” he said, demonstrating the capability with another two images of himself.

Instead of transmitting a video stream, the researcher’s system sent only his voice — which was then fed into the NVIDIA Omniverse Audio2Face app. Audio2Face generates natural motion of the head, eyes and lips to match audio input in real time on a 3D head model. This facial animation went into Vid2Vid Cameo to synthesize natural-looking motion with the presenter’s digital avatar.

The technique isn't limited to photorealistic digital avatars: the researcher also fed his speech through Audio2Face and Vid2Vid Cameo to voice an animated character. Using NVIDIA StyleGAN, he explained, developers can create infinite digital avatars modeled after cartoon characters or paintings.



The models, optimized to run on NVIDIA RTX GPUs, easily deliver video at 30 frames per second. The pipeline is also highly bandwidth efficient, since the presenter sends only audio data over the network instead of transmitting a high-resolution video feed.
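The bandwidth claim is easy to sanity-check with rough, assumed bitrates (these numbers are illustrative, not NVIDIA's published figures):

```python
# Rough bandwidth comparison: video-call stream vs. voice-only stream.
video_kbps = 1500  # assumed bitrate for a typical 720p conference stream
audio_kbps = 32    # assumed bitrate for Opus-compressed speech

print(f"Approximate saving: {video_kbps / audio_kbps:.0f}x")  # ~47x
```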

Taking it a step further, the researcher showed that when his coffee shop surroundings got too loud, the RAD-TTS model could convert typed messages into his voice — replacing the audio fed into Audio2Face. The breakthrough text-to-speech, deep learning-based tool can synthesize lifelike speech from arbitrary text inputs in milliseconds.

RAD-TTS can synthesize a variety of voices, helping developers bring book characters to life or even rap “The Real Slim Shady” by Eminem, as the research team showed in the demo’s finale.

SIGGRAPH continues through Aug. 13. Check out the full lineup of NVIDIA events at the conference and catch the premiere of our documentary, “Connecting in the Metaverse: The Making of the GTC Keynote,” on Aug. 11.







From: Frank Sully 8/11/2021 12:33:05 PM
 
Chinese chipmaking upstarts race to rival Nvidia

By Staff

11/08/2021




Chinese venture capital is pouring into the development of next-generation microprocessors as Chinese startups race to challenge the dominance of U.S. chipmaking giant Nvidia.

Investment in the new general purpose graphics processing units (GPGPUs) — an advanced computing chip — has been booming as venture capital bets on the growing Chinese industry. While traditional graphic processing units (GPUs) render images on computers, GPGPUs are designed to harness data processing power for artificial intelligence computing.

Several Chinese front-runners have jumped into the race, attracting investor attention. Beijing has been pushing for more self-reliance in semiconductors, and a global chip shortage has created an opportunity for Chinese companies to make breakthroughs. Aiming to leapfrog to the next generation of integrated circuit technology, the crowded field of Chinese startups has been recruiting veterans of Nvidia itself and other leading semiconductor companies.

One is artificial intelligence chipmaker Iluvatar CoreX, founded in 2015. In March, it unveiled China’s first GPGPU built with advanced 7-nanometer technology.

Another is Shanghai-based Biren Technology, with a valuation of more than 10 billion yuan ($1.5 billion). It has raised 4.7 billion yuan from more than 40 investors since its founding in 2019, including Hillhouse Group, Walden International China and BAI Capital.

MetaX Integrated Circuit was set up in 2020 and attracted investment from Lightspeed China Partners, Sequoia Capital and ZhenFund. The two founders both previously worked for the U.S. semiconductor giant Advanced Micro Devices. Newcomer Moore Threads Technology raised several billion yuan in two rounds of financing within 100 days of its founding.

Although the upstarts have generated plenty of enthusiasm while raising significant funds, investors and leaders of the companies acknowledge that taking on the likes of Nvidia is no easy task. No more than one or two of the companies will survive because it will take billions of dollars to build up a software ecosystem comparable to Nvidia’s, according to one chip industry investor.

“First, we should develop a product and start product iteration,” said Diao Shijing, chairman and CEO of Iluvatar CoreX. “I don’t think anyone will surprise the world or disrupt the industry with its very first product, and it definitely will need constant refinement.”

Growing trend

At the same time, the Chinese startups stand on the edge of a tremendous opportunity, said Wang Endong, executive president and chief scientist of cloud computing and big data provider Inspur. He predicted that demand for AI computing chips to power deep machine learning will grow exponentially.

In 2020, computing by AI accelerator chips — special integrated circuits designed for artificial intelligence applications such as GPGPUs — surpassed computing by the conventional central processing units (CPUs) that have powered computers for decades, Wang said.

“AI accelerator chips will account for more than 80% of overall computing power by 2025,” Wang said.

Nvidia captured the lead in AI technology over the past decade, making GPUs a standard for artificial intelligence processing. The company’s share price rose more than 25-fold in the past five years, giving it a market value of more than $450 billion, second behind Taiwan Semiconductor Manufacturing Co.

“We have invested tens of billions of dollars in GPUs over the past 30 years, and only in this one area,” Nvidia founder Jensen Huang said in a June 2 video interview with Caixin at the company’s U.S. headquarters. “I can certainly understand why it will spawn so many competitors in light of such a huge market.”

In 2020, China’s semiconductor industry attracted venture capital of more than 140 billion yuan, surpassing the internet as the most attractive sector, according to data from the U.S. corporate law firm Katten Muchin Rosenman. In the first five months of 2021, about 164 Chinese semiconductor companies received investments with total financing of more than 40 billion yuan, close to the level of the full year 2019, according to Katten Muchin.

Nvidia challengers

The new wave of Chinese chip startups is led by Biren. Founder Michael Zhang is the former president of the AI startup SenseTime and a former U.S. lawyer. Zhang recruited his core management from industry veterans in the U.S., including Chief Technical Officer and Chief Architect Mike Hong. Hong helped build Huawei’s GPU research and development team in the U.S. in 2016 and also worked for Nvidia.

“This team’s previous work experience covers the entire chipmaking process,” said Xing Yaopeng, senior investment manager at BAI Capital, an investor in Biren.

Biren started operation in November 2019 with a plan to introduce its first product, a 7-nm GPGPU, in 2022, according to co-founder Xu Lingjie, who previously worked for Nvidia, Samsung and Alibaba.

“The demand for AI computing from major internet companies is still growing at more than 40% a year,” Xu said. “Even if only one-third of the servers are replaced every year, procurement from companies and government will still be huge.”

Like Biren, Shanghai-based MetaX and Iluvatar CoreX also started in the GPGPU arena with founding teams that previously worked for AMD. Headquartered in Beijing, Moore Threads, founded in October 2020, jump-started with GPUs. CEO James Zhang is the former China general manager of Nvidia.

Battle for the future

The GPGPU market is still nascent, and it will take time for the Chinese startups to meet the standards of major customers such as Alibaba and Tencent, according to Liu Hongchun, founding partner of Winreal Investment, a venture capital company.

Other challenges facing the upstart chipmakers include intellectual property and relationships with China’s tech giants, such as Huawei, Alibaba and Baidu, which all have their own chipmaking businesses. It is hard to tell whether these tech giants will regard the startups as friends or foes, an industry expert said.

In the longer run, analysts said the chipmaking startups need to establish a software ecology in China that can rival Nvidia’s. The American company released its parallel computing platform CUDA in 2006, including a development toolkit that enabled anyone with a laptop equipped with an Nvidia GPU to develop software. Over the past decade or so, Nvidia has promoted CUDA in schools and research institutes, enabling software such as climate simulation and seismic data processing to be developed based on the platform.
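To illustrate the low barrier to entry that built that ecosystem, here is a complete CUDA kernel written and launched from Python. A minimal sketch, assuming the Numba package and a CUDA-capable NVIDIA GPU are available; it is not tied to any vendor example.

```python
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)   # this thread's global index
    if i < out.size:   # guard: the grid may be larger than the data
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads = 256
blocks = (n + threads - 1) // threads
vector_add[blocks, threads](a, b, out)  # Numba handles host/device copies

assert np.allclose(out, a + b)
```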

Currently, Chinese GPU makers are adopting a CUDA-compatible strategy and aim to build their own software ecosystem on top of it. However, once users are accustomed to CUDA, they will be unlikely to migrate to other platforms, said a midlevel manager at AI startup Enflame Technology. Enflame makes chips designed to process huge amounts of data to train artificial intelligence systems.

“No ecosystem is built in one day,” said Jeffrey Wang, managing director at Lenovo Capital, an accelerator and venture capital unit of Lenovo Group. “It will take time.”

While the Chinese companies are cutting their teeth in the GPU sector, Nvidia is eyeing the CPU and data processing unit (DPU) fields. The company is pursuing a $40 billion bid for U.K.-based Arm Holdings from Japan’s SoftBank. The deal, which would be one of the biggest semiconductor takeovers ever, is pending approval from regulators in the U.K., the U.S. and China.

Nvidia released its first DPU last October after completing its $6.9 billion acquisition of Israeli chipmaker Mellanox in April 2020.

While Nvidia is busy with the CPU and DPU markets, its divided attention may give Chinese GPU companies a great opportunity to play catch-up, industry experts told Caixin.

Source: Asia Nikkei

tynmagazine.com



From: Frank Sully 8/11/2021 12:52:22 PM
 
Besieging GPU: Who Will Be the Next Overlord of AI Computing After NVIDIA?

By Stocks News Feed

August 11, 2021

Newsfile Corp

Briefing: This article introduces several AI chip companies with unique technologies. They either have advanced computing concepts or top architects, and their new-architecture AI chips are wiping out half the GPU's world like a snap of Thanos's gauntlet.

New York, New York–(Newsfile Corp. – August 11, 2021) – In the post-Moore era, process technology is gradually approaching its physical limits and the pace of progress is slowing; the computation model of semiconductor chips is also shifting from general-purpose toward special-purpose designs.



Besieging the GPU: strong competitors with new architectures

The threat the GPU faces in the AI industry comes from strong competitors with new architectures. Among them are veteran giants such as Intel, which has been investing heavily in AI chips and new architectures.

Unicorns such as SambaNova and Untether AI both count Intel among their investors. Google, in addition to launching its own artificial intelligence chips, is deeply involved in funding SambaNova. And beyond Western startups from North America and Europe such as Graphcore, Cerebras, Groq and Tenstorrent, TensorChip, a new-architecture AI chip company from China, has never tried to conceal its ambition of replacing NVIDIA.

SambaNova, backed by both Google and Intel

“A fraction of our chip is better than your entire chip.” As soon as this claim came out, all of Silicon Valley turned its attention to SambaNova’s CEO, Rodrigo Liang. A fast-growing unicorn, SambaNova has received heavy investment from Google and continuous follow-on investment from Intel; other backers include SoftBank, Temasek and Walden International. SambaNova raised $676 million in its Series D round, bringing its valuation to $5.1 billion. Bets from such top-tier industrial capital have made the industry realize that the war to replace the GPU has already started.

In an interview, Liang said he believes only a reconfigurable dataflow processor system can keep up with the development trend of the entire industry. Reconfigurable dataflow technology built on memory (SRAM) is constantly breaking through the computing limits of AI hardware and software. SambaNova claims it can provide better performance than the NVIDIA A100, long the leading data center product in AI benchmark tests.

Untether AI, with three consecutive rounds of investment from Intel

Canadian startup Untether AI has announced that it has raised $125 million since its founding in 2018 to develop its novel computing architecture and provide customers with powerful computing support. Untether AI developed a new chip architecture that it says can increase the speed of data movement by 1,000 times.

Untether’s main product, the tsunAImi accelerator card, is composed of four runAI200 chips built on a 16nm process, providing 2,000 TOPS of computing power, 16 times the performance of mainstream products. Compared with SambaNova, Untether AI pays more attention to improving the computing power and energy efficiency of AI chips through in-memory computing. Based on digital in-memory computing (SRAM) technology, its energy efficiency reaches 8 TOPS/W.

TensorChip, a builder of RMUs from China

Notably, complementary advanced architectures exist outside North America as well. A Chinese company named TensorChip designed a new RMU architecture by combining reconfigurable technology with in-memory computing, an idea that seems very close to SambaNova’s RDU.

TensorChip achieves both high energy efficiency and large computing power, with an energy efficiency ratio of 10 TOPS/W. It not only has the reconfigurability championed by SambaNova but also surpasses Untether AI’s energy efficiency ratio. And where SambaNova focuses on large AI models, TensorChip’s RMU has a broader application range, covering both cloud and edge computing.

Groq, created by the former Google TPU team

Groq was founded in 2016 and has raised a total of $362.3 million so far. Jonathan Ross, CEO of Groq, was involved in the development of Google’s Tensor Processing Unit (TPU), a custom chip for accelerating machine learning.

Groq’s architecture reportedly focuses on low latency and single-threaded performance at a batch size of 1. When a GPU serves as the processing unit in a machine learning application and data arrives in small batches, gaps appear in the data stream that stall the GPU and significantly degrade performance. By contrast, the Groq processor is said to be 17.6 times faster than a GPU-based platform at a batch size of 1, and 2.5 times faster at large batch sizes.

Tenstorrent, where former AMD chief architect Jim Keller now works

Tenstorrent was founded in 2016 and has raised $200 million at a valuation of $1 billion to build a sustainable product roadmap and keep challenging NVIDIA in the AI market. Its AI chip, Grayskull, has a large on-chip memory (SRAM), while NVIDIA relies on fast off-chip GDDR or HBM. Grayskull uses only 75 watts of power to perform 368 trillion operations per second, a level of performance for which NVIDIA consumes about 300 watts.

Who will be the overlord of AI computing after NVIDIA?

These leapfrog breakthroughs in chip architecture are milestones. The innovations have circumvented GPU patent barriers and opened up new ideas and products in the computing field. The efforts of SambaNova, Untether AI, TensorChip, Groq and Tenstorrent are changing the landscape of AI computing. Perhaps one day the latest AI chips will no longer be baked by the man in the leather jacket (NVIDIA CEO Jensen Huang) in his own kitchen. Although the GPU performs well in AI computing, its main job is still graphics rendering and display.

These innovations are catching up with GPUs and trying to carry the world fully into the AI era. The huge wave of change will engulf each of us and bring unimaginable changes to the world of AI computing.

stocksnewsfeed.com



From: Frank Sully 8/12/2021 1:25:11 PM
 
From Our Kitchen to Yours: NVIDIA Omniverse Changes the Way Industries Collaborate

August 11, 2021 by Brian Caulfield

Talk about a magic trick. One moment, NVIDIA CEO Jensen Huang was holding forth from behind his sturdy kitchen counter.

The next, the kitchen and everything in it slid away, leaving Huang alone with the audience and NVIDIA’s DGX Station A100, a glimpse at an alternate digital reality.

For most, the metaverse is something seen in sci-fi movies. For entrepreneurs, it’s an opportunity. For gamers, a dream.

For NVIDIA artists, researchers and engineers on an extraordinarily tight deadline last spring, it was where they went to work — a shared virtual world they used to tell their story and a milestone for the entire company.

Designed to inform and entertain, NVIDIA’s GTC keynote is filled with cutting-edge demos highlighting advancements in supercomputing, deep learning and graphics.



“GTC is, first and foremost, our opportunity to highlight the amazing work that our engineers and other teams here at NVIDIA have done all year long,” said Rev Lebaredian, vice president of Omniverse engineering and simulation at NVIDIA.

With this short documentary, “Connecting in the Metaverse: The Making of the GTC Keynote,” viewers get the story behind the story. It’s a tale of how NVIDIA Omniverse, a tool for connecting to and describing the metaverse, brought it all together this year.


Creating a Story in Omniverse

It starts with building a great narrative. Bringing forward a keynote-worthy presentation always takes intense collaboration. But this one was unlike any other, packed not just with words and pictures but with beautifully rendered 3D models and rich textures.

With Omniverse, NVIDIA’s team was able to collaborate using different industry content-creation tools like Autodesk Maya or Substance Painter while in different places.



Keynote slides were packed with beautifully rendered 3D models and rich textures.

“There are already great tools out there that people use every day in every industry that we want people to continue using,” said Lebaredian. “We want people to take these exciting tools and augment them with our technologies.”

These were enhanced by a new generation of tools, including Universal Scene Description (USD), Material Definition Language (MDL) and NVIDIA RTX real-time ray-tracing technologies. Together, they allowed NVIDIA’s team to collaborate to create photorealistic scenes with physically accurate materials and lighting.

An NVIDIA DGX Station A100 Animation

Omniverse can create more than beautiful stills. The documentary shows how, accompanied by industry tools such as Autodesk Maya, Foundry Nuke, Adobe Photoshop, Adobe Premiere, and Adobe After Effects, it could stage and render some of the world’s most complex machines to create realistic cinematics.

With Omniverse, NVIDIA was able to turn a CAD model of the NVIDIA DGX Station A100 into a physically accurate virtual replica Huang used to give the audience a look inside.



Typically this type of project would take a team months to complete and weeks to render. But with Omniverse, the animation was chiefly completed by a single animator and rendered in less than a day.

Omniverse Physics Montage

More than just machines, though, Omniverse can model the way the world works by building on existing NVIDIA technologies. PhysX, for example, has been a staple in the NVIDIA gaming world for well over a decade. But its implementation in Omniverse brings it to a new level.

For a demo highlighting the current capabilities of PhysX 5 in Omniverse, plus a preview of advanced real-time physics simulation research, the Omniverse engineering and research teams re-rendered a collection of older PhysX demos in Omniverse.

The demo highlights key PhysX technologies such as Rigid Body, Soft Body Dynamics, Vehicle Dynamics, Fluid Dynamics, Blast’s Destruction and Fracture, and Flow’s combustible fluid, smoke and fire. As a result, viewers got a look at core Omniverse technologies that can do more than just show realistic-looking effects — they are true to reality, obeying the laws of physics in real-time.

DRIVE Sim, Now Built on Omniverse

Simulating the world around us is key to unlocking new technologies, and Omniverse is crucial to NVIDIA’s self-driving car initiative. With its PhysX-driven, photorealistic worlds, Omniverse creates the perfect environment for training autonomous machines of all kinds.

For this year’s DRIVE Sim on Omniverse demo, the team imported a map of the area surrounding a Mercedes plant in Germany. Then, using the same software stack that runs NVIDIA’s fleet of self-driving cars, they showed how the next generation of Mercedes cars would perform autonomous functions in the real world.

With DRIVE Sim, the team was able to test numerous lighting, weather and traffic conditions quickly — and show the world the results.

Creating the Factory of the Future with BMW Group

The idea of a “digital twin” has far-reaching consequences for almost every industry.

This year’s GTC featured a spectacular visionary display that exemplifies what the idea can do when unleashed in the auto industry.

The BMW Factory of the Future demo shows off the digital twin of a BMW assembly plant in Germany. Every detail, including layout, lighting and machinery, is digitally replicated with physical accuracy.

This “digital simulation” provides ultra-high fidelity and accurate, real-time simulation of the entire factory. With it, BMW can reconfigure assembly lines to optimize worker safety and efficiency, train factory robots to perform tasks, and optimize every aspect of plant operations.

Virtual Kitchen, Virtual CEO

The surprise highlight of GTC21 was a perfect virtual replica of Huang’s kitchen — the setting of the past three pandemic-era “kitchen keynotes” — complete with a digital clone of the CEO himself.

The demo is the epitome of what GTC represents: It combined the work of NVIDIA’s deep learning and graphics research teams with several engineering teams and the company’s incredible in-house creative team.



To create a virtual Jensen, teams did a full face and body scan to create a 3D model, then trained an AI to mimic his gestures and expressions and applied some AI magic to make his clone realistic.

Digital Jensen was then brought into a replica of his kitchen that was deconstructed to reveal the holodeck within Omniverse, surprising the audience and making them question how much of the keynote was real, or rendered.

“We built Omniverse first and foremost for ourselves here at NVIDIA,” Lebaredian said. “We started Omniverse with the idea of connecting existing tools that do 3D together for what we are now calling the metaverse.”

More and more of us will be able to do the same, accelerating more of what we do together. “If we do this right, we’ll be working in Omniverse 20 years from now,” Lebaredian said.

blogs.nvidia.com



From: Frank Sully 8/12/2021 7:53:29 PM
 
I have become moderator of the NVIDIA (NVDA) board. Read the Introduction header as I have updated it substantially. If you have any concerns, suggestions or questions please PM me.

Cheers,
Frank Sully



From: Frank Sully 8/13/2021 1:21:14 AM
 
Nvidia has 80% share of AI processors, Omdia says, except…

by Matt Hamblen
Aug 12, 2021 1:36pm



Intel Xeon processors are CPUs. While they don't technically fit under some definitions of an AI processor for data centers and cloud, CPUs are the most widely used chips for AI acceleration. However, Nvidia holds the top spot for AI processors with its GPUs, which integrate distinct subsystems dedicated to AI processing. It's all in how you count it. (Intel)
Analyst firm Omdia recently ranked Nvidia at the top of the AI processor market with an 80% share in 2020, well ahead of its competitors.

The tabulation puts Nvidia AI processor revenue at $3.2 billion in 2020, an improvement from $1.8 billion in 2019. Omdia ranked Xilinx second with its FPGA products.

Google finished third with its Tensor Processing Unit, while Intel finished fourth with its Habana AI ASSPs and its FPGAs for AI cloud and data center servers. AMD ranked fifth with AI ASSPs for cloud and data center.

The report is notable for leaving out Intel Xeon CPUs in the Omdia tabulation even though Xeons are used extensively for AI acceleration in cloud and data center operations, as Omdia admits. Xeon does not meet the Omdia definition of an AI processor which includes “only those chips that integrate distinct subsystems dedicated to AI processing,” Omdia said.

As far back as 2019, Intel began a heavy push for Xeon for a variety of AI workloads alongside other compute tasks and has seen steady progress. In April, Intel launched its third generation Ice Lake Xeon Scalable processor with what Intel called built-in AI.

At the time, Intel said it already had shipped 200,000 of the 10nm processors in the first quarter and boasted it offers 1.5 times higher performance across 20 AI workloads when compared to AMD EPYC7763 and 1.3 times higher performance versus the Nvidia A100 GPU. Cisco announced in April it was using Xeon third gen in three new UCS servers.

Jack Gold, an analyst at J. Gold Associates, said it is hard to know accurately Intel revenues for Xeon chips used for AI jobs because of how Intel reports earnings. But he estimated at least half of the latest Data Center Group revenues for second quarter of $5.6 billion were from Xeon, or more than $2.8 billion. (Xeon is also used in 5G implementations, not a part of the Data Center Group.) Based on Gold’s estimate, Xeon for data centers could be as much as three to four times the size of the Nvidia AI processor revenue in 2020, although not all the Xeon CPUs would be used for AI work.

Omdia’s definition of AI processors includes only GPU-derived AI application-specific standard products (ASSPs), proprietary-core AI application-specific standard products, AI application-specific integrated circuits (AI ASICs) and field-programmable gate arrays (FPGAs).

“Despite the onslaught of new competitors and new types of chips, Nvidia’s GPU-based devices have remained the default choice for cloud hyperscalers and on-premises data centers, partly because of their familiarity to users,” said Omdia principal analyst Jonathan Cassell. GPU-based chips were the first type of AI processor widely used for AI acceleration.

Cassell told Fierce Electronics that CPUs are the most widely used chip for AI acceleration in the cloud and data center, even though Omdia doesn’t classify them as AI processors. If they were counted as AI processors, Intel would be the leading supplier in AI in 2020, beating Nvidia, he added.

CPUs are mainly used for AI inference work in cloud and data center environments as opposed to AI training. In an older Omdia report from 2020, the analyst firm calculated that CPUs accounted for the majority of market share for 2019 and were estimated to lead the market through 2025.

The market for AI processors is growing rapidly, with global revenues for cloud and data center AI processors reaching $4 billion in 2020. Revenue should grow nine times by 2026 to $37 billion, according to the definition used by Omdia.
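Those figures are internally consistent, as a quick check shows (numbers taken from this article):

```python
# Sanity check on Omdia's quoted market figures (in billions of dollars).
nvidia_2020 = 3.2   # Nvidia AI processor revenue, 2020
market_2020 = 4.0   # total cloud/data center AI processor revenue, 2020
market_2026 = 37.0  # forecast, 2026

print(f"Nvidia share of 2020 market: {nvidia_2020 / market_2020:.0%}")  # 80%
print(f"2020-2026 growth multiple:   {market_2026 / market_2020:.1f}x") # ~9x
```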

fierceelectronics.com
