Talk about a magic trick. One moment, NVIDIA CEO Jensen Huang was holding forth from behind his sturdy kitchen counter.
The next, the kitchen and everything in it slid away, leaving Huang alone with the audience and NVIDIA’s DGX Station A100, a glimpse at an alternate digital reality.
For most, the metaverse is something seen in sci-fi movies. For entrepreneurs, it’s an opportunity. For gamers, a dream.
For NVIDIA artists, researchers and engineers on an extraordinarily tight deadline last spring, it was where they went to work — a shared virtual world they used to tell their story and a milestone for the entire company.
Designed to inform and entertain, NVIDIA’s GTC keynote is filled with cutting-edge demos highlighting advancements in supercomputing, deep learning and graphics.
“GTC is, first and foremost, our opportunity to highlight the amazing work that our engineers and other teams here at NVIDIA have done all year long,” said Rev Lebaredian, vice president of Omniverse engineering and simulation at NVIDIA.
With this short documentary, “Connecting in the Metaverse: The Making of the GTC Keynote,” viewers get the story behind the story. It’s a tale of how NVIDIA Omniverse, a tool for connecting to and describing the metaverse, brought it all together this year.
Creating a Story in Omniverse
It starts with building a great narrative. Bringing forward a keynote-worthy presentation always takes intense collaboration. But this one was unlike any other, packed not just with words and pictures but with beautifully rendered 3D models and rich textures.
With Omniverse, NVIDIA’s team was able to collaborate using different industry content-creation tools like Autodesk Maya or Substance Painter while in different places.
“There are already great tools out there that people use every day in every industry that we want people to continue using,” said Lebaredian. “We want people to take these exciting tools and augment them with our technologies.”
These tools were enhanced by a new generation of technologies, including Universal Scene Description (USD), Material Definition Language (MDL) and NVIDIA RTX real-time ray-tracing technologies. Together, they allowed NVIDIA’s team to collaborate on photorealistic scenes with physically accurate materials and lighting.
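For readers curious what USD-based collaboration looks like in practice, here is a minimal sketch using Pixar's openly available USD Python API. The file names, prim paths and layer split are illustrative assumptions, not NVIDIA's actual keynote pipeline:

```python
# Minimal USD sketch: two collaborators contribute separate layers to one
# shared stage. Requires Pixar's usd-core package (pip install usd-core).
# All names and paths are illustrative.
from pxr import Usd, UsdGeom, UsdShade, Sdf

# Artist A blocks out geometry in the root layer...
stage = Usd.Stage.CreateNew("keynote_set.usda")
UsdGeom.Xform.Define(stage, "/World/Kitchen")
counter = UsdGeom.Cube.Define(stage, "/World/Kitchen/Counter")
counter.GetSizeAttr().Set(2.0)

# ...while Artist B layers materials on top without touching A's file.
look_layer = Sdf.Layer.CreateNew("keynote_looks.usda")
stage.GetRootLayer().subLayerPaths.append(look_layer.identifier)
stage.SetEditTarget(Usd.EditTarget(look_layer))
mat = UsdShade.Material.Define(stage, "/World/Looks/CounterMat")
UsdShade.MaterialBindingAPI.Apply(counter.GetPrim()).Bind(mat)

stage.GetRootLayer().Save()
look_layer.Save()
```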
An NVIDIA DGX Station A100 Animation
Omniverse can create more than beautiful stills. The documentary shows how, accompanied by industry tools such as Autodesk Maya, Foundry Nuke, Adobe Photoshop, Adobe Premiere, and Adobe After Effects, it could stage and render some of the world’s most complex machines to create realistic cinematics.
With Omniverse, NVIDIA was able to turn a CAD model of the NVIDIA DGX Station A100 into a physically accurate virtual replica Huang used to give the audience a look inside.
Typically this type of project would take a team months to complete and weeks to render. But with Omniverse, the animation was chiefly completed by a single animator and rendered in less than a day.
Omniverse Physics Montage
More than just machines, though, Omniverse can model the way the world works by building on existing NVIDIA technologies. PhysX, for example, has been a staple in the NVIDIA gaming world for well over a decade. But its implementation in Omniverse brings it to a new level.
For a demo highlighting the current capabilities of PhysX 5 in Omniverse, plus a preview of advanced real-time physics simulation research, the Omniverse engineering and research teams re-rendered a collection of older PhysX demos in Omniverse.
The demo highlights key PhysX technologies such as Rigid Body, Soft Body Dynamics, Vehicle Dynamics, Fluid Dynamics, Blast’s Destruction and Fracture, and Flow’s combustible fluid, smoke and fire. As a result, viewers got a look at core Omniverse technologies that can do more than just show realistic-looking effects — they are true to reality, obeying the laws of physics in real-time.
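Omniverse expresses such simulations as physics schemas layered onto USD (the open UsdPhysics schemas mentioned later in this thread). As a rough illustration (the assets and values here are invented, not the demo's), a rigid-body scene can be described like this:

```python
# Sketch: describing a rigid-body scene with the open UsdPhysics schemas
# that Omniverse's PhysX integration consumes. Requires usd-core >= 21.05.
# Scene contents are illustrative, not the actual GTC demo assets.
from pxr import Usd, UsdGeom, UsdPhysics, Gf

stage = Usd.Stage.CreateNew("physics_demo.usda")

# One physics-scene prim sets gravity for the whole stage.
scene = UsdPhysics.Scene.Define(stage, "/World/PhysicsScene")
scene.CreateGravityDirectionAttr().Set(Gf.Vec3f(0.0, -1.0, 0.0))
scene.CreateGravityMagnitudeAttr().Set(9.81)

# Static ground: collides but never moves (collision only, no rigid body).
ground = UsdGeom.Cube.Define(stage, "/World/Ground")
UsdPhysics.CollisionAPI.Apply(ground.GetPrim())

# Dynamic box: rigid body + collision + mass makes the solver simulate it.
box = UsdGeom.Cube.Define(stage, "/World/Box")
box.AddTranslateOp().Set(Gf.Vec3d(0.0, 5.0, 0.0))
UsdPhysics.RigidBodyAPI.Apply(box.GetPrim())
UsdPhysics.CollisionAPI.Apply(box.GetPrim())
UsdPhysics.MassAPI.Apply(box.GetPrim()).CreateMassAttr().Set(1.0)

stage.GetRootLayer().Save()
```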
DRIVE Sim, Now Built on Omniverse
Simulating the world around us is key to unlocking new technologies, and Omniverse is crucial to NVIDIA’s self-driving car initiative. With PhysX simulation and photorealistic rendering, Omniverse creates the perfect environment for training autonomous machines of all kinds.
For this year’s DRIVE Sim on Omniverse demo, the team imported a map of the area surrounding a Mercedes plant in Germany. Then, using the same software stack that runs NVIDIA’s fleet of self-driving cars, they showed how the next generation of Mercedes cars would perform autonomous functions in the real world.
With DRIVE Sim, the team was able to test numerous lighting, weather and traffic conditions quickly — and show the world the results.
Creating the Factory of the Future with BMW Group
The idea of a “digital twin” has far-reaching consequences for almost every industry.
This year’s GTC featured a spectacular visionary display that exemplifies what the idea can do when unleashed in the auto industry.
The BMW Factory of the Future demo shows off the digital twin of a BMW assembly plant in Germany. Every detail, including layout, lighting and machinery, is digitally replicated with physical accuracy.
This digital twin provides an ultra-high-fidelity, physically accurate, real-time simulation of the entire factory. With it, BMW can reconfigure assembly lines to optimize worker safety and efficiency, train factory robots to perform tasks, and optimize every aspect of plant operations.
Virtual Kitchen, Virtual CEO
The surprise highlight of GTC21 was a perfect virtual replica of Huang’s kitchen — the setting of the past three pandemic-era “kitchen keynotes” — complete with a digital clone of the CEO himself.
The demo is the epitome of what GTC represents: It combined the work of NVIDIA’s deep learning and graphics research teams with several engineering teams and the company’s incredible in-house creative team.
To create a virtual Jensen, teams did a full face and body scan to create a 3D model, then trained an AI to mimic his gestures and expressions and applied some AI magic to make his clone realistic.
Digital Jensen was then brought into a replica of his kitchen that was deconstructed to reveal the holodeck within Omniverse, surprising the audience and making them question how much of the keynote was real, or rendered.
“We built Omniverse first and foremost for ourselves here at NVIDIA,” Lebaredian said. “We started Omniverse with the idea of connecting existing tools that do 3D together for what we are now calling the metaverse.”
More and more of us will be able to do the same, accelerating more of what we do together. “If we do this right, we’ll be working in Omniverse 20 years from now,” Lebaredian said.
I have become moderator of the NVIDIA (NVDA) board. Read the Introduction header as I have updated it substantially. If you have any concerns, suggestions or questions please PM me.
Intel Xeon processors are CPUs. While they don't technically fit some definitions of an AI processor for data centers and the cloud, CPUs are the most widely used chips for AI acceleration. However, Nvidia holds the top spot for AI processors with its GPUs, which integrate distinct subsystems dedicated to AI processing. It's all in how you count it. Analyst firm Omdia recently ranked Nvidia at the top of the AI processor market with an 80% share in 2020, well ahead of its competitors.
The tabulation puts Nvidia AI processor revenue at $3.2 billion in 2020, an improvement from $1.8 billion in 2019. Omdia ranked Xilinx second with its FPGA products.
Google finished third with its Tensor Processing Unit, while Intel finished fourth with its Habana AI ASSPs and its FPGAs for AI cloud and data center servers. AMD ranked fifth with AI ASSPs for cloud and data center.
The report is notable for leaving Intel Xeon CPUs out of the Omdia tabulation even though Xeons are used extensively for AI acceleration in cloud and data center operations, as Omdia admits. Xeon does not meet Omdia's definition of an AI processor, which includes “only those chips that integrate distinct subsystems dedicated to AI processing,” Omdia said.
As far back as 2019, Intel began a heavy push for Xeon for a variety of AI workloads alongside other compute tasks and has seen steady progress. In April, Intel launched its third generation Ice Lake Xeon Scalable processor with what Intel called built-in AI.
At the time, Intel said it had already shipped 200,000 of the 10nm processors in the first quarter and boasted that the chip offers 1.5 times higher performance across 20 AI workloads compared with the AMD EPYC 7763 and 1.3 times higher performance versus the Nvidia A100 GPU. Cisco announced in April it was using the third-gen Xeon in three new UCS servers.
Jack Gold, an analyst at J. Gold Associates, said it is hard to know Intel's Xeon AI revenue accurately because of how Intel reports earnings. But he estimated that at least half of the latest Data Center Group revenue of $5.6 billion for the second quarter came from Xeon, or more than $2.8 billion. (Xeon is also used in 5G implementations, which are not part of the Data Center Group.) Based on Gold’s estimate, annualized Xeon revenue for data centers could be as much as three to four times the size of Nvidia's AI processor revenue in 2020, although not all those Xeon CPUs would be used for AI work.
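Gold's back-of-the-envelope chain is easy to reproduce from the figures quoted above:

```python
# Reproducing Jack Gold's back-of-the-envelope estimate as quoted above.
dcg_q2_revenue = 5.6e9      # Intel Data Center Group, Q2 (reported)
xeon_share = 0.5            # Gold's estimate: at least half is Xeon
xeon_q2 = dcg_q2_revenue * xeon_share    # ~$2.8B per quarter
xeon_annualized = xeon_q2 * 4            # ~$11.2B per year
nvidia_ai_2020 = 3.2e9      # Omdia's 2020 Nvidia AI processor figure
print(xeon_annualized / nvidia_ai_2020)  # ~3.5x, i.e. "three to four times"
```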
Omdia’s definition of AI processors includes only GPU-derived AI application-specific standard products (ASSPs), proprietary-core AI ASSPs, AI application-specific integrated circuits (AI ASICs) and field-programmable gate arrays (FPGAs).
“Despite the onslaught of new competitors and new types of chips, Nvidia’s GPU-based devices have remained the default choice for cloud hyperscalers and on-premises data centers, partly because of their familiarity to users,” said Omdia principal analyst Jonathan Cassell. GPU-based chips were the first type of AI processor widely used for AI acceleration.
Cassell told Fierce Electronics that CPUs are the most widely used chip for AI acceleration in the cloud and data center, even though Omdia doesn’t classify them as AI processors. If they were counted as AI processors, Intel would be the leading supplier in AI in 2020, beating Nvidia, he added.
CPUs are mainly used for AI inference work in cloud and data center environments as opposed to AI training. In an older Omdia report from 2020, the analyst firm calculated that CPUs accounted for the majority of market share for 2019 and were estimated to lead the market through 2025.
The market for AI processors is growing rapidly, with global revenues for cloud and data center AI processors reaching $4 billion in 2020. Revenue should grow nine times by 2026 to $37 billion, according to the definition used by Omdia.
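That "nine times by 2026" multiple implies roughly a 45% compound annual growth rate, assuming steady compounding over the six years:

```python
# Implied compound annual growth rate for Omdia's forecast: $4B (2020)
# growing to $37B (2026), i.e. six years of compounding.
cagr = (37 / 4) ** (1 / 6) - 1
print(f"{37/4:.2f}x overall, ~{cagr:.0%} per year")  # ~9.25x, ~45% per year
```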
Nvidia CEO Jensen Huang appears in a graphics simulation wearing his signature leather jacket, a reflection of his founding the company on graphics technology in 1993. SIA awarded him the top honor for outstanding contributions to the chip industry. (Nvidia)
The founder of Nvidia has won the semiconductor industry’s top annual honor, the Robert Noyce Award, granted by the Semiconductor Industry Association.
Jensen Huang, founder and CEO of Nvidia, is a “trailblazer in building accelerated computing platforms,” SIA said in a statement on Thursday. He will accept the award Nov. 18. The annual award goes to a leader who has made outstanding contributions to the semiconductor industry in technology or public policy.
“Jensen Huang’s extraordinary vision and tireless execution have greatly strengthened our industry, revolutionized computing and advanced artificial intelligence,” said John Neuffer, SIA president and CEO. His work impacts gaming, scientific computing and self-driving cars, among other innovations, he added.
Huang founded Nvidia in 1993, starting with 3D graphics used heavily in the gaming market. GPUs used in gaming have advanced to uses in a variety of computers, robots and more. He holds bachelor’s and master’s degrees in electrical engineering from Oregon State and Stanford, respectively.
In receiving the honor, Huang said it recognizes the body of work of his colleagues at Nvidia. “It has been the greatest joy and privilege to have grown up with the semiconductor and computing industries,” he said. “As we enter the era of AI, robotics, digital biology and the metaverse, we will see super-exponential technology advances. There’s never been a more exciting or important time to be in the semiconductor and computer industries.”
SIA represents nearly all the companies in the U.S. chip industry and two-thirds of non-U.S. chip firms based on revenues. The Noyce Award is named in honor of semi industry pioneer Robert N. Noyce, founder of Fairchild Semiconductor and Intel.
Analysts widely respect Huang and Nvidia’s financial and technology performance. Of the award, Patrick Moorhead of Moor Insights & Strategy tweeted after the announcement, “Well deserved.”
Nvidia's Jetson TX2i can deliver up to 1.3 TFLOPS of AI performance to space applications. Aitech S-A1760 Venus (Image credit: Aitech)
Aitech, a maker of rugged computers for military, aerospace and space applications, has tapped Nvidia's Jetson TX2i system-on-module (SoM) for a new radiation-characterized system, it announced recently. The Aitech S-A1760 Venus is a commercial off-the-shelf (COTS) system that can be used for spacecraft and small satellites and takes advantage of around 1 FP32 TFLOPS of "AI performance," as Nvidia puts it.
There is a growing need for advanced imaging and data processing in various space applications, but equipping a small satellite with a high-performance, rad-hardened computer is extremely expensive, since small satellites are supposed to be light and compact. This is where Aitech’s S-A1760 Venus system comes into play.
According to Aitech, the S-A1760 Venus targets "short duration spaceflight" as well as near-Earth orbit (NEO) and low-Earth orbit (LEO) satellite applications. Its best use is "video and signal processing in distributed systems."
At the heart of Nvidia's Jetson TX2i SoM sits the company's Tegra X2 system-on-chip (SoC), which integrates two Denver 2 and four Cortex-A57 general-purpose CPU cores. It also uses the GP10B GPU, based on Nvidia's Pascal architecture and featuring 256 CUDA cores that offer up to 1.26 TFLOPS of performance (around 1 TFLOPS in the case of the S-A1760 Venus) for AI or image processing. The Tegra X2 can also connect up to six cameras (12 via virtual channels) and encode/decode up to 1/2 4Kp60 or 4/20 1080p60 HEVC video streams concurrently.
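As a sanity check, GPU peak throughput follows a standard rule of thumb: CUDA cores, times two FLOPs per clock (a fused multiply-add), times clock speed, doubled again at FP16 on Pascal. The ~1.3 GHz boost clock below is an assumption based on published Jetson TX2 specifications; the TX2i and the S-A1760 Venus may clock lower, hence "around 1 TFLOPS":

```python
# Rule-of-thumb peak throughput for the 256-core Pascal GPU in Tegra X2.
# The ~1.3 GHz boost clock is an assumption (Jetson TX2 Max-P figures).
cuda_cores = 256
clock_hz = 1.3e9                      # assumed boost clock
fp32 = cuda_cores * 2 * clock_hz      # FMA = 2 FLOPs/core/clock
fp16 = fp32 * 2                       # Pascal GP10B: 2x rate at FP16
print(f"FP32 ~{fp32/1e12:.2f} TFLOPS, FP16 ~{fp16/1e12:.2f} TFLOPS")
# -> FP32 ~0.67, FP16 ~1.33, suggesting the oft-quoted ~1.3 TFLOPS
#    figure corresponds to half precision.
```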
Nvidia's Tegra X2 SoC is not radiation-hardened, but with proper protection it can still be used for some space applications. Aitech's S-A1760 Venus small-form-factor system has passed the Series 300 level qualification standard that identifies the rad-tolerant needs of space components and systems not used in deep space or long-haul applications.
The Jetson TX2i module for industrial applications and harsh environments carries 8GB of 128-bit LPDDR4 memory and 32GB of eMMC 5.1 storage, which appears to be good enough for space applications, which tend to be custom-made and to spend resources economically. Meanwhile, Aitech’s S-A1760 Venus has GbE, UART serial, USB 2.0, CANbus, and a DVI/HDMI output. Video capture capabilities include an HD-SDI input with a dedicated H.264 encoder and eight RS-170A (NTSC)/PAL composite channels.
“With the growing need for advanced imaging and data processing throughout space-rated applications, transitioning our powerful GPGPU-based AI supercomputers to this industry was a logical choice," said Dan Mor, Director, Video and GPGPU Product Line at Aitech, in a statement. "By validating these space-rated, COTS-based systems with a clearly defined and recognized qualification level, we're helping lead the charge in the development of commercial space applications and small sat cluster innovations."
Nvidia in Space
One interesting thing to note about Aitech’s S-A1760 Venus system is that it will be the first Nvidia SoC-based solution to power devices like satellites. But it's not the first time we've seen Nvidia technology that's space-ready.
Select Lenovo ThinkPads are certified for use on the International Space Station, and historically these PCs have used graphics processors from ATI Technologies (now AMD) and Nvidia. Of course, displaying graphics and perhaps doing some simulations is different from powering a satellite or a unit within a spacecraft.
For 14 seconds of Nvidia CEO Jensen Huang’s “kitchen keynote” address at the company’s GTC 2021 virtual conference in April, it turns out that viewers were not actually seeing Huang.
Instead, what they were seeing in that brief stretch was a complex digitalization of Huang that the company created to show off its prowess in GPUs, HPC and cutting-edge technologies.
Nvidia revealed its “magic trick” in an August 11 post on the Nvidia Blog, explaining that during one scene of the GTC21 keynote, at the 1:02:41 mark of the one-hour, 48-minute event, 14 seconds of video purporting to show Huang was actually digitally created using Nvidia Omniverse.
It was innocuous at the time. Huang was introducing the company’s Nvidia Grace chip, its first Arm-based CPU designed for terabyte-scale accelerated computing, when Huang’s real video image was diced and hacked by video manipulation and then recomposed, this time as a digitalization of the real Huang. It was like watching Captain James T. Kirk being beamed up and down from a planet into the transporter room of the Starship Enterprise in “Star Trek.”
It only lasted 14 seconds, but it was notable because of the techno-trickery that was needed to make it happen.
It was the third time Huang has hosted one of Nvidia’s trademark GPU Technology Conference (GTC) events virtually since 2020 due to the COVID-19 pandemic, each time from his home kitchen.
Only this time, the Nvidia team digitized his kitchen, turning it into a digital twin created from thousands of still photographs, millions of bits of information and more, according to the company.
“Typically, this type of project would take a team months to complete and weeks to render,” the post continued. “But with Omniverse, the animation was chiefly completed by a single animator and rendered in less than a day.”
In this spring’s April keynote, as Huang gave his introduction of Grace, the kitchen behind him and everything in it slid away, leaving Huang alone with the audience to show off an Nvidia DGX Station A100 machine containing Grace CPUs.
It was part of a shared virtual 3D world, or metaverse made up of interactive, immersive and collaborative components. Usually, such things are seen in sci-fi films, but here it was right in a virtual keynote hosted by GPU maker Nvidia.
The work was done by Nvidia artists, researchers and engineers on a tight deadline in the spring, and what they produced was a shared virtual world to tell the latest chapter in the company’s story, according to the blog post.
Here are some of the gesture-training screen captures of Nvidia CEO Jensen Huang created for the 14 seconds of digitized video in his GTC21 keynote.
“GTC is, first and foremost, our opportunity to highlight the amazing work that our engineers and other teams here at Nvidia have done all year long,” Rev Lebaredian, the vice president of Omniverse engineering and simulation at Nvidia, said in the post.
To create that virtual Huang, the company’s teams did a full face and body scan of him to create a 3D model, then trained an AI model to mimic his gestures and expressions, while applying some AI magic to make his clone realistic, the blog post reported. “Digital Jensen was then brought into a replica of his kitchen that was deconstructed to reveal the holodeck within Omniverse, surprising the audience and making them question how much of the keynote was real or rendered.”
The company also produced a short video documentary, “Connecting in the Metaverse: The Making of the GTC Keynote,” to graphically show the complex steps taken to produce the GTC21 milestone and its metaverse experiment.
Sure, it was techno-wizardry, but for the company, it was also a way of displaying some of the things it has been up to as it works to bring even more powerful computing to the enterprise.
Marc Staimer, president and chief data scientist analyst at Dragon Slayer Consulting, told EnterpriseAI that it may have been gimmicky, but it made an impact.
“It is also entertaining and keeps the audience engaged,” said Staimer. “The metaverse has potential as does augmented reality (AR) and virtual reality (VR). We are just at the very bleeding edge of achieving that potential because of AI and machine learning.”
Ultimately, Nvidia’s use of the technology in the keynote makes sense, said Staimer. “They provide hardware that makes it possible, so they are smart to associate themselves with it. Make no mistake, that technology will become ubiquitous before we know it.”
Another analyst, Karl Freund, founder and principal analyst for machine learning, HPC and AI at Cambrian AI Research, agreed.
“The idea of AR and VR collaboration is pretty compelling, and COVID-19 will linger and make digital collaboration even more important,” said Freund. “It is very cool that nobody even knew that a portion of Jensen’s presentation was digital.”
Addison Snell, the CEO of Intersect360 Research, told EnterpriseAI that it was a notable moment.
“The trick of instantly switching to digital twins of Jensen’s kitchen—and Jensen himself—was the sort of eye-popping-wow performance that we have come to expect from Nvidia with GTC,” Snell said. “Having graphics that are indistinguishable from magic is cool, but more importantly, it inspires innovation, not only in visual effects for movies and games, but for industries that rely on digital simulations, such as manufacturing, pharmaceuticals, and energy.”
Rob Enderle, principal analyst with Enderle Group, was similarly impressed.
“Nvidia’s Omniverse product is a front-end to creating the metaverse, which, eventually, will become a digital twin for much of the world,” said Enderle. “It has the potential to be bigger than the internet largely because it could contain it. This level of technology, which can digitally create everything that was, is or will be – and anything that can be imagined – has the potential to change how we view and interact with the world.”
Once this technology is further coupled with AI, the resulting metaverse extending to universal or microscopic scale “will allow exploring new places, new worlds, and theoretical places virtually based on actual, projected or imagined constructs and images,” said Enderle. “We can now, relatively cheaply, recreate almost anything digitally, even the impossible. We are about to enter the age of imagination, and Nvidia’s Omniverse is one of the significant foundational elements of the coming world of tomorrow.”
Huang was recently recognized by the Semiconductor Industry Association (SIA) as the recipient of its Robert N. Noyce Award, which is given annually in recognition of a leader who has made outstanding contributions to the semiconductor industry in technology or public policy.
With this partnership, the open-source 3D animation tool Blender will get Universal Scene Description (USD) support, enabling artists to access Omniverse production pipelines. Adobe will be collaborating with NVIDIA on a Substance 3D plugin that will bring Substance Material support to Omniverse, unlocking new material editing capabilities for Substance 3D and Omniverse users.
Scaling NVIDIA Omniverse
Pixar’s open-source USD is at the core of Omniverse’s industry adoption, enabling large teams to work simultaneously across multiple software applications on a shared 3D scene. This open-standard foundation gives software partners various ways to connect to and extend Omniverse, whether through USD adoption and support, building a plugin, or via an Omniverse Connector.
Companies like Apple, Pixar and NVIDIA have partnered to bring advanced physics capabilities to USD. Now, Blender and NVIDIA have collaborated to provide USD support in the upcoming release of Blender 3.0 for the millions of artists who use the software. NVIDIA has contributed USD and material support to a Blender 3.0 alpha build, which will be available soon for all. “Thanks to USD, Blender artists can have high-quality access to studio pipelines and collaboration platforms such as Omniverse,” said Ton Roosendaal, Chairman of the Blender Foundation.
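Blender has shipped a basic USD exporter since version 2.82, which the 3.0 work builds on. As a minimal sketch of the workflow (the file path and option values are illustrative), exporting a scene to USD from Blender's Python console looks like this:

```python
# Run inside Blender's Python console (Blender 2.82+ ships wm.usd_export).
# The file path is illustrative; Blender 3.0's deeper USD and material
# support extends this same entry point.
import bpy

bpy.ops.wm.usd_export(
    filepath="/tmp/scene.usd",
    selected_objects_only=False,  # export the whole scene
    export_materials=True,        # include material assignments
)
```

The resulting .usd file can then be opened by Omniverse, or by any other USD-aware application, alongside layers contributed from other tools.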
On the other hand, Adobe and NVIDIA are collaborating on a Substance 3D plugin that will unlock new material editing abilities for Omniverse and Substance 3D users. In other words, creators and artists will work directly with Substance materials either sourced from the Substance 3D Asset platform or created in Substance 3D applications.
Sebastian Deguy, vice president of 3D at Adobe, said that as the industry standard, Substance will strengthen the Omniverse ecosystem by empowering 3D creators with access to Substance 3D materials. Besides Adobe and Blender, other collaborators include Autodesk, Bentley Systems, Clo Virtual Fashion, Epic Games, Esri, Golaem, Graphisoft, Lightmap, Maxon, McNeel & Associates, Onshape from PTC, Reallusion, Trimble, and Wrench, Inc.
Use cases
In December last year, NVIDIA released Omniverse to open beta, and in April this year, NVIDIA launched Omniverse Enterprise. Since then, a wide variety of industries have quickly put it to work, the NVIDIA team revealed. Omniverse has been downloaded by over 50,000 individual creators since its open beta launched in December.
Here are some examples of companies using Omniverse:
Based in New York, SHoP Architects is using Omniverse for real-time visualisation and collaboration.
At Foster + Partners, the design and architecture firm behind Apple’s headquarters and London’s famed 30 St Mary Axe office tower, also known as ‘the Gherkin,’ designers in 14 countries across the globe created buildings together in their Omniverse shared virtual space.
Visual effects pioneer Industrial Light & Magic is experimenting with Omniverse to bring together multiple studios’ internal and external tool pipelines.
Telecom company Ericsson uses Omniverse to simulate 5G wave propagation in real-time, thereby minimising multi-path interference in dense city environments.
Infra engineering software company Bentley Systems is using Omniverse to create a suite of applications on its platform. Its platform makes a 4D infra ‘digital twin’ to simulate an infrastructure asset’s construction, monitor and optimise its performance, etc.
Interestingly, the long-running, Emmy Award-winning animated television series South Park is also exploring Omniverse to enable several artists to collaborate on scenes and optimise their production time. J. J. Franzen, CIO at South Park, said NVIDIA Omniverse would give multiple animators at their studio the ability to collaborate on a single scene simultaneously. “Using our NVIDIA RTX A6000 GPUs and Omniverse will provide our creative geniuses more opportunities to cause ever greater trouble on the show,” he added. analyticsindiamag.com
This year's virtual edition of SIGGRAPH, the premier computer graphics conference, formally ended yesterday after a few days of announcements and sessions.
At SIGGRAPH 2021, NVIDIA presented some of its latest research on advancing real-time graphics. Arguably the most interesting demonstration was that of Neural Radiance Caching, a new technique specifically designed for path-traced global illumination.
Neural Radiance Caching combines RTX’s neural network acceleration hardware (NVIDIA Tensor Cores) and ray tracing hardware (NVIDIA RT Cores) to create a system capable of fully dynamic global illumination that works with all kinds of materials, be they diffuse, glossy, or volumetric. It handles fine-scale textures such as albedo, roughness, or bump maps, and scales to large, outdoor environments, requiring neither auxiliary data structures nor scene parameterizations.
Combined with NVIDIA’s state-of-the-art direct lighting algorithm, ReSTIR, Neural Radiance Caching can improve rendering efficiency of global illumination by up to a factor of 100—two orders of magnitude.
At the heart of the technology is a single tiny neural network that runs up to 9x faster than TensorFlow v2.5.0. Its speed makes it possible to train the network live during gameplay in order to keep up with arbitrary dynamic content. On an NVIDIA RTX 3090 graphics card, Neural Radiance Caching can provide over 1 billion global illumination queries per second.
NVIDIA shared the CUDA source code for Neural Radiance Caching so that developers and researchers may experiment with it further. There's also a detailed technical paper available for perusal if you want to learn the math behind this technology.
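For intuition only, here is a toy PyTorch sketch of the idea behind an online-trained radiance cache: a small MLP regressing radiance from path-traced samples, updated every frame so it tracks dynamic content. The layer sizes, inputs and loss below are assumptions for illustration; the paper's fully fused CUDA network and input encodings differ substantially:

```python
# Conceptual sketch of an online-trained radiance cache, in PyTorch.
# NVIDIA's real implementation is a fully fused CUDA network (see the
# released source); widths, inputs and loss here are assumptions.
import torch
import torch.nn as nn

class RadianceCache(nn.Module):
    def __init__(self, in_dim=9, width=64, depth=4):
        super().__init__()
        # Inputs here: hit position (3), view direction (3), normal (3).
        # A simplification; the paper feeds richer encodings of these.
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.ReLU()]
            d = width
        layers.append(nn.Linear(d, 3))  # outgoing RGB radiance
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

cache = RadianceCache()
opt = torch.optim.Adam(cache.parameters(), lr=1e-3)

def training_step(queries, target_radiance):
    """One per-frame update from freshly path-traced samples, so the
    cache tracks dynamic lighting instead of being baked offline."""
    opt.zero_grad()
    loss = nn.functional.mse_loss(cache(queries), target_radiance)
    loss.backward()
    opt.step()
    return loss.item()

# Fake one frame's worth of samples just to show the call shape.
print(training_step(torch.randn(4096, 9), torch.rand(4096, 3)))
```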
It's not all about lighting, however. NVIDIA's graphics researchers have also worked on the so-called neural reflectance field textures, or NeRF-Tex for short, which aim to model complex materials such as fur, fabric, or grass in a more accurate way.
Instead of using classical graphics primitives to model the structure, we propose to employ a versatile volumetric primitive represented by a neural reflectance field (NeRF-Tex), which jointly models the geometry of the material and its response to lighting. The NeRF-Tex primitive can be instantiated over a base mesh to “texture” it with the desired meso and microscale appearance. We condition the reflectance field on user-defined parameters that control the appearance. A single NeRF texture thus captures an entire space of reflectance fields rather than one specific structure. This increases the gamut of appearances that can be modeled and provides a solution for combating repetitive texturing artifacts.
The related technical paper can be found here. For a visual demonstration of NVIDIA's latest graphics research projects, including Neural Radiance Caching and NeRF-Tex, check out the footage below.
NVIDIA Brings Metaverse Momentum, Research Breakthroughs and New Pro GPU to SIGGRAPH August 13, 2021 by Brian Caulfield
Award-winning research, stunning demos, a sweeping vision for how NVIDIA Omniverse will accelerate the work of millions more professionals, and a new pro RTX GPU were the highlights at this week’s SIGGRAPH pro graphics conference.
Kicking off the week, NVIDIA’s SIGGRAPH special address, featuring Richard Kerris, vice president of Omniverse, and Sanja Fidler, senior director of AI research, with an intro by Pixar co-founder Alvy Ray Smith, gathered more than 1.6 million views in just 48 hours.
A documentary launched Wednesday, “Connecting in the Metaverse: The Making of the GTC Keynote,” a behind-the-scenes view into how a small team of artists was able to blur the lines between real and rendered in NVIDIA’s GTC21 keynote, achieved more than 360,000 views within the first 24 hours.
And the inaugural gathering of the NVIDIA Omniverse User Group brought more than 400 graphics professionals from all over the world together to learn about what’s coming next for Omniverse, to celebrate the work of the community, and announce the winners of the second #CreatewithMarbles: Marvelous Machine contest.
“Your work fuels what we do,” Rev Lebaredian, vice president of Omniverse engineering and simulation at NVIDIA, told the scores of Omniverse users gathered for the event.
NVIDIA has been part of the SIGGRAPH community since 1993, with close to 150 papers accepted and NVIDIA employees leading more than 200 technical talks.
And SIGGRAPH has been the venue for some of NVIDIA’s biggest announcements — from OptiX in 2010 to the launch of NVIDIA RTX real-time ray tracing in 2018.
NVIDIA RTX A2000 Makes RTX More Accessible to More Pros
Since then, thanks to its powerful real-time ray tracing and AI acceleration capabilities, NVIDIA RTX technology has transformed design and visualization workflows for the most complex tasks.
Introduced Tuesday, the new NVIDIA RTX A2000 — our most compact, power-efficient GPU — makes it easier to access RTX from anywhere. With the unique packaging of the A2000, there are many new form factors, from backs of displays to edge devices, that are now able to incorporate RTX technology.
The RTX A2000 is designed for everyday workflows, so more professionals can develop photorealistic renderings, build physically accurate simulations and use AI-accelerated tools.
The GPU has 6GB of memory capacity with error correction code, or ECC, to maintain data integrity for uncompromised computing accuracy and reliability.
With remote work part of the new normal, simultaneous collaboration with colleagues on projects across the globe is critical.
NVIDIA RTX technology powers Omniverse, our collaboration and simulation platform that enables teams to iterate together on a single 3D design in real time while working across different software applications.
The A2000 will serve as a portal into this world for millions of designers.
Building the Metaverse
NVIDIA also announced a major expansion of NVIDIA Omniverse — the world’s first simulation and collaboration platform — through new integrations with Blender and Adobe that will open it to millions more users.
Omniverse makes it possible for designers, artists and reviewers to work together in real-time across leading software applications in a shared virtual world from anywhere.
Blender, the world’s leading open-source 3D animation tool, will now have Universal Scene Description, or USD, support, enabling artists to access Omniverse production pipelines.
Adobe is collaborating with NVIDIA on a Substance 3D plugin that will bring Substance Material support to Omniverse, unlocking new material editing capabilities for Omniverse and Substance 3D users.
So far, professionals at over 500 companies, including BMW, Volvo, SHoP Architects, South Park and Lockheed Martin, are evaluating the platform. Since the launch of its open beta in December, Omniverse has been downloaded by over 50,000 individual creators.
NVIDIA Research Showcases Digital Avatars at SIGGRAPH
More innovations are coming.
Highlighting their ongoing contributions to cutting-edge computer graphics, NVIDIA researchers put four AI models to work to serve up a stunning digital avatar demo for SIGGRAPH 2021’s Real-Time Live showcase.
The demo featured tools to generate digital avatars from a single photo, animate avatars with natural 3D facial motion and convert text to speech.
The demo was just one highlight among a host of contributions from the more than 200 scientists who make up the NVIDIA Research team at this year’s conference.
These innovations quickly become tools that NVIDIA is hustling to bring to graphics professionals.
Created to help professionals and students master skills that will help them quickly advance their work, NVIDIA’s Deep Learning Institute held sessions covering a range of key technologies at SIGGRAPH.
NVIDIA also showcased how its technology is transforming workflows in several demos, including:
Factory of the Future: Participants explored the next era of manufacturing with this demo, which showcases BMW Group’s factory of the future — designed, simulated, operated and maintained entirely in NVIDIA Omniverse.
Multiple Artists, One Server: SIGGRAPH attendees could learn how teams can accelerate visual effects production with the NVIDIA EGX platform, which enables multiple artists to work together on a powerful, secure server from anywhere.
3D Photogrammetry on an RTX Mobile Workstation: Participants got to watch how NVIDIA RTX-powered mobile workstations help drive the process of 3D scanning using photogrammetry, whether in a studio or a remote location.
Interactive Volumes with NanoVDB in Blender Cycles: Attendees learned how NanoVDB makes volume rendering more GPU memory efficient, meaning larger and more complex scenes can be interactively adjusted and rendered with NVIDIA RTX-accelerated ray tracing and AI denoising.
NVIDIA Corporation to report Q2 earnings on Wednesday Aug. 18
Semiconductor giant NVIDIA Corporation (NASDAQ: NVDA) will report its Q2 earnings on Wednesday, Aug. 18, after the market close. Analysts expect the chipmaker to produce an EPS of $1.02 on revenues of $6.32 billion.
During the past three months, shares of NVIDIA have gained momentum, as the demand for chips remained strong. The stock, which went through a 4:1 split in July, closed at $201.88 on Friday, after surging 55% this year.
The Santa Clara, California-based chip manufacturer is the biggest producer of graphics chips used in personal computer gaming. Over the past few years, NVDA has successfully adapted its technology for the Artificial Intelligence market, creating an additional, new, multi-billion-dollar line of business.
In May, NVIDIA provided a bullish forecast on demand for chips used in gaming PCs, data centers and cryptocurrency mining. Revenue in the current quarter will be about $6.3 billion, plus or minus 2%. A $400 million chunk of second-quarter revenue will come from special chips the company has created for use by cryptocurrency miners.