
Technology Stocks: NVIDIA Corporation (NVDA)


From: Frank Sully 8/9/2021 10:29:23 AM
   of 2632
 
Can You Trust The Solutions That AI Technologies Deliver? With Mathematical Optimization, You Can



Edward Rothberg
Forbes Councils Member

Forbes Technology Council | COUNCIL POST | Membership (fee-based)
Innovation

Edward Rothberg is CEO and Co-Founder of Gurobi Optimization, which produces the world’s fastest mathematical optimization solver.

Every day, more and more enterprises are using AI technologies — such as machine learning, mathematical optimization and heuristics — to make high-stakes decisions in industries like healthcare, electric power, logistics, financial services and in public sector areas like the military, infrastructure and criminal justice.

But as our reliance on these AI technologies to make critical decisions has increased, concerns over the reliability of the solutions delivered by these technologies have grown.

Numerous high-profile incidents — like self-driving cars failing to recognize slightly modified stop signs, machine learning-based scoring systems demonstrating racial bias when predicting the likelihood of criminals committing future crimes, Google Flu Trends wrongly predicting flu outbreaks based on search data and the algorithms used by Apple to determine credit-worthiness apparently discriminating against women — have shone a spotlight on some of the inherent shortcomings and unintended biases of AI technologies, shaking our confidence in the accuracy of these solutions.

Indeed, these and other incidents have left many wondering: Can we really trust the solutions delivered by AI technologies?

The Importance Of Interpretability

The root of the problem is that many of these AI tools are black boxes – meaning that users have little or no understanding of their inner workings and how they arrive at a given solution or make a particular automated decision.

The opaqueness of many AI technologies — which are based on sophisticated algorithms and complex mathematical models — has fueled concerns that AI may be producing inaccurate solutions and perpetuating bias in decision-making — and sowed distrust in AI-based solutions and decisions. This has spurred demands for greater transparency and accountability (with some formally calling for a “right to explanation” for decisions made by algorithms) and illuminated the importance of interpretability in AI.

Interpretability — the capability to understand how an AI system works and explain why it generated a solution or made a decision — is a hot topic in the business world and an area of active research, with developers across the AI software space striving to make technologies such as machine learning more interpretable. There has been significant progress with the introduction of new approaches that help improve the interpretability of machine learning and other AI tools.

There are, however, some AI technologies that are inherently interpretable — and mathematical optimization is, without a doubt, one of these technologies.

Indeed, interpretability is (and always has been) an essential characteristic and a key strength of mathematical optimization. As the CEO of a mathematical optimization software firm, I witness every day how organizations across the business world depend on this prescriptive analytics technology to deliver solutions they can understand, trust and use to make pivotal decisions.

Assessing Interpretability

How can we gauge the interpretability of AI technologies?

The US National Institute of Standards and Technology (NIST) has developed four principles that encompass the “core concepts of explainable AI” — and this framework provides a useful lens through which we can explore and evaluate the interpretability of AI technologies.

Let’s take a look at how mathematical optimization stacks up against these four NIST principles:

1. Knowledge Limits

AI systems should only operate within the “limits” and “under conditions” they were designed for.

Mathematical optimization systems consist of two components: a mathematical optimization solver (an algorithmic engine) and a mathematical model (a detailed, customized representation of your business problem, which encapsulates all of your decision-making processes, business objectives and constraints). Users of mathematical optimization can design their models and thereby define what the “limits” and “conditions” of their systems are.
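
To make the solver-plus-model split concrete, here is a minimal sketch of a tiny optimization model in Python. It assumes the gurobipy package (and a Gurobi license) is available; the products, profits and resource limits are invented for illustration.

```python
# A minimal sketch, not Gurobi's documentation example: a tiny production-planning
# model, assuming the gurobipy package (and a Gurobi license) is installed.
# The products, profits and resource limits below are invented for illustration.
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("tiny_plan")

# Decision variables: how many units of product A and product B to make
a = m.addVar(lb=0, name="units_A")
b = m.addVar(lb=0, name="units_B")

# Business objective: maximize profit
m.setObjective(40 * a + 30 * b, GRB.MAXIMIZE)

# Business rules (constraints): limited machine hours and raw material
m.addConstr(2 * a + 1 * b <= 100, name="machine_hours")
m.addConstr(1 * a + 2 * b <= 80, name="raw_material")

m.optimize()
print(f"make {a.X:.0f} of A and {b.X:.0f} of B for profit {m.ObjVal:.0f}")
```

The model (variables, objective, constraints) defines the "limits" and "conditions" of the system; the solver only searches within them.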

2. Explanation

AI systems should “supply evidence, support or reasoning” for each output or solution.

A mathematical optimization model is essentially a digital twin of your business environment, and the constraints (or business rules) that must be satisfied are embedded into that model. Any solution generated by your mathematical optimization system can easily be checked against those constraints.
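
Because the business rules are stated explicitly in the model, any returned solution can be audited against them directly. A toy illustration in plain Python, using the same made-up constraints and the solution from the sketch above:

```python
# Toy illustration: audit a returned solution line by line against the model's
# constraints. The solution and constraint data match the made-up model above.
solution = {"units_A": 40.0, "units_B": 20.0}

constraints = [
    ("machine_hours", lambda s: 2 * s["units_A"] + 1 * s["units_B"] <= 100),
    ("raw_material",  lambda s: 1 * s["units_A"] + 2 * s["units_B"] <= 80),
]

for name, rule in constraints:
    print(name, "satisfied" if rule(solution) else "VIOLATED")
```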

3. Explanation Accuracy

The explanations provided by AI systems should “accurately describe how the system came to its conclusion.”

Mathematical optimization is a problem-solving technology that can rapidly and comprehensively comb through trillions or more possible solutions to incredibly complex business problems and find the optimal one. Since mathematical optimization conducts this search in a systematic fashion, the solutions delivered by this AI technology come with a mathematically backed guarantee of quality — and this fact can be audited and validated.

4. Meaningful

AI systems should “provide explanations that are understandable to individual users.”

Most AI tools — like neural networks and random forests — run on black box models. You feed them data, they work their magic and automatically spit out a solution. It’s essentially impossible (even for many developers) to gain insight into how these systems actually work or why they are making specific predictions or decisions. Mathematical optimization models, in contrast, are transparent and interpretable and are meaningful by design (as they capture the fundamental features of your real-world operating environment). The models themselves (and the solutions they deliver) reflect reality — and are thus understandable for users.

As you can see, mathematical optimization fulfills all four NIST principles and excels in the area of interpretability. With mathematical optimization, you can attain a deep understanding of how and why the AI system makes certain decisions — and thereby gain trust in those decisions.

It’s important to note that mathematical optimization is not the only AI tool that can deliver interpretable solutions — there are a number of other AI technologies that have this capability (and other technologies that are developing in this area).

When you’re deciding whether or not to invest in one of these AI technologies, one critical factor to consider is their interpretability — and the NIST principles provide a good framework through which to assess this.

Understanding “The Why” Of AI

The issue of interpretability in AI continues to captivate the business world, with “explainable AI” trends and technological breakthroughs grabbing news headlines and “explainable AI” initiatives topping the agendas of IT executives. While many companies in the AI space grapple with questions of how to make their technologies more transparent and trustworthy, there are already AI tools out there — like mathematical optimization — that are innately equipped to deliver interpretable, reliable and optimal solutions to today’s problems.



From: Frank Sully 8/9/2021 11:14:38 AM
   of 2632
 
Synopsys, Cadence, Google And NVIDIA All Agree: Use AI To Help Design Chips



Karl Freund
Contributor
Enterprise Tech

Synopsys created a buzz in 2020, and now Google, NVIDIA, and Cadence Design have joined the party. What lies ahead?

Introduction

Designing modern semiconductors can take years and scores of engineers armed with state-of-the-art EDA design tools. But the semiconductor landscape and the world around us is being revolutionized by hundreds of new chips, primarily driven by AI. Some entrepreneurial thought leaders believe that the expensive and lengthy chip design process could shrink from 2-3 years to 2-3 months if hardware development was to become more agile, more autonomous. And chief among a new breed of agile design tools is AI itself.

The Semiconductor Design Landscape

This discussion began in earnest when the EDA leader Synopsys announced DSO.ai, Design Space Optimization AI, a software product that could more autonomously identify optimal ways to arrange silicon components (layouts) on a chip to reduce the area and reduce power consumption, even while increasing performance. Using reinforcement learning, DSO.ai could evaluate billions of alternatives against design goals, and produce a design that was significantly better than that produced by talented engineers. The size of the problem/solution space DSO.ai addresses is staggering: there are something like 10^90,000 possible ways to place components on a chip. That compares to 10^360 possible moves in the game of Go, which was mastered by Google AI in 2016. Since reinforcement learning can play Go better than the world champion, one could conceivably design a better chip if one is willing to spend the compute time to do it.
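
To give a feel for what "evaluating alternatives against design goals" means, here is a deliberately toy sketch in Python: random search over placements of a few blocks on a small grid, scored by a simple wirelength proxy. It is not DSO.ai or any real EDA flow; a reinforcement-learning approach would learn which placements to try next rather than sampling blindly.

```python
# Toy stand-in for the idea (not Synopsys' DSO.ai): score candidate placements of
# a few blocks on a grid and keep the best one found by random search.
import random

GRID, BLOCKS = 8, 5  # an 8x8 grid and 5 blocks to place

def wirelength(placement):
    # Proxy objective: total Manhattan distance between consecutive blocks
    return sum(abs(x1 - x2) + abs(y1 - y2)
               for (x1, y1), (x2, y2) in zip(placement, placement[1:]))

best, best_cost = None, float("inf")
for _ in range(10_000):
    cells = random.sample([(x, y) for x in range(GRID) for y in range(GRID)], BLOCKS)
    cost = wirelength(cells)
    if cost < best_cost:
        best, best_cost = cells, cost

print("best placement:", best, "wirelength:", best_cost)
```

Even this toy grid has millions of candidate layouts; real chips have unimaginably more, which is why a learned search policy matters.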

Results are quite impressive, realizing 18% faster operating frequency at 21% lower power, while reducing engineering time from six months to as little as one. In a recent interview, Synopsys’ Founder and Co-CEO Aart de Geus disclosed that Samsung has a working chip in-house today that was designed with DSO.ai. This would indeed be the world’s first use of AI to create a chip layout in production – from RTL to tapeout.

Recently Google published results of doing something similar, as has NVIDIA. And Cadence Design Systems just announced an AI-based optimization platform similar to Synopsys DSO.ai. Before we take a look at these efforts, let’s back up a little and look at the entire semiconductor design space. A good place to start is the Gajski-Kuhn Chart, which outlines all the steps of chip design along three axes: the Behavioral level, where architects define what the chip is supposed to do; the Structural level, where they determine how the chip is organized; and the Geometry level, where engineers define how the chip is laid out.

Based on this model, each step towards the center (which is when the team “tapes out” the chip to the fabrication partner) feeds the work in the next phase in a clockwise direction. To date, all applications of AI have been in the geometry space, or physical design, to address the waning of Moore’s Law.

Synopsys DSO.ai

As I covered at launch, Synopsys DSO.ai was the first entrant to apply AI to the physical design process, producing floor plans that consumed lower power, ran at higher frequencies and occupied less space than the best an experienced designer could produce. What really attracted my attention was the profound effect of AI on productivity; DSO.ai users were able to achieve in a few days what used to take teams of experts many weeks.

Google Research and NVIDIA Research

Both companies have produced research papers that describe the use of reinforcement learning to assist in the physical design of the floor plan. In Google’s case, AI is being used to lay out the floor plan of the next generation TPU Chip and the company is investigating additional uses of AI such as in architectural optimization.

NVIDIA similarly has focused on that same low-hanging fruit: floorplanning, and with all the compute capacity they have in-house, I’d expect NVIDIA to continue to eat their own dogfood and use AI to design better AI chips.

google.com



From: Frank Sully 8/9/2021 4:54:21 PM
   of 2632
 
NVIDIA Graphics Research

One Minute Video




From: Frank Sully 8/9/2021 7:51:12 PM
   of 2632
 
NVIDIA maintains tight grip on market for AI processors in cloud and data centers

Omdia’s AI Processors for Cloud and Data Center Forecast Report has declared NVIDIA the frontrunner in processors for artificial intelligence (AI) in the cloud and data centers.

According to the report, NVIDIA swept up an impressive 80.6% share of global revenue for cloud and data center AI processors - a total of $3.2 billion in 2020, up from $1.8 billion in 2019.

The report labels NVIDIA’s market dominance as ‘supremacy in the market for GPU-derived chips’ - the likes of which are commonly deployed in servers, workstations, and expansion cards across cloud and data center equipment.

Omdia principal analyst for advanced computing, Jonathan Cassell, says NVIDIA employs key strategies to maintain its growth.

“With their capability to accelerate deep-learning applications, GPU-based semiconductors became the first type of AI processor widely employed for AI acceleration. And as the leading supplier of GPU-derived chips, NVIDIA has established itself and bolstered its position as the AI processor market leader for the key cloud and data center market,” explains Cassell.

NVIDIA’s dominance is also leading to intense market competition as suppliers battle it out and claim their share of the total $4 billion market revenue for cloud and data center AI processors. Total market revenue could reach $37.6 billion by 2026.

“Despite the onslaught of new competitors and new types of chips, NVIDIA’s GPU-based devices have remained the default choice for cloud hyperscalers and on-premises data centers, partly because of their familiarity to users,” says Cassell.

Cassell points to the NVIDIA Compute Unified Device Architecture (CUDA) Toolkit, which is used extensively in the AI software development community. This, by default, provides a boost for NVIDIA’s associated products such as GPU chips.

But market competition will only increase, particularly as the market looks towards non-NVIDIA GPU chips and other AI processors in the future.

According to Omdia’s research, other major market players in the cloud and data center AI processor market include Xilinx, Google, Intel, and AMD.

Xilinx ranked second behind NVIDIA. Xilinx provides field-programmable gate array (FPGA) products commonly used for AI inferencing in cloud and data center servers.

Google ranked third. Its Tensor Processing Unit (TPU) AI ASIC is employed extensively in its own hyperscale cloud operations.

Intel ranked fourth. Its Habana AI proprietary-core AI ASSPs and its FPGA products are designed for AI cloud and data center servers.

AMD ranked fifth for its GPU-derived AI ASSPs for cloud and data center servers.

channellife.com.au



From: Frank Sully 8/10/2021 2:06:46 PM
   of 2632
 
Nvidia expands Omniverse with a new GPU, new collaborations

The new RTX A2000 is a powerful, low-profile, dual-slot GPU for professionals, with 6GB of ECC graphics memory in a compact form factor. Nvidia says it will expand access to Omniverse, its 3D collaboration platform.



By Stephanie Condon for Between the Lines | August 10, 2021 -- 16:00 GMT (09:00 PDT) | Topic: Processors

Nvidia on Tuesday announced a series of ways it plans to bring the Omniverse design and collaboration platform to a vastly larger audience. Those plans include new integrations with Blender and Adobe, companies that will extend the potential reach of Omniverse by millions. Nvidia is also introducing the new RTX A2000 GPU, bringing the RTX technology that powers Omniverse to a wide range of mainstream computers.

Nvidia rolled out Omniverse in open beta back in December, giving 3D designers a shared virtual world from which they can collaborate across different software applications and from different geographic locations. Earlier this year, the company introduced Omniverse Enterprise, bringing the platform to the enterprise community via a familiar licensing model.

"We are building Omniverse to be the connector of the physical and virtual worlds," Richard Kerris, VP of Omniverse for Nvidia, said to reporters last week. "We believe that there will be more content and experiences shared in virtual worlds than in physical worlds. And we believe that there will be amazing exchange markets and economic situations that will be first built in the virtual world... Omniverse is an exchange of these vital worlds. We connect everything and everyone, through a baseline architecture that is familiar to existing tools that are out there and existing workflows."

Nvidia unveiled the RTX A2000 GPU to bring RTX technology to mainstream workstations.

In a blog post, Nvidia VP Bob Pette wrote that the new A2000 GPU "would serve as a portal" to Omniverse "for millions of designers." The A2000 is Nvidia's most compact, power-efficient GPU for standard and small-form-factor workstations.

The GPU has 6GB of memory capacity with error correction code (ECC) to maintain data integrity -- a feature especially important for industries such as healthcare and financial services.

Based on the Nvidia Ampere architecture, it features 2nd Gen RT Cores, enabling real-time ray tracing for professional workflows. It offers up to 5x the rendering performance of the previous generation with RTX on. It also features 3rd Gen tensor cores to enable AI-augmented tools and applications, as well as CUDA cores with up to 2x the FP32 throughput of the previous generation.

Speaking to reporters, Pette said the A2000 would enable RTX in millions of additional mainstream computers. More designers will have access to the real-time ray tracing and AI acceleration capabilities that RTX offers. "This is the first foray of RTX into what is the largest volume segment of GPUs for Nvidia," Pette said.

Among the first customers using the RTX A2000 are Avid, Cuhaci & Peterson and Gilbane Building Company.

The A2000 desktop GPU will be available in workstations from manufacturers including ASUS, BOXX Technologies, Dell Technologies, HP and Lenovo, as well as Nvidia's global distribution partners, starting in October.

Meanwhile, Nvidia is encouraging the adoption of Omniverse by supporting Universal Scene Description (USD), an interchange framework invented by Pixar in 2012. USD was released as open-source software in 2016, providing a common language for defining, packaging, assembling and editing 3D data.
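
For a sense of what "a common language for defining, packaging, assembling and editing 3D data" looks like in practice, here is a minimal sketch using Pixar's open-source USD Python bindings (the pxr module), assuming a USD build with Python support is installed; the file and prim names are arbitrary.

```python
# Minimal sketch of authoring a USD file with Pixar's open-source Python bindings.
# Assumes a USD build with Python support (the pxr package) is installed.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("hello_world.usda")         # a new USD layer on disk
xform = UsdGeom.Xform.Define(stage, "/hello")           # a transform prim
sphere = UsdGeom.Sphere.Define(stage, "/hello/world")   # a sphere under it
sphere.GetRadiusAttr().Set(2.0)                         # author an attribute
stage.GetRootLayer().Save()                             # write the .usda file
```

Any USD-aware tool can then open, layer over, or edit that same scene description, which is what makes it useful as an interchange format.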

Omniverse is built on the USD framework, giving other software makers different ways to connect to the platform. Nvidia announced Tuesday that it's collaborating with Blender, the world's leading open-source 3D animation tool, to provide USD support to the upcoming release of Blender 3.0. This will give Blender's millions of users access to Omniverse production pipelines. Nvidia is contributing USD and materials support in Blender 3.0 alpha USD, which will be available soon.

Nvidia has also collaborated with Pixar and Apple to define a common approach for expressing physically accurate models in USD. More specifically, they've developed a new schema for rigid-body physics, the math that describes how solids behave in the real world (for example, how marbles would roll down a ramp). This will help developers create and share realistic simulations in a standard way.

Nvidia also announced a new collaboration with Adobe on a Substance 3D plugin that will bring Substance Material support to Omniverse. This will give Omniverse and Substance 3D users new material editing capabilities.

Expanding Omniverse Access

Nvidia on Tuesday also announced that Omniverse Enterprise, currently in limited early access, will be available later this year on a subscription basis from its partner network. That includes ASUS, BOXX Technologies, Dell Technologies, HP, Lenovo, PNY and Supermicro.

The company is also extending its Developer Program to include Omniverse. This means the developer community for Nvidia will have access to Omniverse with custom extensions, microservices, source code, examples, resources and training.

zdnet.com



From: Frank Sully 8/10/2021 2:15:40 PM
1 Recommendation   of 2632
 
Nvidia Has Turned $10,000 Into $250,000. Here's Why It Can Do It Again

The graphics specialist can zoom higher thanks to a solid set of catalysts.



Harsh Chauhan
(TMFTechJunk13)

Aug 10, 2021 at 6:30AM

Key Points

Nvidia can grow at a faster pace in the coming years as compared to the last five years.
The company's domination of the graphics card market opens up a huge opportunity that could supercharge sales growth.
Additional catalysts in the form of the data center and automotive markets strengthen the bullish case.

If you had $10,000 to invest at the beginning of 2016 and bought shares of Nvidia (NASDAQ: NVDA) using that money, your initial investment would be worth just about $250,000 right now.



NVDA data by YCharts

Nvidia has beaten the broader market handsomely over the years thanks to its strong suite of products, which has helped it attract millions of customers and dominate a fast-growing space. It is now offering products that deliver more bang for their buck to existing customers, allowing it to increase average selling prices (ASPs) and bring new customers into the fold. As such, Nvidia can repeat -- or improve upon -- its terrific stock market performance once again in the coming years.

Let's look at the reasons why.

Nvidia's outstanding growth can last for a long time

Nvidia's dominance of the GPU (graphics processing unit) market has accelerated its revenue and earnings growth over the years. In fiscal 2016, the company had reported just $5 billion in annual revenue and $929 million in adjusted net income. In fiscal 2021, which ended in January this year, Nvidia's annual revenue had ballooned to $16.7 billion and adjusted net income had jumped to $6.2 billion.

This translates into a compound annual revenue growth rate of 27% over the five-year period, while non-GAAP net income grew 46% per year. Of late, Nvidia has been outpacing its historical growth. Its revenue for the first quarter of fiscal 2022 surged 84% year over year to $5.66 billion, while adjusted net income was up 107% to $2.3 billion.
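
Those growth rates follow directly from the reported figures; a quick check of the arithmetic in Python:

```python
# Quick arithmetic check of the article's figures (dollar amounts in billions).
def cagr(start, end, years):
    # compound annual growth rate
    return (end / start) ** (1 / years) - 1

print(f"Revenue CAGR, FY2016-FY2021:    {cagr(5.0, 16.7, 5):.0%}")   # ~27%
print(f"Net income CAGR, FY2016-FY2021: {cagr(0.929, 6.2, 5):.0%}")  # ~46%
```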

Analysts forecast that Nvidia's surge will continue this fiscal year, with revenue expected to jump 49% over last year and earnings per share anticipated to increase to the tune of 58%. I think that Nvidia can sustain its outstanding growth beyond this year because of a few simple reasons.

First, Nvidia controls 80% of the discrete GPU market, according to Jon Peddie Research. The discrete GPU market is expected to generate $54 billion in annual revenue by 2025 as compared to $23.6 billion last year. Nvidia's market share puts it in pole position to corner a major chunk of the additional revenue opportunity, and it is unlikely to yield ground to smaller rival Advanced Micro Devices (NASDAQ: AMD) because of its technological advantage.

According to a survey by game distribution service Steam, Nvidia's flagship RTX 3090 card is outselling AMD's latest RX 6000 series cards by a ratio of 11:1. What's surprising is that the RTX 3090 is outperforming AMD's entire lineup despite its high starting price of $1,499, which makes it significantly more expensive than AMD's flagship RX 6900 XT that starts at $999.

This may be because Nvidia's latest RTX 30 series cards outperform AMD's offerings in independent benchmarks and are also priced competitively. For instance, the RTX 3080 that starts at $699 reportedly outperforms the pricier RX 6900 XT. Meanwhile, Nvidia takes a substantial lead when it comes to identically priced offerings. The RTX 3070 Ti that starts at $599 is reportedly 21% faster than AMD's RX 6800 that starts at $579, indicating that Nvidia offers more bang for the buck.

As a result, Nvidia is commanding a higher price from customers for its RTX 30 series cards. The average selling price of Nvidia's RTX 30 series cards hit $360 in the first six months of their launch, up 20% from the previous generation Turing cards.

With the gaming business producing 49% of Nvidia's total revenue in Q1, the bright prospects of this segment will play a critical role in helping the company deliver solid upside to investors in the future. This, however, isn't the only catalyst to look out for.

Two more reasons to go long

While Nvidia's gaming business will provide the base for its long-term growth, its data center and automotive businesses will bring additional catalysts into play.

The data center segment, for instance, generated just $339 million in revenue in fiscal 2016. The segment's fiscal 2021 revenue jumped to $6.7 billion and accounted for 40% of the total revenue. Nvidia investors can expect the data center business to keep booming as the company is attacking a substantial opportunity in this space through multiple chip platforms.

Nvidia's data center GPUs are already a hit among major cloud service providers thanks to their ability to tackle artificial intelligence and machine learning workloads. The company is now doubling down on new opportunities such as data processing units (DPUs) and CPUs (central processing units), strengthening its position in the data center accelerator space.

According to a third-party estimate, the data center accelerator market was worth $4.2 billion last year. That number is expected to jump to $53 billion by 2025, opening another huge opportunity for the chipmaker. And finally, the automotive segment is expected to turn into another happy hunting ground for Nvidia.

The company generated just $536 million in revenue from the automotive segment last year. However, Nvidia says that it has built an automotive design win pipeline worth over $8 billion through fiscal 2027, partnering with several well-known OEMs (original equipment manufacturers) such as Mercedes-Benz, Audi, Hyundai, Volvo, Navistar, and others. So, Nvidia's automotive business seems ready to step on the gas.

In the end, it can be said that Nvidia now has bigger growth drivers compared to 2016. More importantly, it is in a strong position to tap into the multibillion-dollar end-market opportunities, making it a top growth stock investors can comfortably buy and hold for at least the next five years.

fool.com



From: Frank Sully 8/10/2021 9:46:53 PM
   of 2632
 
A Code for the Code: Simulations Obey Laws of Physics with USD

Universal Scene Description now sports an extension so objects can be simulated to act just like they would in the real world thanks to the collaboration of Apple, NVIDIA and Pixar Animation Studios.

August 10, 2021 by Richard Kerris



Life in the metaverse is getting more real.

Starting today, developers can create and share realistic simulations in a standard way. Apple, NVIDIA and Pixar Animation Studios have defined a common approach for expressing physically accurate models in Universal Scene Description (USD), the common language of virtual 3D worlds. Pixar released USD and described it in 2016 at SIGGRAPH. It was originally designed so artists could work together, creating virtual characters and environments in a movie with the tools of their choice.

Fast forward, and USD is now pervasive in animation and special effects. USD is spreading to other professions, like architects who can benefit from their tools to design and test everything from skyscrapers to sports cars and smart cities.

Playing on the Big Screen

To serve the needs of this expanding community, USD needs to stretch in many directions. The good news is Pixar designed USD to be open and flexible.

So, it’s fitting the SIGGRAPH 2021 keynote provides a stage to describe USD’s latest extension. In technical terms, it’s a new schema for rigid-body physics, the math that describes how solids behave in the real world.

For example, when you’re simulating a game where marbles roll down ramps, you want them to respond just as you would expect when they hit each other. To do that, developers need physical details like the weight of the marbles and the smoothness of the ramp. That’s what this new extension supplies.
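
As a rough sketch of how such physical details might be authored, the snippet below applies rigid-body, collision and mass schemas to a marble and a ramp using the UsdPhysics module, assuming a USD build recent enough to ship it (roughly USD 21.05 or later); the prim names and mass value are made up.

```python
# Hedged sketch of authoring rigid-body physics data in USD, assuming a USD build
# that includes the UsdPhysics schema module. Names and values are illustrative.
from pxr import Usd, UsdGeom, UsdPhysics

stage = Usd.Stage.CreateNew("marbles.usda")
UsdPhysics.Scene.Define(stage, "/physicsScene")           # physics scene (gravity, etc.)

marble = UsdGeom.Sphere.Define(stage, "/marble")
UsdPhysics.RigidBodyAPI.Apply(marble.GetPrim())           # the marble moves under physics
UsdPhysics.CollisionAPI.Apply(marble.GetPrim())           # and collides with other prims
UsdPhysics.MassAPI.Apply(marble.GetPrim()).CreateMassAttr(0.01)  # weight in kilograms

ramp = UsdGeom.Cube.Define(stage, "/ramp")
UsdPhysics.CollisionAPI.Apply(ramp.GetPrim())             # static collider: the ramp

stage.GetRootLayer().Save()
```

Because the data lives in USD rather than in any one engine, the same file can feed whichever solver a tool chooses to run.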



USD Keeps Getting Better

The initial HTML 1.0 standard, circa 1993, defined how web pages used text and graphics. Fifteen years later HTML5 extended the definition to include video so any user on any device could watch videos and movies.

Apple and NVIDIA were both independently working on ways to describe physics in simulations. As members of the SIGGRAPH community, we came together with Pixar to define a single approach as a new addition to USD.

In the spirit of flexibility, the extension lets developers choose whatever solvers they prefer, since they can all be driven from the same set of USD data. This presents a unified set of data suitable for everything from offline simulation for film to games and augmented reality.

That’s important because solvers for real-time uses like gaming prioritize speed over accuracy, while architects, for example, want solvers that put accuracy ahead of speed.

An Advance That Benefits All

Together the three companies wrote a white paper describing their combined proposal and shared it with the USD community. The reviews are in and it’s a hit. Now the extension is part of the standard USD distribution, freely available for all developers.

The list of companies that stand to benefit reads like credits for an epic movie. It includes architects, building managers, product designers and manufacturers of all sorts, companies that design games — even cellular providers optimizing layouts of next-generation networks. And, of course, all the vendors that provide the digital tools to do the work.

“USD is a major force in our industry because it allows for a powerful and consistent representation of complex, 3D scene data across workflows,” said Steve May, Chief Technology Officer at Pixar.

“Working with NVIDIA and Apple, we have developed a new physics extension that makes USD even more expressive and will have major implications for entertainment and other industries,” he added.

Making a Metaverse Together

It’s a big community we aim to serve with NVIDIA Omniverse, a collaboration environment that’s been described as an operating system for creatives or “like Google Docs for 3D graphics.”

We want to make it easy for any company to create lifelike simulations with the tools of their choice. It’s a goal shared by dozens of organizations now evaluating Omniverse Enterprise, and close to 400 companies and tens of thousands of individual creators who have downloaded Omniverse open beta since its release in December 2020.

We envision a world of interconnected virtual worlds — a metaverse — where someday anyone can share their life’s work.

Making that virtual universe real will take a lot of hard work. USD will need to be extended in many dimensions to accommodate the community’s diverse needs.

A Virtual Invitation

To get a taste of what’s possible, watch a panel discussion from GTC (free with registration), where 3D experts from nine companies including Pixar, BMW Group, Bentley Systems, Adobe and Foster + Partners talked about the opportunities and challenges ahead.

We’re happy we could collaborate with engineers and designers at Apple and Pixar on this latest USD extension. We’re already thinking about a sequel for soft-body physics and so much more.

Together we can build a metaverse where every tool is available for every job.

For more details, watch a talk on the USD physics extension from NVIDIA’s Adam Moravanszky and attend a USD birds-of-a-feather session hosted by Pixar.

blogs.nvidia.com




From: Frank Sully 8/10/2021 10:21:15 PM
   of 2632
 
What Is the Metaverse?

With NVIDIA Omniverse we can (finally) connect to it to do real work - here’s how.

August 10, 2021 by Brian Caulfield



What is the metaverse? The metaverse is a shared virtual 3D world, or worlds, that are interactive, immersive, and collaborative.

Just as the physical universe is a collection of worlds that are connected in space, the metaverse can be thought of as a bunch of worlds, too.

Massive online social games, like battle royale juggernaut Fortnite and user-created virtual worlds like Minecraft and Roblox, reflect some elements of the idea.

Video-conferencing tools, which link far-flung colleagues together amidst the global COVID pandemic, are another hint at what’s to come.

But the vision laid out by Neal Stephenson’s 1992 classic novel “Snow Crash” goes well beyond any single game or video-conferencing app.

The metaverse will become a platform that’s not tied to any one app or any single place — digital or real, explains Rev Lebaredian, vice president of simulation technology at NVIDIA.

And just as virtual places will be persistent, so will the objects and identities of those moving through them, allowing digital goods and identities to move from one virtual world to another, and even into our world, with augmented reality.



The metaverse will become a platform that’s not tied to any one place, physical or digital.

“Ultimately we’re talking about creating another reality, another world, that’s as rich as the real world,” Lebaredian says.

Those ideas are already being put to work with NVIDIA Omniverse, which, simply put, is a platform for connecting 3D worlds into a shared virtual universe.

Omniverse is in use across a growing number of industries for projects such as design collaboration and creating “digital twins,” simulations of real-world buildings and factories.



BMW Group uses NVIDIA Omniverse to create a future factory, a perfect “digital twin” designed entirely in digital and simulated from beginning to end in NVIDIA Omniverse.

How NVIDIA Omniverse Creates, Connects Worlds Within the Metaverse

So how does Omniverse work? We can break it down into three big parts.




NVIDIA Omniverse weaves together the Universal Scene Description interchange framework invented by Pixar with technologies for modeling physics, materials, and real-time path tracing.

The first is Omniverse Nucleus. It’s a database engine that connects users and enables the interchange of 3D assets and scene descriptions.

Once connected, designers doing modeling, layout, shading, animation, lighting, special effects or rendering can collaborate to create a scene.

Omniverse Nucleus relies on USD, or Universal Scene Description, an interchange framework invented by Pixar in 2012.

Released as open-source software in 2016, USD provides a rich, common language for defining, packaging, assembling and editing 3D data for a growing array of industries and applications.

Lebaredian and others say USD is to the emerging metaverse what hypertext markup language, or HTML, was to the web — a common language that can be used, and advanced, to support the metaverse.

Multiple users can connect to Nucleus, transmitting and receiving changes to their world as USD snippets.

The second part of Omniverse is the composition, rendering and animation engine — the simulation of the virtual world.




Simulation of virtual worlds in NVIDIA DRIVE Sim on Omniverse.

Omniverse is a platform built from the ground up to be physically based. Thanks to NVIDIA RTX graphics technologies, it is fully path traced, simulating how each ray of light bounces around a virtual world in real-time.

Omniverse simulates physics with NVIDIA PhysX. It simulates materials with NVIDIA MDL, or material definition language.



Built in NVIDIA Omniverse, Marbles at Night is a physics-based demo created with dynamic, ray-traced lights and over 100 million polygons.

And Omniverse is fully integrated with NVIDIA AI (which is key to advancing robotics, more on that later).

Omniverse is cloud-native, scales across multiple GPUs, runs on any RTX platform and streams remotely to any device.

The third part is NVIDIA CloudXR, which includes client and server software for streaming extended reality content from OpenVR applications to Android and Windows devices, allowing users to portal into and out of Omniverse.



NVIDIA Omniverse promises to blend real and virtual realities. You can teleport into Omniverse with virtual reality, and AIs can teleport out of Omniverse with augmented reality.

Metaverses Made Real

NVIDIA released Omniverse to open beta in December, and NVIDIA Omniverse Enterprise in April. Professionals in a wide variety of industries quickly put it to work.

At Foster + Partners, the legendary design and architecture firm that designed Apple’s headquarters and London’s famed 30 St Mary Axe office tower — better known as “the Gherkin” — designers in 14 countries worldwide create buildings together in their Omniverse shared virtual space.

Visual effects pioneer Industrial Light & Magic is testing Omniverse to bring together internal and external tool pipelines from multiple studios. Omniverse lets them collaborate, render final shots in real-time and create massive virtual sets like holodecks.



Multinational networking and telecommunications company Ericsson uses Omniverse to simulate 5G wave propagation in real-time, minimizing multi-path interference in dense city environments.



Ericsson uses Omniverse to do real-time 5G wave propagation simulation in dense city environments.

Infrastructure engineering software company Bentley Systems is using Omniverse to create a suite of applications on the platform. Bentley’s iTwin platform creates a 4D infrastructure digital twin to simulate an infrastructure asset’s construction, then monitor and optimize its performance throughout its lifecycle.

The Metaverse Can Help Humans and Robots Collaborate

These virtual worlds are ideal for training robots.

One of the essential features of NVIDIA Omniverse is that it obeys the laws of physics. Omniverse can simulate particles and fluids, materials and even machines, right down to their springs and cables.



Modeling the natural world in a virtual one is a fundamental capability for robotics.

It allows users to create a virtual world where robots — powered by AI brains that can learn from their real or digital environments — can train.

Once the minds of these robots are trained in the Omniverse, roboticists can load those brains onto an NVIDIA Jetson and connect it to a real robot.

Those robots will come in all sizes and shapes — box movers, pick-and-place arms, forklifts, cars, trucks and even buildings.



In the future, a factory will be a robot, orchestrating many robots inside, building cars that are robots themselves.

How the Metaverse, and NVIDIA Omniverse, Enable Digital Twins

NVIDIA Omniverse provides a description for these shared worlds that people and robots can connect to — and collaborate in — to better work together.

It’s an idea that automaker BMW Group is already putting to work.

The automaker produces more than 2 million cars a year. In its most advanced factory, the company makes a car every minute. And each vehicle is customized differently.

BMW Group is using NVIDIA Omniverse to create a future factory, a perfect “digital twin.” It’s designed entirely in digital and simulated from beginning to end in Omniverse.



The Omniverse-enabled factory can connect to enterprise resource planning systems, simulating the factory’s throughput. It can simulate new plant layouts. It can even become the dashboard for factory employees, who can uplink into a robot to teleoperate it.

The AI and software that run the virtual factory are the same as what will run the physical one. In other words, the virtual and physical factories and their robots will operate in a loop. They’re twins.

No Longer Science Fiction

Omniverse is the “plumbing” on which metaverses can be built.

It’s an open platform with USD universal 3D interchange, connecting them into a large network of users. NVIDIA has 12 Omniverse Connectors to major design tools already, with another 40 on the way. The Omniverse Connector SDK sample code, for developers to write their own Connectors, is available for download now.

The most important design tool platforms are signed up. NVIDIA has already enlisted partners from the world’s largest industries — media and entertainment; gaming; architecture, engineering and construction; manufacturing; telecommunications; infrastructure; and automotive.

And the hardware needed to run it is here now.

Computer makers worldwide are building NVIDIA-Certified workstations, notebooks and servers, which all have been validated for running GPU-accelerated workloads with optimum performance, reliability and scale. And starting later this year, Omniverse Enterprise will be available for enterprise license via subscription from the NVIDIA Partner Network.



With NVIDIA Omniverse teams are able to collaborate in real-time, from different places, using different tools, on the same project.

Thanks to NVIDIA Omniverse, the metaverse is no longer science fiction.

Back to the Future

So what’s next?

Humans have been exploiting how we perceive the world for thousands of years, NVIDIA’s Lebaredian points out. We’ve been hacking our senses to construct virtual realities through music, art and literature for millennia.

Next, add interactivity and the ability to collaborate, he says. Better screens, head-mounted displays like the Oculus Quest, and mixed-reality devices like Microsoft’s HoloLens are all steps toward fuller immersion.

All these pieces will evolve. But the most important one is here already: a high-fidelity simulation of our virtual world to feed the display. That’s NVIDIA Omniverse.

To steal a line from science-fiction master William Gibson: the future is already here; it’s just not very evenly distributed.

The metaverse is the means through which we can distribute those experiences more evenly. Brought to life by NVIDIA Omniverse, the metaverse promises to weave humans, AI and robots together in fantastic new worlds.



From: Frank Sully 8/10/2021 11:20:28 PM
   of 2632
 
All AI Do Is Win: NVIDIA Research Nabs ‘Best in Show’ with Digital Avatars at SIGGRAPH

August 10, 2021 by Isha Salian

The video at the end, although it's 30 minutes long, is well worth watching; it shows the state of the art in graphics, AI and the Omniverse.



In a turducken of a demo, NVIDIA researchers stuffed four AI models into a serving of digital avatar technology for SIGGRAPH 2021’s Real-Time Live showcase — winning the Best in Show award.

The showcase, one of the most anticipated events at the world’s largest computer graphics conference, held virtually this year, celebrates cutting-edge real-time projects spanning game technology, augmented reality and scientific visualization. It featured a lineup of jury-reviewed interactive projects, with presenters hailing from Unity Technologies, Rensselaer Polytechnic Institute, the NYU Future Reality Lab and more.

Broadcasting live from our Silicon Valley headquarters, the NVIDIA Research team presented a collection of AI models that can create lifelike virtual characters for projects such as bandwidth-efficient video conferencing and storytelling.

The demo featured tools to generate digital avatars from a single photo, animate avatars with natural 3D facial motion and convert text to speech.

“Making digital avatars is a notoriously difficult, tedious and expensive process,” said Bryan Catanzaro, vice president of applied deep learning research at NVIDIA, in the presentation. But with AI tools, “there is an easy way to create digital avatars for real people as well as cartoon characters. It can be used for video conferencing, storytelling, virtual assistants and many other applications.”

AI Aces the Interview

In the demo, two NVIDIA research scientists played the part of an interviewer and a prospective hire speaking over video conference. Over the course of the call, the interviewee showed off the capabilities of AI-driven digital avatar technology to communicate with the interviewer.

The researcher playing the part of interviewee relied on an NVIDIA RTX laptop throughout, while the other used a desktop workstation powered by RTX A6000 GPUs. The entire pipeline can also be run on GPUs in the cloud.

While sitting in a campus coffee shop, wearing a baseball cap and a face mask, the interviewee used the Vid2Vid Cameo model to appear clean-shaven in a collared shirt on the video call (seen in the image above). The AI model creates realistic digital avatars from a single photo of the subject — no 3D scan or specialized training images required.

“The digital avatar creation is instantaneous, so I can quickly create a different avatar by using a different photo,” he said, demonstrating the capability with another two images of himself.

Instead of transmitting a video stream, the researcher’s system sent only his voice — which was then fed into the NVIDIA Omniverse Audio2Face app. Audio2Face generates natural motion of the head, eyes and lips to match audio input in real time on a 3D head model. This facial animation went into Vid2Vid Cameo to synthesize natural-looking motion with the presenter’s digital avatar.

The technology isn’t just for photorealistic digital avatars: the researcher also fed his speech through Audio2Face and Vid2Vid Cameo to voice an animated character. Using NVIDIA StyleGAN, he explained, developers can create infinite digital avatars modeled after cartoon characters or paintings.



The models, optimized to run on NVIDIA RTX GPUs, easily deliver video at 30 frames per second. The pipeline is also highly bandwidth-efficient, since the presenter is sending only audio data over the network instead of transmitting a high-resolution video feed.

Taking it a step further, the researcher showed that when his coffee shop surroundings got too loud, the RAD-TTS model could convert typed messages into his voice — replacing the audio fed into Audio2Face. The breakthrough text-to-speech, deep learning-based tool can synthesize lifelike speech from arbitrary text inputs in milliseconds.

RAD-TTS can synthesize a variety of voices, helping developers bring book characters to life or even rap “The Real Slim Shady” by Eminem, as the research team showed in the demo’s finale.
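
Pulling the demo together, here is a purely illustrative Python sketch of the pipeline's data flow. The three stub functions are hypothetical placeholders standing in for RAD-TTS, Audio2Face and Vid2Vid Cameo; they are not real NVIDIA APIs.

```python
# Purely illustrative sketch of the demo's data flow. The stub functions below are
# hypothetical placeholders for the real models, not NVIDIA APIs.

def synthesize_speech(text):          # stand-in for RAD-TTS (text -> voice audio)
    return f"<waveform for: {text}>"

def animate_face(audio):              # stand-in for Audio2Face (audio -> 3D facial motion)
    return f"<head/eye/lip motion driven by {audio}>"

def render_avatar(photo, motion):     # stand-in for Vid2Vid Cameo (photo + motion -> frame)
    return f"<frame of avatar from {photo} posed by {motion}>"

def run_avatar_frame(photo, typed_text=None, mic_audio=None):
    # Only audio (or typed text) needs to cross the network, not a video stream.
    audio = mic_audio if mic_audio is not None else synthesize_speech(typed_text)
    return render_avatar(photo, animate_face(audio))

print(run_avatar_frame("selfie.jpg", typed_text="Nice to meet you."))
```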

SIGGRAPH continues through Aug. 13. Check out the full lineup of NVIDIA events at the conference and catch the premiere of our documentary, “ Connecting in the Metaverse: The Making of the GTC Keynote,” on Aug. 11.

Scroll ahead to 13:00






From: Frank Sully 8/11/2021 12:33:05 PM
   of 2632
 
Chinese chipmaking upstarts race to rival Nvidia

By Staff -

11/08/2021




Chinese venture capital is pouring into the development of next-generation microprocessors as Chinese startups race to challenge the dominance of U.S. chipmaking giant Nvidia.

Investment in the new general purpose graphics processing units (GPGPUs) — advanced computing chips — has been booming as venture capital bets on the growing Chinese industry. While traditional graphics processing units (GPUs) render images on computers, GPGPUs are designed to harness data processing power for artificial intelligence computing.

Several Chinese front-runners have jumped into the race, attracting investor attention. Beijing has been pushing for more self-reliance in semiconductors, and a global chip shortage has created an opportunity for Chinese companies to make breakthroughs. Aiming to leapfrog to the next generation of integrated circuit technology, the crowded field of Chinese startups has been recruiting veterans of Nvidia itself and other leading semiconductor companies.

One is artificial intelligence chipmaker Iluvatar CoreX, founded in 2015. In March, it unveiled China’s first GPGPU built with advanced 7-nanometer technology.

Another is Shanghai-based Biren Technology, with a valuation of more than 10 billion yuan ($1.5 billion). It managed to raise 4.7 billion yuan from more than 40 investors since its founding in 2019. Investors included Hillhouse Group, Walden International China and BAI Capital.

MetaX Integrated Circuit was set up in 2020 and attracted investment from Lightspeed China Partners, Sequoia Capital and ZhenFund. The two founders both previously worked for the U.S. semiconductor giant Advanced Micro Devices. Newcomer Moore Threads Technology raised several billion yuan in two rounds of financing within 100 days of its founding.

Although the upstarts have generated plenty of enthusiasm while raising significant funds, investors and leaders of the companies acknowledge that taking on the likes of Nvidia is no easy task. No more than one or two of the companies will survive because it will take billions of dollars to build up a software ecosystem comparable to Nvidia’s, according to one chip industry investor.

“First, we should develop a product and start product iteration,” said Diao Shijing, chairman and CEO of Iluvatar CoreX. “I don’t think anyone will surprise the world or disrupt the industry with its very first product, and it definitely will need constant refinement.”

Growing trend

At the same time, the Chinese startups stand on the edge of a tremendous opportunity, said Wang Endong, executive president and chief scientist of cloud computing and big data provider Inspur. He predicted that demand for AI computing chips to power deep machine learning will grow exponentially.

In 2020, computing by AI accelerator chips — special integrated circuits designed for artificial intelligence applications such as GPGPUs — surpassed computing by the conventional central processing units (CPUs) that have powered computers for decades, Wang said.

“AI accelerator chips will account for more than 80% of overall computing power by 2025,” Wang said.

Nvidia captured the lead in AI technology over the past decade, making GPUs a standard for artificial intelligence processing. The company’s share price rose more than 25-fold in the past five years, giving it a market value of more than $450 billion, second behind Taiwan Semiconductor Manufacturing Co.

“We have invested tens of billions of dollars in GPUs over the past 30 years, and only in this one area,” Nvidia founder Jensen Huang said in a June 2 video interview with Caixin at the company’s U.S. headquarters. “I can certainly understand why it will spawn so many competitors in light of such a huge market.”

In 2020, China’s semiconductor industry attracted venture capital of more than 140 billion yuan, surpassing the internet as the most attractive sector, according to data from the U.S. corporate law firm Katten Muchin Rosenman. In the first five months of 2021, about 164 Chinese semiconductor companies received investments with total financing of more than 40 billion yuan, close to the level of the full year 2019, according to Katten Muchin.

Nvidia challengers

The new wave of Chinese chip startups is led by Biren. Founder Michael Zhang is the former president of the AI startup SenseTime and a former U.S. lawyer. Zhang recruited his core management from industry veterans in the U.S., including Chief Technical Officer and Chief Architect Mike Hong. Hong helped build Huawei’s GPU research and development team in the U.S. in 2016 and also worked for Nvidia.

“This team’s previous work experience covers the entire chipmaking process,” said Xing Yaopeng, senior investment manager at BAI Capital, an investor in Biren.

Biren started operation in November 2019 with a plan to introduce its first product, a 7-nm GPGPU, in 2022, according to co-founder Xu Lingjie, who previously worked for Nvidia, Samsung and Alibaba.

“The demand for AI computing from major internet companies is still growing at more than 40% a year,” Xu said. “Even if only one-third of the servers are replaced every year, procurement from companies and government will still be huge.”

Like Biren, Shanghai-based MetaX and Iluvatar CoreX also started in the GPGPU arena with founding teams that previously worked for AMD. Headquartered in Beijing, Moore Threads, founded in October 2020, jump-started with GPUs. CEO James Zhang is the former China general manager of Nvidia.

Battle for the future

The GPGPU market is still nascent, and it will take time for the Chinese startups to meet the standards of major customers such as Alibaba and Tencent, according to Liu Hongchun, founding partner of Winreal Investment, a venture capital company.

Other challenges facing the upstart chipmakers include intellectual property and relationships with China’s tech giants, such as Huawei, Alibaba and Baidu, which all have their own chipmaking businesses. It is hard to tell whether these tech giants will regard the startups as friends or foes, an industry expert said.

In the longer run, analysts said the chipmaking startups need to establish a software ecosystem in China that can rival Nvidia’s. The American company released its parallel computing platform CUDA in 2006, including a development toolkit that enabled anyone with a laptop equipped with an Nvidia GPU to develop software. Over the past decade or so, Nvidia has promoted CUDA in schools and research institutes, enabling software such as climate simulation and seismic data processing to be developed based on the platform.
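
The CUDA toolkit itself is C/C++-based, but the accessibility the article describes is easy to see even from Python; below is a minimal sketch using the numba package's CUDA support, assuming numba and an NVIDIA GPU with CUDA drivers are installed.

```python
# Minimal sketch of the kind of GPU programming CUDA opened up, written with the
# numba package's CUDA support (assumes numba plus an NVIDIA GPU with CUDA drivers).
import numpy as np
from numba import cuda

@cuda.jit
def add(a, b, out):
    i = cuda.grid(1)          # this thread's global index
    if i < out.size:
        out[i] = a[i] + b[i]  # each GPU thread adds one element

n = 1_000_000
a = np.ones(n, dtype=np.float32)
b = np.ones(n, dtype=np.float32)
out = np.zeros(n, dtype=np.float32)

threads = 256
blocks = (n + threads - 1) // threads
add[blocks, threads](a, b, out)   # numba copies the arrays to and from the GPU
print(out[:3])                    # [2. 2. 2.]
```

Ecosystem lock-in comes from the years of libraries, tutorials and tooling built around this programming model, which is exactly what the Chinese startups must replicate or remain compatible with.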

Currently, Chinese GPU makers are adopting a CUDA-compatible strategy and aim to build their own software ecosystem on top of it. However, once users are accustomed to CUDA, they will be unlikely to migrate to other platforms, said a midlevel manager at AI startup Enflame Technology. Enflame makes chips designed to process huge amounts of data to train artificial intelligence systems.

“No ecosystem is built in one day,” said Jeffrey Wang, managing director at Lenovo Capital, an accelerator and venture capital unit of Lenovo Group. “It will take time.”

While the Chinese companies are cutting their teeth in the GPU sector, Nvidia is eyeing the CPU and data processing unit (DPU) field. The company is pursuing a $40 billion bid for U.K.-based Arm Holdings from Japan’s SoftBank. The deal, which would be one of the biggest semiconductor takeovers ever, is pending approval from regulators in the U.K., the U.S. and China.

Nvidia released its first DPU last October after completing its $6.9 billion acquisition of Israeli chipmaker Mellanox in April 2020.

While Nvidia is busy with the CPU and DPU market, it may become a great opportunity for Chinese GPU companies to play catch-up, industry experts told Caixin.

Source: Nikkei Asia

tynmagazine.com
