
Technology Stocks: NVIDIA Corporation (NVDA)


From: Frank Sully 9/22/2021 11:20:20 AM
 
NVIDIA CEO Jensen Huang Special Address | NVIDIA Cambridge-1 Inauguration




From: Frank Sully 9/22/2021 11:21:14 AM
 
NVIDIA Calls UK AI Strategy “Important Step,” Will Open Cambridge-1 Supercomputer to UK Healthcare Startups September 22, 2021

Sept. 22, 2021 — NVIDIA today called the U.K. government’s launch of its AI Strategy an important step forward, and announced a program to open the Cambridge-1 supercomputer to U.K. healthcare startups.

David Hogan, vice president of Enterprise EMEA at NVIDIA, said, “Today is an important step in furthering the U.K.’s strategic advantage as a global leader in AI. NVIDIA is proud to support the U.K.’s AI ecosystem with Cambridge-1, the country’s most powerful supercomputer, and our Inception program that includes more than 500 of the U.K.’s most dynamic AI startups.”

During a talk at the Wired Health: Tech conference, Kimberly Powell, NVIDIA’s vice president of Healthcare, will also announce today the next phase for Cambridge-1, in which U.K.-based startups will be able to submit applications to harness the system’s capabilities.

“AI and digital biology are reshaping the drug discovery process, and startups are by definition on the bleeding edge of innovation,” she said. “Cambridge-1 is the modern instrument for science and we look forward to opening the possibilities for discovery even wider to the U.K. startup ecosystem.”

Powell will also describe work underway with U.K. biotech company and NVIDIA Inception member Peptone, which will have access to Cambridge-1. Peptone is developing a protein engineering system that blends generative AI models and computational molecular physics to discover therapies to fight inflammatory diseases like COPD, psoriasis and asthma.

“Access to the compute power of Cambridge-1 will be a game-changer in our effort to fuse computation with laboratory experiments to change the way protein drugs are engineered,” said Dr. Kamil Tamiola, Peptone CEO and founder. “We plan to use Cambridge-1 to vastly improve the design of antibodies to help treat numerous inflammatory diseases.”

NVIDIA anticipates that giving U.K. startups the opportunity to use Cambridge-1 will accelerate their work, enabling them to bring innovative products and services to market faster, as well as ensure that the U.K. remains a compelling location in which to develop and scale up their businesses.

Startups that are selected for the new program will not only gain access to Cambridge-1. They will also be invited to meet with the system’s founding partners to amplify collaboration potential, and access membership benefits of NVIDIA Inception, a global program designed to nurture startups, which has more U.K. startups as members than from any other country in Europe.

Founding partners of Cambridge-1 are: AstraZeneca, GSK, Guy’s and St Thomas’ NHS Foundation Trust, King’s College London, and Oxford Nanopore Technologies.

NVIDIA Inception provides startups with critical go-to-market support, training, and technology. Benefits include access to hands-on, cloud-based training through the NVIDIA Deep Learning Institute, preferred pricing on hardware, invitations to exclusive networking events, opportunities to engage with venture capital partners and more. Startups in NVIDIA Inception remain supported throughout their entire life cycle, helping them accelerate both platform development and time to market.

Startup applications can be submitted here before December 30 at midnight GMT, with the announcement of those selected expected early in 2022.

About Cambridge-1

Launched in July 2021, Cambridge-1 is the U.K.’s most powerful supercomputer. It is the first NVIDIA supercomputer designed and built for external research access. NVIDIA will collaborate with researchers to make much of this work available to the greater scientific community.

Featuring 80 DGX A100 systems integrating NVIDIA A100 GPUs, BlueField-2 DPUs and NVIDIA HDR InfiniBand networking, Cambridge-1 is an NVIDIA DGX SuperPOD that delivers more than 400 petaflops of AI performance and 8 petaflops of Linpack performance. The system is located at a facility operated by NVIDIA partner Kao Data.
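
As a rough sanity check on those figures, here is a back-of-the-envelope sketch, assuming the standard eight A100 GPUs per DGX A100 system:

    # Back-of-the-envelope check on the Cambridge-1 figures above.
    # Assumes the standard DGX A100 configuration of 8 GPUs per system.
    dgx_systems = 80
    gpus_per_dgx = 8
    total_ai_petaflops = 400          # "more than 400 petaflops of AI performance"

    total_gpus = dgx_systems * gpus_per_dgx        # 640 A100 GPUs
    tflops_per_gpu = total_ai_petaflops / total_gpus * 1000
    print(f"{total_gpus} GPUs, ~{tflops_per_gpu:.0f} TFLOPS of AI performance each")

That works out to roughly 625 TFLOPS per GPU, consistent with the A100's quoted peak throughput for sparse FP16/BF16 math.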

Cambridge-1 is the first supercomputer NVIDIA has dedicated to advancing industry-specific research in the U.K. The company also intends to build an AI Center for Excellence in Cambridge featuring a new Arm-based supercomputer, which will support more industries across the country.

Cambridge-1 was launched with five founding partners: AstraZeneca, GSK, Guy’s and St Thomas’ NHS Foundation Trust, King’s College London, and Oxford Nanopore.

About NVIDIA

NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market and has redefined modern computer graphics, high performance computing and artificial intelligence. The company’s pioneering work in accelerated computing and AI is reshaping trillion-dollar industries, such as transportation, healthcare and manufacturing, and fueling the growth of many others. More information at https://nvidianews.nvidia.com.

hpcwire.com



From: Frank Sully 9/22/2021 12:30:20 PM
 
This Catalyst Could Give Nvidia Stock a Big Boost

The graphics card specialist is making waves in a lucrative market.



Harsh Chauhan

Key Points
  • Cloud gaming adoption is increasing at a terrific pace.
  • Nvidia has already made a solid dent in the cloud gaming space with the GeForce NOW service.
  • GeForce NOW's increased coverage, affordable pricing, and big game library will be tailwinds for Nvidia in this market.
The video gaming industry has been a big catalyst for Nvidia (NASDAQ: NVDA) in recent years, helping the company clock terrific revenue and earnings growth and boosting its stock price as gamers have lapped up its powerful graphics cards to elevate their gaming experience.

The good news for Nvidia investors is that graphics card demand is going to boom in the coming years, and the company is in a solid position to take advantage of that thanks to its dominant market share. However, there is an additional catalyst that could give Nvidia's video gaming business a big shot in the arm over the next few years: cloud gaming. Let's take a closer look at the cloud gaming market, and check how Nvidia is looking to make the most of this multibillion-dollar opportunity.



[Chart: NVDA data by YCharts]

Cloud gaming adoption is growing rapidly

Newzoo, a provider of market research and analytics for video gaming and esports, estimates that the global cloud gaming market is on track to generate $1.6 billion in revenue this year, with the number of paying users jumping to 23.7 million. That may not look like a big deal for Nvidia right now given that it has generated nearly $22 billion in revenue in the trailing 12 months. However, the pace at which the cloud gaming market is growing means that it could soon reach a point where it moves the needle in a big way for Nvidia.

Newzoo estimates that the cloud gaming market could hit $6.5 billion in revenue by 2024, growing more than four-fold compared to this year's estimated revenue. The research firm also points out that the addressable user market for cloud gaming could be as big as 165 million by the end of 2021, indicating that there are millions of users out there that could buy cloud gaming subscriptions.

In fact, Newzoo points out that 94% of the gamers it surveyed have either tried cloud gaming already or are willing to try it, which means that the market could quickly expand. Nvidia is becoming a dominant player in the cloud gaming space, which could add billions of dollars to its revenue in the long run.




Nvidia is pulling the right strings to tap this massive opportunity

Nvidia pointed out in March this year that its GeForce NOW cloud gaming service was nearing 10 million members. This is impressive considering that the service was launched in February 2020 with a subscription costing $5 per month. The company is now offering a premium subscription service priced at $9.99 per month or $99.99 a year.

The introductory $5-a-month subscription will remain available to members who were already on that plan before the new Priority membership was rolled out. This effectively means that the new GeForce NOW customers will increase Nvidia's revenue per user from the cloud gaming business. It wouldn't be surprising to see the service gain traction among gamers because of the benefits on offer.

The premium subscription will give gamers access to ray-tracing-enabled games, as well as its deep learning super sampling (DLSS) feature that upscales selected games to a higher resolution for a more immersive experience. What's more, Nvidia has a library of 1,000 PC (personal computer) games on the GeForce NOW platform, giving gamers a wide range of titles to choose from.

It is also worth noting that Nvidia is rapidly opening new data centers and upgrading the capacity of existing ones to capture more of the cloud gaming market. The company has 27 data centers that enable GeForce NOW in 75 countries.

Another important insight worth noting is that 65% of Nvidia's 10 million GeForce NOW members play games on underpowered PCs or Chromebooks. Those users wouldn't have been able to run resource-hungry games without Nvidia's data centers, which do the heavy lifting and transmit the gameplay to users' screens. Nvidia says that 80% of the gaming sessions on GeForce NOW take place on devices that wouldn't have been able to run those games locally because of weak hardware or incompatibility.

This explains why the demand for cloud gaming has spiked substantially -- consumers need not invest in expensive hardware, nor do they need to buy game titles separately. They can simply buy subscriptions from Nvidia and choose from over a thousand games that the GeForce NOW library provides.

More importantly, Nvidia is expanding into new markets such as Southeast Asia, while bolstering its presence in other areas such as Latin America and the Middle East. As such, the company's GeForce NOW subscriber count could keep growing at a fast clip in the future.

Gauging the financial impact

With paying users of cloud gaming expected to hit nearly 24 million this year and Nvidia already having scored 10 million GeForce NOW subscribers, the company has got off to a good start in this market.

The addressable market that Nvidia could tap into is also expected to hit 165 million potential subscribers by the end of 2021, as discussed earlier. If Nvidia manages to corner half of those potential paying cloud gaming subscribers in the next few years and get $100 a year from each subscriber (based on the annual GeForce NOW subscription plan), the company could be looking at substantial annual revenue from the cloud gaming business. This should give investors yet another reason to buy this growth stock that is already winning big in graphics cards and data centers.
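
To make the article's rough math explicit, here is a small sketch using only the figures quoted above; the 50% capture rate is the article's hypothetical scenario, not a forecast:

    # Rough revenue scenario laid out in the paragraph above.
    addressable_users = 165_000_000   # Newzoo's 2021 addressable-market estimate
    capture_rate = 0.5                # hypothetical "half of those" scenario
    annual_sub_price = 100            # ~$100/year GeForce NOW Priority plan

    annual_revenue = addressable_users * capture_rate * annual_sub_price
    print(f"Hypothetical cloud gaming revenue: ${annual_revenue / 1e9:.2f} billion/year")
    # -> about $8.25 billion per year, versus the ~$22 billion in trailing
    #    12-month total revenue cited earlier in the article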

fool.com



From: Frank Sully 9/22/2021 2:01:11 PM
 
NVIDIA Extends AI Inference Performance Leadership, with Debut Results on Arm-based Servers

The latest MLPerf benchmarks show NVIDIA has extended its high watermarks in performance and energy efficiency for AI inference to Arm as well as x86 computers.

September 22, 2021 by DAVE SALVATOR

NVIDIA delivers the best results in AI inference using either x86 or Arm-based CPUs, according to benchmarks released today.

It’s the third consecutive time NVIDIA has set records in performance and energy efficiency on inference tests from MLCommons, an industry benchmarking group formed in May 2018.

And it’s the first time the data-center category tests have run on an Arm-based system, giving users more choice in how they deploy AI, the most transformative technology of our time.

Tale of the Tape

NVIDIA AI platform-powered computers topped all seven performance tests of inference in the latest round with systems from NVIDIA and nine of our ecosystem partners including Alibaba, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Inspur, Lenovo, Nettrix and Supermicro.

And NVIDIA is the only company to report results on all MLPerf tests in this and every round to date.



Inference is what happens when a computer runs AI software to recognize an object or make a prediction. It’s a process that uses a deep learning model to filter data, finding results no human could capture.

MLPerf’s inference benchmarks are based on today’s most popular AI workloads and scenarios, covering computer vision, medical imaging, natural language processing, recommendation systems, reinforcement learning and more.

So, whatever AI applications they deploy, users can set their own records with NVIDIA.

Why Performance Matters

AI models and datasets continue to grow as AI use cases expand from the data center to the edge and beyond. That’s why users need performance that’s both dependable and flexible to deploy.

MLPerf gives users the confidence to make informed buying decisions. It’s backed by dozens of industry leaders, including Alibaba, Arm, Baidu, Google, Intel and NVIDIA, so the tests are transparent and objective.

Flexing Arm for Enterprise AI

The Arm architecture is making headway into data centers around the world, in part thanks to its energy efficiency, performance increases and expanding software ecosystem.

The latest benchmarks show that as a GPU-accelerated platform, Arm-based servers using Ampere Altra CPUs deliver near-equal performance to similarly configured x86-based servers for AI inference jobs. In fact, in one of the tests, the Arm-based server out-performed a similar x86 system.

NVIDIA has a long tradition of supporting every CPU architecture, so we’re proud to see Arm prove its AI prowess in a peer-reviewed industry benchmark.

“Arm, as a founding member of MLCommons, is committed to the process of creating standards and benchmarks to better address challenges and inspire innovation in the accelerated computing industry,” said David Lecomber, a senior director of HPC and tools at Arm.

“The latest inference results demonstrate the readiness of Arm-based systems powered by Arm-based CPUs and NVIDIA GPUs for tackling a broad array of AI workloads in the data center,” he added.

Partners Show Their AI Powers

NVIDIA’s AI technology is backed by a large and growing ecosystem.

Seven OEMs submitted a total of 22 GPU-accelerated platforms in the latest benchmarks.

Most of these server models are NVIDIA-Certified, validated for running a diverse range of accelerated workloads. And many of them support NVIDIA AI Enterprise, software officially released last month.

Our partners participating in this round included Dell Technologies, Fujitsu, Hewlett Packard Enterprise, Inspur, Lenovo, Nettrix and Supermicro as well as cloud-service provider Alibaba.

The Power of Software

A key ingredient of NVIDIA’s AI success across all use cases is our full software stack.

For inference, that includes pre-trained AI models for a wide variety of use cases. The NVIDIA TAO Toolkit customizes those models for specific applications using transfer learning.

Our NVIDIA TensorRT software optimizes AI models so they make best use of memory and run faster. We routinely use it for MLPerf tests, and it’s available for both x86 and Arm-based systems.

We also employed our NVIDIA Triton Inference Server software and Multi-Instance GPU ( MIG) capability in these benchmarks. They deliver for all developers the kind of performance that usually requires expert coders.
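
For readers who haven't used Triton, the sketch below shows roughly how an application sends an inference request to a running Triton server with the Python HTTP client; the model name, tensor names and shape are placeholders, not details taken from the MLPerf submissions:

    # Minimal sketch: querying an NVIDIA Triton Inference Server from Python.
    # "resnet50" and the input/output tensor names are hypothetical placeholders;
    # use whatever your deployed model actually exposes.
    import numpy as np
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url="localhost:8000")

    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)    # dummy image batch
    infer_input = httpclient.InferInput("input", list(batch.shape), "FP32")
    infer_input.set_data_from_numpy(batch)

    result = client.infer(model_name="resnet50", inputs=[infer_input])
    scores = result.as_numpy("output")    # output tensor name depends on the model
    print(scores.shape)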

Thanks to continuous improvements in this software stack, NVIDIA achieved gains of up to 20 percent in performance and 15 percent in energy efficiency from the previous MLPerf inference benchmarks just four months ago.

All the software we used in the latest tests is available from the MLPerf repository, so anyone can reproduce our benchmark results. We continually add this code into our deep learning frameworks and containers available on NGC, our software hub for GPU applications.

It’s part of a full-stack AI offering, supporting every major processor architecture, proven in the latest industry benchmarks and available to tackle real AI jobs today.

To learn more about the NVIDIA inference platform, check out our NVIDIA Inference Technology Overview.

blogs.nvidia.com



From: Frank Sully 9/22/2021 2:30:09 PM
 
In The Latest AI Benchmarks, Nvidia Remains The Champ, But Qualcomm Is Rising Fast



Karl Freund
Contributor
Enterprise Tech

NVIDIA rules the performance roost, Qualcomm demonstrates exceptional power efficiency, and Intel demonstrates the power of software.

Every three months, the not-for-profit group MLCommons publishes a slew of peer-reviewed MLPerf benchmark results for deep learning, alternating between training and inference processing. This time around, it was Inference Processing V1.1. Over 50 members agree on a set of benchmarks and data sets they feel are representative of real AI workloads such as image and language processing. And then the fun begins.

From what I hear from vendors, these benchmarks are increasingly being used in Requests for Proposals for AI gear, and also serve as a robust test bed for engineers of new chip designs and optimization software. So everyone wins, whether or not they publish. This time around NVIDIA, Intel, and Qualcomm added new models and configurations, and results were submitted from Dell, HPE, Lenovo, NVIDIA, Inspur, GIGABYTE, Supermicro, and Nettrix.

And the winner is…

Before we get to the accelerators, a few comments about inference processing. Unlike training, where the AI job is the job, inference is usually a small part of an application, which of course runs on an x86 server. Consequently, Intel pretty much owns the data center inference market; most models perform quite well on Xeons. Aware of this, Intel has continually updated the hardware and software to run faster to keep those customers happy and on the platform. We will cover more details on this in a moment, but for workloads requiring more performance, or dedicated throughput for more complex models, accelerators are the way to go.

On that front, Nvidia remains the fastest AI accelerator for every workload on a single-chip basis. On a system level, however, a 16-card Qualcomm system delivered the fastest ResNet-50 performance with 342K images per second, over an 8-GPU Inspur server at 329K. While Qualcomm increased its model coverage, Nvidia was the only firm to submit benchmark results for every AI model. Intel submitted results for nearly all models in data center processing.
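
The arithmetic behind that "single-chip basis" qualifier, using the ResNet-50 numbers quoted above (a rough sketch; card counts are as reported in the paragraph):

    # Rough per-accelerator comparison behind the "single-chip basis" qualifier.
    systems = {
        "Qualcomm (16x Cloud AI100)": {"cards": 16, "images_per_sec": 342_000},
        "NVIDIA (8-GPU Inspur)":      {"cards": 8,  "images_per_sec": 329_000},
    }

    for name, cfg in systems.items():
        per_card = cfg["images_per_sec"] / cfg["cards"]
        print(f"{name}: ~{per_card:,.0f} images/sec per accelerator")
    # Qualcomm: ~21,375 per card; NVIDIA: ~41,125 per card. The system-level win
    # comes from card count; the per-chip lead stays with NVIDIA.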

Qualcomm shared results for both the Snapdragon at the edge, and the Cloud AI100, both offering rock solid performance with absolute leadership power efficiency across the board. Here’s some of the data.



[Chart: Nvidia, as usual, clobbered the inference incumbent, the Intel Xeon CPU; Qualcomm out-performed the NVIDIA A30, however. Source: NVIDIA]

Every year, Nvidia demonstrates improved performance, even on the same hardware, thanks to continuous improvements in its software stack. In particular, TensorRT improves performance and efficiency by pre-processing the neural network, performing functions such as quantization to lower-precision formats and arithmetic. But the star of the Nvidia software show for inference is increasingly the Triton Inference Server, which manages the run-time optimizations and workload balancing using Kubernetes. Nvidia has open-sourced Triton, and it now supports x86 CPUs as well as Nvidia GPUs. In fact, since it is open, Triton could be extended with backends for other accelerators and GPUs, saving startups a lot of software development work.
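
To illustrate the kind of lower-precision quantization mentioned above, here is a minimal NumPy sketch of symmetric INT8 weight quantization; it is a generic illustration of the technique, not TensorRT's actual calibration code:

    # Generic illustration of symmetric INT8 quantization -- the kind of
    # precision reduction TensorRT performs. NOT TensorRT's own algorithm.
    import numpy as np

    weights = np.random.randn(256, 256).astype(np.float32)

    scale = np.abs(weights).max() / 127.0                     # map FP32 range onto int8
    q_weights = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    dequantized = q_weights.astype(np.float32) * scale        # what inference "sees"

    max_error = np.abs(weights - dequantized).max()
    print(f"scale={scale:.6f}, max round-trip error={max_error:.6f}")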



[Chart: Nvidia demonstrated nearly 50% better performance. Source: NVIDIA]

In some applications, power efficiency is a critical factor for success, but only if the platform achieves the required performance and latency for the models being deployed. Thanks to years of research in AI and Qualcomm’s mobile processor legacy, the AI engines in Snapdragon and the Cloud AI100 deliver both, with up to half the power per transaction for many models versus the Nvidia A100, and nearly four times the efficiency of the Nvidia A10.



[Chart: Qualcomm Cloud AI100 offered the best performance per watt of all submissions. Source: Qualcomm]

Back to Intel, the new Ice Lake Xeon performed quite well, with up to 3X improvement over the previous Cooper Lake CPU for DLRM (recommendation engines), and 1.5X on other models. Recommendation engines represent a huge market in which Xeon rules, and for which other contenders are investing heavily, so this is a very good move for Intel.



[Chart: Intel demonstrated 50%-300% better performance with Ice Lake vs. the previous Cooper Lake results, accomplished in part through the engineering team's improvements to the software stack. Source: Intel]

Intel also demonstrated significant performance improvements in the development stack for Xeon. The most dramatic improvement was again for DLRM, in which sparse data and weights are common. In this case, Intel delivered over 5X performance improvement on the same hardware.



[Chart: Intel shared the five-fold performance improvement from sparsity, on the same chip. Source: Intel]

Conclusions

As followers of Cambrian-AI know, we believe that MLPerf presents vendors and users with valuable benchmarks, a real-world testing platform, and of course keeps analysts quite busy slicing and dicing all the data. In three months we expect a lot of exciting results, and look forward to sharing our insights and analysis with you then.

forbes.com



From: Frank Sully 9/22/2021 2:48:55 PM
 
Nvidia cosies up to Open Robotics for hardware-accelerated ROS

Hopes to tempt roboticists over to its Jetson platform with new simulation features, drop-in acceleration code

Gareth Halfacree, Wed 22 Sep 2021

Nvidia has linked up with Open Robotics to drive new artificial intelligence capabilities in the Robot Operating System (ROS).

The non-exclusive agreement will see Open Robotics extending ROS 2, the latest version of the open-source robotics framework, to better support Nvidia hardware – and in particular its Jetson range, low-power parts which combine Arm cores with the company's own GPU and deep-learning accelerator cores to drive edge and embedded artificial intelligence applications.

"Our users have been building and simulating robots with Nvidia hardware for years, and we want to make sure that ROS 2 and Ignition work well on those platforms," Brian Gerkey, Open Robotics' chief exec, told The Register.

"We get most excited by two things: robots and open source. This partnership has both. We're working together with Nvidia to improve the developer experience for the global robotics community by extending the open source software on which roboticists rely. We're excited to work directly with Nvidia and have their support as we extend our software to take maximum advantage of their hardware."

The team-up will see Open Robotics working on ROS to improve the data flow between the various processors – CPU, GPU, NVDLA, and Tensor Cores – on Nvidia's Jetson hardware as a means to boost processing of high-bandwidth data.

As part of that, Open Robotics' Ignition and Nvidia's Isaac Sim simulation environments are to gain interoperability – meaning robot and environment models can be moved from one to the other, at least when the software is finished some time early next year.

As for why Nvidia's accelerated computing portfolio, and in particular its embedded Jetson family of products, should appeal to robot-makers, Gerkey said: "Nvidia has invested heavily in compute hardware that's relevant for modern robotics and AI workloads. Robots ingest and process large data volumes from sensors such as cameras and lasers. Nvidia's architecture allows that data flow to happen incredibly efficiently."

Murali Gopalakrishna, head of product management, Intelligent Machines, at Nvidia said of the hookup: "Nvidia's GPU-accelerated computing platform is at the core of many AI robot applications, and many of those are developed using ROS, so it is logical that we work closely with Open Robotics to advance the field of robotics."

The work also brings with it some new Isaac GEMs, hardware-accelerated packages for ROS designed to replace code which would otherwise run on the CPU. The latest GEMs include packages for handling stereo imaging and point cloud data, colour space conversion, lens distortion correction, and the detection and processing of AprilTags – QR Code-style 2D fiducial tags developed at the University of Michigan.
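
For context on what a plain, CPU-side ROS 2 node looks like before any of this acceleration is applied, here is a minimal rclpy sketch that subscribes to a camera topic; the topic name is a placeholder and nothing in it is specific to the Isaac GEMs:

    # Minimal ROS 2 (rclpy) node subscribing to an image topic -- the kind of
    # CPU-side perception plumbing the Isaac GEMs are meant to accelerate.
    # "/camera/image_raw" is a placeholder topic name.
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Image


    class ImageListener(Node):
        def __init__(self):
            super().__init__("image_listener")
            self.create_subscription(Image, "/camera/image_raw", self.on_image, 10)

        def on_image(self, msg: Image):
            self.get_logger().info(
                f"frame {msg.width}x{msg.height}, encoding={msg.encoding}")


    def main():
        rclpy.init()
        rclpy.spin(ImageListener())
        rclpy.shutdown()


    if __name__ == "__main__":
        main()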

The partnership doesn't mean the two are going steady, though. "We are eager to extend ROS 2 in similar ways on other accelerated hardware," Gerkey told us of planned support for other devices like Intel's Myriad X and Google's TPU, to say nothing of GPU hardware from Nvidia rival AMD.

"In fact, we plan for the work we do together with Nvidia to lay the foundation for additional extensions for additional architectures. To other hardware manufacturers: please contact us to talk about extensions for your platform!"

The latest Isaac GEMs are available on Nvidia's GitHub repository now; the interoperable simulation environments, meanwhile, aren't expected to release until the (northern hemisphere) spring of 2022.

Nvidia's Gopalakrishna said it was possible for ROS developers to begin experimenting before the release date. "The simulator already has a ROS 1 and ROS 2 bridge and has examples of using many of the popular ROS packages for navigation (nav2) and manipulation (MoveIT). Many of these developers are also leveraging Isaac Sim to generate synthetic data to train the perception stack in their robots. Our spring release will bring additional functionality like interoperability between Gazebo Ignition and Isaac Sim."

When we asked what performance uplift could users expect from the new Isaac GEMs compared to CPU-only packages, we were told: "The amount of performance gain will vary depending on how much inherent parallelism exists in a given workload. But we can say that we are seeing an order of magnitude increase in performance for perception and AI related workloads. By using the appropriate processor to accelerate the different tasks, we see increased performance and better power efficiency."

As for additional features in the pipeline, Gopalakrishna said: "Nvidia is working with Open Robotics to make the ROS framework more streamlined for hardware acceleration and we will also continue to release multiple new Isaac GEMs, our hardware accelerated software packages for ROS.

"Some of these will be DNNs which are commonly used in robotics perception stacks. On the simulator side, we are working to add support for more sensors and robots and release more samples that are relevant to the ROS community." ®

theregister.com



From: Frank Sully 9/22/2021 2:55:11 PM
 
Next Generation: ‘Teens in AI’ Takes on the Ada Lovelace Hackathon

September 22, 2021

by LIZ AUSTIN

Jobs in data science and AI are among the fastest growing in the entire workforce, according to LinkedIn’s 2021 Jobs Report.

Teens in AI, a London-based initiative, is working to inspire the next generation of AI researchers, entrepreneurs and leaders through a combination of hackathons, accelerators, networking events and bootcamps.

In October, the organization, with support from NVIDIA, will host the annual Ada Lovelace Hackathon, created for young women ages 11-18 to get a glimpse of all that can be done in the world of AI.

Inspired By AI

The need to embolden young women to join the tech industry is great.

Only 30 percent of the world’s science researchers are women. And fewer than one in five authors at leading AI conferences are women, about the same ratio as those teaching AI-related subjects, according to the AI Now Institute.

Founded by social entrepreneur Elena Sinel, Teens in AI is trying to change that. It aims to give young people — especially young women — early exposure to AI that’s being developed and deployed to promote social good.

The organization, which was launched at the 2018 AI for Good Global Summit at the United Nations, has an expansive network of mentors from some of the world’s leading companies. These volunteers work with students and inspire them to use AI to address social, humanitarian and environmental challenges.

“A shortage of STEM skills costs businesses billions of dollars every year, impacting UK businesses alone by about £1.5 billion a year,” Sinel said. “Yet with so few girls — especially those from disadvantaged backgrounds — studying STEM, we are depriving ourselves of potential talent.”

Sinel said that Teens in AI makes STEM education approachable and increases exposure to female role models, showing young women that a bright STEM career isn’t reserved only for males.

“We can’t do this on our own, so we’re constantly on the lookout for like-minded corporate partners like NVIDIA who will work with us to grow this community of young people who want to make the world more inclusive and sustainable,” she said.

Ada Lovelace Hackathon

With the company’s support, the Ada Lovelace Hackathon — named for the 19th century mathematician who is often regarded as the first computer programmer — showcases speakers and mentors to encourage young women to pursue a career in AI. This year’s event is expected to reach more than 1,000 girls from 20+ countries.

Participants will have the opportunity to receive prizes and get access to NVIDIA Deep Learning Institute credits for more advanced hands-on training and experience.

NVIDIA employees around the world will serve as mentors and judges.

Kate Kallot, head of emerging areas at NVIDIA, judged last year’s Ada Lovelace Hackathon, as well as August’s Global AI Accelerator Program for Teens in AI.

“I hope to inform and inspire young people in how they can help fuel applications and the AI revolution,” Kallot said. “While there’s a heavy demand for people with technical skills, what’s also needed is a future AI workforce that is truly reflective of our diverse world.”

Kallot talked more about the importance of fighting racial biases in the AI industry on a recent Teens in AI podcast episode.

Championing Diversity

NVIDIA’s support of Teens in AI is part of our broader commitment to bringing more diversity to tech, expanding access to AI education and championing opportunities for traditionally underrepresented groups.

This year, we announced a partnership with the Boys & Girls Clubs of Western Pennsylvania to develop an open-source AI and robotics curriculum for high school students. The collaboration has given hundreds of Jetson Nano developer kits to educators in schools and nonprofits, through the NVIDIA Jetson Nano 2GB Developer Kit Grant Program.

NVIDIA also works with minority-serving institutions and diversity-focused professional organizations to offer training opportunities — including free seats for hands-on certification courses through the NVIDIA Deep Learning Institute.

Driving the Future of AI

AI is growing at an incredible rate. The AI market is predicted to be worth $360 billion by 2028, up from just $35 billion in 2020, and is expected to add $880 billion to the U.K. economy by 2035.

Over 90 percent of leading businesses have an ongoing investment in AI, 23 percent of customer service organizations are using AI-powered chatbots and 46 percent of people are using AI every single day.

In such a landscape, encouraging young people across the globe to embark on their AI journeys is all the more important.

Learn more about Teens in AI and the NVIDIA Jetson Grant Program.

blogs.nvidia.com



From: Frank Sully 9/22/2021 9:49:07 PM
 
Robotics Gets A Jolt From An Open Robotics-Nvidia Partnership

Jim McGregor
Contributor



[Image: Materials handling robot using AI perception. Source: NVIDIA]

Nvidia and Open Robotics announced a partnership to enhance the ROS 2 (Robot Operating System) development suite. The partnership essentially combines the two most powerful robotics development environments and the two largest groups of robotics developers.

First released in 2010, ROS has been a key open-source platform for robotics developers, supported by various companies across a variety of industries and by government research organizations like DARPA and NASA. While the platform has continued to grow and includes the Ignition simulation environment, it has primarily targeted traditional CPU computing models. Over the past several years, however, Nvidia has pioneered heterogeneous and AI computing for IoT and edge applications through the development of its Jetson platforms, software development kits (SDKs) like Isaac for robotics, toolkits like Nvidia TAO (Train, Adapt, and Optimize) for simplifying AI model development and deployment, and Omniverse Isaac Sim for synthetic data generation and robotics simulation. Both environments are open to developers and provide valuable code, models, data sets, and simulation resources. Now the two can be combined in Nvidia’s Omniverse collaborative development environment, allowing developers to simultaneously develop everything from the physical robot to the synthetic data sets used to train it.



[Image: The Jetson product family and the Isaac robotics platform. Source: NVIDIA]

For ROS developers, this opens a world of possibilities. Pulling ROS into the Nvidia environment gives developers the ability to leverage offload/acceleration engines like the GPU, shared memory, and predesigned hardware acceleration algorithms Nvidia calls Isaac GEMs. Thus far, Nvidia is offering three GEMs for image processing and DNN-based perception models: SGM Stereo Disparity and Point Cloud, Color Space Conversion and Lens Distortion Correction, and AprilTags Detection. The performance lift from offloading depends on the specific algorithm, but Nvidia expects that some will deliver an order-of-magnitude improvement in performance versus the same implementation on a CPU. In addition, Isaac Sim includes support for ROS and ROS 2 algorithms, including ROS April Tag, ROS Stereo Camera, ROS Services, the MoveIt Motion Planning Framework, Native Python ROS Usage, and ROS 2 Navigation. Isaac Sim can also be used to generate synthetic data to train and test perception models. The predesigned algorithms, combined with the synthetic data, allow even the most novice developer or startup to quickly develop robotic platforms.

ROS developers seeking to add AI technologies to their products will also be able to leverage other Nvidia SDKs, such as Fleet Command for remote system management, Riva for conversational AI, and DeepStream for video streaming analytics. Most important, from Tirias Research’s perspective, is the ability to leverage the Omniverse environment, which allows multiple simultaneous users with seamless interaction between tools, along with the massive amounts of new data and machine learning (ML) models being developed by Nvidia.

Although Nvidia has SDKs for various applications, such as Isaac for robotics, Clara for healthcare, and Drive for autonomous vehicles, the ML models for each of these segments increasingly overlap. When discussing this point with Nvidia’s general manager of robotics, Murali Gopalakrishna, he indicated that there is considerable crossover in the development of the SDKs and models for many of the applications. According to Mr. Gopalakrishna, “the only difference is the data; the decisions are still the same.” As a result, advances in one market or application typically benefit multiple markets and applications.



[Chart: Worldwide forecast for robots. Source: NVIDIA, Statista]

According to data from Statista, the robotics market is projected to grow at over 25% annually, up from approximately 20% prior to COVID. COVID is pushing the use of robotics in everything from healthcare and manufacturing to agriculture and food delivery. Leveraging advancements in AI, sensors, wireless communications (5G), and semiconductor technology, robotics is rapidly moving into the mainstream of society. By 2025, the global robotics market will reach $210 billion, but that is a fraction of the value of the products and services that will be generated by robotics. Having evaluated various development platforms and tools, I can attest to the value of the resources that the Nvidia Isaac and ROS platforms offer developers. Both make it easy for developers to begin building new robotic platforms, but the combination of the two, for lack of a better way to describe it, democratizes robotic development and AI for robotics. The joining of the two environments also brings together the two largest robotics developer communities, both focused on open-source collaboration.

google.com



From: Frank Sully 9/22/2021 9:58:59 PM
 
Gaming Market Worth $545.98 Billion by 2028 | Fortune Business Insights™

Top companies covered in the gaming market report are Microsoft Corporation (Redmond, Washington, United States), Nintendo Co., Ltd (Kyoto, Japan), Rovio Entertainment Corporation (Espoo, Finland), Nvidia Corporation (California, United States), Valve Corporation (Washington, United States), PlayJam Ltd (London, United Kingdom), Electronic Arts Inc (California, United States), Sony Group Corporation (Tokyo, Japan), Bandai Namco Holdings Inc (Tokyo, Japan), Activision Blizzard, Inc (California, United States) and more players profiled.

September 22, 2021 16:00 ET | Source: Fortune Business Insights

Pune, India, Sept. 22, 2021 (GLOBE NEWSWIRE) -- The gaming market is fragmented, with major companies focusing on maintaining their presence. They are doing so by proactively investing in R&D activities to develop engaging online video games. Additionally, other key players are adopting organic and inorganic strategies to maintain a stronghold, which will contribute to the growth of the market during the forecast period. The global gaming market size is expected to gain momentum by reaching USD 545.98 billion by 2028 while exhibiting a CAGR of 13.20% between 2021 and 2028. In its report, titled "Gaming Market Size, Share & Forecast 2021-2028," Fortune Business Insights mentions that the market stood at USD 203.12 billion in 2020.
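
A quick consistency check of those figures (a sketch; it only verifies that the reported base-year value, CAGR and 2028 projection roughly agree):

    # Consistency check of the Fortune Business Insights figures quoted above.
    base_2020 = 203.12        # USD billion, reported 2020 market size
    cagr = 0.132              # 13.20% forecast CAGR
    years = 8                 # 2020 -> 2028

    projected_2028 = base_2020 * (1 + cagr) ** years
    print(f"Implied 2028 market size: ~${projected_2028:.0f} billion")
    # -> roughly $548 billion, in line with the reported USD 545.98 billion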

Online video games have become more prevalent in recent years. Most people find online games an attractive, modest way to unwind from their hectic schedules. Moreover, during the pandemic, the inclination toward gaming increased dramatically. Many companies, such as Nintendo and Tencent, witnessed an increase in their sales during the first quarter; the former posted a 41% rise in profit as it sold many of its games digitally. The demand for online games will be persistent in upcoming years, and this market is anticipated to boom during the forecast period.

List of the Companies Profiled in the Global Gaming Market:
  • Microsoft Corporation (Redmond, Washington, United States)
  • Nintendo Co., Ltd (Kyoto, Japan)
  • Rovio Entertainment Corporation (Espoo, Finland)
  • Nvidia Corporation (California, United States)
  • Valve Corporation (Washington, United States)
  • PlayJam Ltd (London, United Kingdom)
  • Electronic Arts Inc (California, United States)
  • Sony Group Corporation (Tokyo, Japan)
  • Bandai Namco Holdings Inc (Tokyo, Japan)
  • Activision Blizzard, Inc (California, United States)

Market Segmentation:

Based on game type, the market is divided into shooter, action, sports, role-playing, and others.

Based on game type, the shooter segment held a market share of about 23.35% in 2020. The segment is expected to see considerable growth since shooter games provide realistic 3D graphics and immerse players in a virtual world, and this fascinating atmosphere is driving the segment's growth.

globenewswire.com



From: Frank Sully 9/23/2021 1:22:57 PM
 
Doing the Math: Michigan Team Cracks the Code for Subatomic Insights

In less than 18 months, and thanks to GPUs, a team from the University of Michigan got 20x speedups on a program using complex math that’s fundamental to quantum science.

September 23, 2021 by RICK MERRITT

In record time, Vikram Gavini’s lab crossed a big milestone in viewing tiny things.

The three-person team at the University of Michigan crafted a program that uses complex math to peer deep into the world of the atom. It could advance many fields of science, as well as the design for everything from lighter cars to more effective drugs.

The code, available in the group’s open source repository, got a 20x speedup in just 18 months thanks to GPUs.

A Journey to the Summit

In mid-2018 the team was getting ready to release a version of the code running on CPUs when it got an invite to a GPU hackathon at Oak Ridge National Lab, the home of Summit, one of the world’s fastest supercomputers.

“We thought, let’s go see what we can achieve,” said Gavini, a professor of mechanical engineering and materials science.

“We quickly realized our code could exploit the massive parallelism in GPUs,” said Sambit Das, a post-doc from the lab who attended the five-day event.

Before it was over, Das and another lab member, Phani Motamarri, got 5x speedups moving the code to CUDA and its libraries. They also heard the promise of much more to come.

From 5x to 20x Speedups in Six Months

Over the next few months, the lab continued to tune its program for analyzing 100,000 electrons in 10,000 magnesium atoms. By early 2019, it was ready to run on Summit.

Taking an iterative approach, the lab ran increasing portions of its code on more and more of Summit’s nodes. By April, it was using most of the system’s 27,000 GPUs, getting nearly 46 petaflops of performance, 20x prior work.

It was an unheard-of result for a program based on density functional theory (DFT), the complex math that accounts for quantum interactions among subatomic particles.
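
For readers who want to see the math being referred to, most DFT codes boil down to solving the Kohn-Sham eigenvalue problem self-consistently; the standard textbook form is shown below (this is the general formulation, not the Michigan group's specific discretization):

    % Kohn-Sham equations (standard form)
    \left[ -\frac{\hbar^2}{2m}\nabla^2 + v_{\mathrm{eff}}[n](\mathbf{r}) \right] \psi_i(\mathbf{r})
        = \varepsilon_i \, \psi_i(\mathbf{r}),
    \qquad
    n(\mathbf{r}) = \sum_i f_i \, |\psi_i(\mathbf{r})|^2

Because the effective potential depends on the electron density, which in turn depends on the orbitals, the equations must be iterated to self-consistency, which is part of what makes systems as large as 10,000 magnesium atoms so demanding.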

Distributed Computing for Difficult Calculations

DFT calculations are so complex and fundamental that they currently consume a quarter of the time on all public research computers. They are the subject of 12 of the 100 most-cited scientific papers, used to analyze everything from astrophysics to DNA strands.

Initially, the lab reported its program used nearly 30 percent of Summit’s peak theoretical capability, an unusually high efficiency rate. By comparison, most other DFT codes don’t even report efficiency because they have difficulty scaling beyond use of a few processors.

“It was really exciting to get to that point because it was unprecedented,” said Gavini.

Recognition for a Math Milestone

In late 2019, the group was named a finalist for a Gordon Bell award. It was the lab’s first submission for the award that’s the equivalent of a Nobel in high performance computing.

“That provided a lot of visibility for our lab and our university, and I think this effort is just the beginning,” Gavini said.

Indeed, since the competition, the lab pushed the code’s performance to 64 petaflops and 38 percent efficiency on Summit. And it’s already exploring its use on other systems and applications.

Seeking More Apps, Performance

The initial work analyzed magnesium, a metal much lighter than the steel and aluminum used in cars and planes today, promising significant fuel savings. Last year, the lab teamed up with another group exploring how electrons move in DNA, work that could help other researchers develop more effective drugs.

The next big step is running the code on Perlmutter, a supercomputer using the latest NVIDIA A100 Tensor Core GPUs. Das reports he’s already getting 4x speedups compared to the Summit GPUs thanks to the A100 GPUs’ support for TensorFloat-32, a mixed-precision format that delivers both fast results and high accuracy.
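
As a side note on what TensorFloat-32 looks like from application code, GPU frameworks typically expose it as a simple switch; the PyTorch sketch below is only an illustration of the format, not the Michigan group's own code:

    # Illustration only: enabling TensorFloat-32 matmuls in PyTorch on an Ampere GPU.
    # This is not the Michigan lab's code; it just shows the trade-off the paragraph
    # describes (FP32-like dynamic range, reduced mantissa, much faster matrix math).
    import torch

    torch.backends.cuda.matmul.allow_tf32 = True   # use TF32 tensor cores for matmuls
    torch.backends.cudnn.allow_tf32 = True         # and for cuDNN convolutions

    a = torch.randn(4096, 4096, device="cuda")
    b = torch.randn(4096, 4096, device="cuda")
    c = a @ b                                      # runs on TF32 tensor cores
    print(c.dtype)                                 # still reported as torch.float32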

The lab’s program already offers 100x speedups compared to other DFT codes, but Gavini’s not stopping there. He’s already thinking about testing it on Fugaku, an Arm-based system that’s currently the world’s fastest supercomputer.

“It’s always exciting to see how far you can get, and there’s always a next milestone. We see this as the beginning of a journey,” he said.

blogs.nvidia.com
