
Technology Stocks: NVIDIA Corporation (NVDA)


From: Frank Sully, 9/22/2021 2:30:09 PM
 
In The Latest AI Benchmarks, Nvidia Remains The Champ, But Qualcomm Is Rising Fast



Karl Freund
Contributor
Enterprise Tech

NVIDIA rules the performance roost, Qualcomm demonstrates exceptional power efficiency, and Intel demonstrates the power of software.

Every three months, the not-for-profit group MLCommons publishes a slew of peer-reviewed MLPerf benchmark results for deep learning, alternating between training and inference processing. This time around, it was Inference Processing V1.1. Over 50 members agree on a set of benchmarks and data sets they feel are representative of real AI workloads such as image and language processing. And then the fun begins.

From what I hear from vendors, these benchmarks are increasingly being used in Requests for Proposals for AI gear, and also serve as a robust test bed for engineers of new chip designs and optimization software. So everyone wins, whether or not they publish. This time around NVIDIA, Intel, and Qualcomm added new models and configurations, and results were submitted from Dell, HPE, Lenovo, NVIDIA, Inspur, Gigabyte, Supermicro, and Nettrix.

And the winner is…

Before we get to the accelerators, a few comments about inference processing. Unlike training, where the AI job is the job, inference is usually a small part of an application, which of course runs on an x86 server. Consequently, Intel pretty much owns the data center inference market; most models perform quite well on Xeons. Aware of this, Intel has continually updated the hardware and software to run faster to keep those customers happy and on the platform. We will cover more details on this in a moment, but for workloads requiring more performance or dedicated throughput for more complex models, accelerators are the way to go.

On that front, Nvidia remains the fastest AI accelerator for every workload on a single-chip basis. On a system level, however, a 16-card Qualcomm system delivered the fastest ResNet-50 performance with 342K images per second, over an 8-GPU Inspur server at 329K. While Qualcomm increased their model coverage, Nvidia was the only firm to submit benchmark results for every AI model. Intel submitted results for nearly all models in data center processing.

Qualcomm shared results for both the Snapdragon at the edge, and the Cloud AI100, both offering rock solid performance with absolute leadership power efficiency across the board. Here’s some of the data.



Nvidia, as usual, clobbered the inference incumbent, which is the Intel Xeon CPU. Qualcomm outperformed the NVIDIA A30, however.

NVIDIA

Every year, Nvidia demonstrates improved performance, even on the same hardware, thanks to continuous improvements in its software stack. In particular, TensorRT improves performance and efficiency by pre-processing the neural network, performing functions such as quantization to lower-precision formats and arithmetic. But the star of the Nvidia software show for inference is increasingly the Triton Inference Server, which manages the run-time optimizations and workload balancing using Kubernetes. Nvidia has open-sourced Triton, and it now supports x86 CPUs as well as Nvidia GPUs. In fact, since it is open, Triton could be extended with backends for other accelerators and GPUs, saving startups a lot of software development work.
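
To make the Triton piece a little more concrete, here is a minimal sketch of what querying a Triton-served model looks like from a Python client, using the open-source tritonclient package. The model name and tensor names ("resnet50", "input", "output") are placeholders of my own, not anything from the benchmark submissions; substitute whatever your model repository actually exposes.

```python
# Minimal sketch: querying a model served by Triton Inference Server over HTTP.
# Model name and tensor names ("resnet50", "input", "output") are placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# One 224x224 RGB image, NCHW layout, FP32 -- a typical ResNet-50 style input.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

infer_input = httpclient.InferInput("input", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)
requested = httpclient.InferRequestedOutput("output")

response = client.infer(model_name="resnet50",
                        inputs=[infer_input],
                        outputs=[requested])
print(response.as_numpy("output").shape)   # e.g. (1, 1000) for an ImageNet classifier
```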



Nvidia demonstrated nearly 50% better performance.

NVIDIA

In some applications, power efficiency is a critical factor for success, but only if the platform achieves the required performance and latency for the models being deployed. Thanks to years of research in AI and Qualcomm’s mobile processor legacy, the AI engines in Snapdragon and the Cloud AI100 deliver both, with up to half the power per transaction for many models versus the Nvidia A100, and nearly four times the efficiency of the Nvidia A10.
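
The power-per-transaction comparisons above boil down to a simple ratio: throughput divided by board power. A back-of-the-envelope sketch of that arithmetic is below; the throughput and wattage figures are illustrative placeholders, not the actual MLPerf v1.1 submission data.

```python
# Back-of-the-envelope performance-per-watt comparison.
# Throughput (inferences/sec) and power (watts) are illustrative placeholders,
# NOT the actual MLPerf v1.1 submission figures.
accelerators = {
    "Accelerator A": {"throughput": 25_000, "power_w": 75},
    "Accelerator B": {"throughput": 60_000, "power_w": 400},
}

for name, d in accelerators.items():
    perf_per_watt = d["throughput"] / d["power_w"]
    print(f"{name}: {perf_per_watt:,.0f} inferences/sec per watt")
```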



Qualcomm Cloud AI100 offered the best performance per watt of all submissions.

QUALCOMM

Back to Intel, the new Ice Lake Xeon performed quite well, with up to 3X improvement over the previous Cooper Lake CPU for DLRM (recommendation engines), and 1.5X on other models. Recommendation engines represent a huge market in which Xeon rules, and for which other contenders are investing heavily, so this is a very good move for Intel.



Intel demonstrated 50%-300% better performance with Ice Lake vs. the previous Cooper Lake results. This was accomplished in part by the improvements the engineering team has realized in the software stack.

INTEL

Intel also demonstrated significant performance improvements in the development stack for Xeon. The most dramatic improvement was again for DLRM, in which sparse data and weights are common. In this case, Intel delivered over 5X performance improvement on the same hardware.
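
For readers less familiar with why sparsity pays off: with most of the weights zeroed out, a sparse matrix-vector product only has to touch the non-zero entries. The small sketch below illustrates that bookkeeping with scipy; it is not a reproduction of Intel's DLRM result, and turning the reduced work into wall-clock speedups takes tuned kernels like the ones described above.

```python
# Minimal illustration of why sparsity helps: with ~90% of the weights zeroed
# out, a sparse matrix-vector product needs roughly a tenth of the
# multiply-accumulates and a tenth of the weight storage.
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
dense_w = rng.standard_normal((4096, 4096)).astype(np.float32)
dense_w[rng.random((4096, 4096)) < 0.90] = 0.0     # zero out ~90% of weights
sparse_w = sp.csr_matrix(dense_w)                  # store only the non-zeros

x = rng.standard_normal(4096).astype(np.float32)
y = sparse_w @ x                                   # touches only non-zero weights

total = dense_w.size
print(f"non-zeros: {sparse_w.nnz:,} of {total:,} "
      f"({sparse_w.nnz / total:.1%} of the dense multiply-accumulates)")
```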



Intel shared the five-fold performance improvement from sparsity, on the same chip.

INTEL

Conclusions

As followers of Cambrian-AI know, we believe that MLPerf presents vendors and users with valuable benchmarks, a real-world testing platform, and of course keeps analysts quite busy slicing and dicing all the data. In three months we expect a lot of exciting results, and look forward to sharing our insights and analysis with you then.

forbes.com



From: Frank Sully, 9/22/2021 2:48:55 PM
 
Nvidia cosies up to Open Robotics for hardware-accelerated ROS

Hopes to tempt roboticists over to its Jetson platform with new simulation features, drop-in acceleration code

Gareth Halfacree, Wed 22 Sep 2021

Nvidia has linked up with Open Robotics to drive new artificial intelligence capabilities in the Robot Operating System (ROS).

The non-exclusive agreement will see Open Robotics extending ROS 2, the latest version of the open-source robotics framework, to better support Nvidia hardware – and in particular its Jetson range, low-power parts which combine Arm cores with the company's own GPU and deep-learning accelerator cores to drive edge and embedded artificial intelligence applications.

"Our users have been building and simulating robots with Nvidia hardware for years, and we want to make sure that ROS 2 and Ignition work well on those platforms," Brian Gerkey, Open Robotics' chief exec, told The Register.

"We get most excited by two things: robots and open source. This partnership has both. We're working together with Nvidia to improve the developer experience for the global robotics community by extending the open source software on which roboticists rely. We're excited to work directly with Nvidia and have their support as we extend our software to take maximum advantage of their hardware."

The team-up will see Open Robotics working on ROS to improve the data flow between the various processors – CPU, GPU, NVDLA, and Tensor Cores – on Nvidia's Jetson hardware as a means to boost processing of high-bandwidth data.
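
To ground what "high-bandwidth data" means on the ROS side, here is a minimal ROS 2 (rclpy) node that subscribes to a camera stream — the kind of topic the partnership aims to push through the Jetson's GPU and accelerators rather than the CPU. The topic name "/camera/image_raw" is an assumption; use whatever your camera driver publishes.

```python
# Minimal ROS 2 node subscribing to a camera stream -- the sort of
# high-bandwidth topic the Nvidia/Open Robotics work targets.
# The topic name "/camera/image_raw" is an assumption.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image


class ImageListener(Node):
    def __init__(self):
        super().__init__("image_listener")
        self.create_subscription(Image, "/camera/image_raw", self.on_image, 10)

    def on_image(self, msg: Image):
        # In an accelerated pipeline, this is where frames would be handed to a
        # GPU-backed package instead of being processed on the CPU.
        self.get_logger().info(f"frame {msg.width}x{msg.height}, {msg.encoding}")


def main():
    rclpy.init()
    rclpy.spin(ImageListener())
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```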

As part of that, Open Robotics' Ignition and Nvidia's Isaac Sim simulation environments are to gain interoperability – meaning robot and environment models can be moved from one to the other, at least when the software is finished some time early next year.

As for why Nvidia's accelerated computing portfolio, and in particular its embedded Jetson family of products, should appeal to robot-makers, Gerkey said: "Nvidia has invested heavily in compute hardware that's relevant for modern robotics and AI workloads. Robots ingest and process large data volumes from sensors such as cameras and lasers. Nvidia's architecture allows that data flow to happen incredibly efficiently."

Murali Gopalakrishna, head of product management, Intelligent Machines, at Nvidia said of the hookup: "Nvidia's GPU-accelerated computing platform is at the core of many AI robot applications, and many of those are developed using ROS, so it is logical that we work closely with Open Robotics to advance the field of robotics."

The work also brings with it some new Isaac GEMs, hardware-accelerated packages for ROS designed to replace code which would otherwise run on the CPU. The latest GEMs include packages for handling stereo imaging and point cloud data, colour space conversion, lens distortion correction, and the detection and processing of AprilTags – QR Code-style 2D fiducial tags developed at the University of Michigan.

The partnership doesn't mean the two are going steady, though. "We are eager to extend ROS 2 in similar ways on other accelerated hardware," Gerkey told us of planned support for other devices like Intel's Myriad X and Google's TPU – to say nothing of GPU hardware from Nvidia rival AMD.

"In fact, we plan for the work we do together with Nvidia to lay the foundation for additional extensions for additional architectures. To other hardware manufacturers: please contact us to talk about extensions for your platform!"

The latest Isaac GEMs are available on Nvidia's GitHub repository now; the interoperable simulation environments, meanwhile, aren't expected to release until the (northern hemisphere) spring of 2022.

Nvidia's Gopalakrishna said it was possible for ROS developers to begin experimenting before the release date. "The simulator already has a ROS 1 and ROS 2 bridge and has examples of using many of the popular ROS packages for navigation (nav2) and manipulation (MoveIT). Many of these developers are also leveraging Isaac Sim to generate synthetic data to train the perception stack in their robots. Our spring release will bring additional functionality like interoperability between Gazebo Ignition and Isaac Sim."

When we asked what performance uplift users could expect from the new Isaac GEMs compared to CPU-only packages, we were told: "The amount of performance gain will vary depending on how much inherent parallelism exists in a given workload. But we can say that we are seeing an order of magnitude increase in performance for perception and AI related workloads. By using the appropriate processor to accelerate the different tasks, we see increased performance and better power efficiency."

As for additional features in the pipeline, Gopalakrishna said: "Nvidia is working with Open Robotics to make the ROS framework more streamlined for hardware acceleration and we will also continue to release multiple new Isaac GEMs, our hardware accelerated software packages for ROS.

"Some of these will be DNNs which are commonly used in robotics perception stacks. On the simulator side, we are working to add support for more sensors and robots and release more samples that are relevant to the ROS community." ®

theregister.com



From: Frank Sully, 9/22/2021 2:55:11 PM
 
Next Generation: ‘Teens in AI’ Takes on the Ada Lovelace Hackathon

September 22, 2021

by LIZ AUSTIN

Jobs in data science and AI are among the fastest growing in the entire workforce, according to LinkedIn’s 2021 Jobs Report.

Teens in AI, a London-based initiative, is working to inspire the next generation of AI researchers, entrepreneurs and leaders through a combination of hackathons, accelerators, networking events and bootcamps.

In October, the organization, with support from NVIDIA, will host the annual Ada Lovelace Hackathon, created for young women ages 11-18 to get a glimpse of all that can be done in the world of AI.

Inspired By AI

The need to embolden young women to join the tech industry is great.

Only 30 percent of the world’s science researchers are women. And fewer than one in five authors at leading AI conferences are women, about the same ratio of those teaching AI-related subjects, according to the AI Now Institute.

Founded by social entrepreneur Elena Sinel, Teens in AI is trying to change that. It aims to give young people — especially young women — early exposure to AI that’s being developed and deployed to promote social good.

The organization, which was launched at the 2018 AI for Good Global Summit at the United Nations, has an expansive network of mentors from some of the world’s leading companies. These volunteers work with students and inspire them to use AI to address social, humanitarian and environmental challenges.

“A shortage of STEM skills costs businesses billions of dollars every year, impacting UK businesses alone by about £1.5 billion a year,” Sinel said. “Yet with so few girls — especially those from disadvantaged backgrounds — studying STEM, we are depriving ourselves of potential talent.”

Sinel said that Teens in AI makes STEM education approachable and increases exposure to female role models, showing young women that a bright STEM career isn’t reserved only for males.

“We can’t do this on our own, so we’re constantly on the lookout for like-minded corporate partners like NVIDIA who will work with us to grow this community of young people who want to make the world more inclusive and sustainable,” she said.

Ada Lovelace Hackathon

With the company’s support, the Ada Lovelace Hackathon — named for the 19th century mathematician who is often regarded as the first computer programmer — showcases speakers and mentors to encourage young women to pursue a career in AI. This year’s event is expected to reach more than 1,000 girls from 20+ countries.

Participants will have the opportunity to receive prizes and get access to NVIDIA Deep Learning Institute credits for more advanced hands-on training and experience.

NVIDIA employees around the world will serve as mentors and judges.

Kate Kallot, head of emerging areas at NVIDIA, judged last year’s Ada Lovelace Hackathon, as well as August’s Global AI Accelerator Program for Teens in AI.

“I hope to inform and inspire young people in how they can help fuel applications and the AI revolution,” Kallot said. “While there’s a heavy demand for people with technical skills, what’s also needed is a future AI workforce that is truly reflective of our diverse world.”

Kallot talked more about the importance of fighting racial biases in the AI industry on a recent Teens in AI podcast episode.

Championing Diversity

NVIDIA’s support of Teens in AI is part of our broader commitment to bringing more diversity to tech, expanding access to AI education and championing opportunities for traditionally underrepresented groups.

This year, we announced a partnership with the Boys & Girls Clubs of Western Pennsylvania to develop an open-source AI and robotics curriculum for high school students. The collaboration has given hundreds of Jetson Nano developer kits to educators in schools and nonprofits, through the NVIDIA Jetson Nano 2GB Developer Kit Grant Program.

NVIDIA also works with minority-serving institutions and diversity-focused professional organizations to offer training opportunities — including free seats for hands-on certification courses through the NVIDIA Deep Learning Institute.

Driving the Future of AI

AI is growing at an incredible rate. The AI market is predicted to be worth $360 billion by 2028, up from just $35 billion in 2020, and is expected to add $880 billion to the U.K. economy by 2035.

Over 90 percent of leading businesses have an ongoing investment in AI, 23 percent of customer service organizations are using AI-powered chatbots and 46 percent of people are using AI every single day.

In such a landscape, encouraging young people across the globe to embark on their AI journeys is all the more important.

Learn more about Teens in AI and the NVIDIA Jetson Grant Program.

blogs.nvidia.com



From: Frank Sully, 9/22/2021 9:49:07 PM
 
Robotics Gets A Jolt From An Open Robotics-Nvidia Partnership

Jim McGregor
Contributor



Materials handling robot using AI perception

NVIDIA

Nvidia and Open Robotics announced a partnership to enhance the ROS 2 (Robot Operating System) development suite. The partnership essentially combines the two most powerful robotics development environments and the two largest groups of robotics developers.

First released in 2010, ROS has been a key open-source platform for robotics developers supported by various companies in a variety of industries and government research organizations like DARPA and NASA. While the platform has continued to grow and includes the Ignition simulation environment, it has primarily targeted traditional CPU computing models. Over the past several years, however, Nvidia has pioneered heterogeneous and AI computing for IoT and edge applications through the development of its Jetson platforms, software development kits (SDKs) like Isaac for robotics, toolkits like Nvidia TAO (Train, Adapt, and Optimize) for simplifying AI model development and deployment, and Omniverse Isaac Sim for synthetic data generation and robotics simulation. Both environments are open to developers and provide valuable code, models, data sets, and simulation resources. Now the two can be combined into Nvidia’s Omniverse collaborative development environment to allow developers to simultaneously develop everything from the physical robot to the synthetic data sets used to train it.



The Jetson product family and Isaac Robotics platform

NVIDIA

For the ROS developers, this opens a world of possibilities. Pulling ROS into the Nvidia environment offers the developer the ability to leverage offload/acceleration engines like a GPU, shared memory, and predesigned hardware acceleration algorithms Nvidia calls Isaac GEMs. Thus far, Nvidia is offering three GEMs for image processing and DNN-based perception models, including SGM Stereo Disparity and Point Cloud, Color Space Conversion and Lens Distortion Correction, and AprilTags Detection. The performance lift from offloading depends on the specific algorithm, but Nvidia expects that some will result in an order of magnitude improvement in performance versus the same implementation on a CPU. In addition, Isaac Sim includes support for ROS and ROS 2 algorithms, including ROS April Tag, ROS Stereo Camera, ROS Services, the MoveIt Motion Planning Framework, Native Python ROS Usage, and ROS 2 Navigation. Isaac Sim can also be used to generate synthetic data to train and test perception models. The predesigned algorithms combined with the synthetic data allow even the most novice developer or startup to quickly develop robotic platforms.
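
For context on one of those GEMs, semi-global matching (SGM) stereo disparity is a standard perception workload that can also be run, far more slowly, on a CPU. The sketch below is a CPU-only baseline using OpenCV's semi-global block matcher — it shows the class of computation the GPU GEM offloads, not Nvidia's implementation, and the file names are placeholders.

```python
# CPU-only baseline for stereo disparity using OpenCV's semi-global matcher.
# This is the class of workload the "SGM Stereo Disparity" GEM accelerates on
# the GPU; it is not Nvidia's implementation. File names are placeholders.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

sgbm = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,   # must be divisible by 16
    blockSize=5,
)
disparity = sgbm.compute(left, right).astype("float32") / 16.0  # fixed-point, scaled by 16

# Normalize to 0-255 for viewing and save.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```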

ROS developers seeking to add AI technologies to their products will also be able to leverage other Nvidia SDKs, such as Fleet Command for remote system management, Riva for conversational AI, and Deepstream for video streaming analytics. Most importantly, from Tirias Research’s perspective is the ability to leverage the Omniverse environment, which allows multiple simultaneous users with seamless interaction between tools, and the massive amounts of new data and machine learning (ML) models being developed by Nvidia.

Although Nvidia has SDKs for various applications, such as Isaac for robotics, Clara for healthcare, and Drive for autonomous vehicles, the ML models for each of these segments increasingly overlap. When discussing this point with Nvidia’s General Manager of Robotics, Murali Gopalakrishna, he indicated that there is considerable crossover in the development of the SDKs and models for many of the applications. According to Mr. Gopalakrishna, “the only difference is the data; the decisions are still the same.” As a result, advances in one market or application typically benefit multiple markets and applications.



Worldwide forecast for robots

NVIDIA, STATISTA

According to data from Statista, the robotics market is projected to grow at over a 25% rate annually, an increase from approximately 20% prior to COVID. COVID is pushing the use of robotics in everything from healthcare and manufacturing to agriculture and food delivery. Leveraging the advancements in AI, sensors, wireless communications (5G), and semiconductor technology, robotics is rapidly moving into the mainstream of society. By 2025, the global robotics market will reach $210 billion, but that is a fraction of the value of the products and services that will be generated by robotics. Having evaluated various development platforms and tools, I can attest to the value of the resources that the Nvidia Isaac and ROS platforms offer developers. Both make it easy for developers to begin developing new robotic platforms, but the combination of the two, for lack of a better way to describe it, democratizes robotic development and AI for robotics. The joining of the two environments also brings together the two largest robotics developer communities, both focused on open-source collaboration.

google.com



From: Frank Sully, 9/22/2021 9:58:59 PM
 
Gaming Market Worth $545.98 Billion by 2028 | Fortune Business Insights™

Top companies covered in the gaming market report are Microsoft Corporation (Redmond, Washington, United States), Nintendo Co., Ltd (Kyoto, Japan), Rovio Entertainment Corporation (Espoo, Finland), Nvidia Corporation (California, United States), Valve Corporation (Washington, United States), PlayJam Ltd (London, United Kingdom), Electronic Arts Inc (California, United States), Sony Group Corporation (Tokyo, Japan), Bandai Namco Holdings Inc (Tokyo, Japan), Activision Blizzard, Inc (California, United States) and more players profiled.

September 22, 2021 16:00 ET | Source: Fortune Business Insights

Pune, India, Sept. 22, 2021 (GLOBE NEWSWIRE) -- The gaming market is fragmented by major companies that are focusing on maintaining their presence. They are doing so by proactively investing in R&D activities to develop engaging online video games. Additionally, other key players are adopting organic and inorganic strategies to maintain a stronghold that will contribute to the growth of the market during the forecast period. The global gaming market size is expected to gain momentum by reaching USD 545.98 billion by 2028 while exhibiting a CAGR of 13.20% between 2021 and 2028. In its report titled "Gaming Market Size, Share & Forecast 2021-2028," Fortune Business Insights mentions that the market stood at USD 203.12 billion in 2020.
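
As a quick sanity check, compounding the 2020 base at the stated CAGR for eight years lands close to the 2028 projection. The arithmetic below is just that check, using only the figures quoted above, not additional data from the report.

```python
# Sanity check: does USD 203.12B in 2020, compounded at a 13.20% CAGR,
# reach roughly USD 545.98B by 2028? (Eight compounding periods.)
base_2020 = 203.12
cagr = 0.1320
years = 2028 - 2020

projection_2028 = base_2020 * (1 + cagr) ** years
print(f"Implied 2028 market size: ${projection_2028:.2f}B")  # ~$548B, close to the $545.98B forecast
```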

Online video games have become more prevalent in recent years. Most people find online games attractive and a modest way to find free time from their hectic schedules. Moreover, during the pandemic, the inclination toward gaming increased dramatically. Many companies such as Nintendo and Tencent witnessed an increase in their sales during the first quarter. The former showcased a profit of 41%, as it sold many of its games digitally. The demand for online games will be persistent in upcoming years, and this market is anticipated to boom during the forecast period.

List of the Companies Profiled in the Global Gaming Market:
  • Microsoft Corporation (Redmond, Washington, United States)
  • Nintendo Co., Ltd (Kyoto, Japan)
  • Rovio Entertainment Corporation (Espoo, Finland)
  • Nvidia Corporation (California, United States)
  • Valve Corporation (Washington, United States)
  • PlayJam Ltd (London, United Kingdom)
  • Electronic Arts Inc (California, United States)
  • Sony Group Corporation (Tokyo, Japan)
  • Bandai Namco Holdings Inc (Tokyo, Japan)
  • Activision Blizzard, Inc (California, United States)

Market Segmentation:

Based on game type, the market is divided into shooter, action, sports, role-playing, and others.

Based on game type, the shooter segment held a gaming market share of about 23.35% in 2020. The segment is expected to experience considerable growth since it provides realistic 3D graphics, immersing players in a whole new experience of the virtual world. This fascinating atmosphere provided by battle games is driving the segment market.

globenewswire.com



From: Frank Sully, 9/23/2021 1:22:57 PM
 
Doing the Math: Michigan Team Cracks the Code for Subatomic Insights

In less than 18 months, and thanks to GPUs, a team from the University of Michigan got 20x speedups on a program using complex math that’s fundamental to quantum science.

September 23, 2021 by RICK MERRITT

In record time, Vikram Gavini’s lab crossed a big milestone in viewing tiny things.

The three-person team at the University of Michigan crafted a program that uses complex math to peer deep into the world of the atom. It could advance many fields of science, as well as the design for everything from lighter cars to more effective drugs.

The code, available in the group’s open source repository, got a 20x speedup in just 18 months thanks to GPUs.

A Journey to the Summit

In mid-2018 the team was getting ready to release a version of the code running on CPUs when it got an invite to a GPU hackathon at Oak Ridge National Lab, the home of Summit, one of the world’s fastest supercomputers.

“We thought, let’s go see what we can achieve,” said Gavini, a professor of mechanical engineering and materials science.

“We quickly realized our code could exploit the massive parallelism in GPUs,” said Sambit Das, a post-doc from the lab who attended the five-day event.

Before it was over, Das and another lab member, Phani Motamarri, got 5x speedups moving the code to CUDA and its libraries. They also heard the promise of much more to come.

From 5x to 20x Speedups in Six Months

Over the next few months, the lab continued to tune its program for analyzing 100,000 electrons in 10,000 magnesium atoms. By early 2019, it was ready to run on Summit.

Taking an iterative approach, the lab ran increasing portions of its code on more and more of Summit’s nodes. By April, it was using most of the system’s 27,000 GPUs, getting nearly 46 petaflops of performance, 20x prior work.

It was an unheard-of result for a program based on density functional theory (DFT), the complex math that accounts for quantum interactions among subatomic particles.

Distributed Computing for Difficult Calculations

DFT calculations are so complex and fundamental that they currently consume a quarter of the time on all public research computers. They are the subject of 12 of the 100 most-cited scientific papers, used to analyze everything from astrophysics to DNA strands.

Initially, the lab reported its program used nearly 30 percent of Summit’s peak theoretical capability, an unusually high efficiency rate. By comparison, most other DFT codes don’t even report efficiency because they have difficulty scaling beyond a few processors.

“It was really exciting to get to that point because it was unprecedented,” said Gavini.

Recognition for a Math Milestone

In late 2019, the group was named a finalist for a Gordon Bell award. It was the lab’s first submission for the award that’s the equivalent of a Nobel in high performance computing.

“That provided a lot of visibility for our lab and our university, and I think this effort is just the beginning,” Gavini said.

Indeed, since the competition, the lab pushed the code’s performance to 64 petaflops and 38 percent efficiency on Summit. And it’s already exploring its use on other systems and applications.

Seeking More Apps, Performance

The initial work analyzed magnesium, a metal much lighter than the steel and aluminum used in cars and planes today, promising significant fuel savings. Last year, the lab teamed up with another group exploring how electrons move in DNA, work that could help other researchers develop more effective drugs.

The next big step is running the code on Perlmutter, a supercomputer using the latest NVIDIA A100 Tensor Core GPUs. Das reports he’s already getting 4x speedups compared to the Summit GPUs thanks to the A100 GPUs’ support for TensorFloat-32, a mixed-precision format that delivers both fast results and high accuracy.
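
The lab's DFT code is its own, but TF32 itself is easy to picture. As a generic illustration (not the Michigan group's code), PyTorch exposes TF32 for matrix math on A100-class GPUs through a couple of backend flags:

```python
# Generic illustration (not the Michigan group's DFT code): enabling
# TensorFloat-32 for matrix math in PyTorch on Ampere-class GPUs such as the A100.
import torch

torch.backends.cuda.matmul.allow_tf32 = True   # matmuls use TF32 tensor cores
torch.backends.cudnn.allow_tf32 = True         # cuDNN convolutions use TF32

a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")
c = a @ b            # runs on tensor cores with FP32 range but reduced mantissa
print(c.dtype)       # torch.float32 -- results are still stored as FP32
```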

The lab’s program already offers 100x speedups compared to other DFT codes, but Gavini’s not stopping there. He’s already thinking about testing it on Fugaku, an Arm-based system that’s currently the world’s fastest supercomputer.

“It’s always exciting to see how far you can get, and there’s always a next milestone. We see this as the beginning of a journey,” he said.

blogs.nvidia.com



From: Frank Sully, 9/23/2021 8:11:10 PM
 
SambaNova makes a mark in the AI hardware realm

The startup says it is innovating AI hardware systems with its data flow architecture that enterprises can use to be more efficient when processing large AI data sets.

Esther Ajao, News Writer

Published: 23 Sep 2021

As a young startup, SambaNova Systems is already making a mark in the fast-growing AI hardware industry.

The vendor, based in Palo Alto, Calif., started in 2017 with a mission of transforming how enterprises and research labs with high compute power needs deploy AI, and providing high-performance and high-accuracy hardware-software systems that are still easy to use, said Kunle Olukotun, co-founder and chief technologist.

Its technology is being noticed. SambaNova has attracted more than $1.1 billion in venture financing. With a valuation of $5.1 billion, it is one of the most well-funded AI startups and it is already competing with the likes of AI chip giant Nvidia.

What SambaNova offers

SambaNova's hallmark is its Dataflow architecture. Using the extensible machine learning services platform, enterprises can specify various configurations, whether grouping kernels together on a single chip, or on multiple chips, in a rack or on multiple racks in the SambaNova data center.

Essentially, the vendor leases to enterprise clients the processing power of its proprietary AI chips and creates machine learning models based on domain data supplied by the customer, or customers can buy SambaNova chips and run their own AI systems on them.

While other vendors have offered either just chips or just the software, SambaNova provides the entire rack, which will make AI more accessible to a wider range of organizations, said R "Ray" Wang, founder and principal analyst at Constellation Research.



SambaNova offers data flow as a service to small enterprises that lack the time, resources and desire to become experts in machine learning or AI.

"The irony of AI automation is that it's massively manual today," Wang said. "What [SambaNova is] trying to do is take away a lot of that manual process and a lot of the human error and make it a lot more accessible to get AI."

Wang added that SambaNova offers AI chips that are among the most powerful on the market.

Software-defined approach

While it's known in some ways as an AI hardware specialist, SambaNova prides itself on taking a "software-defined approach" to building its AI technology stack.

"We didn't build some hardware thinking: 'OK, now developers go out and figure it out,'" said Marshall Choy, vice president of product at SambaNova. Instead, he said the vendor focused on the problems of scale, performance, accuracy and ease of use for machine learning data flow computing. Then they built the infrastructure engine to support those needs.

Two different types of customers

SambaNova breaks up its customers into two groups: the Fortune 50 and the "Fortune everybody else." For the first group, SambaNova's data platform enables enterprise data teams to innovate and generate new models, Choy said.

The other group is made up of enterprises that lack the time, resources or desire to become experts in machine learning and AI. For these organizations, SambaNova offers Dataflow as a service.

SambaNova says this approach helps smaller enterprises by reducing the complexities of buying and maintaining hardware infrastructure and selecting, optimizing and maintaining machine learning models.

This creates a "greater AI equity and accessibility of technology than has previously been held in the hands of only the biggest, most wealthy tech companies," Choy said.

SambaNova has already attracted some big-name customers.

One is the U.S. Department of Energy's Argonne National Laboratory in Illinois.

Using SambaNova's DataScale system, Argonne trained a convolutional neural network (CNN) with images beyond 50k x 50k resolution. Previously, when Argonne tried to train the CNN on GPUs, they found that the images were too large and had to be resized to 50% resolution, according to SambaNova.
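
For a sense of scale, the memory footprint of a single image at those dimensions is easy to estimate, which is why conventional GPU training forced the downsampling. The numbers below assume a 3-channel image and are illustrative only; activations and gradients during training add far more on top.

```python
# Rough footprint of one 50,000 x 50,000, 3-channel image -- the reason such
# images do not fit comfortably through a conventional GPU training pipeline.
# Assumes 3 channels; training-time activations and gradients add much more.
height = width = 50_000
channels = 3

uint8_bytes = height * width * channels      # raw 8-bit image
fp32_bytes = uint8_bytes * 4                 # the same pixels as float32 tensors

print(f"uint8:   {uint8_bytes / 1e9:.1f} GB")   # ~7.5 GB
print(f"float32: {fp32_bytes / 1e9:.1f} GB")    # ~30.0 GB
```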

"We're seeing new ways of computing," Wang said. "This approach to getting to AI is going to be one of many. I think other people are going to try different approaches, but this one seems very promising."

google.com



From: Frank Sully, 9/23/2021 8:33:46 PM
 
How do databases support AI algorithms?

By Donald Conway

23/09/2021



Databases have always been able to do simple administrative work, such as finding particular records that match certain criteria, for example, all users who are between 20 and 30 years old. Lately, database companies have been adding artificial intelligence routines to databases so that users can explore the power of these smarter and more sophisticated algorithms on their own data stored in the database.

AI algorithms are also finding a home below the surface, where AI routines help optimize internal tasks like reindexing or query planning. These new features are often billed as an automation addition because they relieve the user of cleaning work. Developers are encouraged to let them do their job and forget about them.

However, there is much more interest in AI routines that are open to users. These machine learning algorithms can classify data and make smarter decisions that evolve and adapt over time. They can unlock new use cases and improve the flexibility of existing algorithms.

In many cases, integration is largely pragmatic and essentially cosmetic. The calculations are no different than what would occur if the data were exported and sent to a separate AI program. Within the database, the AI routines are separate and simply take advantage of any internal access to the data. Sometimes this faster access can speed up the process dramatically. When data sets are large, just moving them can take a great deal of time.

The integration can also limit the analysis to algorithms that are officially part of the database. If users want to implement a different algorithm, they must go back to the old process of exporting the data in the correct format and importing it into the AI routine.

The integration can take advantage of some of the newer in-memory distributed databases that easily distribute the load and data storage across multiple machines. These can easily handle a large amount of data. If a complex analysis is necessary, it may not be difficult to increase the CPU capacity and RAM allocated to each machine.

Some AI-powered databases can also take advantage of GPU chips. Some AI algorithms use the highly parallel architecture of GPUs to train machine learning models and run other algorithms. There are also some custom chips specially designed for AI that can dramatically speed up analysis.

However, one of the biggest advantages may be the standard interface, which is often SQL, a language that is already familiar to many programmers. Many software packages already interact easily with SQL databases. If someone wants more AI analysis, it is no more complex than learning a few new SQL statements.
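
To make the pattern concrete from the application side, here is a small sketch: ordinary SQL pulls the rows (sqlite3 is used purely for self-containment), then a Python ML routine fits a model. The in-database offerings discussed below essentially move that second step inside the engine so the data never leaves it. The table and column names are hypothetical.

```python
# The pattern described above, seen from the application side: ordinary SQL
# pulls the rows, then a Python ML routine fits a model. In-database AI moves
# the second step inside the engine. Table and column names are hypothetical.
import sqlite3
import numpy as np
from sklearn.linear_model import LinearRegression

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (age REAL, spend REAL)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(20, 120.0), (25, 180.0), (30, 260.0), (40, 400.0)])

rows = conn.execute(
    "SELECT age, spend FROM users WHERE age BETWEEN 20 AND 40").fetchall()
X = np.array([[r[0]] for r in rows])   # age as the single feature
y = np.array([r[1] for r in rows])     # spend as the target

model = LinearRegression().fit(X, y)
print(model.predict([[35]]))           # predicted spend for a 35-year-old
```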

What are established companies doing?

Artificial intelligence is a very competitive field now. All the major database companies are exploring integrating algorithms with their tools. In many cases, companies offer so many options that it is impossible to summarize them here.

Oracle has integrated AI into its databases in a variety of ways, and the company offers a broad set of options in almost every corner of its stack. At the lower levels, some developers, for example, are running machine learning algorithms in the Python interpreter that is built into the Oracle database. There are also more integrated options like Oracle Machine Learning for R, a version of R adapted to analyze data stored in Oracle databases. Many of the services are incorporated at higher levels, for example, as analysis features in its data science and analytics tools.

IBM also has a number of artificial intelligence tools that are integrated with its various databases, and the company sometimes calls Db2 “the artificial intelligence database.” At the lowest level, the database includes functions in its version of SQL to address common parts of building AI models, such as linear regression. These can be threaded together in custom stored procedures for training. Many IBM AI tools, such as Watson Studio, are designed to connect directly to the database to speed up model construction.

Hadoop and its ecosystem of tools are commonly used to analyze large data sets. While they are often viewed as more data processing channels than databases, there is often a database like HBase buried within them. Some people use the Hadoop distributed file system to store data, sometimes in CSV format. A variety of AI tools are already built into the Hadoop pipeline using tools like Submarine, making it a database with built-in AI.

All major cloud companies offer databases and artificial intelligence products. The amount of integration between any particular database and any particular AI varies substantially, but it is often quite easy to connect the two. Amazon Comprehend, a natural language text analysis tool, accepts data from S3 buckets and stores responses in many locations, including some AWS databases. Amazon SageMaker can access data in S3 buckets or Redshift data lakes, sometimes using SQL through Amazon Athena. While it’s a good question as to whether these count as true integration, there is no question that they simplify the journey.

In the Google cloud, the AutoML tool for automated machine learning can obtain data from BigQuery databases. Firebase ML offers a number of tools to address common challenges for mobile device developers, such as image classification. It will also deploy any trained TensorFlow Lite models to work with your data.

Microsoft Azure also offers a collection of databases and artificial intelligence tools. The Databricks tool, for example, is based on the Apache Spark pipeline and comes with connections to Azure Cosmos DB, its Data Lake storage, and other databases like Neo4j or Elasticsearch that may be running within Azure. Its Azure Data Factory is designed to find data in the cloud, both in databases and in generic storage.

What are the upstarts doing?

Several database startups also highlight their direct support for machine learning and other artificial intelligence routines. SingleStore, for example, offers quick analytics to track incoming telemetry in real time. This data can also be annotated based on various AI models as it is ingested.

MindsDB adds machine learning routines to standard databases such as MariaDB, PostgreSQL, or Microsoft SQL Server. It extends SQL to include functions that learn from the data already in the database to make predictions and classify objects. These functions are also easily accessible in more than a dozen business intelligence applications, such as Salesforce’s Tableau or Microsoft’s Power BI, that work closely with SQL databases.

Many of the companies bury the database deep inside the product and sell only the service itself. Riskified, for example, tracks financial transactions using artificial intelligence models and offers protection to merchants through “chargeback guarantees.” The tool ingests transactions and maintains historical data, but there is little discussion about the database layer.

In many cases, companies that bill themselves as pure AI companies are also database providers. After all, the data must be stored somewhere. H2O.ai, for example, is just one of the cloud AI providers that offers integrated data prep and AI analytics. However, the data storage is more hidden, and many people think of software like H2O.ai first for its analytical power. Still, it can store and analyze the data.

Is there anything the built-in AI databases can’t do?

Adding AI routines directly to a database’s feature set can simplify the lives of database developers and administrators. It can also make the analysis a bit faster in some cases. But beyond the convenience and speed of working with a data set, this does not offer any large, lasting advantage over exporting the data and importing it into a separate program.

The process can also limit developers to exploring only algorithms that are implemented directly within the database; if an algorithm is not part of the database, it is not an option.

Of course, many problems cannot be solved with machine learning or artificial intelligence. The integration of AI algorithms with the database does not change the power of the algorithms, it just accelerates them.

insider-voice.com



From: Frank Sully, 9/23/2021 8:50:18 PM
 
IBM Research Says Analog AI Will Be 100X More Efficient. Yes, 100X.



Karl Freund
Contributor
Enterprise Tech

The IBM AI Hardware Research Center has delivered significant digital AI logic, and is now turning its attention to solving AI problems in an entirely new way.



The IBM AI Hardware Research Center is located in the TJ Watson Center near Yorktown Heights, New York.

IBM

Gary Fritz, Cambrian-AI Research Analyst, contributed to this article.

AI is showing up in nearly every aspect of business. Larger and more complex Deep Neural Nets (DNNs) keep delivering ever-more-remarkable results. The challenge, as always, is power and performance.

NVIDIA has been the leader to beat for years in the data center, with Qualcomm and Apple leading the way in mobile. NVIDIA got an early start when they realized their multi-core graphics cards were a perfect match for the massive amounts of calculations required to train and execute DNNs. NVIDIA’s tech has spurred huge growth in the sector; NVIDIA chalked up just over $2B last quarter in data center revenue, and AI accounts for a large (although unknown) portion of that high-margin treasure trove.

Here comes Analog Computing

It’s tough to beat NVIDIA at their own game, so several vendors are taking a different approach. Mythic, a Silicon Valley startup, has already released their first analog computation engine, and IBM Research is investing in an analog computation roadmap. But before we dive further into the deep end of the pool, just what is analog computation?

Traditional computers use digital storage and digital math. Data values are stored as binary representations. The typical computer architecture has a compute section (one or more CPUs or GPUs) and a memory bank. The CPU shuffles data into the CPU/GPU for calculation, then shuffles the results back out to memory. This constant data motion greatly increases the performance overhead and energy cost of the operation.

Analog computation is an entirely different approach. Numeric values are represented by continuously variable circuit values (voltage levels, charge level, or other mechanisms) in analog memory cells. Analog calculations are handled by the analog circuitry in the memory array. Each cell is “programmed” with analog circuitry, and the resulting analog value represents the desired answer. Calculations are performed directly in the memory cell, so there is no need to move data back and forth to a CPU. The massively parallel calculations possible with this approach are a perfect match for the enormous matrix calculations required to train or execute a DNN.
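
A toy numerical sketch of the in-memory idea, under simplifying assumptions: weights stored as conductances in a crossbar compute a matrix-vector product via Ohm's and Kirchhoff's laws, with currents summing on each column, so the multiply-accumulate happens where the weights live. The noise term below is a crude stand-in for analog imprecision; this is not a model of IBM's actual devices.

```python
# Toy model of an analog crossbar: weights live in the array as conductances G,
# inputs are applied as voltages V, and each column's output current is
# I = G^T V -- a matrix-vector product computed where the data is stored.
# The noise term is a crude stand-in for analog imprecision.
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(256, 64))    # conductances = stored weights
V = rng.uniform(0.0, 0.2, size=256)          # input voltages = activations

ideal = G.T @ V                              # what a digital MAC array would produce
analog = ideal + rng.normal(0.0, 0.01 * ideal.std(), size=ideal.shape)

print("max relative error:", np.max(np.abs(analog - ideal)) / np.abs(ideal).max())
```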

IBM Research Sees Analog as the Next Big Thing in AI

IBM’s analog implementation uses memristive technology. Memristors are the fourth fundamental circuit component type in addition to resistors, capacitors, and inductors. IBM uses memristive Phase-Change Memory (PCM) or Resistive Memory (ReRAM) to store analog DNN synaptic weights. Circuits are built on the chip to do the desired calculations with the analog values. This includes forward propagation for DNN inference, and additional backward propagation for weight updates for training.

IBM plans to integrate analog compute engines alongside traditional digital calculations. An analog in-memory calculation engine could handle the large-scale DNN calculations, working in partnership with traditional CPU models.

IBM Research typically delivers technology through two channels. The first, of course, is to license the Intellectual Property to tech companies. The second is to turn their inventions into innovations in their own products. From a hardware standpoint, we could envision IBM building multi-chip modules that attach one or more Analog AI accelerators to systems, possibly using IBM’s future DBHi, or Direct Bonded Heterogeneous Integration, to interconnect the accelerator to a CPU. Also note that IBM recently announced on-die digital AI accelerators as part of the next generation Z system’s Telum processor. The reduced-precision arithmetic core was derived from the technology developed by the IBM Research AI Hardware Center.

Conclusions

This movie isn’t over yet. There remains significant work to do, and a lot of invention, especially if IBM wants to train neural networks in analog. But IBM must feel fairly confident in their prospects to start writing blogs about the technology’s prospects. Data center power efficiency is becoming a really big deal, with some projections forecasting a 5X increase in 10 years, to 10% of worldwide power consumption. We cannot afford that, and analog could make a huge dent in reducing it.

Significant challenges remain, but analog technology has terrific potential. For more information, see our Research Note here.

forbes.com



From: Frank Sully, 9/23/2021 9:01:35 PM
 
NVIDIA Plans to Bring A Suite of Perception Technologies to the Robotics Operating System (ROS) Developer Community

By Shobha Kakkar

September 23, 2021

Source: developer.nvidia.com

All things that move will become autonomous. And all the robots out there are getting smarter, fast! NVIDIA announced its latest initiatives to deliver a suite of perception technologies for developers seeking innovative ways to incorporate cutting-edge computer vision and AI/ML functionality into their ROS-based robotics applications. These new tools reduce development time and improve performance within ROS-based software projects.

NVIDIA and Open Robotics have entered into an agreement to accelerate the performance of ROS 2 on NVIDIA’s Jetson AI platform, as well as GPU-based systems. The two companies will also enable seamless simulation interoperability between Ignition Gazebo’s system and NVIDIA Isaac Sim on Omniverse. Software resulting from this partnership is expected to be released in the spring of 2022.

The Jetson platform is the go-to solution for robotics. It enables high-performance, low-latency processing that helps robots be responsive, safe and collaborative. Open Robotics will be enhancing the ROS 2 framework to allow for efficient management of data flow and shared memory across GPU processors. This should significantly improve performance for applications that process high-bandwidth data from sensors such as lidar in robotic systems. Apart from improved deployment on Jetson, Open Robotics and NVIDIA plan to integrate Ignition Gazebo and NVIDIA Isaac Sim.

By connecting these two simulators together, ROS developers can easily move their robots and environments between Ignition and Isaac Sim to run large-scale simulations. They will be able to use each simulator’s advanced features such as high fidelity dynamics or photorealistic rendering to generate synthetic data when training AI models.

Isaac GEMs

Isaac GEMs have just been released for ROS with significant speedups, and developers can try out these new features now.

Isaac GEMs for ROS are hardware-accelerated packages that make it easier to build high-performance solutions on the Jetson platform. The focus of these GEMs is improving throughput for image processing and DNN-based perception models, which have become increasingly important in robotics. They offload work from the CPU while providing significant performance gains, so developers can spend less time worrying about compute and power budgets.

For more details, visit: developer.nvidia.com



marktechpost.com


