Gaming Market Worth $545.98 Billion by 2028 [2021-2028 Forecast] | Fortune Business Insights™
Top companies covered in the gaming market report are Microsoft Corporation (Redmond, Washington, United States), Nintendo Co., Ltd (Kyoto, Japan), Rovio Entertainment Corporation (Espoo, Finland), Nvidia Corporation (California, United States), Valve Corporation (Washington, United States), PlayJam Ltd (London, United Kingdom), Electronic Arts Inc (California, United States), Sony Group Corporation (Tokyo, Japan), Bandai Namco Holdings Inc (Tokyo, Japan), Activision Blizzard, Inc (California, United States) and more players profiled.
Pune, India, Sept. 22, 2021 (GLOBE NEWSWIRE) -- The gaming market is fragmented, with major companies focusing on maintaining their presence. They are doing so by proactively investing in R&D activities to develop engaging online video games. Additionally, other key players are adopting organic and inorganic strategies to maintain a stronghold that will contribute to the growth of the market during the forecast period. The global gaming market size is expected to gain momentum, reaching USD 545.98 billion by 2028 while exhibiting a CAGR of 13.20% between 2021 and 2028. In its report titled "Gaming Market Size, Share & Forecast 2021-2028," Fortune Business Insights mentions that the market stood at USD 203.12 billion in 2020.
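As a quick sanity check on the figures above, the short Python sketch below applies the standard compound-growth formula to the 2020 base value and the reported CAGR; the inputs are taken directly from the press release, and the small mismatch comes from rounding in the published CAGR.

```python
# Rough sanity check of the forecast figures quoted above (illustrative arithmetic only).
base_2020 = 203.12   # market size in 2020, USD billion
cagr = 0.1320        # reported CAGR for 2021-2028
years = 8            # 2020 -> 2028

projected_2028 = base_2020 * (1 + cagr) ** years
print(f"Projected 2028 market size: ~{projected_2028:.1f} billion USD")
# Prints roughly 547.7, close to the reported USD 545.98 billion.
```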
Online video games have become more prevalent in recent years. Many people find online games an appealing and affordable way to unwind from their hectic schedules. Moreover, during the pandemic, the inclination toward gaming increased dramatically. Many companies, such as Nintendo and Tencent, witnessed an increase in their sales during the first quarter. The former posted a 41% rise in profit as many of its games were sold digitally. The demand for online games will persist in the coming years, and the market is anticipated to boom during the forecast period.
List of the Companies Profiled in the Global Gaming Market:
Microsoft Corporation (Redmond, Washington, United States)
Nintendo Co., Ltd (Kyoto, Japan)
Rovio Entertainment Corporation (Espoo, Finland)
Nvidia Corporation (California, United States)
Valve Corporation (Washington, United States)
PlayJam Ltd (London, United Kingdom)
Electronic Arts Inc (California, United States)
Sony Group Corporation (Tokyo, Japan)
Bandai Namco Holdings Inc (Tokyo, Japan)
Activision Blizzard, Inc (California, United States)
Market Segmentation:
Based on game type, the market is divided into shooter, action, sports, role-playing, and others.
Based on game type, the shooter segment held a gaming market share of about 23.35% in 2020. The segment is expected to experience considerable growth since shooter games provide realistic 3D graphics that immerse players in a whole new virtual-world experience. This fascinating atmosphere provided by battle games is driving the segment's growth.
Doing the Math: Michigan Team Cracks the Code for Subatomic Insights
In less than 18 months, and thanks to GPUs, a team from the University of Michigan got 20x speedups on a program using complex math that’s fundamental to quantum science.
In record time, Vikram Gavini’s lab crossed a big milestone in viewing tiny things.
The three-person team at the University of Michigan crafted a program that uses complex math to peer deep into the world of the atom. It could advance many fields of science, as well as the design for everything from lighter cars to more effective drugs.
In mid-2018 the team was getting ready to release a version of the code running on CPUs when it got an invite to a GPU hackathon at Oak Ridge National Lab, the home of Summit, one of the world’s fastest supercomputers.
“We thought, let’s go see what we can achieve,” said Gavini, a professor of mechanical engineering and materials science.
“We quickly realized our code could exploit the massive parallelism in GPUs,” said Sambit Das, a post-doc from the lab who attended the five-day event.
Before it was over, Das and another lab member, Phani Motamarri, got 5x speedups moving the code to CUDA and its libraries. They also heard the promise of much more to come.
From 5x to 20x Speedups in Six Months
Over the next few months, the lab continued to tune its program for analyzing 100,000 electrons in 10,000 magnesium atoms. By early 2019, it was ready to run on Summit.
Taking an iterative approach, the lab ran increasing portions of its code on more and more of Summit's nodes. By April, it was using most of the system's 27,000 GPUs and getting nearly 46 petaflops of performance, a 20x improvement over prior work.
It was an unheard-of result for a program based on density functional theory (DFT), the complex math that accounts for quantum interactions among subatomic particles.
Distributed Computing for Difficult Calculations
DFT calculations are so complex and fundamental that they currently consume a quarter of the time on all public research computers. They are the subject of 12 of the 100 most-cited scientific papers, used to analyze everything from astrophysics to DNA strands.
Initially, the lab reported its program used nearly 30 percent of Summit’s peak theoretical capability, an unusually high efficiency rate. By comparison, most other DFT codes don’t even report efficiency because they have difficulty scaling beyond use of a few processors.
“It was really exciting to get to that point because it was unprecedented,” said Gavini.
Recognition for a Math Milestone
In late 2019, the group was named a finalist for a Gordon Bell award. It was the lab’s first submission for the award that’s the equivalent of a Nobel in high performance computing.
“That provided a lot of visibility for our lab and our university, and I think this effort is just the beginning,” Gavini said.
Indeed, since the competition, the lab pushed the code’s performance to 64 petaflops and 38 percent efficiency on Summit. And it’s already exploring its use on other systems and applications.
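For readers who want to see how the efficiency figure relates to the raw petaflops, the sketch below divides sustained performance by efficiency to back out the theoretical peak of the hardware partition used. The roughly 200-petaflop figure for Summit's full-system FP64 peak is an outside assumption (the published system spec), not a number from this article.

```python
# Illustrative arithmetic: efficiency = sustained performance / theoretical peak
# of the hardware actually used.
sustained_pflops = 64.0       # reported sustained performance on Summit
reported_efficiency = 0.38    # reported 38 percent efficiency

implied_peak = sustained_pflops / reported_efficiency
print(f"Implied peak of the partition used: ~{implied_peak:.0f} petaflops")
# ~168 petaflops, consistent with running on most (but not all) of a machine
# whose full FP64 peak is assumed to be about 200 petaflops.
```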
Seeking More Apps, Performance
The initial work analyzed magnesium, a metal much lighter than the steel and aluminum used in cars and planes today, promising significant fuel savings. Last year, the lab teamed up with another group exploring how electrons move in DNA, work that could help other researchers develop more effective drugs.
The next big step is running the code on Perlmutter, a supercomputer using the latest NVIDIA A100 Tensor Core GPUs. Das reports he’s already getting 4x speedups compared to the Summit GPUs thanks to the A100 GPUs’ support for TensorFloat-32, a mixed-precision format that delivers both fast results and high accuracy.
The lab’s program already offers 100x speedups compared to other DFT codes, but Gavini’s not stopping there. He’s already thinking about testing it on Fugaku, an Arm-based system that’s currently the world’s fastest supercomputer.
“It’s always exciting to see how far you can get, and there’s always a next milestone. We see this as the beginning of a journey,” he said.
The startup says it is innovating in AI hardware with a dataflow architecture that enterprises can use to process large AI data sets more efficiently.
As a young startup, SambaNova Systems is already making a mark in the fast-growing AI hardware industry.
The vendor, based in Palo Alto, Calif., started in 2017 with a mission of transforming how enterprises and research labs with high compute power needs deploy AI, and providing high-performance and high-accuracy hardware-software systems that are still easy to use, said Kunle Olukotun, co-founder and chief technologist.
Its technology is being noticed. SambaNova has attracted more than $1.1 billion in venture financing. With a valuation of $5.1 billion, it is one of the most well-funded AI startups and it is already competing with the likes of AI chip giant Nvidia.
What SambaNova offers
SambaNova's hallmark is its Dataflow architecture. Using the extensible machine learning services platform, enterprises can specify various configurations, whether grouping kernels together on a single chip, or on multiple chips, in a rack or on multiple racks in the SambaNova data center.
Essentially, the vendor leases to enterprise clients the processing power of its proprietary AI chips and creates machine learning models based on domain data supplied by the customer, or customers can buy SambaNova chips and run their own AI systems on them.
While other vendors have offered either just chips or just the software, SambaNova provides the entire rack, which will make AI more accessible to a wider range of organizations, said R "Ray" Wang, founder and principal analyst at Constellation Research.
SambaNova offers data flow as a service to small enterprises that lack the time, resources and desire to become experts in machine learning or AI.
"The irony of AI automation is that it's massively manual today," Wang said. "What [SambaNova is] trying to do is take away a lot of that manual process and a lot of the human error and make it a lot more accessible to get AI."
Wang added that SambaNova offers AI chips that are among the most powerful on the market.
Software-defined approach
While it's known in some ways as an AI hardware specialist, SambaNova prides itself on taking a "software-defined approach" to building its AI technology stack.
"We didn't build some hardware thinking: 'OK, now developers go out and figure it out,'" said Marshall Choy, vice president of product at SambaNova. Instead, he said the vendor focused on the problems of scale, performance, accuracy and ease of use for machine learning data flow computing. Then they built the infrastructure engine to support those needs.
Two different types of customers
SambaNova breaks up its customers into two groups: the Fortune 50 and the "Fortune everybody else." For the first group, SambaNova's data platform enables enterprise data teams to innovate and generate new models, Choy said.
The other group is made up of enterprises that lack the time, resources or desire to become experts in machine learning and AI. For these organizations, SambaNova offers Dataflow as a service.
SambaNova says this approach helps smaller enterprises by reducing the complexities of buying and maintaining hardware infrastructure and selecting, optimizing and maintaining machine learning models.
This creates a "greater AI equity and accessibility of technology than has previously been held in the hands of only the biggest, most wealthy tech companies," Choy said.
SambaNova has already attracted some big-name customers.
Using SambaNova's DataScale system, Argonne National Laboratory trained a convolutional neural network (CNN) on images beyond 50k x 50k resolution. Previously, when Argonne tried to train the CNN on GPUs, it found the images were too large and had to be downscaled to 50% resolution, according to SambaNova.
"We're seeing new ways of computing," Wang said. "This approach to getting to AI is going to be one of many. I think other people are going to try different approaches, but this one seems very promising."
Databases have always been able to do simple administrative work, such as finding records that match certain criteria, for example, all users who are between 20 and 30 years old. Lately, database companies have been adding artificial intelligence routines to databases so that users can explore the power of these smarter and more sophisticated algorithms on their own data stored in the database.
AI algorithms are also finding a home below the surface, where AI routines help optimize internal tasks like reindexing or query planning. These new features are often billed as automation because they relieve the user of routine housekeeping work. Developers are encouraged to let them do their job and forget about them.
However, there is much more interest in AI routines that are open to users. These machine learning algorithms can classify data and make smarter decisions that evolve and adapt over time. They can unlock new use cases and improve the flexibility of existing algorithms.
In many cases, the integration is largely pragmatic and essentially cosmetic. The calculations are no different than what would occur if the data were exported and sent to a separate AI program. Within the database, the AI routines are separate and simply take advantage of internal access to the data. Sometimes this faster access can speed up the process dramatically: when data sets are large, just moving them can take a great deal of time.
The integration can also limit the analysis to algorithms that are officially part of the database. If users want to implement a different algorithm, they must go back to the old process of exporting the data in the correct format and importing it into the AI routine.
The integration can also take advantage of newer in-memory distributed databases that spread the load and data storage across multiple machines. These can handle large amounts of data, and if a complex analysis is necessary, it may not be difficult to increase the CPU capacity and RAM allocated to each machine.
Some AI-powered databases can also take advantage of GPU chips. Some AI algorithms use the highly parallel architecture of GPUs to train machine learning models and run other algorithms. There are also some custom chips specially designed for AI that can dramatically speed up analysis.
However, one of the biggest advantages may be the standard interface, which is often SQL, a language already familiar to many programmers. Many software packages already interact easily with SQL databases. For someone who wants more AI analysis, it is often no more complex than learning a few new SQL statements.
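Because the interface is plain SQL, invoking an in-database model can look like an ordinary query issued from any language. The sketch below is a hypothetical Python example: the customers table, the PREDICT_CHURN() function, and the SQLite connection are all invented for illustration and do not reflect any particular vendor's syntax.

```python
# Hypothetical sketch: calling an in-database ML routine through plain SQL.
# The table, the PREDICT_CHURN() function, and the database are invented for
# illustration; real syntax varies by vendor (Oracle, Db2, MindsDB, etc.).
import sqlite3  # stand-in for any DB-API 2.0 driver

conn = sqlite3.connect("example.db")
query = """
    SELECT customer_id,
           PREDICT_CHURN(age, tenure_months, monthly_spend) AS churn_score
    FROM customers
    WHERE region = 'EMEA';
"""
try:
    for row in conn.execute(query):
        print(row)
except sqlite3.OperationalError:
    # SQLite has no such function or table; this shows the calling pattern only.
    print("PREDICT_CHURN is a hypothetical in-database ML function.")
```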
What are established companies doing?
Artificial intelligence is a very competitive field now. All the major database companies are exploring integrating algorithms with their tools. In many cases, companies offer so many options that it is impossible to summarize them here.
Oracle has incorporated AI into its databases in a variety of ways, and the company offers a broad set of options in almost every corner of its stack. At the lower levels, some developers, for example, run machine learning algorithms in the Python interpreter that is built into the Oracle database. There are also more integrated options such as Oracle Machine Learning for R, which lets R users analyze data stored in Oracle databases. Many of the services are incorporated at higher levels, for example, as analysis features in the data science or analytics tools.
IBM also has a number of artificial intelligence tools that are integrated with its various databases, and the company sometimes calls Db2 "the artificial intelligence database." At the lowest level, the database includes functions in its version of SQL that address common parts of building AI models, such as linear regression. These can be threaded together in custom stored procedures for training. Many IBM AI tools, such as Watson Studio, are designed to connect directly to the database to speed up model construction.
Hadoop and its ecosystem of tools are commonly used to analyze large data sets. While they are often viewed more as data processing pipelines than databases, there is often a database like HBase buried within them. Some people use the Hadoop distributed file system to store data, sometimes in CSV format. A variety of AI tools can already be built into the Hadoop pipeline using projects like Submarine, making it effectively a database with built-in AI.
All major cloud companies offer databases and artificial intelligence products. The amount of integration between any particular database and any particular AI service varies substantially, but it is often quite easy to connect the two. Amazon Comprehend, a natural language text analysis tool, accepts data from S3 buckets and stores responses in many locations, including some AWS databases. Amazon SageMaker can access data in S3 buckets or Redshift data warehouses, sometimes using SQL through Amazon Athena. While it's a fair question whether these count as true integration, there is no question that they simplify the journey.
In the Google cloud, the AutoML tool for automated machine learning can obtain data from BigQuery databases. Firebase ML offers a number of tools to address common challenges for mobile developers, such as image classification. It will also deploy any trained TensorFlow Lite models to work with your data.
Microsoft Azure also offers a collection of databases and artificial intelligence tools. The Databricks tool, for example, is based on the Apache Spark pipeline and comes with connections to Azure Cosmos DB, Azure Data Lake Storage, and other databases like Neo4j or Elasticsearch that may be running within Azure. Its Azure Data Factory is designed to locate data across the cloud, both in databases and in generic storage.
What are the upstarts doing?
Several database startups also highlight their direct support for machine learning and other artificial intelligence routines. SingleStore, for example, offers fast analytics to track incoming telemetry in real time. This data can also be annotated by various AI models as it is ingested.
MindsDB adds machine learning routines to standard databases such as MariaDB, PostgreSQL, or Microsoft SQL Server. It extends SQL with features that learn from the data already in the database to make predictions and classify objects. These functions are also easily accessible from more than a dozen business intelligence applications, such as Salesforce's Tableau or Microsoft's Power BI, that work closely with SQL databases.
Many companies bury the database deep inside the product and sell only the service itself. Riskified, for example, tracks financial transactions using artificial intelligence models and offers protection to merchants through "chargeback guarantees." The tool ingests transactions and maintains historical data, but there is little discussion of the database layer.
In many cases, companies that bill themselves as pure AI companies are also database providers. After all, the data must live somewhere. H2O.ai, for example, is just one of the cloud AI providers that offers integrated data prep and AI analytics. The data storage is less visible, and many people think of software like H2O.ai first for its analytical power, but it can still store and analyze the data.
Is there anything the built-in AI databases can't do?
Adding AI routines directly to a database's feature set can simplify the lives of database developers and administrators. It can also make the analysis a bit faster in some cases. But beyond the convenience and speed of working with a data set, this does not offer any large, lasting advantage over exporting the data and importing it into a separate program.
The approach can also limit developers to exploring only the algorithms that are implemented directly within the database. If an algorithm is not part of the database, it is not an option.
Of course, many problems cannot be solved with machine learning or artificial intelligence. Integrating AI algorithms with the database does not change the power of the algorithms; it just accelerates them.
The IBM AI Hardware Research Center has delivered significant digital AI logic, and now turns its attention to solving AI problems in an entirely new way.
The IBM AI Hardware Research Center is located in the TJ Watson Center near Yorktown Heights, New York. (Image credit: IBM)
Gary Fritz, Cambrian-AI Research Analyst, contributed to this article.
AI is showing up in nearly every aspect of business. Larger and more complex Deep Neural Nets (DNNs) keep delivering ever-more-remarkable results. The challenge, as always, is power and performance.
NVIDIA has been the leader to beat for years in the data center, with Qualcomm and Apple leading the way in mobile. NVIDIA got an early start when they realized their multi-core graphics cards were a perfect match for the massive amounts of calculations required to train and execute DNNs. NVIDIA’s tech has spurred huge growth in the sector; NVIDIA chalked up just over $2B last quarter in data center revenue, and AI accounts for a large (although unknown) portion of that high-margin treasure trove.
Here comes Analog Computing
It’s tough to beat NVIDIA at their own game, so several vendors are taking a different approach. Mythic, a Silicon Valley startup, has already released their first analog computation engine, and IBM Research is investing in an analog computation roadmap. But before we dive further into the deep end of the pool, just what is analog computation?
Traditional computers use digital storage and digital math. Data values are stored as binary representations. The typical computer architecture has a compute section (one or more CPUs or GPUs) and a memory bank. Data is shuffled from memory into the CPU or GPU for calculation, then the results are shuffled back out to memory. This constant data motion greatly increases the performance overhead and energy cost of the operation.
Analog computation is an entirely different approach. Numeric values are represented by continuously variable circuit quantities (voltage levels, charge levels, or other mechanisms) in analog memory cells. Calculations are handled by the analog circuitry in the memory array: each cell is "programmed" to an analog value, and the circuitry combines these values so that the resulting signal represents the desired answer. Because calculations are performed directly in the memory cells, there is no need to move data back and forth to a CPU. The massively parallel calculations possible with this approach are a perfect match for the enormous matrix calculations required to train or execute a DNN.
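To make the idea concrete, here is a minimal NumPy sketch of what an analog crossbar does physically: weights are stored as conductances, inputs are applied as voltages, and Ohm's law plus current summation along each row yield a full matrix-vector product in one step. The array sizes and the 1% noise level are illustrative assumptions, not IBM's design parameters.

```python
import numpy as np

# Digital emulation of an analog crossbar multiply-accumulate.
# Physically: each cell stores a weight as a conductance G[i, j]; an input
# voltage V[j] is applied to each column; by Ohm's law each cell contributes
# a current G[i, j] * V[j], and summing currents along a row gives the dot
# product, so the whole matrix-vector product happens "in place" in memory.
rng = np.random.default_rng(0)

weights = rng.normal(size=(128, 256))   # DNN layer weights (illustrative size)
conductances = weights                   # stored as analog conductances
inputs = rng.normal(size=256)            # applied as voltages

ideal_currents = conductances @ inputs   # Kirchhoff current summation per row

# Analog cells drift and are noisy; a small perturbation models that
# (the 1% noise level is an illustrative assumption).
noisy = conductances * (1 + 0.01 * rng.normal(size=conductances.shape))
measured_currents = noisy @ inputs

error = np.abs(measured_currents - ideal_currents).mean()
print(f"Mean absolute deviation from the ideal result: {error:.4f}")
```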
IBM Research Sees Analog as the Next Big Thing in AI
IBM’s analog implementation uses memristive technology. Memristors are the fourth fundamental circuit component type in addition to resistors, capacitors, and inductors. IBM uses memristive Phase-Change Memory (PCM) or Resistive Memory (ReRAM) to store analog DNN synaptic weights. Circuits are built on the chip to do the desired calculations with the analog values. This includes forward propagation for DNN inference, and additional backward propagation for weight updates for training.
IBM plans to integrate analog compute engines alongside traditional digital calculations. An analog in-memory calculation engine could handle the large-scale DNN calculations, working in partnership with traditional CPU models.
IBM Research typically delivers technology through two channels. The first, of course, is to license the Intellectual Property to tech companies. The second is to turn their inventions into innovations in their own products. From a hardware standpoint, we could envision IBM building multi-chip modules that attach one or more Analog AI accelerators to systems, possibly using IBM’s future DBHi, or Direct Bonded Heterogeneous Integration, to interconnect the accelerator to a CPU. Also note that IBM recently announced on-die digital AI accelerators as part of the next generation Z system’s Telum processor. The reduced-precision arithmetic core was derived from the technology developed by the IBM Research AI Hardware Center.
Conclusions
This movie isn’t over yet. There remains significant work to do, and a lot of invention especially if IBM wants to train neural networks in analog. But IBM must feel fairly confident in their prospects to start writing blogs about the technology’s prospects. Data Center power efficiency is becoming a really big deal, with some projections forecasting a 5X increase in 10 years, to 10% of worldwide power consumption. We cannot afford that, and analog could make a huge dent in reducing that.
Significant challenges remain, but analog technology has terrific potential. For more information, see our Research Note here.
All things that move will become autonomous. And all the robots out there are getting smarter, fast! NVIDIA announced its latest initiatives to deliver a suite of perception technologies for developers seeking innovative ways to incorporate cutting-edge computer vision and AI/ML functionality into their ROS-based robotics applications. These new tools cut development time and improve the performance of ROS-based software projects, making them easier to build than ever before.
NVIDIA and Open Robotics have entered into an agreement to accelerate the performance of ROS 2 on NVIDIA's Jetson AI platform, as well as on GPU-based systems. The two companies will also enable seamless simulation interoperability between Ignition Gazebo and NVIDIA Isaac Sim on Omniverse. Software resulting from this partnership is expected to be released in the spring of 2022.
The Jetson platform is the go-to solution for robotics. It enables high-performance, low-latency processing that helps robots be responsive, safe, and collaborative. Open Robotics will be enhancing the ROS 2 framework to allow efficient management of data flow and shared memory across GPU processors. This should significantly improve performance for applications that process high-bandwidth data, such as lidar streams, in robotic systems. Apart from improved deployment on Jetson, Open Robotics and NVIDIA plan to integrate Ignition Gazebo and NVIDIA Isaac Sim.
By connecting these two simulators together, ROS developers can easily move their robots and environments between Ignition and Isaac Sim to run large-scale simulations. They will be able to use each simulator’s advanced features such as high fidelity dynamics or photorealistic rendering to generate synthetic data when training AI models.
Isaac GEMs
Isaac GEMs for ROS have just been released with significant speedups, and developers can try out the new features now.
Isaac GEMs for ROS are hardware-accelerated packages that make it easier to build high-performance solutions on the Jetson platform. The focus of these GEMs is improving throughput for image processing and DNN-based perception models, which have become increasingly important in modern robotics. These GEMs reduce the processing load while providing significant performance gains, so developers can spend less of their compute and power budget on perception.
I am neutral on Nvidia (NVDA), as its strong growth rate and bullish Wall Street consensus are offset by its fairly rich valuation.
Nvidia is an American multinational technology company that is credited with inventing the graphics processing unit (GPU) for gaming.
The company is a pioneer in designing systems on a chip for accelerated computing, self-driving cars, and AI, and is a leader in fueling growth in manufacturing, transportation, healthcare, and other industries.
Strength
Nvidia’s Q2 2021 earnings report announced that NVIDIA RTX is featured in over 130 games and applications, including Minecraft RTX and Adobe ( ADBE) products.
The game-lag reducing NVIDIA Reflex ecosystem is supported in 20 games, including some of the major e-sports titles. The company also announced that it launched NVIDIA Base Command and Fleet Command, which simpliy the management of edge AI through a cloud service, which is transformative for many industries.
Furthermore, Nvidia also launched the NVIDIA Omniverse, a real-time 3D simulation and virtual collaboration platform.
Recent Results
For the second quarter of 2021, Nvidia reported revenue of $6.5 billion, showing gains of 68% from last year, and an increase of 15% from the first quarter.
Gaming revenue came in at $3.1 billion, showing 85% growth from the previous year. This was driven by exceptionally strong demand in the Gaming category that outstripped supply, and by the introduction of the GeForce RTX 3080 Ti and GeForce RTX 3070 Ti graphics cards.
The Professional Visualization segment also recorded second-quarter revenue of $519 million, an increase of 40% from the first quarter of 2021 and 156% from the previous year.
The company announced record Data Center revenue of $2.4 billion, which is up 35% from a year earlier. The Automotive category also showed a revenue increase of 37%, resulting in $152 million in revenue.
Nvidia has a positive outlook for the current quarter, where it expects revenue to rise to an estimated $6.8 billion, and GAAP and non-GAAP gross margins to rise to 65.2% and 67%, respectively.
Valuation Metrics
Nvidia stock does not look particularly cheap or expensive here, as it is priced at a fairly high forward P/E ratio of 53.2x, but is also growing at a very strong clip.
Normalized earnings per share are expected to increase by 61.6% in 2022, and 11.8% in 2023.
Wall Street’s Take
From Wall Street analysts, Nvidia earns a Strong Buy analyst consensus, based on 23 Buy ratings, one Hold rating, and one Sell rating in the past three months. Additionally, the average NVDA price target of $237.27 puts the upside potential at 7.5%.
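As a quick check of the price-target arithmetic, the one-liner below backs out the share price implied by the quoted target and upside; it is illustrative only.

```python
# Back-of-the-envelope check of the price-target math quoted above.
avg_price_target = 237.27   # average analyst price target (USD)
upside = 0.075              # reported 7.5% upside potential

implied_current_price = avg_price_target / (1 + upside)
print(f"Implied share price at the time of writing: ~${implied_current_price:.2f}")
# Roughly $220.7, i.e. target = price * 1.075.
```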
Summary and Conclusions
Nvidia is enjoying rapid growth, and has very strong support from Wall Street analysts. The stock is not extremely cheap, but is likely not extremely overvalued here either.
Disclosure: At the time of publication, Samuel Smith did not have a position in any of the securities mentioned in this article.
The human hand is one of the most fascinating creations of nature, and one of the most sought-after goals of artificial intelligence and robotics researchers. A robotic hand that could manipulate objects as we do would be enormously useful in factories, warehouses, offices, and homes.
Yet despite tremendous progress in the field, research on robotic hands remains extremely expensive and limited to a few very wealthy companies and research labs.
Now, new research promises to make robotics research available to resource-constrained organizations. In a paper published on arXiv, researchers at the University of Toronto, Nvidia, and other organizations have presented a new system that leverages highly efficient deep reinforcement learning techniques and optimized simulated environments to train robotic hands at a fraction of the usual cost.
Training robotic hands is expensive
OpenAI trained an AI-powered robotic hand to solve the Rubik’s Cube (Image source: YouTube)
In 2019, OpenAI presented Dactyl, a robotic hand that could manipulate a Rubik’s cube with impressive dexterity (though still significantly inferior to human dexterity). But it took 13,000 years’ worth of training to get it to the point where it could handle objects reliably.
How do you fit 13,000 years of training into a short period of time? Fortunately, many software tasks can be parallelized. You can train multiple reinforcement learning agents concurrently and merge their learned parameters. Parallelization can help to reduce the time it takes to train the AI that controls the robotic hand.
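The simplest form of that "train in parallel, then merge" idea is sketched below: several workers update their own copy of the parameters, and a central process averages the results. This is a toy illustration under assumed sizes, not the scheme OpenAI or the paper's authors actually used; large-scale systems typically average gradients every step rather than whole parameter vectors.

```python
import numpy as np

# Toy illustration of "train in parallel, merge the learned parameters".
num_workers = 8
param_size = 1000

def local_update(params: np.ndarray, worker_rng) -> np.ndarray:
    """Stand-in for one worker's training step on its own simulated experience."""
    fake_gradient = worker_rng.normal(scale=0.01, size=params.shape)
    return params - fake_gradient

shared_params = np.zeros(param_size)
worker_params = [local_update(shared_params.copy(), np.random.default_rng(i))
                 for i in range(num_workers)]

# Merge step: average the workers' parameters back into the shared model.
shared_params = np.mean(worker_params, axis=0)
print("Merged parameter norm:", np.linalg.norm(shared_params))
```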
However, speed comes at a cost. One solution is to create thousands of physical robotic hands and train them simultaneously, a path that would be financially prohibitive even for the wealthiest tech companies. Another solution is to use a simulated environment. With simulated environments, researchers can train hundreds of AI agents at the same time, and then finetune the model on a real physical robot. The combination of simulation and physical training has become the norm in robotics, autonomous driving, and other areas of research that require interactions with the real world.
Simulations have their own challenges, however, and the computational costs can still be too much for smaller firms.
OpenAI, which has the financial backing of some of the wealthiest companies and investors, developed Dactyl using expensive robotic hands and an even more expensive compute cluster comprising around 30,000 CPU cores.
The TriFinger platform, an open-source, low-cost robotic manipulation setup, reduced the costs of robotics research but still had several challenges. PyBullet, its CPU-based simulation environment, is noisy and slow and makes it hard to train reinforcement learning models efficiently. Poor simulated learning creates complications and widens the "sim2real gap," the performance drop the trained RL model suffers when transferred to a physical robot. Consequently, robotics researchers need to go through multiple cycles of switching between simulated training and physical testing to tune their RL models.
“Previous work on in-hand manipulation required large clusters of CPUs to run on. Furthermore, the engineering effort required to scale reinforcement learning methods has been prohibitive for most research teams,” Arthur Allshire, lead author of the paper and a Simulation and Robotics Intern at Nvidia, told TechTalks. “This meant that despite progress in scaling deep RL, further algorithmic or systems progress has been difficult. And the hardware cost and maintenance time associated with systems such as the Shadow Hand [used in OpenAI Dactyl] … has limited the accessibility of hardware to test learning algorithms on.”
Building on top of the work of the TriFinger team, this new group of researchers aimed to improve the quality of simulated learning while keeping the costs low.
Training RL agents with single-GPU simulation
The researchers trained their models in the Nvidia Isaac Gym simulated environment and transferred the learning to a remote Europe-based robotics lab.
The researchers replaced PyBullet with Nvidia's Isaac Gym, a simulated environment that can run efficiently on desktop-grade GPUs. Isaac Gym leverages Nvidia's PhysX GPU-accelerated engine to allow thousands of parallel simulations on a single GPU. It can provide around 100,000 samples per second on an RTX 3090 GPU.
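The sketch below illustrates, with plain NumPy and toy dynamics, why stepping thousands of environments as one batched update is so much faster than stepping them one at a time; it does not use the actual Isaac Gym API, and the environment sizes and dynamics are placeholder assumptions.

```python
import time
import numpy as np

# Generic illustration of batched simulation: thousands of environment states
# are advanced with a single vectorized update, which is what makes a
# GPU-resident simulator so fast. The toy dynamics below are placeholders.
num_envs = 4096
state_dim = 32
action_dim = 9           # e.g. torques for nine joints

rng = np.random.default_rng(0)
states = rng.normal(size=(num_envs, state_dim))
mixing = rng.normal(size=(action_dim, state_dim))  # fixed toy "physics" matrix

def step_batch(states: np.ndarray, actions: np.ndarray) -> np.ndarray:
    """One vectorized physics step across all environments at once."""
    return 0.99 * states + 0.01 * np.tanh(actions) @ mixing

start = time.perf_counter()
steps = 100
for _ in range(steps):
    actions = rng.normal(size=(num_envs, action_dim))
    states = step_batch(states, actions)
elapsed = time.perf_counter() - start
print(f"~{num_envs * steps / elapsed:,.0f} environment steps per second (toy dynamics)")
```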
“Our task is suitable for resource-constrained research labs. Our method took one day to train on a single desktop-level GPU and CPU. Every academic lab working in machine learning has access to this level of resources,” Allshire said.
According to the paper, an entire setup to run the system, including training, inference, and physical robot hardware, can be purchased for less than $10,000.
The efficiency of the GPU-powered virtual environment enabled the researchers to train their reinforcement learning models in a high-fidelity simulation without reducing the speed of the training process. Higher fidelity makes the training environment more realistic, reducing the sim2real gap and the need for finetuning the model with physical robots.
The researchers used a sample object manipulation task to test their reinforcement learning system. As input, the RL model receives proprioceptive data from the simulated robot along with eight keypoints that represent the pose of the target object in three-dimensional Euclidean space. The model’s output is the torques that are applied to the motors of the robot’s nine joints.
The system uses Proximal Policy Optimization (PPO), a model-free RL algorithm. Model-free algorithms obviate the need to model all the details of the environment, which is computationally very expensive, especially when dealing with the physical world. AI researchers often seek cost-efficient, model-free solutions to their reinforcement learning problems.
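As a minimal illustration of what a model-free PPO training loop looks like in practice, the sketch below uses the stable-baselines3 library on a standard Gym task. This choice is an assumption for illustration only; the paper's authors trained at much larger scale with their own PPO setup on Isaac Gym.

```python
# Minimal PPO illustration using stable-baselines3 on a standard Gym task
# (an assumption for illustration; not the authors' large-scale setup).
from stable_baselines3 import PPO

# PPO never learns a model of the environment's dynamics; it only optimizes
# a policy from sampled interactions, which is why it is called model-free.
model = PPO("MlpPolicy", "Pendulum-v1", verbose=0)
model.learn(total_timesteps=10_000)   # tiny budget, just to show the API

obs = model.env.reset()
action, _state = model.predict(obs, deterministic=True)
print("Sample action from the trained policy:", action)
```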
The researchers designed the reward for the robotic hand as a balance among the fingertips' distance from the object, the object's distance from its target location, and the object's intended pose.
To further improve the model’s robustness, the researchers added random noise to different elements of the environment during training.
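A rough sketch of how such a reward and the added randomization might be written is shown below; the weighting coefficients, distance measures, and noise scale are illustrative assumptions rather than the exact terms used in the paper.

```python
import numpy as np

# Illustrative manipulation reward balancing three terms (fingertip-to-object
# distance, object-to-goal distance, orientation error) plus the kind of
# random perturbation used for domain randomization. All weights and noise
# scales here are assumptions for illustration.
W_FINGER, W_POS, W_ROT = 0.1, 1.0, 0.5

def reward(fingertips, obj_pos, obj_quat, goal_pos, goal_quat):
    finger_dist = np.linalg.norm(fingertips - obj_pos, axis=-1).mean()
    pos_dist = np.linalg.norm(obj_pos - goal_pos)
    # Crude orientation error: angle between quaternions via their dot product.
    rot_err = 2.0 * np.arccos(np.clip(abs(np.dot(obj_quat, goal_quat)), -1.0, 1.0))
    return -(W_FINGER * finger_dist + W_POS * pos_dist + W_ROT * rot_err)

def randomize(obs, rng, scale=0.01):
    """Domain randomization: add small random noise to observations."""
    return obs + rng.normal(scale=scale, size=obs.shape)

rng = np.random.default_rng(0)
fingertips = rng.normal(size=(3, 3))          # three fingertip positions
obj_pos, goal_pos = rng.normal(size=3), rng.normal(size=3)
obj_quat = goal_quat = np.array([0.0, 0.0, 0.0, 1.0])
print("Example reward:",
      reward(randomize(fingertips, rng), obj_pos, obj_quat, goal_pos, goal_quat))
```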
Testing on real robots
Once the reinforcement learning system was trained in the simulated environment, the researchers tested it in the real world through remote access to the TriFinger robots provided by the Real Robot Challenge. They replaced the proprioceptive and image input of the simulator with the sensor and camera information provided by the remote robot lab.
The trained system transferred its abilities to the real robot with only a seven percent drop in accuracy, an impressive improvement in the sim2real gap compared with previous methods.
The keypoint-based object tracking was especially useful in ensuring that the robot’s object-handling capabilities generalized across different scales, poses, conditions, and objects.
“One limitation of our method—deploying on a cluster we did not have direct physical access to—was the difficulty in trying other objects. However, we were able to try other objects in simulation and our policies proved relatively robust with zero-shot transfer performance from the cube,” Allshire said.
The researchers say that the same technique can work on robotic hands with more degrees of freedom. They did not have the physical robot to measure the sim2real gap, but the Isaac Gym simulator also includes complex robotic hands such as the Shadow Hand used in Dactyl.
This system can be integrated with other reinforcement learning systems that address other aspects of robotics, such as navigation and pathfinding, to form a more complete solution to train mobile robots. “For example, you could have our method controlling the low-level control of a gripper while higher level planners or even learning-based algorithms are able to operate at a higher level of abstraction,” Allshire said.
The researchers believe that their work presents “a path for democratization of robot learning and a viable solution through large scale simulation and robotics-as-a-service.”
The AI, Robotics and Automation board gets a refresh! The board has been much more active this year, after I became interested in AI a year ago. Several months ago the board got a new name and a new moderator (Glenn Petersen), who is revamping the Introduction header, which had focused on an extensive discussion of the Singularity. He has started with a new logo for the board. I like it.
Autonomous trucks need to lighten the load when it comes to mapping, while still perceiving their surrounding environments reliably.
That’s the approach Kodiak Robotics, a Silicon Valley-based self-driving truck startup, is taking to deploy safer and more efficient delivery and logistics. Today, the company unveiled its fourth-generation vehicle — powered by NVIDIA DRIVE Orin— that uses lightweight mapping and a discreet, modular hardware design to achieve level 4 self-driving capabilities.
By avoiding an over-reliance high-definition maps and focusing on a flexible architecture, Kodiak aims to deploy self-driving systems that are always accurate as well as straightforward to install and modify.
“The way you manufacture and maintain a system is incredibly important for the trucking industry, fleets must be able to stay up and running,” said Don Burnette, co-founder and CEO of Kodiak.
This easy adaptability is crucial for an industry experiencing the dual pressures of high demand for delivery and a low supply of drivers.
E-commerce orders increased nearly 60 percent year-over-year in 2020, according to last-mile technology vendor Convey Inc., with 36 percent of shoppers opting for same-day delivery. At the same time, the trucking industry is experiencing a 92 percent turnover rate — the share of workers joining or leaving the field in a given year — and the American Trucking Associations estimates it will be short 160,000 drivers by 2028.
This confluence of factors requires an easy solution for trucking companies to adopt while maintaining road safety.
Performing Live
Maps are critical to autonomous driving, helping self-driving vehicles locate themselves in space and plan routes.
Rather than rely on pre-constructed HD maps, which may not be updated in real time to reflect road changes such as construction or new traffic patterns, Kodiak vehicles perceive their environment live while using maps primarily for navigation.
This lightweight mapping strategy requires the vehicle to detect all road objects, signs and more. Such real-time perception requires high-performance, centralized AI compute architected to meet the highest safety standards.
NVIDIA DRIVE Orin achieves over 250 TOPS and is designed to handle the many applications and deep neural networks that run simultaneously in autonomous vehicles, while achieving systematic safety standards such as ISO 26262 ASIL-D.
NVIDIA DRIVE Orin provides the Kodiak Driver with the data and computing power it needs to reliably make and implement decisions — safely and securely.
“NVIDIA DRIVE makes it possible to centralize the vehicle’s compute, helping provide a safe and stable path to full autonomy,” Burnette said.
It’s What’s Not on the Outside That Counts
In keeping with the company’s focus on safety, Kodiak’s autonomous trucks aren’t designed to turn heads.
The fourth-generation trucks feature a modular and discreet sensor suite in just three locations: a slim “center pod” on the front roofline of the truck, and pods integrated into both of the side mirrors. This low-profile sensor placement simplifies installation and maintenance, while increasing safety.
“When you see these trucks, you’re going to ignore them,” Burnette said.
By building this discreet system with the open and scalable NVIDIA DRIVE platform at its core, Kodiak can continue to focus on flexibility and live perception without sacrificing safety and security.