Technology Stocks: NVIDIA Corporation (NVDA)


From: Frank Sully, 10/1/2021 11:01:44 PM
On-Demand Robots: Startup Rolls Out Bot Reservation Service for Museums

avatarin is enabling virtual museum visits and testing other destinations for telepresence robots.

September 29, 2021 by SCOTT MARTIN



By day, Akira Fukabori and Kevin Kajitani worked at Japan’s largest airlines holdings company, but by evening the friends liked to develop new concepts. Last year they convinced the company’s board of directors to take a big leap on one of their ideas: robots as a service.

The aerospace engineers enlisted a partner, Charith Fernando, a roboticist, and soon had spun out a robotics company from All Nippon Airways owner ANA Holdings.

Formed in 2020, Tokyo-based avatarin now has more than a hundred telepresence robots deployed globally, including on-demand robots installed at four museums across Japan.

Robots as a service, or RaaS, is an emerging business model that minimizes the costs and commitment for businesses to deploy robots. Using this model, avatarin offers its robots to businesses, which make them available like on-demand scooters.

avatarin’s robot, dubbed the “newme,” allows people to book a robot ticket for a set time, date and place. Relying on the NVIDIA Jetson edge AI platform for compact supercomputing, newme can be steered remotely from a home computer, providing a low-latency, high-definition tour of sights, like museums and aquariums.

“This is just another step in the sharing economy, like on-demand scooters, providing consumers virtual access to mobility and our customers higher utilization of robotic resources,” said Kajitani, COO of avatarin.

Parent company ANA Holdings has sky-high ambitions — like putting telepresence robots on space missions. The firm early on sponsored an XPRIZE challenge to boost the work of Kajitani and Fukabori, CEO of avatarin.

As a traditional conglomerate, ANA Holdings is making an unusually bold bet on robots as a service, offering a glimpse at the future of corporate innovation and robotics.

The stakes are high. The global robotics market was estimated at $27.7 billion in 2020, a figure that is forecast to reach $74.1 billion by 2026, according to research firm Mordor Intelligence.

Robots as a Service

Demand for robots is growing across industries. That has only accelerated, spurred by workforce shortages from COVID-19 lockdowns, according to Mordor. Robots are being deployed to minimize human-to-human contact and lessen COVID-19 risks, whether for healthcare, food delivery or manufacturing.

avatarin’s newme robots are installed at museums in Japan, such as the Venetian Glass Museum in Hakone, Kanagawa Prefecture, to help accommodate visitors who can’t physically make it to the museum. The robots have a front-facing LED screen so that navigators can appear as an avatar for interactions with people.

The newme sports front-facing 2K stereo cameras for depth perception and streaming visuals, bringing remote users lifelike views of places and people. It also has a foot-facing navigation camera to help users steer the robot through its surroundings. The video is processed on the NVIDIA Jetson Xavier NX for crisp visuals at 60 frames per second for virtual interactions and AI tasks.

The robot can go six hours on a full charge, thanks to the energy efficiency of the Jetson Xavier NX, which can process as much as 21 trillion operations per second at just 15 watts.
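For a sense of how such a camera stream is typically wired up on Jetson hardware, here is a minimal, hypothetical sketch using OpenCV's GStreamer backend. The pipeline elements (nvarguscamerasrc, nvvidconv) are standard on Jetson boards, but the resolution, framerate and overall structure are illustrative assumptions, not avatarin's actual code.

```python
import cv2  # OpenCV built with GStreamer support

# Hypothetical capture pipeline for a Jetson CSI camera; the settings are
# illustrative, not the newme robot's real configuration.
PIPELINE = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=1920, height=1080, framerate=60/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

cap = cv2.VideoCapture(PIPELINE, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()   # one frame of the 60 fps stream
    if not ok:
        break
    # ... hand the frame to the video encoder and on-device AI tasks
cap.release()
```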

avatarin is running robot service pilots with partners in retail, tourism and education.

Jetson-Driven Autonomy

The robot can get around on its own, as well, so it can autonomously return to its charging station to juice up for the next user.

avatarin enables this autonomy with simultaneous localization and mapping (SLAM) techniques so that the robots can generate their own indoor maps of environments for navigating.

The company had to switch from a CPU-based system to NVIDIA GPUs to support SLAM and future AI ambitions for newme. “The higher frame rates of using NVIDIA GPUs on SLAM make it better to get accurate point cloud information for these maps,” said Fernando, avatarin’s CTO.

SLAM enables robots to use their sensors to build maps — or point clouds — as they move about. And they can compare new sensor data with collected sensor data, using algorithms, to locate their position on the map being created live.

Deploying SLAM is a data-intense, multistage process that requires alignment of sensor data using a variety of algorithms and the parallel processing capabilities of GPUs.
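To make the idea concrete, below is a minimal scan-matching sketch in the spirit of point-to-point ICP (iterative closest point), a common SLAM building block. It is a generic textbook illustration with brute-force nearest-neighbor search, not avatarin's implementation; a production GPU pipeline would process far denser clouds with parallelized matching.

```python
import numpy as np

def icp_step(new_scan, map_points):
    """One iteration of point-to-point ICP: align a new 2-D scan (N x 2)
    to previously collected map points (M x 2)."""
    # 1. Match every new point to its nearest neighbor in the map
    #    (brute force here; real systems use GPU-accelerated search).
    d = np.linalg.norm(new_scan[:, None, :] - map_points[None, :, :], axis=2)
    matched = map_points[np.argmin(d, axis=1)]

    # 2. Solve for the rigid rotation and translation that best align the
    #    matched pairs (the Kabsch algorithm, via an SVD).
    p, q = new_scan.mean(axis=0), matched.mean(axis=0)
    H = (new_scan - p).T @ (matched - q)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = q - R @ p
    return new_scan @ R.T + t       # the scan expressed in map coordinates
```

Iterating this step until the alignment stops improving is what lets the robot locate itself on the map it is building.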

avatarin also relies on NVIDIA DGX systems to train neural networks for computer vision tasks and conversational AI, features for possible future releases to enhance navigation and communication.

“We see a future where people can virtually experience almost any destination in the world, or even the universe or the metaverse, with newme, possibly communicating in real time in just about any language using AI,” said Fukabori.

blogs.nvidia.com



From: Frank Sully, 10/4/2021 5:51:30 PM
Grab a Front Row Seat to the Autonomous Future at GTC

Hear from the leaders building self-driving vehicles and learn about the AI software and hardware behind them.

October 4, 2021 by KATIE BURKE

Everything that moves will one day be autonomous, transforming industries and creating new business models.

The NVIDIA GPU Technology Conference brings together the leaders, researchers and developers who are building this self-driving future. The virtual conference also features experts from industries transformed by AI, such as healthcare, robotics and finance.

And it’s all free to attend.

GTC attendees will be the first to hear the latest NVIDIA news during the opening keynote on Nov. 9, delivered by CEO and founder Jensen Huang.

The rest of the week features more than 500 sessions covering autonomous vehicles, AI, supercomputing and more. Conference-goers will have the opportunity to network and learn from in-house experts on the latest in AI and self-driving development.

Here’s a sneak peek at what to expect at GTC:

Ecosystem Engagement

The NVIDIA DRIVE ecosystem encompasses automakers, suppliers, software startups, mapmakers and sensor companies building safe and efficient self-driving solutions.

At GTC, attendees can experience this rich network of companies and hear how they’re using GPU technology to transform transportation. These sessions include:
  • Ödgärd Andersson, CEO of Zenseact, explains how the supplier is developing, validating and continuously improving AV software to deliver safer, more efficient transportation.
Fueling Developers’ DRIVE

In addition, conference-goers will have access to in-depth talks from NVIDIA experts on autonomous vehicle development.

The NVIDIA DRIVE Developer Day, which kicks off GTC on Nov. 8, showcases the end-to-end development capabilities of the DRIVE platform and is led by those building the technology within NVIDIA.

These sessions include topics such as software development, deep neural network optimization, automated parking and mapping.

This virtual content is available to anyone — register for free today and seize the opportunity to experience the autonomous future.

blogs.nvidia.com



From: Frank Sully, 10/4/2021 7:50:45 PM
FWIW, Google’s DeepMind is at the forefront of AI research. Here is an article re: their foray into robotics. Thanks to Glenn Petersen for posting this on the AI, Robotics and Automation board.

HOW DEEPMIND IS REINVENTING THE ROBOT

Having conquered Go and protein folding, the company turns to a really hard problem

TOM CHIVERS
IEEE Spectrum
27 SEP 2021
14 MIN READ

ARTIFICIAL INTELLIGENCE has reached deep into our lives, though you might be hard pressed to point to obvious examples of it. Among countless other behind-the-scenes chores, neural networks power our virtual assistants, make online shopping recommendations, recognize people in our snapshots, scrutinize our banking transactions for evidence of fraud, transcribe our voice messages, and weed out hateful social-media postings. What these applications have in common is that they involve learning and operating in a constrained, predictable environment.

But embedding AI more firmly into our endeavors and enterprises poses a great challenge. To get to the next level, researchers are trying to fuse AI and robotics to create an intelligence that can make decisions and control a physical body in the messy, unpredictable, and unforgiving real world. It's a potentially revolutionary objective that has caught the attention of some of the most powerful tech-research organizations on the planet. "I'd say that robotics as a field is probably 10 years behind where computer vision is," says Raia Hadsell, head of robotics at DeepMind, Google's London-based AI partner. (Both companies are subsidiaries of Alphabet.)

Even for Google, the challenges are daunting. Some are hard but straightforward: For most robotic applications, it's difficult to gather the huge data sets that have driven progress in other areas of AI. But some problems are more profound, and relate to longstanding conundrums in AI. Problems like, how do you learn a new task without forgetting the old one? And how do you create an AI that can apply the skills it learns for a new task to the tasks it has mastered before?

Success would mean opening AI to new categories of application. Many of the things we most fervently want AI to do—drive cars and trucks, work in nursing homes, clean up after disasters, perform basic household chores, build houses, sow, nurture, and harvest crops—could be accomplished only by robots that are much more sophisticated and versatile than the ones we have now.

Beyond opening up potentially enormous markets, the work bears directly on matters of profound importance not just for robotics but for all AI research, and indeed for our understanding of our own intelligence.

Let's start with the prosaic problem first. A neural network is only as good as the quality and quantity of the data used to train it. The availability of enormous data sets has been key to the recent successes in AI: Image-recognition software is trained on millions of labeled images. AlphaGo, which beat a grandmaster at the ancient board game of Go, was trained on a data set of hundreds of thousands of human games, and on the millions of games it played against itself in simulation.

To train a robot, though, such huge data sets are unavailable. "This is a problem," notes Hadsell. You can simulate thousands of games of Go in a few minutes, run in parallel on hundreds of CPUs. But if it takes 3 seconds for a robot to pick up a cup, then you can only do it 20 times per minute per robot. What's more, if your image-recognition system gets the first million images wrong, it might not matter much. But if your bipedal robot falls over the first 1,000 times it tries to walk, then you'll have a badly dented robot, if not worse.

The problem of real-world data is—at least for now—insurmountable. But that's not stopping DeepMind from gathering all it can, with robots constantly whirring in its labs. And across the field, robotics researchers are trying to get around this paucity of data with a technique called sim-to-real.

The San Francisco-based lab OpenAI recently exploited this strategy in training a robot hand to solve a Rubik's Cube. The researchers built a virtual environment containing a cube and a virtual model of the robot hand, and trained the AI that would run the hand in the simulation. Then they installed the AI in the real robot hand, and gave it a real Rubik's Cube. Their sim-to-real program enabled the physical robot to solve the physical puzzle.

Despite such successes, the technique has major limitations, Hadsell says, noting that AI researcher and roboticist Rodney Brooks "likes to say that simulation is 'doomed to succeed.' " The trouble is that simulations are too perfect, too removed from the complexities of the real world. "Imagine two robot hands in simulation, trying to put a cellphone together," Hadsell says. If you allow them to try millions of times, they might eventually discover that by throwing all the pieces up in the air with exactly the right amount of force, with exactly the right amount of spin, that they can build the cellphone in a few seconds: The pieces fall down into place precisely where the robot wants them, making a phone. That might work in the perfectly predictable environment of a simulation, but it could never work in complex, messy reality. For now, researchers have to settle for these imperfect simulacrums. "You can add noise and randomness artificially," Hadsell explains, "but no contemporary simulation is good enough to truly recreate even a small slice of reality."
There are more profound problems. The one that Hadsell is most interested in is that of catastrophic forgetting: When an AI learns a new task, it has an unfortunate tendency to forget all the old ones.

The problem isn't lack of data storage. It's something inherent in how most modern AIs learn. Deep learning, the most common category of artificial intelligence today, is based on neural networks that use neuronlike computational nodes, arranged in layers, that are linked together by synapselike connections.

Before it can perform a task, such as classifying an image as that of either a cat or a dog, the neural network must be trained. The first layer of nodes receives an input image of either a cat or a dog. The nodes detect various features of the image and either fire or stay quiet, passing these inputs on to a second layer of nodes. Each node in each layer will fire if the input from the layer before is high enough. There can be many such layers, and at the end, the last layer will render a verdict: "cat" or "dog."

Each connection has a different "weight." For example, node A and node B might both feed their output to node C. Depending on their signals, C may then fire, or not. However, the A-C connection may have a weight of 3, and the B-C connection a weight of 5. In this case, B has greater influence over C. To give an implausibly oversimplified example, A might fire if the creature in the image has sharp teeth, while B might fire if the creature has a long snout. Since the length of the snout is more helpful than the sharpness of the teeth in distinguishing dogs from cats, C pays more attention to B than it does to A.

Each node has a threshold over which it will fire, sending a signal to its own downstream connections. Let's say C has a threshold of 7. Then if only A fires, it will stay quiet; if only B fires, it will stay quiet; but if A and B fire together, their signals to C will add up to 8, and C will fire, affecting the next layer.
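The toy example above can be transcribed directly into a few lines of code; the only numbers used below are the ones in the text.

```python
def node_c(a_fires: bool, b_fires: bool) -> bool:
    """Node C from the example: input weights 3 (from A) and 5 (from B),
    and a firing threshold of 7."""
    signal = 3 * a_fires + 5 * b_fires
    return signal > 7

assert not node_c(True, False)   # 3: sharp teeth alone, C stays quiet
assert not node_c(False, True)   # 5: long snout alone, C stays quiet
assert node_c(True, True)        # 3 + 5 = 8 clears the threshold, C fires
```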

What does all this have to do with training? Any learning scheme must be able to distinguish between correct and incorrect responses and improve itself accordingly. If a neural network is shown a picture of a dog, and it outputs "dog," then the connections that fired will be strengthened; those that did not will be weakened. If it incorrectly outputs "cat," then the reverse happens: The connections that fired will be weakened; those that did not will be strengthened.



Training of a neural network to distinguish whether a photograph is of a cat or a dog uses a portion of the nodes and connections in the network [shown in red, at left]. Using a technique called elastic weight consolidation, the network can then be trained on a different task, distinguishing images of cars from buses. The key connections from the original task are “frozen" and new connections are established [blue, at right]. A small fraction of the frozen connections, which would otherwise be used for the second task, are unavailable [purple, right diagram]. That slightly reduces performance on the second task.

But imagine you take your dog-and-cat-classifying neural network, and now start training it to distinguish a bus from a car. All its previous training will be useless. Its outputs in response to vehicle images will be random at first. But as it is trained, it will reweight its connections and gradually become effective. It will eventually be able to classify buses and cars with great accuracy. At this point, though, if you show it a picture of a dog, all the nodes will have been reweighted, and it will have "forgotten" everything it learned previously.

This is catastrophic forgetting, and it's a large part of the reason that programming neural networks with humanlike flexible intelligence is so difficult. "One of our classic examples was training an agent to play Pong," says Hadsell. You could get it playing so that it would win every game against the computer 20 to zero, she says; but if you perturb the weights just a little bit, such as by training it on Breakout or Pac-Man, "then the performance will—boop!—go off a cliff." Suddenly it will lose 20 to zero every time.
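The effect is easy to reproduce at toy scale. The sketch below (a self-contained illustration, not DeepMind's code) trains one small network on a synthetic task A, then on an unrelated task B with the same weights, and watches task-A accuracy collapse toward chance:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

def make_batch(axis, n=512):
    # Stand-ins for the article's tasks: task 0 labels a point by the sign
    # of its x coordinate, task 1 by the sign of its y coordinate.
    x = torch.randn(n, 2)
    return x, (x[:, axis] > 0).long()

def train(axis, steps=300):
    for _ in range(steps):
        x, y = make_batch(axis)
        opt.zero_grad()
        loss_fn(net(x), y).backward()
        opt.step()

def accuracy(axis):
    x, y = make_batch(axis, n=4000)
    return (net(x).argmax(dim=1) == y).float().mean().item()

train(axis=0)
print("task A after learning A:", accuracy(0))   # near 1.0
train(axis=1)                                     # same weights, new task
print("task A after learning B:", accuracy(0))   # falls toward 0.5 (chance)
print("task B after learning B:", accuracy(1))   # near 1.0
```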

This weakness poses a major stumbling block not only for machines built to succeed at several different tasks, but also for any AI systems that are meant to adapt to changing circumstances in the world around them, learning new strategies as necessary.

There are ways around the problem. An obvious one is to simply silo off each skill. Train your neural network on one task, save its network's weights to its data storage, then train it on a new task, saving those weights elsewhere. Then the system need only recognize the type of challenge at the outset and apply the proper set of weights.

But that strategy is limited. For one thing, it's not scalable. If you want to build a robot capable of accomplishing many tasks in a broad range of environments, you'd have to train it on every single one of them. And if the environment is unstructured, you won't even know ahead of time what some of those tasks will be. Another problem is that this strategy doesn't let the robot transfer the skills that it acquired solving task A over to task B. Such an ability to transfer knowledge is an important hallmark of human learning.

Hadsell's preferred approach is something called "elastic weight consolidation." The gist is that, after learning a task, a neural network will assess which of the synapselike connections between the neuronlike nodes are the most important to that task, and it will partially freeze their weights. "There'll be a relatively small number," she says. "Say, 5 percent." Then you protect these weights, making them harder to change, while the other nodes can learn as usual. Now, when your Pong-playing AI learns to play Pac-Man, those neurons most relevant to Pong will stay mostly in place, and it will continue to do well enough on Pong. It might not keep winning by a score of 20 to zero, but possibly by 18 to 2.
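In its simplest published form, elastic weight consolidation scores each weight's importance with a diagonal Fisher estimate and adds a quadratic penalty that pulls important weights back toward their old values. A minimal PyTorch sketch of those two pieces follows; the penalty strength `lam` is an illustrative hyperparameter, not a value from DeepMind's papers.

```python
import torch

def fisher_diagonal(net, batches, loss_fn):
    """Importance score per weight: the average squared gradient of the
    old task's loss (a diagonal Fisher information estimate)."""
    fisher = {n: torch.zeros_like(p) for n, p in net.named_parameters()}
    for x, y in batches:
        net.zero_grad()
        loss_fn(net(x), y).backward()
        for n, p in net.named_parameters():
            fisher[n] += p.grad.detach() ** 2 / len(batches)
    return fisher

def ewc_penalty(net, fisher, old_params, lam=1000.0):
    """Quadratic pull back toward the old weights, scaled by importance;
    high-Fisher weights are effectively frozen, the rest stay plastic."""
    loss = 0.0
    for n, p in net.named_parameters():
        loss = loss + (fisher[n] * (p - old_params[n].detach()) ** 2).sum()
    return 0.5 * lam * loss

# New-task training step:
#   total_loss = new_task_loss + ewc_penalty(net, fisher, old_params)
```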




Raia Hadsell [top] leads a team of roboticists at DeepMind in London. At OpenAI, researchers used simulations to train a robot hand [above] to solve a Rubik's Cube. TOP: DEEPMIND; BOTTOM: OPENAI
There's an obvious side effect, however. Each time your neural network learns a task, more of its neurons will become inelastic. If Pong fixes some neurons, and Breakout fixes some more, "eventually, as your agent goes on learning Atari games, it's going to get more and more fixed, less and less plastic," Hadsell explains.

This is roughly similar to human learning. When we're young, we're fantastic at learning new things. As we age, we get better at the things we have learned, but find it harder to learn new skills.

"Babies start out having much denser connections that are much weaker," says Hadsell. "Over time, those connections become sparser but stronger. It allows you to have memories, but it also limits your learning." She speculates that something like this might help explain why very young children have no memories: "Our brain layout simply doesn't support it." In a very young child, "everything is being catastrophically forgotten all the time, because everything is connected and nothing is protected."

The loss-of-elasticity problem is, Hadsell thinks, fixable. She has been working with the DeepMind team since 2018 on a technique called "progress and compress." It involves combining three relatively recent ideas in machine learning: progressive neural networks, knowledge distillation, and elastic weight consolidation, described above.

Progressive neural networks are a straightforward way of avoiding catastrophic forgetting. Instead of having a single neural network that trains on one task and then another, you have one neural network that trains on a task—say, Breakout. Then, when it has finished training, it freezes its connections in place, moves that neural network into storage, and creates a new neural network to train on a new task—say, Pac-Man. Its knowledge of each of the earlier tasks is frozen in place, so cannot be forgotten. And when each new neural network is created, it brings over connections from the previous games it has trained on, so it can transfer skills forward from old tasks to new ones. But, Hadsell says, it has a problem: It can't transfer knowledge the other way, from new skills to old. "If I go back and play Breakout again, I haven't actually learned anything from this [new] game," she says. "There's no backwards transfer."
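A drastically simplified sketch of that freeze-and-extend pattern is below. Real progressive networks add lateral connections at every layer; this toy version uses a single hidden layer per column and exists only to show old columns being frozen while a new column reads their features.

```python
import torch
import torch.nn as nn

class ProgressiveNet(nn.Module):
    """Toy progressive network: one column per task, old columns frozen."""
    def __init__(self, in_dim=2, hidden=32, out_dim=2):
        super().__init__()
        self.in_dim, self.hidden, self.out_dim = in_dim, hidden, out_dim
        self.columns = nn.ModuleList()

    def add_column(self):
        for col in self.columns:        # freeze all previously learned tasks
            col.requires_grad_(False)
        lateral = self.hidden * len(self.columns)
        self.columns.append(nn.ModuleDict({
            "h":   nn.Linear(self.in_dim, self.hidden),
            "out": nn.Linear(self.hidden + lateral, self.out_dim),
        }))

    def forward(self, x):
        # Hidden features of every column; only the newest is trainable,
        # but its output layer also sees the frozen columns' features.
        hs = [torch.relu(col["h"](x)) for col in self.columns]
        return self.columns[-1]["out"](torch.cat([hs[-1]] + hs[:-1], dim=1))
```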

That's where knowledge distillation, developed by the British-Canadian computer scientist Geoffrey Hinton, comes in. It involves taking many different neural networks trained on a task and compressing them into a single one, averaging their predictions. So, instead of having lots of neural networks, each trained on an individual game, you have just two: one that learns each new game, called the "active column," and one that contains all the learning from previous games, averaged out, called the "knowledge base." First the active column is trained on a new task—the "progress" phase—and then its connections are added to the knowledge base, and distilled—the "compress" phase. It helps to picture the two networks as, literally, two columns. Hadsell does, and draws them on the whiteboard for me as she talks.
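The "compress" phase is essentially Hinton-style distillation: the knowledge base is trained to match the active column's softened predictions. A minimal version of that loss, assuming PyTorch and an illustrative temperature of 2.0, looks like this:

```python
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Train the knowledge base (student) to match the active column's
    softened predictions (teacher). T smooths both distributions; the
    T*T factor keeps the gradient scale comparable across temperatures."""
    soft_teacher = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * T * T
```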

The trouble is, by using knowledge distillation to lump the many individual neural networks of the progressive-neural-network system together, you've brought the problem of catastrophic forgetting back in. You'll change all the weights of the connections and render your earlier training useless. To deal with this, Hadsell adds in elastic weight consolidation: Each time the active column transfers its learning about a particular task to the knowledge base, it partially freezes the nodes most important to that particular task.

By having two neural networks, Hadsell's system avoids the main problem with elastic weight consolidation, which is that all its connections will eventually freeze. The knowledge base can be as large as you like, so a few frozen nodes won't matter. But the active column itself can be much smaller, and smaller neural networks can learn faster and more efficiently than larger ones. So the progress-and-compress model, Hadsell says, will allow an AI system to transfer skills from old tasks to new ones, and from new tasks back to old ones, while never either catastrophically forgetting or becoming unable to learn anything new.

Other researchers are using different strategies to attack the catastrophic forgetting problem; there are half a dozen or so avenues of research. Ted Senator, a program manager at the Defense Advanced Research Projects Agency (DARPA), leads a group that is using one of the most promising, a technique called internal replay. "It's modeled after theories of how the brain operates," Senator explains, "particularly the role of sleep in preserving memory."

The theory is that the human brain replays the day's memories, both while awake and asleep: It reactivates its neurons in similar patterns to those that arose while it was having the corresponding experience. This reactivation helps stabilize the patterns, meaning that they are not overwritten so easily. Internal replay does something similar. In between learning tasks, the neural network recreates patterns of connections and weights, loosely mimicking the awake-sleep cycle of human neural activity. The technique has proven quite effective at avoiding catastrophic forgetting.
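The exact replay mechanisms vary. The sketch below shows a simplified rehearsal-style stand-in, in which a small reservoir of past examples is mixed into each new-task batch so old patterns keep being reactivated. True internal replay regenerates internal activation patterns rather than storing raw data, so treat this only as the general shape of the idea.

```python
import random

class RehearsalBuffer:
    """Keep a small, uniform sample of past training examples (reservoir
    sampling) and mix them into every new-task batch, so earlier patterns
    keep being reactivated instead of silently overwritten."""
    def __init__(self, capacity=1000):
        self.capacity, self.items, self.seen = capacity, [], 0

    def add(self, example):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            j = random.randrange(self.seen)   # uniform over all history
            if j < self.capacity:
                self.items[j] = example

    def sample(self, k):
        return random.sample(self.items, min(k, len(self.items)))

# Per training step:
#   batch = new_task_examples + buffer.sample(len(new_task_examples))
```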

There are many other hurdles to overcome in the quest to bring embodied AI safely into our daily lives. "We have made huge progress in symbolic, data-driven AI," says Thrishantha Nanayakkara, who works on robotics at Imperial College London. "But when it comes to contact, we fail miserably. We don't have a robot that we can trust to hold a hamster safely. We cannot trust a robot to be around an elderly person or a child."

Nanayakkara points out that much of the "processing" that enables animals to deal with the world doesn't happen in the brain, but rather elsewhere in the body. For instance, the shape of the human ear canal works to separate out sound waves, essentially performing "the Fourier series in real time." Otherwise that processing would have to happen in the brain, at a cost of precious microseconds. "If, when you hear things, they're no longer there, then you're not embedded in the environment," he says. But most robots currently rely on CPUs to process all the inputs, a limitation that he believes will have to be surmounted before substantial progress can be made.
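The Fourier point is easy to demonstrate in software, where the same separation costs compute rather than coming free from physical shape. A minimal NumPy sketch with two made-up tones:

```python
import numpy as np

rate = 44100                                  # samples per second
t = np.arange(rate) / rate                    # one second of "sound"
mixed = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)

# Decompose the mixture into its component frequencies, as the ear
# canal's shape does physically; an FFT does it in O(n log n) on a chip.
spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), d=1 / rate)
print(freqs[spectrum > spectrum.max() / 4])   # -> [ 440. 1000.]
```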

His colleague Petar Kormushev says another problem is proprioception, the robot's sense of its own physicality. A robot's model of its own size and shape is programmed in directly by humans. The problem is that when it picks up a heavy object, it has no way of updating its self-image. When we pick up a hammer, we adjust our mental model of our body's shape and weight, which lets us use the hammer as an extension of our body. "It sounds ridiculous but they [robots] are not able to update their kinematic models," he says. Newborn babies, he notes, make random movements that give them feedback not only about the world but about their own bodies. He believes that some analogous technique would work for robots.

At the University of Oxford, Ingmar Posner is working on a robot version of "metacognition." Human thought is often modeled as having two main "systems"—system 1, which responds quickly and intuitively, such as when we catch a ball or answer questions like "which of these two blocks is blue?," and system 2, which responds more slowly and with more effort. It comes into play when we learn a new task or answer a more difficult mathematical question. Posner has built functionally equivalent systems in AI. Robots, in his view, are consistently either overconfident or underconfident, and need ways of knowing when they don't know something. "There are things in our brain that check our responses about the world. There's a bit which says don't trust your intuitive response," he says.
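One crude software analogue of that check is an uncertainty gate: trust the fast system-1 answer only when the model's predictive entropy is low. The sketch below illustrates the general idea, not Posner's actual system; the threshold is arbitrary.

```python
import numpy as np

def defer_to_system_two(probs, threshold=0.5):
    """Trust the fast model only when its predictive entropy is low;
    otherwise hand off to a slower, deliberate routine. The threshold
    here is chosen purely for illustration."""
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    return entropy > threshold

print(defer_to_system_two(np.array([0.98, 0.01, 0.01])))  # False: confident
print(defer_to_system_two(np.array([0.40, 0.35, 0.25])))  # True: unsure
```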

For most of these researchers, including Hadsell and her colleagues at DeepMind, the long-term goal is "general" intelligence. However, Hadsell's idea of an artificial general intelligence isn't the usual one—of an AI that can perform all the intellectual tasks that a human can, and more. Motivating her own work has "never been this idea of building a superintelligence," she says. "It's more: How do we come up with general methods to develop intelligence for solving particular problems?" Cat intelligence, for instance, is general in that it will never encounter some new problem that makes it freeze up or fail. "I find that level of animal intelligence, which involves incredible agility in the world, fusing different sensory modalities, really appealing. You know the cat is never going to learn language, and I'm okay with that."

Hadsell wants to build algorithms and robots that will be able to learn and cope with a wide array of problems in a specific sphere. A robot intended to clean up after a nuclear mishap, for example, might have some quite high-level goal—"make this area safe"—and be able to divide that into smaller subgoals, such as finding the radioactive materials and safely removing them.

I can't resist asking about consciousness. Some AI researchers, including Hadsell's DeepMind colleague Murray Shanahan, suspect that it will be impossible to build an embodied AI of real general intelligence without the machine having some sort of consciousness. Hadsell herself, though, despite a background in the philosophy of religion, has a robustly practical approach.

"I have a fairly simplistic view of consciousness," she says. For her, consciousness means an ability to think outside the narrow moment of "now"—to use memory to access the past, and to use imagination to envision the future. We humans do this well. Other creatures, less so: Cats seem to have a smaller time horizon than we do, with less planning for the future. Bugs, less still. She is not keen to be drawn out on the hard problem of consciousness and other philosophical ideas. In fact, most roboticists seem to want to avoid it. Kormushev likens it to asking "Can submarines swim?...It's pointless to debate. As long as they do what I want, we don't have to torture ourselves with the question."




Pushing a star-shaped peg into a star-shaped hole may seem simple, but it was a minor triumph for one of DeepMind's robots. DEEPMIND
In the DeepMind robotics lab it's easy to see why that sort of question is not front and center. The robots' efforts to pick up blocks suggest we don't have to worry just yet about philosophical issues relating to artificial consciousness.

Nevertheless, while walking around the lab, I find myself cheering one of them on. A red robotic arm is trying, jerkily, to pick up a star-shaped brick and then insert it into a star-shaped aperture, as a toddler might. On the second attempt, it gets the brick aligned and is on the verge of putting it in the slot. I find myself yelling "Come on, lad!," provoking a raised eyebrow from Hadsell. Then it successfully puts the brick in place.

One task completed, at least. Now, it just needs to hang on to that strategy while learning to play Pong.

This article appears in the October 2021 print issue as "How to Train an All-Purpose Robot."

How DeepMind Is Reinventing the Robot - IEEE Spectrum



From: Frank Sully, 10/5/2021 6:29:32 PM
Out of the Box, Into the Container: NVIDIA and VMware Deliver AI at Scale for the Enterprise

NVIDIA AI Enterprise and VMware vSphere with Tanzu simplify enterprise AI development and application management.

October 5, 2021

by JOHN FANELLI

NVIDIA and VMware are marking another milestone in their collaboration to develop an AI-ready enterprise platform that brings the world’s leading AI stack and optimized software to the infrastructure used by hundreds of thousands of enterprises worldwide.

Today at VMworld 2021, VMware announced an upcoming update to VMware vSphere with Tanzu, the industry’s leading virtualization platform and the fastest way for IT teams to get started with Kubernetes workloads on existing infrastructure.

Enterprises can now run trials of their AI projects using vSphere with Tanzu in conjunction with the NVIDIA AI Enterprise software suite. Generally available since August 2021, NVIDIA AI Enterprise is an end-to-end, cloud-native suite of AI and data analytics frameworks and tools optimized, certified and supported by NVIDIA to enable the rapid deployment, management and scaling of AI applications in the modern hybrid cloud.

AI Virtualization Submission to Industry MLPerf Benchmark

Today’s news follows on Dell Technologies’ recent MLPerf benchmark achievement of 94.4% to 100% of equivalent bare-metal performance running NVIDIA AI Enterprise and VMware vSphere with three NVIDIA A100 Tensor Core GPUs in a Dell EMC PowerEdge R7525 server.

The submission is the second time a vendor has submitted MLPerf results on virtualized infrastructure, and reflects how NVIDIA AI Enterprise is designed to power advanced AI workloads on accelerated, industry-standard servers in the modern data center.

AI and IT: Better Together

Modern AI workloads can demand specialized infrastructure and software, creating complexity for IT teams working to support these advanced application requirements within enterprise data centers and hybrid clouds. By bridging the gap between the worlds of IT operations, data scientists and application developers, NVIDIA AI Enterprise simplifies the AI development lifecycle to help customers get projects into production faster.

NVIDIA AI Enterprise and VMware vSphere with Tanzu enable developers to run AI workloads on Kubernetes containers within their VMware environments, leveraging infrastructure easily managed by IT. The software runs on mainstream, NVIDIA-Certified Systems from leading server manufacturers, providing an integrated, complete stack of software and hardware optimized for AI.

“VMware serves enterprises by simplifying infrastructure complexity, and our collaboration with NVIDIA enables customers to develop and deploy advanced AI applications on their hybrid clouds,” said Lee Caswell, vice president of marketing for the Cloud Infrastructure Business Group at VMware. “With NVIDIA AI Enterprise and VMware vSphere with Tanzu, customers can manage AI development and deployment on mainstream data center servers and clouds, making it easy to integrate the AI applications powering growth in every industry.”

Enterprise-Grade AI for Developers and IT

NVIDIA AI Enterprise provides developer-optimized AI software such as PyTorch, TensorFlow, NVIDIA TensorRT, NVIDIA Triton Inference Server and NVIDIA RAPIDS. These tools make it easy for AI developers and data scientists to access tools and frameworks needed to build a host of enterprise AI applications such as conversational AI, computer vision and recommender systems.

The cloud-native architecture of NVIDIA AI Enterprise enables IT to centrally manage all clusters and apps across their hybrid cloud infrastructure. The software delivers near-bare-metal AI performance — even in virtualized environments — so that IT teams can help developers rapidly explore ideas and iterate as they build their models.

Broad Ecosystem of Customer Choice

NVIDIA AI Enterprise is supported by a broad range of server manufacturers offering NVIDIA-Certified Systems. These include Atos, Dell Technologies, GIGABYTE, H3C, Hewlett Packard Enterprise, Inspur, Lenovo and Supermicro, all of which feature NVIDIA GPUs such as the NVIDIA A100 and NVIDIA A30.

NVIDIA AI Enterprise is available from worldwide NVIDIA channel partners including Atea, Axians, Carahsoft Technology Corp., Computacenter, Insight Enterprises, NTT, Presidio, Sirius, SoftServe, SVA System Vertrieb Alexander GmbH, TD SYNNEX, Trace3 and WWT. To support customers needing instant access to AI infrastructure, NVIDIA AI Enterprise is also expected to be coming soon to the NVIDIA AI LaunchPad program available with digital infrastructure leader Equinix.

NVIDIA AI Enterprise is generally available for VMware vSphere, and evaluation licenses are available for customers who would like to trial NVIDIA AI Enterprise and VMware vSphere with Tanzu. The Dell Technologies Validated Design for AI, the first jointly engineered solution of NVIDIA AI Enterprise software on VMware vSphere, is also available today.

Customers interested in learning more can tune into the NVIDIA and VMware executive presentation on making AI available to every enterprise, our joint update on enterprise AI and additional NVIDIA sessions at VMworld 2021, held online Oct. 5-7.

VMware and VMware vSphere are registered trademarks or trademarks of VMware, Inc. in the United States and other jurisdictions.

blogs.nvidia.com



From: Frank Sully, 10/5/2021 7:17:01 PM
Domino Data Lab Raises $100M in Series F Funding

USA

Published on October 5, 2021



Domino Data Lab, a San Francisco, CA-based provider of an Enterprise MLOps platform, raised $100m in Series F funding.

The round, which brings total funding to $228m, was led by Great Hill Partners with participation from existing investors Coatue Management, Highland Capital Partners and Sequoia Capital, as well as NVIDIA.

The company intends to use the funds to continue to expand operations and its business reach.

Founded in 2013 and led by Nick Elprin, CEO, Domino Data Lab provides model-driven businesses with an MLOps platform that accelerates the development and deployment of data science work while increasing collaboration and governance.

Domino and NVIDIA will also further integrate products and expand joint sales efforts to support customers’ efforts to build model-driven businesses.

Since 2020, the company has offered certified Enterprise MLOps software for NVIDIA DGX systems as a charter member of the NVIDIA DGX-Ready Software program. Building on that collaboration, Domino is working with NVIDIA to develop product functionality to expand the accelerated computing capabilities in its platform. This includes validating the Domino platform for NVIDIA AI Enterprise so that it can run on mainstream, NVIDIA-Certified Systems from OEM hardware providers.

finsmes.com



From: Frank Sully, 10/6/2021 8:17:15 PM
AstraZeneca: Accelerating Drug Discovery with Machine Learning and AI on Cambridge-1




From: Frank Sully, 10/7/2021 10:04:10 PM
Crypto miners have busted through Nvidia’s LHR graphics cards yet again

By dls

OCT 7, 2021


Nvidia’s RTX 30-series LHR cards were made to minimize crypto mining, but miners found yet another workaround by mining two coins at once.

dlsserve.com



From: Frank Sully, 10/7/2021 10:19:15 PM
Wonders of the World: NVIDIA Emerging Chapters Program Spurs AI Innovation Across Developing Countries

The new program supports local communities in emerging markets to build and scale their AI, data science and graphics projects.

October 7, 2021 by KATE KALLOT

Two artists, if given the same paint set, would create distinct works — each showcasing their own points of view. This is especially likely if the artists were to hail from opposite ends of the world.

Similarly, developers across the globe use the same tools to create different AI-based applications, each solving challenges relevant to their local community.

Give an NVIDIA Jetson Nano Developer Kit to an AI enthusiast in San Francisco, for example, and they will create an AI-based app that analyzes yoga poses. With that same tool, developers in the tinyML Kenya community are building AI systems for wildlife monitoring and conservation. Innovation is happening in each place, so long as the developers have access to technologies and enablement tools.

NVIDIA Emerging Chapters is a new program that enables local communities in emerging areas to build and scale their AI, data science and graphics projects. It provides technological tools, educational resources and co-marketing opportunities for these developers.

Members of Emerging Chapters have access to training and development opportunities through the NVIDIA Deep Learning Institute (DLI). This includes free passes to take select self- or instructor-led courses on AI and data science. Upon course completion, developers can receive an NVIDIA DLI Certificate to highlight their new skills and help advance their careers.

Since the program’s launch this year, 30+ African developer community groups — including seven founded by women — have joined Emerging Chapters, fostering a growing network of AI experts. The program is now expanding to Latin America, the Middle East, South Asia and other emerging markets that are hungry for more AI and related technology.

Reflecting the World’s Diversity

With Emerging Chapters, NVIDIA hopes to help mend what’s called the technology fracture — the gap between developers in the global North and those in emerging markets.

“This program is not about charity, it’s about innovation and business,” said Amulya Vishwanath, strategic program lead on NVIDIA’s emerging areas team. “It’s crucial to get communities in emerging areas access to AI technology, which they can then use to make their own, ensuring the developer community is more reflective of the world’s true diversity.”

By spurring innovation in collaboration with local communities, the program can cultivate AI solutions that most pertain to grassroots developers and their direct ecosystem, while also democratizing the global AI movement.

“With Emerging Chapters, NVIDIA is taking active steps toward positively contributing to the growing trend of technology in emerging markets,” said Michael Young, co-founder of the Python Ghana community. “NVIDIA helped us bring growing AI engineers from across the country together for live, online training sessions with experts.”

Fueling Innovation in Africa

Africa is an emerging market in which the AI revolution is underway. African developers are using AI and NVIDIA technology, for example, to maximize crop yields and honor Olympic athletes.

Early members of the Emerging Chapters program include DeepLearning.AI Kenya, an education technology company that empowers individuals in the AI workforce; NERD Ethiopia, a youth center that provides a hacker space and educational resources for AI research; and tinyML Kenya, a community of machine learning researchers and practitioners.

“TinyML Kenya joining NVIDIA Emerging Chapters was such a timely, game-changing move for our community,” said Clinton Oduor, the foundation’s lead organizer. “The program has allowed our members to meet industry leaders, learn new skills, earn certifications and solve real-world problems.”

Zindi, Africa’s first data science competition platform, is also a member. The organization has an ambassador program made up of data science community leaders from 20+ African countries.

“It’s great to be able to support people doing such excellent work with the help of the Emerging Chapters program,” said Celina Lee, Zindi’s co-founder and CEO.

Allowing AI Access for All

To bolster individuals who seek to work with AI, HPC, graphics and more, NVIDIA offers a range of opportunities including the NVIDIA Developer Program — which has more than 2.5 million members — and NVIDIA Inception, which offers go-to-market support, expertise and technology for AI and data science startups.

In addition, the company’s GTC conference, running Nov. 8-11, will feature a track focused on emerging markets. The conference is virtual, free and has sessions 24/7.

Over 20,000 developers from emerging markets tuned in to learn about AI innovation at our last GTC — and the appetite for developer resources is only growing.

To learn more about global AI innovation, register for GTC.

Join the NVIDIA Emerging Chapters Program.

blogs.nvidia.com



From: Frank Sully, 10/8/2021 11:39:32 AM
Oski Technology, an Expert in Formal Verification, Joins NVIDIA

October 8, 2021

by JONAH ALBEN

We are excited to announce that Oski Technology, a company specializing in formal verification methods, will be joining NVIDIA.

Modern processors pack tens of billions of transistors, tiny on/off switches connected by billions of microscopic pathways. A bug in a single transistor can prevent a chip from operating correctly, requiring costly revisions to fix.

Today, verification engineers rely on two very different methods to make sure bugs don’t make it into silicon — simulation and formal verification.

The first approach relies on millions of simulations that search for bugs, exercising corner cases in carefully designed tests.

Formal verification, Oski’s specialty, is a powerful alternative that uses mathematical analysis of a design instead of simulations to prove that a particular feature behaves correctly for all possible inputs.

Whereas simulation injects 1’s and 0’s into a design to test whether numbers are added properly, Oski’s approach formally verifies that “c = a + b.”
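To make the "c = a + b" point concrete, here is a small sketch using the open-source Z3 solver (not Oski's or Jasper's proprietary tooling): it proves that a gate-level ripple-carry adder matches 8-bit addition for every one of the 65,536 possible input pairs, with no simulation vectors at all.

```python
from z3 import BitVec, BitVecVal, prove  # pip install z3-solver

a, b = BitVec("a", 8), BitVec("b", 8)

def ripple_carry_add(x, y, width=8):
    """A bit-level 'design under test', built from XOR/AND/OR gates the
    way hardware would implement an adder (final carry-out discarded)."""
    result, carry = BitVecVal(0, width), BitVecVal(0, width)
    for i in range(width):
        xi, yi = (x >> i) & 1, (y >> i) & 1
        s = xi ^ yi ^ carry
        carry = (xi & yi) | (carry & (xi ^ yi))
        result = result | (s << i)
    return result

# A formal proof over ALL inputs, not a sample of them: prints "proved".
prove(ripple_carry_add(a, b) == a + b)
```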

A Leading Role in Formal Verification

In the semiconductor industry, Oski is well known for its leading-edge work in formal verification. Its founder, Vigyan Singhal, also founded Jasper Design Automation. Jasper’s software remains one of the most powerful tools available for formal verification proofs. However, powerful tools require great expertise to be used successfully.

In 2005, Vigyan started Oski with the mission of applying the deep computer science of formal verification. Oski has since grown to become a well-recognized leader in the field and has been a valued partner to NVIDIA for more than 10 years.

An Office in a Rising Tech Hub

Oski has offices in San Jose, Budapest and Gurgaon. Most of its employees work in Gurgaon, a city of more than a million people less than 20 miles southeast of New Delhi. Recently renamed Gurugram, it’s the country’s second-largest tech hub, as well as an emerging center of finance and banking.

India is already home to NVIDIA’s largest group of employees outside the U.S., and the new Gurugram office will be NVIDIA’s fourth engineering office in India.

Raising the Bar for Innovation

As NVIDIA’s products have grown in complexity and scope, now spanning markets from gaming and data center computing to networking and autonomous vehicles, the importance of designing perfect first silicon has never been higher. And new autonomous machine applications where safety is the highest goal make verification vital.

With this acquisition, we have the opportunity to dramatically increase our investment in and commitment to formal verification strategies to achieve that goal.

Powerful verification methods directly enable more rapid innovation, and we are very excited to join forces with the Oski team to deliver even more amazing products for years to come.

blogs.nvidia.com



From: Frank Sully, 10/8/2021 7:01:11 PM
NVIDIA’s mining limiters bested by crypto miners with help of T-Rex software


NVIDIA has been working hard to prove that the best GPUs for crypto mining are in fact not the best GPUs for crypto mining at all, and that consumer-facing graphics cards should be for average users, not those looking to make a crypto fortune. The company’s stance comes at a time when analyst statistics estimate that roughly 25% of GPUs went to crypto miners and scalpers in the first quarter of 2021.

To combat miners, NVIDIA has instated hash rate limiters in many of its cards. You can learn what a hash rate is in our previous coverage on the topic, but here’s the key takeaway: Hash rates are essential to crypto mining, and putting a limit on them minimizes a GPU’s ability to effectively and efficiently dig for cryptocurrency.
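For the curious, a hash rate is literally hashes computed per second, and you can measure a (very slow, CPU-bound) one yourself. The toy benchmark below uses SHA-256 for illustration; Ethereum mining actually used a different, memory-hard function (Ethash) at GPU rates many orders of magnitude higher.

```python
import hashlib
import time

# Count how many SHA-256 hashes one CPU core can grind through in a second.
header, nonce, t0 = b"example block header", 0, time.time()
while time.time() - t0 < 1.0:
    hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
    nonce += 1
print(f"{nonce:,} hashes/second")  # the 'hash rate' the limiters throttle
```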

So the question for miners has become: How to mine crypto when NVIDIA is deliberately trying to stymie said activity. And the latest answer to that ever-evolving question turns out to be rather counterintuitive.

As reported by Tom’s Hardware, a piece of mining software has seized upon a fresh solution to the hash rate limiter conundrum. In order to bypass NVIDIA’s forbidden-activities list, it simply does more of the forbidden activity. That’s right: the software (named T-Rex) lets miners bypass the limiters by mining two cryptocurrencies at once instead of just one. The tradeoff is that you still can’t focus 100% of a card’s energies on a single currency, but at least you can maximize the card’s overall output.

techtelegraph.co.uk
