Technology Stocks: NVIDIA Corporation (NVDA)

From: Glenn Petersen | 7/14/2017 9:16:54 AM
   of 1622
Nvidia is powering the world’s first Level 3 self-driving production car

by Darrell Etherington ( @etherington)
July 12, 2017

Audi announced Tuesday that its forthcoming A8 would be the first production vehicle to ship with a Level 3 self-driving feature onboard when it goes on sale next year, and now we know that Nvidia’s technology will be helping power the vehicle’s ‘traffic jam pilot’ autonomous capabilities. Nvidia’s going to be powering a lot in the new A8, in fact – the car has six Nvidia processors helping power not only traffic jam pilot, but also its infotainment system, virtual cockpit instrumentation and headrest tablets for backseat passengers on fully equipped models.

The introduction of Level 3 autonomy on the A8 will mean that drivers don’t have to pay attention to the road in certain conditions – specifically in this case when the car is driving 37 mph or under on a highway with a physical divider. If the vehicle meets those conditions, and local laws allow, drivers can do whatever else is legally allowed behind the wheel, and the system will let them know when it’s time to resume manual control.

That’s a step further than current highway driving assistance features like Tesla’s Autopilot, which is classified as a Level 2 system and requires a driver to be paying attention and ready to resume control at all times. But it’s also designed primarily for sitting in traffic, whereas Autopilot is designed for a range of speeds in highway driving scenarios.

Nvidia’s processor is the “brain” of Audi’s zFAS system, which is the computer that handles driver assistance onboard the A8, and that takes sensor data gathered from the vehicle’s radar, camera, laser scanning and ultrasound sensors to create a fused picture of the road with a range of different types of data. The zFAS decides how the car behaves when traffic jam pilot is engaged, processing data at a rate of 2.5 billion inputs per second.

Level 3 autonomy is somewhat controversial in the self-driving world because it both allows a driver to relax their attention and yet also can’t handle driving operations of the car entirely, as a Level 4 vehicle could. Audi must be very confident in the A8’s abilities with traffic jam pilot to bring this to market, and Nvidia’s tech has a lot riding on a smooth deployment once it does go to market.


From: groovygh57 | 7/14/2017 10:03:00 PM
   of 1622
Glenn, Jake, John... I love NVDA, 38% return so far, but... you guys are posting junk... we all know that stuff; you're posting advertising...


To: groovygh57 who wrote (1555) | 7/16/2017 6:43:21 AM
From: dominoe
3 Recommendations   of 1622
This week, I became a new Nvidia stockholder. Having this forum, with both public material and personal insights, was very useful to me. It was nice to find so much content, and reaction to it, in one place.


To: dominoe who wrote (1556) | 8/28/2017 7:35:19 AM
From: Glenn Petersen
1 Recommendation   of 1622
Nvidia Is A Textbook Case Of Sowing And Reaping Markets

August 11, 2017
The Next Platform
Timothy Prickett Morgan

In a properly working capitalist economy, innovative companies make big bets, help create new markets, vanquish competition or at least hold it at bay, and profit from all of the hard work, cleverness, luck, and deal making that comes with supplying a good or service to demanding customers.

There is no question that Nvidia has become a textbook example of this as it helped create, and is now benefitting from, the wave of accelerated computing that is crashing into the datacenters of the world. The company is on a roll, is on the razor-sharp cutting edge of its technology and, as best we can figure, has the might and the means to stay there for the foreseeable future.

Nvidia has every prospect of capturing a very large portion of the $30 billion in addressable markets that it is chasing with the Tesla accelerators in its datacenter product line – mainly traditional HPC simulation and modeling plus machine learning training and inference. It is also well on its way to fomenting new markets – virtual workstations with its GRID line, and instantaneous video transcoding and GPU-accelerated databases with its Tesla line – all of which lie outside of that $30 billion total addressable market and represent other very large opportunities.

It is no wonder that a bunch of executives from Cisco Systems have come to Nvidia – John Chambers, the former CEO of the company, is legendary for taking a router company that was at the beginning of the commercial Internet revolution to new heights by finding adjacencies and expanding into them. Cisco did it through a combination of internal investment and acquisition, while Nvidia is doing it mostly through finding new markets and creating them. Cisco essentially outsourced the job – sometimes quite literally with its UCS servers and Nexus switches – and it is interesting to contemplate whether Nvidia might do some acquisitions of its own to help build out its datacenter business. To have a true platform, Nvidia needs a processor and networking, but it only has $5.9 billion in cash as of the end of its second quarter of fiscal 2018, which ended on July 1. We have suggested that if IBM had any sense, it would have bought Nvidia, Mellanox Technologies, and Xilinx already. But maybe these four vendors should create a brand new company, each with proportional investment, that has all of their datacenter elements in it and call it International Business Machines, leaving all the rest of the IBM Company to be called something else.

It’s a funny thought, but the world needs a strong alternative to Intel, and even though AMD is back in the game, it is not yet making money even if it is making waves.

Before we get into the detailed numbers for the second quarter, we wanted to outline the addressable markets in the datacenter that Nvidia has identified and quantified. For Nvidia, it all started with gaming, and this is the revenue stream that allowed the company to branch out into adjacencies in the datacenter, much as Microsoft jumped from the Windows desktop to Windows Server and Cisco jumped from routers to switches, collaboration software, video conferencing, and converged server-network hybrids. The gaming business was just north of an $80 billion market, with 2 billion gamers worldwide and 400 million PC gamers who buy rocketsled PCs with fast CPUs and GPUs; this market is expected to be around $106 billion by 2020, growing at 6 percent annually. This is not huge growth, but it is a very large business and it has the virtue of not sinking. Nvidia’s GeForce graphics card business is far outgrowing the overall market, with revenues expected to grow 25 percent per year between 2016 and 2020, average selling prices rising 12 percent, and units rising 11 percent.
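
As a quick check on the growth math above, compounding the roughly $80 billion market at the stated 6 percent annual rate lands very close to the $106 billion figure for 2020 (the five-year horizon starting from 2015 is our assumption, not a number from the article):

```python
# Back-of-envelope check of the gaming market growth figures cited above.
# Assumption: the ~$80B starting figure is for 2015, compounding to 2020.
start_billions = 80.0
cagr = 0.06
years = 5

projected = start_billions * (1 + cagr) ** years
print(f"Projected 2020 gaming market: ${projected:.0f}B")  # ~$107B, close to the $106B cited
```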

So far, the installed base on gamer PCs is about 14 percent on the “Pascal” generation of GPUs that came out in 2016, another 42 percent is on earlier “Maxwell” GPUs, and the remaining 44 percent is on earlier “legacy” GPUs, as Nvidia calls them. This is a huge installed base that will gradually upgrade to new iron, and with the Pascal GPUs stacking up very well against the AMD alternatives, there are good reasons why Nvidia is not rushing GeForce cards based on the current “Volta” GPUs to market. For one thing, as Nvidia co-founder and CEO Jensen Huang said during his call with Wall Street analysts, it costs about $1,000 to make the hardware in a Volta GPU card, give or take, and this is very expensive compared to prior GPU cards – all due to the many innovations that are encapsulated in the Volta GPU, from HBM2 memory from Samsung to the 12 nanometer FinFET process from Taiwan Semiconductor Manufacturing Co to the sheer beastliness of the Volta GPU itself. This Pascal upgrade cycle on GPUs for gamers and workstations, plus the regular if somewhat muted hum of the normal PC business, is an excellent foundation from which Nvidia can invest in new markets, as it has in the past.

Here is the customer mix for Nvidia’s datacenter business – which sells the Tesla and GRID accelerators as well as the DGX-1 server line and now workstations – for fiscal 2017:

It is important to not confuse customer share with revenue share or aggregate computing share. We think, based on statements that Nvidia has made in the past, that the hyperscalers are spending a lot more dough on Nvidia iron than the supercomputing centers of the world, so that pie chart above shifts a bit when you start talking about money. The important thing is that over 450 applications have been tweaked to run on the CUDA parallel computing environment and can be accelerated by GPUs, and there are myriad more homegrown applications that have been ported to CUDA as well. The Tesla-CUDA combination is a real platform in its own right, and it is at the forefront of high performance computing in its many guises.

An aside: Nvidia is now shipping DGX systems that employ the Volta GPUs, which we profiled here, and has shipped DGX iron using either Pascal or Volta GPUs to over 300 customers and has another 1,000 in the pipeline.

Here is how Nvidia cases out the three core markets for its Tesla compute efforts:

Nvidia thinks that the amount of compute aimed at traditional HPC workloads will grow by more than a factor of 13X between 2013 and 2020, reaching 8 exaflops in aggregate by the end of that period and representing a $4 billion opportunity. One could argue that it is very tough to create exascale machines without some kind of massively parallel, energy efficient processor, and the reason is that CPUs that are good at serial work burn too much energy per unit of work, so you have to use them sparingly. It takes a lot less money to build a fat server node with lots of flops, and provided the applications can be parallelized in CUDA, you can see more than an order of magnitude of node count contraction and an order of magnitude lower hardware spending by moving to a hybrid CPU-GPU cluster from a straight CPU cluster.
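
The node-count contraction argument above can be sketched with back-of-envelope arithmetic. The per-node throughput figures below are hypothetical illustrations, not numbers from the article:

```python
# Illustrative sketch of the node-count contraction described above.
# Hypothetical throughputs: ~1 TFLOPS for a CPU-only node, ~12 TFLOPS
# for a fat hybrid node with several GPU accelerators attached.
target_pflops = 10.0          # hypothetical cluster target, in petaflops
cpu_node_tflops = 1.0
hybrid_node_tflops = 12.0

cpu_nodes = target_pflops * 1000 / cpu_node_tflops
hybrid_nodes = target_pflops * 1000 / hybrid_node_tflops
print(f"CPU-only nodes: {cpu_nodes:,.0f}")
print(f"Hybrid nodes:   {hybrid_nodes:,.0f}")
print(f"Contraction:    {cpu_nodes / hybrid_nodes:.0f}x")  # roughly an order of magnitude
```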

Intel knows this, and that is why the Knights family of Xeon Phi processors exist.

As you can see, the TAM for deep learning – for both training and inference – is expected to be a lot larger than the TAM for traditional HPC by 2020, according to Nvidia. And the amount of computing deployed for these areas is also expected to grow a lot faster than for HPC. By Nvidia’s reckoning, the amount of aggregate computing sold in 2020 dedicated to deep learning training will hit 55 exaflops and will drive $11 billion in revenues, up from 1.4 exaflops in 2013 and an untold amount (but probably well under $100 million) of revenue that year. Deep learning inference, which is utterly dominated by Intel Xeon processors these days, will be an even larger market by 2020, with around 450 exaiops (that’s 450 quintillion integer operations per second of capacity, gauged using the 8-bit INT8 instructions that are commonly used these days for inference) racking up $15 billion in sales. Nvidia is now demonstrating an order of magnitude in savings over CPU-only solutions using its Pascal GPUs, but it remains to be seen if it can knock the CPU out of deep learning inference.
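
A quick calculation shows how aggressive the training forecast above is: going from 1.4 exaflops in 2013 to 55 exaflops in 2020 implies roughly 39X total growth, or around 69 percent compounded per year:

```python
# Implied growth rate behind the deep learning training forecast cited above:
# 1.4 exaflops of aggregate training compute in 2013, 55 exaflops in 2020.
start_ef, end_ef = 1.4, 55.0
years = 2020 - 2013

growth_factor = end_ef / start_ef
cagr = growth_factor ** (1 / years) - 1
print(f"Total growth: {growth_factor:.0f}x over {years} years")
print(f"Implied CAGR: {cagr:.0%}")  # roughly 69% per year
```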

We think that virtual workstation clusters and database acceleration represents billions of dollars more TAM for the Nvidia datacenter business, and we also think that IoT will drive some more, at both the edge of the network and in the physical datacenter itself.

“The number of applications where GPUs are valuable, from training to high-performance computing to virtual PCs to new applications like inferencing and transcoding and AI, are starting to emerge,” Huang explained on the call. “Our belief is that, number one, a GPU has to be versatile to handle the vast array of big data and data-intensive applications that are happening in the cloud because the cloud is a computer. It is not an appliance. It is not a toaster. It is not a lightbulb. It is not a microphone. The cloud has a large number of applications that are data intensive. And second, we have to be world-class at deep learning, and our GPUs have evolved into something that can be absolutely world-class like the TPU, but it has to do all of the things that a datacenter needs to do. After four generations of evolution of our GPUs, the Nvidia GPU is basically a TPU that does a lot more. We could perform deep learning applications, whether it’s in training or in inferencing now, starting with the Pascal P4 and the Volta generation. We can inference better than any known ASIC on the market that I have ever seen. And so the new generation of our GPUs is essentially a TPU that does a lot more. And we can do all the things that I just mentioned and the vast number of applications that are emerging in the cloud.”

Having said all of that, Nvidia’s second quarter was hampered a little bit by the transition to the Volta GPUs and by Intel’s delayed rollout of its “Skylake” Xeon EP processors and their “Purley” server platform. We and Wall Street alike had been thinking the datacenter business would do better than it did – we expected around $450 million based on past trends and the explosive uptake of Pascal accelerators – but the datacenter business only brought in $416 million in the second fiscal quarter.

That said, the Datacenter division saw 175 percent growth year on year and managed to grow 1.7 percent sequentially, which is no mean feat with two product transitions under way. (Three if you count the Power9 processor from IBM that Nvidia is tied to in some important ways, namely NVLink.) Nvidia does not break out Tesla and GRID sales separately, but we reckon GRID revenues were flat sequentially at around $83 million, and that the rest was Tesla units, up a smidgen sequentially at $333 million. If these estimates are close to reality, then the Tesla line alone now accounts for 15 percent of Nvidia’s revenues, and certainly a much higher proportion of its profits.
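
The 15 percent estimate above follows directly from the article's own figures: $416 million of datacenter revenue in the quarter, minus an estimated $83 million from GRID, set against $2.23 billion of total quarterly revenue:

```python
# Reproducing the article's estimate that the Tesla line is ~15% of revenue.
# All figures in millions of dollars, taken from the article itself.
datacenter_m = 416        # Datacenter division revenue for the quarter
grid_estimate_m = 83      # article's estimate of flat sequential GRID revenue
total_revenue_m = 2230    # total quarterly revenue ($2.23B)

tesla_estimate_m = datacenter_m - grid_estimate_m  # = 333, matching the article
tesla_share = tesla_estimate_m / total_revenue_m
print(f"Estimated Tesla revenue: ${tesla_estimate_m}M ({tesla_share:.0%} of total)")
```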

For the quarter, Nvidia sold just under $1.9 billion in GPU products, and $333 million in Tegra CPU-GPU hybrid products that are used in handheld gaming devices, drones, autonomous driving platforms and other things we either don’t care about or hate here at The Next Platform. The Professional Visualization division of Nvidia, which we do care about and which sells Quadro workstation GPUs, posted $235 million in sales, up 10 percent year on year, and the core gaming business, driven by the GeForce GPU cards for PCs that make Nvidia’s ever-expanding business possible, had $1.19 billion in sales, up 52 percent. The OEM and IP business was a mixed bag: the quarterly $66 million in royalty payments from Intel is gone now, but with specialized GPU sales aimed at blockchain and cryptocurrency applications coming in at around $150 million (of the $251 million of OEM and IP revenue in the period), the overall Nvidia business made up for what has to be called a minor slowdown in the Datacenter division (despite its growth) and the absence of the Intel cash.

A lot of people think that blockchain and cryptocurrency is a fad, but it isn’t, and frankly it will be part of people’s platform and we will be looking into it in some detail without getting all gee-whiz about it.

“Cryptocurrency and blockchain is here to stay,” Huang said on the call. “The market need for it is going to grow, and over time it will become quite large. It is very clear that new currencies will come to market, and it is very clear that the GPU is just fantastic at cryptography. And as these new algorithms are being developed, the GPU is really quite ideal for it. And so this is a market that is not likely to go away anytime soon, and the only thing that we can probably expect is that there will be more currencies to come. It will come in a whole lot of different nations. It will emerge from time to time, and the GPU is really quite great for it. Our strategy is to stay very, very close to the market. We understand its dynamics really well. And we offer the coin miners a special coin-mining SKU. We know this market’s every single move and we know its dynamics.”

It is not clear if these mining SKUs are based on Pascal or Volta GPUs. Huang said that the Volta GPUs were fully ramped, but what that really means is that the Volta-based Tesla V100 accelerators used for very high end HPC and AI gear are shipping; you can’t really say that the Voltas are fully ramped until there are desktop, workstation, and Tegra units in the market. That could take some time, and it is significant that at the GPU Technology Conference back in May Nvidia did not provide even a hint of a product roadmap. The transition to Volta could take a long time, and a lot depends on what the competition – particularly that from AMD and Intel – does.

We know one thing for sure. Nvidia is on track to be a $10 billion company this fiscal year, with maybe 25 percent of that coming in as net income, and that is quite a feat. Nvidia’s Datacenter division is one of the reasons it can grow that much and be a respectably profitable company. Raking in $2.23 billion in sales and bringing $638 million of that to the bottom line is the second in four steps to get there.


From: Glenn Petersen | 8/29/2017 4:36:25 PM
   of 1622
Nvidia to Play Big Role in ‘Huge’ Wal-Mart Cloud Push, Says Global Equities

Wal-Mart is building out its cloud computing network, OneOps, to one-tenth the size of Amazon's AWS, writes Trip Chowdhry of Global Equities, and Nvidia GPU chips will play a major role, he believes.

By Tiernan Ray
Aug. 29, 2017 3:26 p.m. ET

Trip Chowdhry of the boutique firm Global Equities Research today reiterates his upbeat view of Nvidia ( NVDA), after gathering details about what he expects is Wal-Mart’s (WMT) increasing usage of the company’s graphics chips, or GPUs, for machine learning.

"Within [the] next 6 months or so, Walmart is going full steam with DNN (Deep Neural Networks) and will be creating its own NVDA GPU Clusters on Walmart Cloud,” writes Chowdhry, referring to a cloud computing network the company acquired in 2013 called “OneOps."

"This is incrementally positive for NVDA's GPU Business,” he writes.

Without citing specific sources, Chowdhry offers some details he's gleaned from researching Wal-Mart’s setup:

- Walmart’s NVDA GPU Farm will be about 1/10th the size of the AMZN-AWS GPU Cloud, which is huge!!
- Walmart NVDA GPU Clusters will run Ubuntu Linux, not Red Hat Linux.
- Walmart is working with Anaconda to create custom DNNs (Deep Neural Networks) and Jupyter Notebooks.
- Walmart will be running a hybrid of CNNs (Convolutional Neural Networks) and RNNs (Recurrent Neural Networks).

Chowdhry advises that “investors should not underestimate the Technology acumen of Walmart Software Developers,” adding, "we have seen them present at various conferences, they are as smart as Google or Facebook engineers."

Nvidia stock today is up 40 cents at $165.34.


To: Glenn Petersen who wrote (1557) | 8/30/2017 12:07:21 PM
From: zzpat
   of 1622
IBM stock dropped almost 30% in the past five years. Why would investors want IBM to buy their company? To turn IBM around?


From: Glenn Petersen | 9/1/2017 4:03:28 PM
   of 1622
Why a 24-Year-Old Chipmaker Is One of Tech’s Hot Prospects

Nvidia, a maker of graphics processing units, is riding an artificial intelligence boom to put its chips in drones, robots and self-driving cars.

New York Times
SEPT. 1, 2017

Nvidia’s new Volta computer chip, which, according to the company, cost an estimated $3 billion to develop. Credit Christie Hemm Klok for The New York Times

SANTA CLARA, Calif. — Engineers at CTA, an imaging-technology start-up in Poland, are trying to popularize a more comfortable alternative to the colonoscopy. To do so, they are using computer chips that are best known to video game fans.

The chips are made by the Silicon Valley company Nvidia. Its technology can help sift speedily through images taken by pill-size sensors that patients swallow, allowing doctors to detect intestinal disorders 70 percent faster than if they pored over videos. As a result, procedures cost less and diagnoses are more accurate, said Mateusz Marmolowski, CTA’s chief executive.

Health care applications like the one CTA is pioneering are among Nvidia’s many new targets. The company’s chips — known as graphics processing units, or GPUs — are finding homes in drones, robots, self-driving cars, servers, supercomputers and virtual-reality gear. A key reason for their spread is how rapidly the chips can handle complex artificial-intelligence tasks like image, facial and speech recognition.

Excitement about A.I. applications has turned 24-year-old Nvidia into one of the technology sector’s hottest companies. Its stock-market value has swelled more than sevenfold in the past two years, topping $100 billion, and its revenue jumped 56 percent in the most recent quarter.

Nvidia’s success makes it stand out in a chip industry that has experienced a steady decline in sales of personal computers and a slowing in demand for smartphones. Intel, the world’s largest chip producer and a maker of the semiconductors that have long been the brains of machines like PCs, had revenue growth of just 9 percent in the most recent quarter.

A demonstration room on the Nvidia campus in Santa Clara, Calif. Excitement about the use of its chips in artificial intelligence applications has made Nvidia one of the tech sector’s hottest companies. Credit Christie Hemm Klok for The New York Times

“They are just cruising,” Hans Mosesmann, an analyst at Rosenblatt Securities, said of Nvidia, which he has tracked since it went public in 1999.

Driving the surge is Jen-Hsun Huang, an Nvidia founder and the company’s chief executive, whose strategic instincts, demanding personality and dark clothes prompt comparisons to Steve Jobs.

Mr. Huang — who, like Mr. Jobs at Apple, pushed for a striking headquarters building, which Nvidia will soon occupy — made a pivotal gamble more than 10 years ago on a series of modifications and software developments so that GPUs could handle chores beyond drawing images on a computer screen.

“The cost to the company was incredible,” said Mr. Huang, 54, who estimated that Nvidia had spent $500 million a year on the effort, known broadly as CUDA (for compute unified device architecture), when the company’s total revenue was around $3 billion. Nvidia puts its total spending on turning GPUs into more general-purpose computing tools at nearly $10 billion since CUDA was introduced.

Mr. Huang bet on CUDA as the computing landscape was undergoing broad changes. Intel rose to dominance in large part because of improvements in computing speed that accompanied what is known as Moore’s Law: the observation that, through most of the industry’s history, manufacturers packed twice as many transistors onto chips roughly every two years. Those improvements in speed have now slowed.

Nvidia’s chief executive, Jen-Hsun Huang, made a pivotal bet more than 10 years ago on a series of modifications and software developments to the company’s graphics processing units, or GPUs. Credit Ethan Miller/Getty Images

The slowdown led designers to start dreaming up more specialized chips that could work alongside Intel processors and wring more benefits from the miniaturization of chip circuitry. Nvidia, which repurposed existing chips instead of starting from scratch, had a big head start. Using its chips and software it developed as part of the CUDA effort, the company gradually created a technology platform that became popular with many programmers and companies.

“They really were well led,” said John L. Hennessy, a computer scientist who stepped down as Stanford University’s president last year.

Now, Nvidia chips are pushing into new corporate applications. German business software giant SAP, for example, is promoting an artificial-intelligence technique called deep learning and using Nvidia GPUs for tasks like accelerating accounts-payable processes and matching resumes to job openings.

SAP has also demonstrated Nvidia-powered software to spot company logos in broadcasts of sports like basketball or soccer, so advertisers can learn about their brands’ exposure during games and take steps to try to improve it.

“That could not be done before,” said Juergen Mueller, SAP’s chief innovation officer.

Such applications go far beyond the original ambitions of Mr. Huang, who was born in Taiwan and studied electrical engineering at Oregon State University and Stanford before taking jobs at Silicon Valley chipmakers. He started Nvidia with Chris Malachowsky and Curtis Priem in 1993, setting out initially to help PCs offer visual effects to rival those of dedicated video game consoles.

The company’s original product was a dud, Mr. Malachowsky said, and the graphics market attracted a mob of rivals.

But Nvidia retooled its products and strategy and gradually separated itself from the competition to become the clear leader in the GPU-accelerator cards used in gaming PCs.

GPUs generate triangles to form wireframe structures, simulating objects and applying colors to pixels on a display screen. To do that, many simple instructions must be executed in parallel, which is why graphics chips evolved with many tiny processors. A new GPU announced by Nvidia in May, called Volta, has more than 5,000 such processors; a new, high-end Intel server chip, by contrast, has just 28 larger, general-purpose processor cores.
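
The parallelism described above can be illustrated with a toy example: each pixel's shade depends only on its own coordinates, so all pixels can be computed independently. This is plain Python for clarity, not actual GPU code, and the shading function is a made-up example:

```python
# Toy illustration of the data parallelism described above: shading applies
# the same small, independent operation to every pixel, which is why the
# work maps naturally onto thousands of tiny processors.
WIDTH, HEIGHT = 8, 4

def shade(x, y):
    # Hypothetical per-pixel operation: a simple horizontal brightness ramp.
    return int(255 * x / (WIDTH - 1))

# No pixel depends on any other pixel's result - exactly the property that
# lets a GPU execute all of these per-pixel operations concurrently.
framebuffer = [[shade(x, y) for x in range(WIDTH)] for y in range(HEIGHT)]
print(framebuffer[0])  # [0, 36, 72, 109, 145, 182, 218, 255]
```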

Nvidia began its CUDA push in 2004 after hiring Ian Buck, a Stanford doctoral student and company intern who had worked on a programming challenge that involved making it easier to harness a GPU’s many calculating engines. Nvidia soon made changes to its chips and developed software aids, including support for a standard programming language rather than the arcane tools used to issue commands to graphics chips.

The company built CUDA into consumer GPUs and high-end products. That decision was critical, Mr. Buck said, because it meant researchers and students who owned laptops or desktop PCs for gaming could tinker on software in campus labs and dorm rooms. Nvidia also convinced many universities to offer courses in its new programming techniques.

Nvidia’s new headquarters in Santa Clara during construction last year. The company’s stock-market value has swelled more than sevenfold in the past two years. Credit Ramin Rahimian for The New York Times

Programmers gradually adopted GPUs for applications used in, among other things, climate modeling and oil and gas discovery. A new phase began in 2012 after Canadian researchers began to apply CUDA and GPUs to unusually large neural networks, the many-layered software required for deep learning.

Those systems are trained to perform tricks like spotting a face by exposure to millions of images instead of through definitions established by programmers. Before the emergence of GPUs, Mr. Buck said, training such a system might take an entire semester.

Aided by the new technology, researchers can now complete the process in weeks, days or even hours.

“I can’t imagine how we’d do it without using GPUs,” said Silvio Savarese, an associate professor at Stanford who directs the SAIL-Toyota Center for A.I. Research at the university.

Competitors argue that the A.I. battle among chipmakers has barely begun.

Intel, whose standard chips are widely used for A.I. tasks, has also spent heavily to buy Altera, a maker of programmable chips; start-ups specializing in deep learning and machine vision; and the Israeli car technology supplier Mobileye.

Google recently unveiled the second version of an internally developed A.I. chip that helped beat the world’s best player of the game Go. The search giant claims the chip has significant advantages over GPUs in some applications. Start-ups like Wave Computing make similar claims.

But Nvidia will not be easy to dislodge. For one thing, the company can afford to spend more than most of its A.I. rivals on chips — Mr. Huang estimated Nvidia had plowed an industry record $3 billion into Volta — because of the steady flow of revenue from the still-growing gaming market.

Nvidia said more than 500,000 developers are now using GPUs. And the company expects other chipmakers to help expand its fan base once it freely distributes an open-source chip design they can use for low-end deep learning applications — light-bulbs or cameras, for instance — that it does not plan to target itself.

A.I., Mr. Huang said, “will affect every company in the world. We won’t address all of it.”


From: Glenn Petersen | 9/1/2017 4:44:46 PM
1 Recommendation   of 1622
RBC: Nvidia is set to dominate the next wave of blockchain technology (NVDA)


To: Glenn Petersen who wrote (1561) | 9/2/2017 10:51:28 AM
From: zzpat
   of 1622
NVDA's growth won't come from Bitcoin or blockchain. It'll come from AI. It's a hard company to put my finger on: good CEO, the company is innovating, it's constantly expanding its market into new things (like Bitcoin), and the stock is insanely high.

I like autonomous cars, medical computers, facial identification, etc. AI is the future. The new Internet.


From: Glenn Petersen | 9/8/2017 4:36:44 PM
   of 1622
Nvidia and Avitas Systems partner on using AI to help robots spot defects

by Darrell Etherington ( @etherington)
September 8, 2017

Automated inspection company Avitas Systems, a GE Ventures company, is using Nvidia’s DGX-1 and DGX Station to train its neural-network-based artificial intelligence to quickly and consistently identify defects in industrial equipment.

Avitas Systems uses a range of robotic equipment to monitor things like oil and gas pipelines, coolant towers and other crucial equipment, including aerial and underwater drones – and Nvidia’s help means it can create software that can help these bots spot the slightest bit of corrosion or variance in equipment before it becomes a dangerous problem.

Alex Tepper, Avitas founder and head of corporate and business development, explained in an interview that GE has been helping customers with industrial inspections for a long time, and has found that these customers are spending hundreds of millions of dollars on inspections that involve a person driving out to, or flying a helicopter above, an asset. These aren’t methods that generate fool-proof results, of course, and there’s a lot that can’t be seen reliably with the naked eye.

“We’re analyzing the results from those robotics to do automated defect recognition, which is a fancy way of saying interpreting those sensor results, applying AI to them, so that we can figure out if there are any defects being sensed, whether it’s corrosion, micro-fractures, hot and cold spots – oftentimes defects that the human eye can’t see.”

UAV over flare stack

Additionally, Avitas can provide reliable replication of observation conditions with automated inspection methods – robots can take the same photograph or sensor reading from the same perspective over and over again. And they can help shift defect monitoring from a time-based operation to a risk-based one: Instead of sending out a person to check an asset on a pre-defined schedule, automated observation can target high-risk assets and keep them under pretty much constant watch.

Nvidia’s role in all this is processing of the resulting data via its DGX-1 supercomputer, and also through its DGX Station, which provides unique capabilities by offering analysis and processing capabilities at the edge – decoupled from the data center. Tepper says that more and more of their work involves running AI applications in areas where there isn’t a reliable connection to a central server – or even any connection at all, in some cases.

The DGX Station packs the computing power of hundreds of CPUs into a power-efficient, portable form factor, and it’s just the start of Nvidia’s ambitions to bring supercomputing power to the field.

“Avitas started with a prototype version of our station, and soon they’ll be getting an upgrade to our DGX Station with Volta [launched in May], and that’ll be a huge performance gain,” explained Nvidia GM of DGX Systems Jim McHugh. “I think Alex and team are going to see a 3x performance in the activity there at a minimum, and it could even be greater for the inference activity they’re seeing.”


Copyright © 1995-2018 Knight Sac Media. All rights reserved. Stock quotes are delayed at least 15 minutes - See Terms of Use.