Technology Stocks: NVIDIA Corporation (NVDA)


From: Glenn Petersen | 9/18/2017 9:04:33 PM
 


Chips Off the Old Block: Computers Are Taking Design Cues From Human Brains

New technologies are testing the limits of computer semiconductors. To deal with that, researchers have gone looking for ideas from nature.

By CADE METZ
New York Times
SEPT. 16, 2017



After years of stagnation, the computer is evolving again, prompting some of the world’s largest tech companies to turn to biology for insights. Credit Minh Uong/The New York Times
______________________________

SAN FRANCISCO — We expect a lot from our computers these days. They should talk to us, recognize everything from faces to flowers, and maybe soon do the driving. All this artificial intelligence requires an enormous amount of computing power, stretching the limits of even the most modern machines.

Now, some of the world’s largest tech companies are taking a cue from biology as they respond to these growing demands. They are rethinking the very nature of computers and are building machines that look more like the human brain, where a central brain stem oversees the nervous system and offloads particular tasks — like hearing and seeing — to the surrounding cortex.

After years of stagnation, the computer is evolving again, and this behind-the-scenes migration to a new kind of machine will have broad and lasting implications. It will allow work on artificially intelligent systems to accelerate, so the dream of machines that can navigate the physical world by themselves can one day come true.

This migration could also diminish the power of Intel, the longtime giant of chip design and manufacturing, and fundamentally remake the $335 billion a year semiconductor industry that sits at the heart of all things tech, from the data centers that drive the internet to your iPhone to the virtual reality headsets and flying drones of tomorrow.

“This is an enormous change,” said John Hennessy, the former Stanford University president who wrote an authoritative book on computer design in the mid-1990s and is now a member of the board at Alphabet, Google’s parent company. “The existing approach is out of steam, and people are trying to re-architect the system.”



Xuedong Huang, left, and Doug Burger of Microsoft are among the employees leading the company’s efforts to develop specialized chips. Credit Ian C. Bates for The New York Times
_________________________________

The existing approach has had a pretty nice run. For about half a century, computer makers have built systems around a single, do-it-all chip — the central processing unit — from a company like Intel, one of the world’s biggest semiconductor makers. That’s what you’ll find in the middle of your own laptop computer or smartphone.

Now, computer engineers are fashioning more complex systems. Rather than funneling all tasks through one beefy chip made by Intel, newer machines are dividing work into tiny pieces and spreading them among vast farms of simpler, specialized chips that consume less power.

Changes inside Google’s giant data centers are a harbinger of what is to come for the rest of the industry. Inside most of Google’s servers, there is still a central processor. But enormous banks of custom-built chips work alongside them, running the computer algorithms that drive speech recognition and other forms of artificial intelligence.

Google reached this point out of necessity. For years, the company had operated the world’s largest computer network — an empire of data centers and cables that stretched from California to Finland to Singapore. But for one Google researcher, it was much too small.

In 2011, Jeff Dean, one of the company’s most celebrated engineers, led a research team that explored the idea of neural networks — essentially computer algorithms that can learn tasks on their own. They could be useful for a number of things, like recognizing the words spoken into smartphones or the faces in a photograph.

In a matter of months, Mr. Dean and his team built a service that could recognize spoken words far more accurately than Google’s existing service. But there was a catch: If the world’s more than one billion phones that operated on Google’s Android software used the new service just three minutes a day, Mr. Dean realized, Google would have to double its data center capacity in order to support it.
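
A rough back-of-the-envelope version of that capacity math is sketched below; the one-billion-phone and three-minutes-a-day figures come from the article, while everything else is an illustrative assumption rather than a Google number.

```python
# Rough sketch of the capacity estimate described above. The phone count and
# minutes-per-day come from the article; the rest is illustrative only.
phones = 1_000_000_000                 # "more than one billion" Android phones
minutes_per_day = 3                    # hypothetical per-phone usage

audio_seconds_per_day = phones * minutes_per_day * 60
days_of_audio_per_day = audio_seconds_per_day / 86_400

print(f"{audio_seconds_per_day:.1e} seconds of speech per day")
print(f"roughly {days_of_audio_per_day:,.0f} days of audio to process, every day")
# -> 1.8e+11 seconds of speech per day
# -> roughly 2,083,333 days of audio to process, every day
```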

“We need another Google,” Mr. Dean told Urs Hölzle, the Swiss-born computer scientist who oversaw the company’s data center empire, according to someone who attended the meeting. So Mr. Dean proposed an alternative: Google could build its own computer chip just for running this kind of artificial intelligence.

But what began inside data centers is starting to shift other parts of the tech landscape. Over the next few years, companies like Google, Apple and Samsung will build phones with specialized A.I. chips. Microsoft is designing such a chip specifically for an augmented-reality headset. And everyone from Google to Toyota is building autonomous cars that will need similar chips.

This trend toward specialty chips and a new computer architecture could lead to a “Cambrian explosion” of artificial intelligence, said Gill Pratt, who was a program manager at Darpa, a research arm of the United States Department of Defense, and now works on driverless cars at Toyota. As he sees it, machines that spread computations across vast numbers of tiny, low-power chips can operate more like the human brain, which efficiently uses the energy at its disposal.

“In the brain, energy efficiency is the key,” he said during a recent interview at Toyota’s new research center in Silicon Valley.

Change on the Horizon

There are many kinds of silicon chips. There are chips that store information. There are chips that perform basic tasks in toys and televisions. And there are chips that run various processes for computers, from the supercomputers used to create models for global warming to personal computers, internet servers and smartphones.



An older board and chip combination at Microsoft’s offices. Chips now being developed by the company can be reprogrammed for new tasks on the fly. Credit Ian C. Bates for The New York Times
_________________________________

For years, the central processing units, or C.P.U.s, that ran PCs and similar devices were where the money was. And there had not been much need for change.

In accordance with Moore’s Law, the oft-quoted maxim from Intel co-founder Gordon Moore, the number of transistors on a computer chip had doubled every two years or so, and that provided steadily improved performance for decades. As performance improved, chips consumed about the same amount of power, according to another, lesser-known law of chip design called Dennard scaling, named for the longtime IBM researcher Robert Dennard.

By 2010, however, doubling the number of transistors was taking much longer than Moore’s Law predicted. Dennard’s scaling maxim had also been upended as chip designers ran into the limits of the physical materials they used to build processors. The result: If a company wanted more computing power, it could not just upgrade its processors. It needed more computers, more space and more electricity.
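
For reference, the doubling rule works out as in this minimal sketch; the 2010 starting count is an arbitrary round number, not a measurement of any particular chip.

```python
# Minimal sketch of Moore's Law as described above: transistor counts
# doubling every two years or so. The 2010 starting figure is assumed.
def projected_transistors(start_count, start_year, year, doubling_years=2.0):
    return start_count * 2 ** ((year - start_year) / doubling_years)

start = 1e9  # assume roughly a billion transistors on a 2010-era chip
for year in (2010, 2012, 2014, 2016, 2020):
    print(year, f"{projected_transistors(start, 2010, year):.1e}")
# 2010 1.0e+09, 2012 2.0e+09, 2014 4.0e+09, 2016 8.0e+09, 2020 3.2e+10
```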

Researchers in industry and academia were working to extend Moore’s Law, exploring entirely new chip materials and design techniques. But Doug Burger, a researcher at Microsoft, had another idea: Rather than rely on the steady evolution of the central processor, as the industry had been doing since the 1960s, why not move some of the load onto specialized chips?

During his Christmas vacation in 2010, Mr. Burger, working with a few other chip researchers inside Microsoft, began exploring new hardware that could accelerate the performance of Bing, the company’s internet search engine.

At the time, Microsoft was just beginning to improve Bing using machine-learning algorithms (neural networks are a type of machine learning) that could improve search results by analyzing the way people used the service. Though these algorithms were less demanding than the neural networks that would later remake the internet, existing chips had trouble keeping up.

Mr. Burger and his team explored several options but eventually settled on something called Field Programmable Gate Arrays, or F.P.G.A.s.: chips that could be reprogrammed for new jobs on the fly. Microsoft builds software, like Windows, that runs on an Intel C.P.U. But such software cannot reprogram the chip, since it is hard-wired to perform only certain tasks.

With an F.P.G.A., Microsoft could change the way the chip works. It could program the chip to be really good at executing particular machine learning algorithms. Then, it could reprogram the chip to be really good at running logic that sends the millions and millions of data packets across its computer network. It was the same chip but it behaved in a different way.
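
Conceptually, that flexibility looks something like the sketch below. This is only an illustration: real F.P.G.A.s are configured with hardware description languages and vendor toolchains, not Python, and the class and configuration names here are hypothetical, not Microsoft's.

```python
# Conceptual illustration only: real F.P.G.A.s are configured with hardware
# description languages (e.g. Verilog) and vendor toolchains, not Python.
# The class and configuration names below are hypothetical.
class ReprogrammableAccelerator:
    """Stand-in for a chip whose hardware function can be swapped at runtime."""
    def __init__(self):
        self.current_role = None

    def load_configuration(self, role: str) -> None:
        # On real hardware this step would load a new bitstream onto the chip.
        self.current_role = role
        print(f"chip is now wired to act as: {role}")

chip = ReprogrammableAccelerator()
chip.load_configuration("search-ranking model scoring")  # one job today
chip.load_configuration("network packet routing")        # a different job later
```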

Microsoft started to install the chips en masse in 2015. Now, just about every new server loaded into a Microsoft data center includes one of these programmable chips. They help choose the results when you search Bing, and they help Azure, Microsoft’s cloud-computing service, shuttle information across its network of underlying machines.

Teaching Computers to Listen

In fall 2016, another team of Microsoft researchers — mirroring the work done by Jeff Dean at Google — built a neural network that could, by one measure at least, recognize spoken words more accurately than the average human could.

Xuedong Huang, a speech-recognition specialist who was born in China, led the effort, and shortly after the team published a paper describing its work, he had dinner in the hills above Palo Alto, Calif., with his old friend Jen-Hsun Huang (no relation), the chief executive of the chipmaker Nvidia. The men had reason to celebrate, and they toasted with a bottle of champagne.



Jeff Dean, one of Google’s most celebrated engineers, said the company should develop a chip for running a type of artificial intelligence; right, Google’s Tensor Processing Unit, or T.P.U. Credit Ryan Young for The New York Times
____________________________
Xuedong Huang and his fellow Microsoft researchers had trained their speech-recognition service using large numbers of specialty chips supplied by Nvidia, rather than relying heavily on ordinary Intel chips. Their breakthrough would not have been possible had they not made that change.

“We closed the gap with humans in about a year,” Microsoft’s Mr. Huang said. “If we didn’t have the weapon — the infrastructure — it would have taken at least five years.”

Because systems that rely on neural networks can learn largely on their own, they can evolve more quickly than traditional services. They are not as reliant on engineers writing endless lines of code that explain how they should behave.

But there is a wrinkle: Training neural networks this way requires extensive trial and error. To create one that is able to recognize words as well as a human can, researchers must train it repeatedly, tweaking the algorithms and improving the training data over and over. At any given time, this process unfolds over hundreds of algorithms. That requires enormous computing power, and if companies like Microsoft use standard-issue chips to do it, the process takes far too long because the chips cannot handle the load and too much electrical power is consumed.
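
In miniature, that trial-and-error loop looks something like the sketch below; the toy model, data and hyperparameter grid are all made up for illustration.

```python
# Miniature sketch of the trial-and-error training described above: many
# full training runs with different settings, keeping the best. The toy
# model, data and hyperparameter grid are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))                    # toy "training data"
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # toy labels

def train_and_score(lr, steps):
    """One complete training run of a tiny logistic-regression model."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)          # gradient step
    p = 1 / (1 + np.exp(-X @ w))
    return float(((p > 0.5) == y).mean())         # accuracy on the toy data

# Each candidate is a full training run; production systems sweep hundreds.
candidates = [(lr, steps) for lr in (0.01, 0.1, 1.0) for steps in (50, 500)]
best = max(candidates, key=lambda c: train_and_score(*c))
print("best settings found:", best)
```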

So, the leading internet companies are now training their neural networks with help from another type of chip called a graphics processing unit, or G.P.U. These low-power chips — usually made by Nvidia — were originally designed to render images for games and other software, and they worked hand-in-hand with the chip — usually made by Intel — at the center of a computer. G.P.U.s can process the math required by neural networks far more efficiently than C.P.U.s.
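
The "math required by neural networks" is dominated by large matrix multiplications, the kind of uniform, highly parallel arithmetic G.P.U.s were built to perform. Below is a minimal sketch of one layer's forward pass; the layer sizes are arbitrary.

```python
# Minimal sketch of the arithmetic inside one neural-network layer: a large
# matrix multiply plus a simple nonlinearity. Sizes are arbitrary. On a
# G.P.U. this multiply is spread across thousands of small cores at once.
import numpy as np

batch, d_in, d_out = 256, 1024, 1024
activations = np.random.rand(batch, d_in).astype(np.float32)
weights = np.random.rand(d_in, d_out).astype(np.float32)

layer_output = np.maximum(activations @ weights, 0.0)   # matmul + ReLU
print(layer_output.shape)                                # (256, 1024)

# Roughly 2 * batch * d_in * d_out floating-point operations for this layer:
print(f"~{2 * batch * d_in * d_out:.1e} FLOPs")          # ~5.4e+08
```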

Nvidia is thriving as a result, and it is now selling large numbers of G.P.U.s to the internet giants of the United States and the biggest online companies around the world, in China most notably. The company’s quarterly revenue from data center sales tripled to $409 million over the past year.


“This is a little like being right there at the beginning of the internet,” Jen-Hsun Huang said in a recent interview. In other words, the tech landscape is changing rapidly, and Nvidia is at the heart of that change.

Creating Specialized Chips

G.P.U.s are the primary vehicles that companies use to teach their neural networks a particular task, but that is only part of the process. Once a neural network is trained for a task, it must perform it, and that requires a different kind of computing power.

After training a speech-recognition algorithm, for example, Microsoft offers it up as an online service, and it actually starts identifying commands that people speak into their smartphones. G.P.U.s are not quite as efficient during this stage of the process. So, many companies are now building chips specifically to do what the other chips have learned.
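
A sketch of the two phases, using a toy linear model with illustrative numbers: training is many repeated passes over the data, while serving the trained model (inference) is a single cheap computation per request.

```python
# Sketch of the two phases described above, with a toy linear model.
# Training: many repeated passes over the data. Inference: one cheap
# computation per incoming request. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1_000, 32))                  # toy training set
true_w = rng.normal(size=32)
y = X @ true_w + 0.1 * rng.normal(size=1_000)

# --- Training phase: thousands of gradient steps over the whole dataset ---
w = np.zeros(32)
for _ in range(2_000):
    w -= 0.01 * X.T @ (X @ w - y) / len(y)

# --- Inference phase: a single dot product per request ---
new_request = rng.normal(size=32)
print("prediction:", new_request @ w)
```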

Google built its own specialty chip, a Tensor Processing Unit, or T.P.U. Nvidia is building a similar chip. And Microsoft has reprogrammed specialized chips from Altera, which was acquired by Intel, so that it too can run neural networks more easily.

Other companies are following suit. Qualcomm, which specializes in chips for smartphones, and a number of start-ups are also working on A.I. chips, hoping to grab their piece of the rapidly expanding market. The tech research firm IDC predicts that revenue from servers equipped with alternative chips will reach $6.8 billion by 2021, about 10 percent of the overall server market.



Bart Sano, the vice president of engineering who leads hardware and software development for Google’s network, acknowledged that specialty chips were still a relatively modest part of the company’s operation. Credit Ryan Young for The New York Times
_______________________________


Across Microsoft’s global network of machines, Mr. Burger pointed out, alternative chips are still a relatively modest part of the operation. And Bart Sano, the vice president of engineering who leads hardware and software development for Google’s network, said much the same about the chips deployed at its data centers.

Mike Mayberry, who leads Intel Labs, played down the shift toward alternative processors, perhaps because Intel controls more than 90 percent of the data-center market, making it by far the largest seller of traditional chips. He said that if central processors were modified the right way, they could handle new tasks without added help.

But this new breed of silicon is spreading rapidly, and Intel is increasingly a company in conflict with itself. It is in some ways denying that the market is changing, but nonetheless shifting its business to keep up with the change.

Two years ago, Intel spent $16.7 billion to acquire Altera, which builds the programmable chips that Microsoft uses. It was Intel’s largest acquisition ever. Last year, the company paid a reported $408 million buying Nervana, a company that was exploring a chip just for executing neural networks. Now, led by the Nervana team, Intel is developing a dedicated chip for training and executing neural networks.

“They have the traditional big-company problem,” said Bill Coughran, a partner at the Silicon Valley venture capital firm Sequoia Capital who spent nearly a decade helping to oversee Google’s online infrastructure, referring to Intel. “They need to figure out how to move into the new and growing areas without damaging their traditional business.”

Intel’s internal conflict is most apparent when company officials discuss the decline of Moore’s Law. During a recent interview with The New York Times, Naveen Rao, the Nervana founder and now an Intel executive, said Intel could squeeze “a few more years” out of Moore’s Law. Officially, the company’s position is that improvements in traditional chips will continue well into the next decade.

Mr. Mayberry of Intel also argued that the use of additional chips was not new. In the past, he said, computer makers used separate chips for tasks like processing audio.

But now the scope of the trend is significantly larger. And it is changing the market in new ways. Intel is competing not only with chipmakers like Nvidia and Qualcomm, but also with companies like Google and Microsoft.

Google is designing the second generation of its T.P.U. chips. Later this year, the company said, any business or developer that is a customer of its cloud-computing service will be able to use the new chips to run its software.

While this shift is happening mostly inside the massive data centers that underpin the internet, it is probably a matter of time before it permeates the broader industry.

The hope is that this new breed of mobile chip can help devices handle more, and more complex, tasks on their own, without calling back to distant data centers: phones recognizing spoken commands without accessing the internet; driverless cars recognizing the world around them with a speed and accuracy that is not possible now.

In other words, a driverless car needs cameras and radar and lasers. But it also needs a brain.

Follow Cade Metz on Twitter: @CadeMetz

A version of this article appears in print on September 17, 2017, on Page BU1 of the New York edition with the headline: Chip Off the Old Block

nytimes.com



To: zzpat who wrote (1565) | 9/18/2017 11:16:58 PM
From: Glenn Petersen
 
Another day, another upgrade:

Nvidia Jumps 4%: Merrill Lynch Ups Target to $210 on Volta, Data Center Capex

Stock price targets keep moving higher for Nvidia, with Merrill Lynch's Vivek Arya raising his today to $210, following Evercore ISI's C.J. Muse raising his target to $250 on Friday.

By Tiernan Ray
Barron's
Sept. 18, 2017 10:08 a.m. ET

Shares of GPU titan Nvidia (NVDA) are up $6.42, or almost 4%, at $186.53, after Merrill Lynch's Vivek Arya this morning reiterated his Buy rating on the shares and raised his price target to $210 from $185, citing potential for estimates to go higher in the latter half of this year and next year.

Merrill Lynch does not provide research reports to media. However, according to a summary by TheFlyontheWall, Arya notes in particular several positive potential developments, such as the actual rollout of the newer “Volta” parts, various reviews lauding the performance of that product, and an “acceleration in data center capital spending.”

Today's boost in target at Merrill follows one on Friday from Evercore ISI’s C.J. Muse, who had raised his target to $250 from $180, based on his belief the company is creating “the industry standard” in artificial intelligence computing.

Also late Friday, RBC Capital Markets’s Mitch Steves argued that a crackdown on bitcoin exchanges by the Chinese government may actually fuel demand for Nvidia gear in order to “mine” bitcoin and other crypto-currencies.

barrons.com



To: Glenn Petersen who wrote (1567) | 9/19/2017 9:23:48 AM
From: zzpat
 
I think these numbers are ridiculous but this is what happens after earnings conference calls. I don't trust any of the analysts. What I've seen are downgrades that push the stock lower and then analysts going from strong sell to strong buy after they've pushed the stock lower. The same will happen here. Their strong buys will go to strong sells based entirely on price, not the fundamentals.



From: hollyhunter | 9/26/2017 11:42:33 AM
1 Recommendation
 
Nvidia Corp. (NVDA) Chief Executive Jensen Huang announced Monday night that the company will be supplying its artificial intelligence-focused GPU hardware to several of China’s largest cloud-computing providers, along with new partnerships with major server-hardware manufacturers. stoxline.com



From: Glenn Petersen | 10/2/2017 9:01:02 PM
2 Recommendations
 
To Compete With New Rivals, Chipmaker Nvidia Shares Its Secrets

Author: Tom Simonite
Wired
September 29, 2017



Getty Images
_______________________________

Five years ago, Nvidia was best known as a maker of chips to power videogame graphics in PCs. Then researchers found its graphics chips were also good at powering deep learning, the software technique behind recent enthusiasm for artificial intelligence.

The discovery made Nvidia into the preferred seller of shovels for the AI gold rush that’s propelling dreams of self-driving cars, delivery drones and software that plays doctor. The company’s stock-market value has risen 10-fold in three years, to more than $100 billion.

That’s made Nvidia and the market it more-or-less stumbled into an attractive target. Longtime chip kingpin Intel and a stampede of startups are building and offering chips to power smart machines. Further competition comes from large tech companies designing their own AI chips. Google’s voice recognition and image search now run on in-house chips dubbed “tensor processing units,” while the face-unlock feature in Apple’s new iPhone is powered by a home-grown chip with a “neural engine.”

Nvidia’s latest countermove is counterintuitive. This week the company released as open source the designs to a chip module it made to power deep learning in cars, robots, and smaller connected devices such as cameras. That module, the DLA for deep learning accelerator, is somewhat analogous to Apple’s neural engine. Nvidia plans to start shipping it next year in a chip built into a new version of its Drive PX computer for self-driving cars, which Toyota plans to use in its autonomous-vehicle program.

Why give away this valuable intellectual property for free? Deepu Talla, Nvidia’s vice president for autonomous machines, says he wants to help AI chips reach more markets than Nvidia can accommodate itself. While his unit works to put the DLA in cars, robots, and drones, he expects others to build chips that put it into diverse markets ranging from security cameras to kitchen gadgets to medical devices. “There are going to be hundreds of billions of internet of things devices in the future,” says Talla. “We cannot address all the markets out there.”



Source: S&P CapitalIQ
____________________

One risk of helping other companies build new businesses is that they’ll start encroaching on your own. Talla says that doesn’t concern him because greater use of AI will mean more demand for Nvidia’s other hardware, such as the powerful graphic chips used to train deep learning software before it is deployed. “There’s no good deed that goes unpunished but net-net it’s a great thing because this will increase the adoption of AI,” says Talla. “We think we can rise higher.”

Mi Zhang, a professor at Michigan State University, calls open sourcing the DLA design a “very smart move.” He guesses that while researchers, startups, and even large companies will be tempted by Nvidia’s designs, they mostly won’t change them radically. That means they are likely to maintain compatibility with Nvidia’s software tools and other hardware, boosting the company’s influence.

Zhang says it makes sense that devices beyond cars and robots have much to gain from new forms of AI chip. He points to a recent project in his research group developing hearing aids that used learning algorithms to filter out noise. Deep-learning software was the best at smartly recognizing what to tune out, but the limitations of existing hearing aid-scale computer hardware made it too slow to be practical.

Creating a web of companies building on its chip designs would also help Nvidia undermine efforts by rivals to market AI chips and create their ecosystems around them. In a tweet this week, one Intel engineer called Nvidia’s open source tactic a “devastating blow” to startups working on deep learning chips.

It might also lead to new challenges for Intel. The company bought two such startups in the past year: Movidius, focused on image processing, and Mobileye, which makes chips and cameras for automated driving.

wired.com



From: Glenn Petersen | 10/4/2017 9:58:23 AM
1 Recommendation
 
Despite the hype, nobody is beating Nvidia in AI

Written by Dave Gershgorn
Quartz
October 02, 2017



NVIDIA CEO Jen-Hsun Huang shows NEW HGX with TESLA V100 VERSA GPU CLOUD COMPUTING as he talks about AI and gaming during the Computex Taipei exhibition at the world trade center in Taipei, Taiwan, Tuesday, May 30, 2017. (AP Photo/Chiang Ying-ying)
AI market in hand. (AP Photo/Chiang Ying-ying)
______________________

You have to wonder whether Nvidia is going to get sick of winning all the time.

The company’s stock price is up to $178—69% more than this time last year. Nvidia is riding high on its core technology, the graphics processing unit used in the machine-learning that powers the algorithms of Facebook and Google; partnerships with nearly every company keen on building self-driving cars; and freshly announced hardware deals with three of China’s biggest internet companies. Investors say this isn’t even the top for Nvidia: William Stein at SunTrust Robinson Humphrey predicts Nvidia’s revenue from selling server-grade GPUs to internet companies, which doubled last year, will continue to increase 61% annually until 2020.

Nvidia will likely see competition in the near future. At least 15 public companies and startups are looking to capture the market for a “second wave” of AI chips, which promise faster performance with decreased energy consumption, according to James Wang of investment firm ARK. Nvidia’s GPUs were originally developed to speed up graphics for gaming; the company then pivoted to machine learning. Competitors’ chips, however, are being custom-built for the purpose.

The most well-known of these next-generation chips is Google’s Tensor Processing Unit (TPU), which the company claims is 15-30 times faster than others’ central processing units (CPUs) and GPUs. Google explicitly mentioned performance improvements over Nvidia’s tech; Nvidia says the underlying tests were conducted on Nvidia’s old hardware. Either way, Google is now offering customers the option to rent use of TPUs through its cloud.

Intel, the CPU maker recently on a shopping spree for AI hardware startups—it bought Nervana Systems in 2016 and Mobileye in March 2017—also poses a threat. The company says it will release a new set of chips called Lake Crest later in 2017 specifically focused on AI, incorporating the technology it acquired through Nervana Systems. Intel is also hedging its bets by investing in neuromorphic computing, which uses chips that don’t rely on traditional microprocessor architecture but instead try to mimic neurons in the brain.

ARK predicts Nvidia will keep its technology ahead of the competition. Even disregarding the market advantage of capturing a strong initial customer base, Wang notes that the company is also continuing to increase the efficiency of GPU architecture at a rate fast enough to be competitive with new challengers. Nvidia has improved the efficiency of its GPU chips about 10x over the past four years.
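
For scale, the quoted 10x-in-four-years figure implies roughly a 1.8x improvement per year:

```python
# Compound rate implied by "about 10x over the past four years".
annual_factor = 10 ** (1 / 4)
print(f"~{annual_factor:.2f}x per year")   # -> ~1.78x per year
```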

Nvidia has also been investing since the mid-aughts in research to optimize how machine-learning frameworks, the software used to build AI programs, interact with the hardware, work that is critical to ensuring efficiency. It currently supports every major machine-learning framework; Intel supports four, AMD supports two, Qualcomm supports two, and Google supports only Google’s.

Since GPUs aren’t specifically built for machine learning, they can also pull double-duty in a datacenter as video- or image-processing hardware. TPUs are custom-built for AI only, which means they’re inefficient at tasks like transcoding video into different qualities or formats. Nvidia CEO Jen-Hsun Huang told investors in August that “a GPU is basically a TPU that does a lot more,” since many social networks are promoting video on their platforms.

“Until TPUs demonstrate an unambiguous lead over GPUs in independent tests, Nvidia should continue to dominate the deep-learning data center,” Wang writes, noting that AI chips for smaller devices outside of the datacenter are still ripe for startups to disrupt.

qz.com



From: Glenn Petersen | 10/10/2017 4:08:36 PM
 
Nvidia says its new supercomputer will enable the highest level of automated driving

No steering wheels, no pedals, no mirrors

by Andrew J. Hawkins
The Verge
Oct 10, 2017, 6:00am EDT



Nvidia Founder, President and CEO Jen-Hsun Huang delivers a keynote address at CES 2017. Photo by Ethan Miller/Getty Images
________________________________

Nvidia, one of the world’s best known manufacturers of computer graphics cards, announced a new, more powerful computing platform for use in autonomous vehicles. The company claims its new system, codenamed Pegasus, can be used to power Level 5, fully driverless cars without steering wheels, pedals, or mirrors.

The new iteration of the GPU maker’s Drive PX platform will deliver over 320 trillion operations per second, which amounts to more than 10 times its predecessor’s processing power. Pegasus will be marketed to the hundreds of automakers and tech companies that are currently developing self-driving cars, starting in the second half of 2018, the company says.


Nvidia’s promise of Level 5 autonomy shouldn’t be taken lightly. Most automakers and tech companies speak carefully about the levels of autonomy, avoiding claims on which they may not ultimately be able to deliver. Nothing on the road today that’s commercially available is higher than a Level 2. Audi says its new A8 sedan is Level 3 autonomous — but we have to take the company’s word for it because present regulations won’t allow the German automaker to turn it on. Most car companies have said they will probably skip Level 3 and 4 because it’s too dangerous, and go right to Level 5. So for Nvidia to state definitively it can deliver the highest level of autonomous driving starting next year is pretty staggering — and maybe a little bit reckless.

Presently, self-driving cars that don’t require any human intervention are only theoretical. This vision of the future, where the vehicle can handle every task in all possible conditions, is the one that is most appealing to futurists and tech evangelists. But it will take years, if not decades, before our roads and rules catch up to robotic cars that can roam freely without limitations.

Nvidia’s Drive PX Pegasus computing platform. Nvidia
________________________________

In a conference call with reporters Monday, Nvidia’s executives acknowledged that these driverless cars with their Level 5-empowering GPUs will most likely first be deployed in a ride-hailing capacity in limited settings, like college campuses or airports. But as soon as their life-saving potential is realized, they expect them to be rolled out onto more public roads. “These vehicles are going to save a lot of lives,” said Danny Shapiro, senior director of automated driving at Nvidia.

The type of computers produced by Nvidia and its competitors like Intel are arguably the most important part of the driverless car. Everything the vehicle “sees” with its sensors, all of the images, mapping data, and audio material picked up by its cameras, needs to be processed by giant PCs in order for the vehicle to make split-second decisions. All this processing must be done with multiple levels of redundancy to ensure the highest level of safety. This is why so many self-driving operators prefer SUVs, minivans, and other large wheelbase vehicles: autonomous cars need enormous space in the trunk for their big “brains.”

The trunk of a self-driving Ford Fusion. Sam Abuelsamid
_________________________________

But Nvidia claims to have shrunk down its GPU, making it an easier fit for production vehicles. Pegasus contains an amount of power equivalent to “a 100-server data center in the form-factor size of a license plate,” Shapiro said.

Nvidia began working on autonomous vehicles several years ago and has racked up partnerships with dozens of automakers and suppliers racing to develop self-driving cars, including Chinese search engine giant Baidu, Toyota, Audi, Tesla, and Volvo.

Nvidia’s original architecture for self-driving cars, introduced in 2015, is a supercomputer platform called Drive PX that can process all of the data coming from the vehicle’s cameras and sensors. The platform then uses an AI algorithm-based operating system and a cloud-based, high-definition 3D map to help the car understand its environment, know its location, and anticipate potential hazards while driving. The system’s software can be updated over the air — similar to how a smartphone’s operating system is updated — making the car become smarter over time.

A more powerful next-generation computer called Drive PX 2 — along with a suite of software tools and libraries aimed at speeding up the deployment of self-driving vehicles — followed in 2016. Nvidia has continued to push its tech further with the introduction last year of Xavier, a complete system-on-a-chip processor that is essentially an AI brain for self-driving cars. And Pegasus is the equivalent of two Xavier units, plus two next-generation discrete GPUs, Nvidia says. The new system was introduced at a GPU conference in Munich, Germany on Tuesday.

Nvidia also made two additional announcements at the conference: that it was partnering with Deutsche Post DHL Group and auto supplier ZF to deploy fully autonomous delivery trucks by 2019; and that it was offering early access to its virtual “Holodeck” technology to select designers and developers. (The Verge’s Adi Robertson wrote recently about the unlimited number of VR projects using “holodeck” terminology.)

theverge.com



From: Glenn Petersen | 10/11/2017 6:40:43 AM
 
NVIDIA opens up its Holodeck VR design suite

Designers can model and interact with people, robots and objects in real time.

Steve Dent, @stevetdent
Engadget
October 10, 2017

NVIDIA
___________________

Hardware makers have figured out that enterprises are the best way to make money off of VR and AR, not consumers. NVIDIA, a company that does both things well but has been particularly strong on the business side lately, has just opened up its Holodeck "intelligent" VR platform to select designers and developers. First unveiled in May, it allows for photorealistic graphics, haptics, real-world physics and multi-user collaboration.

That helps engineers and designers build and interact with photorealistic people, objects and robots in a fully simulated environment. The idea is to get new hardware prototyped in as much detail as possible before building real-world models. It also allows manufacturers to start training personnel well before hardware is market-ready. For instance, NVIDIA showed how the engineers that built the Koenigsegg supercar could explore the car "at scale and in full visual fidelity" and consult in real time on design changes.

Holodeck is built on a bespoke version of Epic Games' Unreal Engine 4 and uses NVIDIA's VRWorks, DesignWorks and GameWorks. It requires some significant hardware, either an NVIDIA 1080, Quadro P6000, NVIDIA 1080 Ti or Titan XP GPU, but the firm says it will eventually lower the bar. It's not clear what kind of headsets are supported, but both of the major PC models (the HTC Vive and Oculus Rift) will likely work.

NVIDIA is already using its Holodeck as a way to train AI agents in its Isaac Simulator, a photorealistic machine-learning environment. With Holodeck, NVIDIA is taking on Microsoft and its Hololens in the enterprise and design arena -- though the latter AR system is more about letting engineers interact with real and virtual objects at the same time. Another player in the simulation scene is Google with Glass Enterprise, a product aimed more at training and manufacturing than design.

All of this doesn't seem like it's going to help you game or be entertained, but there is a silver lining. Much of this very advanced tech is bound to trickle down to consumers, hopefully making VR and AR good enough to actually become popular.

engadget.com



From: Glenn Petersen | 10/13/2017 11:02:00 PM
 
Nvidia Can Go to $250 on All the Data Center Opportunities, Says Needham

Nvidia's business in data centers has several avenues to tens of billions in revenue, including "inferencing," an emerging area of machine learning, but also selling chips to Uber and other "transportation as a service" companies, according to Rajvindra Gill of Needham & Co.

By Tiernan Ray
Barron's
Oct. 13, 2017 11:19 a.m. ET

Another day, another Nvidia (NVDA) price target increase, this one from Needham & Co.’s Rajvindra Gill, who reiterates a Buy rating, and raises his price target to $250 from $200, after attending the company’s “GTC” conference in Munich, Germany, and coming away upbeat about the prospects for the company’s data center market.

Gill’s new target beats the $220 that RBC Capital’s Mitch Steves offered yesterday on his own enthusiasm for Nvidia’s markets.

Gill talked with Nvidia CEO Jen-Hsun Huang at the event, along with other attendees, and the discussion mostly “centered around the growth drivers in data center,” he writes.

The market could be worth $21 billion to $35 billion over five years, writes Gill, in three buckets.

One big area is the current “training” market in machine learning:

Nearly all the hyperscalers, cloud and server vendors (Google, Alibaba, Cisco, Huawei, AWS, Microsoft Azure, IBM, Lenovo, Tencent) along with several A.I. startups will train on GPUs in the cloud — both internally and for their customers.

Inference, acting on the results of training, is another one, though “we are waiting to see evidence” of the GPU take up there, he writes:

The second major growth driver is inference. We estimate there are 20 million CPU nodes that will be accelerated over the next five years to support AI applications (live video on Internet, video surveillance cameras). At $500-$1,000 ASPs, we forecast the inference TAM at $10 billion to $20 billion.

And yet another part is spreading GPUs to new areas, including the “transportation as a service" companies such as Uber:

For example, Lyft or Uber could possibly deploy supercomputing GPUs to process the innumerable driving decisions needed to support AVs along with SQL databases being accelerated with AI-GPUs. Moreover, 15 of the top 500 supercomputers have GPUs. We believe over the next five years, 100% of those supercomputers will be accelerated. In a typical supercomputer node, we estimate NVDA receives $64k (8 GPUs X $8k each). This would translate to an HPC GPU TAM of ~$10BN.
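
A quick check of the arithmetic in the quoted estimates; note that the node count behind the ~$10 billion HPC figure is only implied by the stated numbers, not given in the note.

```python
# Quick check of the arithmetic in the quoted Needham estimates.
nodes_to_accelerate = 20_000_000                  # "20 million CPU nodes"
asp_low, asp_high = 500, 1_000                    # stated ASP range per node
print(nodes_to_accelerate * asp_low,              # 10,000,000,000
      nodes_to_accelerate * asp_high)             # 20,000,000,000 -> the $10B-$20B TAM

revenue_per_node = 8 * 8_000                      # "8 GPUs X $8k each" = $64,000
hpc_tam = 10e9                                    # "~$10BN" HPC GPU TAM
print(hpc_tam / revenue_per_node)                 # ~156,250 accelerated nodes implied,
                                                  # a figure not stated in the note
```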

barrons.com

