
Technology Stocks: NVIDIA Corporation (NVDA)


From: Glenn Petersen  8/29/2017 4:36:25 PM
 
Nvidia to Play Big Role in ‘Huge’ Wal-Mart Cloud Push, Says Global Equities

Wal-Mart is building out its cloud computing network, OneOps, to one-tenth the size of Amazon's AWS, writes Trip Chowdhry of Global Equities, and Nvidia GPU chips will play a major role, he believes.

By Tiernan Ray
Barron's
Aug. 29, 2017 3:26 p.m. ET

Trip Chowdhry of the boutique firm Global Equities Research today reiterates his upbeat view of Nvidia (NVDA), after gathering details about what he expects to be Wal-Mart’s (WMT) increasing use of the company’s graphics chips, or GPUs, for machine learning.

"Within [the] next 6 months or so, Walmart is going full steam with DNN (Deep Neural Networks) and will be creating its own NVDA GPU Clusters on Walmart Cloud,” writes Chowdhry, referring to a cloud computing network the company acquired in 2013 called “OneOps."

"This is incrementally positive for NVDA's GPU Business,” he writes.

Without citing specific sources, Chowdhry offers some details he's gleaned from researching Wal-Mart’s setup:

Walmart’s NVDA GPU Farm will be about 1/10th the size of AMZN-AWS GPU Cloud, which is huge!!
Walmart NVDA GPU Clusters will run Ubuntu Linux and not Red Hat Linux.
Walmart is working with Anaconda to create custom DNN (Deep Neural Network), and Jupyter Notebook.
Walmart will be running a hybrid of CNN (Convolution Neural Network) and RNN (Recurrent Neural Network).
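The last point is the most technically interesting. As a rough illustration only (not Walmart's actual code; every layer size, input shape and framework choice here is an assumption of mine), a hybrid CNN + RNN in PyTorch typically looks like this: a convolutional network encodes each image or frame, and a recurrent network models how those encodings evolve over a sequence, with the whole thing placed on an NVIDIA GPU when one is available.

```python
# Hypothetical sketch of a hybrid CNN + RNN model (not Walmart's actual code).
# A small CNN encodes each image in a sequence; an LSTM models the sequence.
import torch
import torch.nn as nn

class CNNRNNHybrid(nn.Module):
    def __init__(self, num_classes=10, feature_dim=64, hidden_dim=128):
        super().__init__()
        # CNN: per-frame feature extractor (assumes 3x64x64 input images)
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),        # -> (batch, 32, 1, 1)
            nn.Flatten(),                   # -> (batch, 32)
            nn.Linear(32, feature_dim), nn.ReLU(),
        )
        # RNN: models how per-frame features evolve across the sequence
        self.rnn = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):                   # x: (batch, seq_len, 3, 64, 64)
        b, t = x.shape[:2]
        feats = self.cnn(x.reshape(b * t, *x.shape[2:])).reshape(b, t, -1)
        out, _ = self.rnn(feats)
        return self.head(out[:, -1])        # classify from the last time step

# Run on an NVIDIA GPU if one is available, as in the clusters described above.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = CNNRNNHybrid().to(device)
dummy = torch.randn(4, 8, 3, 64, 64, device=device)   # 4 sequences of 8 frames
print(model(dummy).shape)                              # torch.Size([4, 10])
```

The design choice that matters is the split of labor: the CNN handles spatial structure within each frame, while the RNN handles ordering across frames, which is why the two are often combined for video, clickstream and other sequential data.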

Chowdhry advises that “investors should not underestimate the Technology acumen of Walmart Software Developers,” adding, "we have seen them present at various conferences, they are as smart as Google or Facebook engineers."

Nvidia stock today is up 40 cents at $165.34.

barrons.com



To: Glenn Petersen who wrote (1557)  8/30/2017 12:07:21 PM
From: zzpat
 
IBM's stock has dropped almost 30% over the past five years. Why would investors want IBM to buy their company? To turn IBM around?



From: Glenn Petersen  9/1/2017 4:03:28 PM
 
Why a 24-Year-Old Chipmaker Is One of Tech’s Hot Prospects

Nvidia, a maker of graphics processing units, is riding an artificial intelligence boom to put its chips in drones, robots and self-driving cars.

By DON CLARK
New York Times
SEPT. 1, 2017



Nvidia’s new Volta computer chip, which, according to the company, cost an estimated $3 billion to develop. Credit Christie Hemm Klok for The New York Times
___________________________________

SANTA CLARA, Calif. — Engineers at CTA.ai, an imaging-technology start-up in Poland, are trying to popularize a more comfortable alternative to the colonoscopy. To do so, they are using computer chips that are best known to video game fans.

The chips are made by the Silicon Valley company Nvidia. Its technology can help sift speedily through images taken by pill-size sensors that patients swallow, allowing doctors to detect intestinal disorders 70 percent faster than if they pored over videos. As a result, procedures cost less and diagnoses are more accurate, said Mateusz Marmolowski, CTA’s chief executive.

Health care applications like the one CTA is pioneering are among Nvidia’s many new targets. The company’s chips — known as graphics processing units, or GPUs — are finding homes in drones, robots, self-driving cars, servers, supercomputers and virtual-reality gear. A key reason for their spread is how rapidly the chips can handle complex artificial-intelligence tasks like image, facial and speech recognition.

Excitement about A.I. applications has turned 24-year-old Nvidia into one of the technology sector’s hottest companies. Its stock-market value has swelled more than sevenfold in the past two years, topping $100 billion, and its revenue jumped 56 percent in the most recent quarter.

Nvidia’s success makes it stand out in a chip industry that has experienced a steady decline in sales of personal computers and a slowing in demand for smartphones. Intel, the world’s largest chip producer and a maker of the semiconductors that have long been the brains of machines like PCs, had revenue growth of just 9 percent in the most recent quarter.



A demonstration room on the Nvidia campus in Santa Clara, Calif. Excitement about the use of its chips in artificial intelligence applications has made Nvidia one of the tech sector’s hottest companies. Credit Christie Hemm Klok for The New York Times
_______________________________________

“They are just cruising,” Hans Mosesmann, an analyst at Rosenblatt Securities, said of Nvidia, which he has tracked since it went public in 1999.

Driving the surge is Jen-Hsun Huang, an Nvidia founder and the company’s chief executive, whose strategic instincts, demanding personality and dark clothes prompt comparisons to Steve Jobs.

Mr. Huang — who, like Mr. Jobs at Apple, pushed for a striking headquarters building, which Nvidia will soon occupy — made a pivotal gamble more than 10 years ago on a series of modifications and software developments so that GPUs could handle chores beyond drawing images on a computer screen.

“The cost to the company was incredible,” said Mr. Huang, 54, who estimated that Nvidia had spent $500 million a year on the effort, known broadly as CUDA (for compute unified device architecture), when the company’s total revenue was around $3 billion. Nvidia puts its total spending on turning GPUs into more general-purpose computing tools at nearly $10 billion since CUDA was introduced.

Mr. Huang bet on CUDA as the computing landscape was undergoing broad changes. Intel rose to dominance in large part because of improvements in computing speed that accompanied what is known as Moore’s Law: the observation that, through most of the industry’s history, manufacturers packed twice as many transistors onto chips roughly every two years. Those improvements in speed have now slowed.



Nvidia’s chief executive, Jen-Hsun Huang, made a pivotal bet more than 10 years ago on a series of modifications and software developments to the company’s graphics processing units, or GPUs. Credit Ethan Miller/Getty Images
_______________________________

The slowdown led designers to start dreaming up more specialized chips that could work alongside Intel processors and wring more benefits from the miniaturization of chip circuitry. Nvidia, which repurposed existing chips instead of starting from scratch, had a big head start. Using its chips and software it developed as part of the CUDA effort, the company gradually created a technology platform that became popular with many programmers and companies.

“They really were well led,” said John L. Hennessy, a computer scientist who stepped down as Stanford University’s president last year.

Now, Nvidia chips are pushing into new corporate applications. German business software giant SAP, for example, is promoting an artificial-intelligence technique called deep learning and using Nvidia GPUs for tasks like accelerating accounts-payable processes and matching resumes to job openings.

SAP has also demonstrated Nvidia-powered software to spot company logos in broadcasts of sports like basketball or soccer, so advertisers can learn about their brands’ exposure during games and take steps to try to improve it.

“That could not be done before,” said Juergen Mueller, SAP’s chief innovation officer.

Such applications go far beyond the original ambitions of Mr. Huang, who was born in Taiwan and studied electrical engineering at Oregon State University and Stanford before taking jobs at Silicon Valley chipmakers. He started Nvidia with Chris Malachowsky and Curtis Priem in 1993, setting out initially to help PCs offer visual effects to rival those of dedicated video game consoles.

The company’s original product was a dud, Mr. Malachowsky said, and the graphics market attracted a mob of rivals.

But Nvidia retooled its products and strategy and gradually separated itself from the competition to become the clear leader in the GPU-accelerator cards used in gaming PCs.

GPUs generate triangles to form framelike structures, simulating objects and applying colors to pixels on a display screen. To do that, many simple instructions must be executed in parallel, which is why graphics chips evolved with many tiny processors. A new GPU announced by Nvidia in May, called Volta, has more than 5,000 such processors; a new, high-end Intel server chip, by contrast, has just 28 larger, general-purpose processor cores.
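To see why that core count matters, consider the workhorse operation of neural networks, matrix multiplication, which breaks into thousands of independent multiply-adds. The sketch below, using PyTorch purely as an illustration (the article names no framework, and the sizes are arbitrary), times the same multiplication on the CPU and, when one is available, on a GPU.

```python
# Rough sketch: time a large matrix multiply on the CPU and, if present, on a GPU.
# Matrix multiplication is the kind of highly parallel math that maps well onto
# a GPU's thousands of small cores. Sizes and repeat counts are arbitrary.
import time
import torch

def time_matmul(device, n=4096, repeats=5):
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    torch.matmul(a, b)                 # warm-up
    if device == "cuda":
        torch.cuda.synchronize()       # make GPU timing honest
    start = time.time()
    for _ in range(repeats):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.time() - start) / repeats

print(f"CPU: {time_matmul('cpu'):.3f} s per multiply")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f} s per multiply")
```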

Nvidia began its CUDA push in 2004 after hiring Ian Buck, a Stanford doctoral student and company intern who had worked on a programming challenge that involved making it easier to harness a GPU’s many calculating engines. Nvidia soon made changes to its chips and developed software aids, including support for a standard programming language rather than the arcane tools used to issue commands to graphics chips.

The company built CUDA into consumer GPUs and high-end products. That decision was critical, Mr. Buck said, because it meant researchers and students who owned laptops or desktop PCs for gaming could tinker on software in campus labs and dorm rooms. Nvidia also convinced many universities to offer courses in its new programming techniques.
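CUDA itself is programmed in an extended dialect of C/C++, but the idea Mr. Buck helped pioneer, writing an ordinary-looking function and letting thousands of GPU threads each execute it on one slice of the data, can be sketched from Python with the Numba library's CUDA support. This is my own illustration of the programming model, not Nvidia's original tooling, and it assumes a CUDA-capable GPU and the numba package are installed.

```python
# Illustrative sketch of the CUDA programming model via Numba (requires an
# NVIDIA GPU with CUDA drivers). Each GPU thread handles one array element.
import numpy as np
from numba import cuda

@cuda.jit
def scale_and_add(x, y, out, alpha):
    i = cuda.grid(1)                  # this thread's global index
    if i < x.shape[0]:                # guard threads past the end of the array
        out[i] = alpha * x[i] + y[i]

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)

# Explicitly move data to the GPU, launch the kernel, and copy the result back.
d_x = cuda.to_device(x)
d_y = cuda.to_device(y)
d_out = cuda.device_array_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
scale_and_add[blocks, threads_per_block](d_x, d_y, d_out, 2.0)

out = d_out.copy_to_host()
assert np.allclose(out, 2.0 * x + y)
```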



Nvidia’s new headquarters in Santa Clara during construction last year. The company’s stock-market value has swelled more than sevenfold in the past two years. Credit Ramin Rahimian for The New York Times
__________________________

Programmers gradually adopted GPUs for applications used in, among other things, climate modeling and oil and gas discovery. A new phase began in 2012 after Canadian researchers began to apply CUDA and GPUs to unusually large neural networks, the many-layered software required for deep learning.

Those systems are trained to perform tricks like spotting a face by exposure to millions of images instead of through definitions established by programmers. Before the emergence of GPUs, Mr. Buck said, training such a system might take an entire semester.

Aided by the new technology, researchers can now complete the process in weeks, days or even hours.

“I can’t imagine how we’d do it without using GPUs,” said Silvio Savarese, an associate professor at Stanford who directs the SAIL-Toyota Center for A.I. Research at the university.

Competitors argue that the A.I. battle among chipmakers has barely begun.

Intel, whose standard chips are widely used for A.I. tasks, has also spent heavily to buy Altera, a maker of programmable chips; start-ups specializing in deep learning and machine vision; and the Israeli car technology supplier Mobileye.

Google recently unveiled the second version of an internally developed A.I. chip that helped beat the world’s best player of the game Go. The search giant claims the chip has significant advantages over GPUs in some applications. Start-ups like Wave Computing make similar claims.

But Nvidia will not be easy to dislodge. For one thing, the company can afford to spend more than most of its A.I. rivals on chips — Mr. Huang estimated Nvidia had plowed an industry record $3 billion into Volta — because of the steady flow of revenue from the still-growing gaming market.

Nvidia said more than 500,000 developers are now using GPUs. And the company expects other chipmakers to help expand its fan base once it freely distributes an open-source chip design they can use for low-end deep learning applications — light-bulbs or cameras, for instance — that it does not plan to target itself.

A.I., Mr. Huang said, “will affect every company in the world. We won’t address all of it.”

nytimes.com



From: Glenn Petersen  9/1/2017 4:44:46 PM
 
RBC: Nvidia is set to dominate the next wave of blockchain technology (NVDA)

markets.businessinsider.com



To: Glenn Petersen who wrote (1561)  9/2/2017 10:51:28 AM
From: zzpat
 
NVDA's growth won't come from Bitcoin or blockchain. It'll come from AI. It's a hard company to put my finger on. Good CEO, the company is innovating, it's constantly expanding its market into new things (like Bitcoin), and the stock is insanely high.

I like autonomous cars, medical computers, facial identification, etc. AI is the future. The new Internet.



From: Glenn Petersen  9/8/2017 4:36:44 PM
 
Nvidia and Avitas Systems partner on using AI to help robots spot defects

by Darrell Etherington ( @etherington)
TechCrunch
September 8, 2017

Automated inspection company Avitas Systems, which is a GE Venture company, is using Nvidia’s DGX-1 and DGX Station to train its neural-network-based artificial intelligence to be able to quickly and consistently identify defects in industrial equipment.

Avitas Systems uses a range of robotic equipment to monitor things like oil and gas pipelines, coolant towers and other crucial equipment, including aerial and underwater drones – and Nvidia’s help means it can create software that can help these bots spot the slightest bit of corrosion or variance in equipment before it becomes a dangerous problem.

Alex Tepper, Avitas founder and head of corporate and business development, explained in an interview that GE has been helping customers with industrial inspections for a long time, and has found that these customers are spending hundreds of millions of dollars on inspections that involve a person driving out to, or flying a helicopter above, an asset. These aren’t methods that generate foolproof results, of course, and there’s a lot that can’t be seen reliably with the naked eye.

“We’re analyzing the results from those robotics to do automated defect recognition, which is a fancy way of saying interpreting those sensor results, applying AI to them, so that we can figure out if there are any defects being sensed, whether it’s corrosion, micro-fractures, hot and cold spots – oftentimes defects that the human eye can’t see.”



UAV over flare stack
___________________________________

Additionally, Avitas can provide reliable replication of observation conditions with automated inspection methods – robots can take the same photograph or sensor reading from the same perspective over and over again. And they can help shift defect monitoring from a time-based operation to a risk-based one: Instead of sending out a person to check an asset on a pre-defined schedule, automated observation can target high-risk assets and keep them under pretty much constant watch.

Nvidia’s role in all this is processing of the resulting data via its DGX-1 supercomputer, and also through its DGX Station, which provides unique capabilities by offering analysis and processing capabilities at the edge – decoupled from the data center. Tepper says that more and more of their work involves running AI applications in areas where there isn’t a reliable connection to a central server – or even any connection at all, in some cases.
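The edge-inference pattern Tepper and Nvidia describe, where a trained model runs locally with no reliable link back to a central server, reduces to something like the hedged sketch below. The model, class labels and input here are placeholders of mine, not Avitas's software.

```python
# Hypothetical sketch of AI inference at the edge: a CNN classifying an
# inspection image locally, with no connection back to a data center. The model
# is an untrained placeholder; Avitas's actual defect models are proprietary.
import torch
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in architecture; real use would load trained defect-recognition weights.
model = models.resnet18(num_classes=2).to(device).eval()   # e.g. defect / no-defect

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# In the field the input would come from a drone or crawler camera, e.g.:
#   from PIL import Image
#   frame = preprocess(Image.open("inspection_photo.jpg").convert("RGB"))
# A random tensor stands in here so the sketch runs anywhere.
frame = torch.rand(3, 224, 224)

with torch.no_grad():                       # pure inference, no gradients
    probs = model(frame.unsqueeze(0).to(device)).softmax(dim=1)
print(f"P(defect) = {probs[0, 1].item():.2f}")
```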

The DGX Station packs the computing power of hundreds of CPUs into a power-efficient, portable form factor, and it’s just the start for Nvidia’s ambitions to bring supercomputing power to the field.

“Avitas started with a prototype version of our station, and soon they’ll be getting an upgrade to our DGX Station with Volta [launched in May], and that’ll be a huge performance gain,” explained Nvidia GM of DGX Systems Jim McHugh. “I think Alex and team are going to see a 3x performance in the activity there at a minimum, and it could even be greater for the inference activity they’re seeing.”

techcrunch.com



From: Glenn Petersen  9/14/2017 6:41:34 AM
 
h/t The Ox


Nvidia and AMD aren’t at serious risk from crypto concerns, analysts say

Published: Sept 11, 2017 3:24 p.m. ET

Shift to mining-specific products could insulate graphics-card makers from potential demand downturn on China ban



By Wallace Witkowski
MarketWatch

Two graphics-card makers that have benefited from the rise of cryptocurrencies, Nvidia Corp. and Advanced Micro Devices Inc., should be insulated from concerns about a drop in the virtual currencies, analysts said Monday.

After briefly trading above $5,000 on Sept. 2, the price of bitcoin has fallen under pressure lately, most recently as the People’s Bank of China has issued a draft of instructions that would ban Chinese exchanges from providing cryptocurrency trading services. Given the effect of past bitcoin downturns on graphics-card sales, many are concerned that a drop in crypto prices could punish sales at AMD and Nvidia.

“We think that the risk of a ‘crypto-driven’ inventory correction driving material downside is low in the near term,” said Jefferies analyst Mark Lipacis.

Shares of AMD were up 2.7% to $12.58 and Nvidia shares gained 3.4% to $169.29 Monday. The price of one bitcoin rose 0.6% to $4,217.54, and Ether, the cryptocurrency on the Ethereum network, gained 4.4% to $301.42.

Reasons for the low-risk outlook include upward momentum in crypto prices since July, and AMD and Nvidia hinting that vendors will start developing products directed at cryptocurrency miners. Asustek Computer Inc. has started distributing cards based on AMD and Nvidia chipsets targeted solely at cryptocurrency mining.

That alters a landscape that has been based on what happened a few years ago, when bitcoin prices spiked and drove demand for graphics cards to help with mining, only for that demand to soften when bitcoin prices fell back down, Lipacis said. When crypto prices fell, miners dismantled their mining rigs and flooded the secondary market with graphics cards, sapping demand for Nvidia and AMD products.


Should crypto prices face a similar decline, both AMD and Nvidia are better insulated this time, said Lipacis, who thinks the newer cards built for cryptocurrency mining are worthless to gamers on the secondary market, lessening the risk that a dive in crypto prices will tank demand.


The risk isn’t zero, however. Overall, Lipacis sees a 3% downside to AMD’s quarterly sales should the crypto market tank, and a 10% risk to Nvidia sales. Lipacis has “Buy” ratings on both AMD and Nvidia.



As to whether cryptocurrencies are a fad, Lipacis doesn’t think so. He writes:



We actually believe that the technology they are based on, called Blockchain, which supports secure accounting of distributed ledgers, has applications in financial services beyond cryptocurrencies. We expect demand for Blockchain GPUs (including for cryptocurrencies) to continue to grow and become an important driver for GPU growth, even if with some degree of volatility.
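For readers wondering why graphics cards enter the picture at all: mining a proof-of-work cryptocurrency is a brute-force search, hashing a block of data with different nonces until the result falls below a difficulty target, and that search parallelizes naturally across a GPU's many cores. Here is a toy, CPU-only illustration of the search (real miners use heavily optimized GPU or ASIC code, and the difficulty value here is arbitrary):

```python
# Toy proof-of-work search: find a nonce whose SHA-256 hash of (data + nonce)
# starts with a given number of zero hex digits. Real mining performs this same
# search at enormous scale, which is why parallel GPU hardware is attractive.
import hashlib
from itertools import count

def mine(block_data, difficulty=4):
    target_prefix = "0" * difficulty
    for nonce in count():
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target_prefix):
            return nonce, digest

nonce, digest = mine("example block header")
print(f"nonce={nonce} hash={digest}")
```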

The recent crackdowns in China on cryptocurrencies could soften demand for mining cards in the December quarter, said Mizuho Securities analyst Vijay Rakesh in a note. Rakesh has “Buy” ratings on both Nvidia and AMD, with $180 and $17 price targets respectively.

Rakesh writes:



While Sep/OctQ could see upside, some recent potential crackdowns by the China regulators…could imply a modest DecQ GPU demand softening. We believe key for NVDA/AMD will be to show continued Data Center momentum in the DecQ.

For the year, AMD shares are up nearly 11% and Nvidia shares have gained 58%. By comparison, the S&P 500 index has advanced 11%. Meanwhile, bitcoin has rallied 337% year to date, while Ether has soared 3,668%.

Of the 36 analysts covering Nvidia, 18 have “Buy” or “Overweight” ratings, 13 have “Hold” ratings, and five have “Underweight” or “Sell” ratings, according to FactSet.

marketwatch.com



To: zzpat who wrote (1562)  9/16/2017 3:55:19 PM
From: zzpat
 
NVDA's AI-related upgrade to a target of $250 resulted in a $10.71 jump in one day.



From: Glenn Petersen  9/18/2017 9:04:33 PM
 
Chips Off the Old Block: Computers Are Taking Design Cues From Human Brains

New technologies are testing the limits of computer semiconductors. To deal with that, researchers have gone looking for ideas from nature.

By CADE METZ
New York Times
SEPT. 16, 2017



After years of stagnation, the computer is evolving again, prompting some of the world’s largest tech companies to turn to biology for insights. Credit Minh Uong/The New York Times
______________________________

SAN FRANCISCO — We expect a lot from our computers these days. They should talk to us, recognize everything from faces to flowers, and maybe soon do the driving. All this artificial intelligence requires an enormous amount of computing power, stretching the limits of even the most modern machines.

Now, some of the world’s largest tech companies are taking a cue from biology as they respond to these growing demands. They are rethinking the very nature of computers and are building machines that look more like the human brain, where a central brain stem oversees the nervous system and offloads particular tasks — like hearing and seeing — to the surrounding cortex.

After years of stagnation, the computer is evolving again, and this behind-the-scenes migration to a new kind of machine will have broad and lasting implications. It will allow work on artificially intelligent systems to accelerate, so the dream of machines that can navigate the physical world by themselves can one day come true.

This migration could also diminish the power of Intel, the longtime giant of chip design and manufacturing, and fundamentally remake the $335 billion a year semiconductor industry that sits at the heart of all things tech, from the data centers that drive the internet to your iPhone to the virtual reality headsets and flying drones of tomorrow.

“This is an enormous change,” said John Hennessy, the former Stanford University president who wrote an authoritative book on computer design in the mid-1990s and is now a member of the board at Alphabet, Google’s parent company. “The existing approach is out of steam, and people are trying to re-architect the system.”



Xuedong Huang, left, and Doug Burger of Microsoft are among the employees leading the company’s efforts to develop specialized chips. Credit Ian C. Bates for The New York Times
_________________________________

The existing approach has had a pretty nice run. For about half a century, computer makers have built systems around a single, do-it-all chip — the central processing unit — from a company like Intel, one of the world’s biggest semiconductor makers. That’s what you’ll find in the middle of your own laptop computer or smartphone.

Now, computer engineers are fashioning more complex systems. Rather than funneling all tasks through one beefy chip made by Intel, newer machines are dividing work into tiny pieces and spreading them among vast farms of simpler, specialized chips that consume less power.

Changes inside Google’s giant data centers are a harbinger of what is to come for the rest of the industry. Inside most of Google’s servers, there is still a central processor. But enormous banks of custom-built chips work alongside them, running the computer algorithms that drive speech recognition and other forms of artificial intelligence.

Google reached this point out of necessity. For years, the company had operated the world’s largest computer network — an empire of data centers and cables that stretched from California to Finland to Singapore. But for one Google researcher, it was much too small.

In 2011, Jeff Dean, one of the company’s most celebrated engineers, led a research team that explored the idea of neural networks — essentially computer algorithms that can learn tasks on their own. They could be useful for a number of things, like recognizing the words spoken into smartphones or the faces in a photograph.

In a matter of months, Mr. Dean and his team built a service that could recognize spoken words far more accurately than Google’s existing service. But there was a catch: If the world’s more than one billion phones that operated on Google’s Android software used the new service just three minutes a day, Mr. Dean realized, Google would have to double its data center capacity in order to support it.

“We need another Google,” Mr. Dean told Urs Hölzle, the Swiss-born computer scientist who oversaw the company’s data center empire, according to someone who attended the meeting. So Mr. Dean proposed an alternative: Google could build its own computer chip just for running this kind of artificial intelligence.

But what began inside data centers is starting to shift other parts of the tech landscape. Over the next few years, companies like Google, Apple and Samsung will build phones with specialized A.I. chips. Microsoft is designing such a chip specifically for an augmented-reality headset. And everyone from Google to Toyota is building autonomous cars that will need similar chips.

This trend toward specialty chips and a new computer architecture could lead to a “Cambrian explosion” of artificial intelligence, said Gill Pratt, who was a program manager at Darpa, a research arm of the United States Department of Defense, and now works on driverless cars at Toyota. As he sees it, machines that spread computations across vast numbers of tiny, low-power chips can operate more like the human brain, which efficiently uses the energy at its disposal.

“In the brain, energy efficiency is the key,” he said during a recent interview at Toyota’s new research center in Silicon Valley.

Change on the Horizon

There are many kinds of silicon chips. There are chips that store information. There are chips that perform basic tasks in toys and televisions. And there are chips that run various processes for computers, from the supercomputers used to create models for global warming to personal computers, internet servers and smartphones.



An older board and chip combination at Microsoft’s offices. Chips now being developed by the company can be reprogrammed for new tasks on the fly. Credit Ian C. Bates for The New York Times
_________________________________

For years, the central processing units, or C.P.U.s, that ran PCs and similar devices were where the money was. And there had not been much need for change.

In accordance with Moore’s Law, the oft-quoted maxim from Intel co-founder Gordon Moore, the number of transistors on a computer chip had doubled every two years or so, and that provided steadily improved performance for decades. As performance improved, chips consumed about the same amount of power, according to another, lesser-known law of chip design called Dennard scaling, named for the longtime IBM researcher Robert Dennard.

By 2010, however, doubling the number of transistors was taking much longer than Moore’s Law predicted. Dennard’s scaling maxim had also been upended as chip designers ran into the limits of the physical materials they used to build processors. The result: If a company wanted more computing power, it could not just upgrade its processors. It needed more computers, more space and more electricity.

Researchers in industry and academia were working to extend Moore’s Law, exploring entirely new chip materials and design techniques. But Doug Burger, a researcher at Microsoft, had another idea: Rather than rely on the steady evolution of the central processor, as the industry had been doing since the 1960s, why not move some of the load onto specialized chips?

During his Christmas vacation in 2010, Mr. Burger, working with a few other chip researchers inside Microsoft, began exploring new hardware that could accelerate the performance of Bing, the company’s internet search engine.

At the time, Microsoft was just beginning to improve Bing using machine-learning algorithms (neural networks are a type of machine learning) that could improve search results by analyzing the way people used the service. Though these algorithms were less demanding than the neural networks that would later remake the internet, existing chips had trouble keeping up.

Mr. Burger and his team explored several options but eventually settled on something called Field Programmable Gate Arrays, or F.P.G.A.s: chips that could be reprogrammed for new jobs on the fly. Microsoft builds software, like Windows, that runs on an Intel C.P.U. But such software cannot reprogram the chip, since it is hard-wired to perform only certain tasks.

With an F.P.G.A., Microsoft could change the way the chip works. It could program the chip to be really good at executing particular machine learning algorithms. Then, it could reprogram the chip to be really good at running logic that sends the millions and millions of data packets across its computer network. It was the same chip but it behaved in a different way.

Microsoft started to install the chips en masse in 2015. Now, just about every new server loaded into a Microsoft data center includes one of these programmable chips. They help choose the results when you search Bing, and they help Azure, Microsoft’s cloud-computing service, shuttle information across its network of underlying machines.

Teaching Computers to Listen

In fall 2016, another team of Microsoft researchers — mirroring the work done by Jeff Dean at Google — built a neural network that could, by one measure at least, recognize spoken words more accurately than the average human could.

Xuedong Huang, a speech-recognition specialist who was born in China, led the effort, and shortly after the team published a paper describing its work, he had dinner in the hills above Palo Alto, Calif., with his old friend Jen-Hsun Huang (no relation), the chief executive of the chipmaker Nvidia. The men had reason to celebrate, and they toasted with a bottle of champagne.



Jeff Dean, one of Google’s most celebrated engineers, said the company should develop a chip for running a type of artificial intelligence; right, Google’s Tensor Processing Unit, or T.P.U. Credit Ryan Young for The New York Times
____________________________
Xuedong Huang and his fellow Microsoft researchers had trained their speech-recognition service using large numbers of specialty chips supplied by Nvidia, rather than relying heavily on ordinary Intel chips. Their breakthrough would not have been possible had they not made that change.

“We closed the gap with humans in about a year,” Microsoft’s Mr. Huang said. “If we didn’t have the weapon — the infrastructure — it would have taken at least five years.”

Because systems that rely on neural networks can learn largely on their own, they can evolve more quickly than traditional services. They are not as reliant on engineers writing endless lines of code that explain how they should behave.

But there is a wrinkle: Training neural networks this way requires extensive trial and error. To create one that is able to recognize words as well as a human can, researchers must train it repeatedly, tweaking the algorithms and improving the training data over and over. At any given time, this process unfolds over hundreds of algorithms. That requires enormous computing power, and if companies like Microsoft use standard-issue chips to do it, the process takes far too long because the chips cannot handle the load and too much electrical power is consumed.
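A minimal sketch of what one slice of that trial and error looks like, with a toy model and synthetic data of my own choosing: the same network is retrained under several learning rates and the best run is kept. Real sweeps cover hundreds of such variants, each vastly larger, which is what makes the GPU hardware described next so valuable.

```python
# Toy sketch of trial-and-error training: sweep a hyperparameter, retrain each
# time, and keep the configuration with the lowest final loss. Everything here
# (model, synthetic data, learning rates) is an illustrative placeholder.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
X = torch.randn(1024, 20, device=device)            # synthetic inputs
y = (X.sum(dim=1, keepdim=True) > 0).float()        # synthetic labels

def train_once(lr, epochs=50):
    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1)).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

results = {lr: train_once(lr) for lr in (0.001, 0.01, 0.1, 1.0)}
best_lr = min(results, key=results.get)
print(f"final losses: {results}  best learning rate: {best_lr}")
```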

So, the leading internet companies are now training their neural networks with help from another type of chip called a graphics processing unit, or G.P.U. These low-power chips — usually made by Nvidia — were originally designed to render images for games and other software, and they worked hand-in-hand with the chip — usually made by Intel — at the center of a computer. G.P.U.s can process the math required by neural networks far more efficiently than C.P.U.s.

Nvidia is thriving as a result, and it is now selling large numbers of G.P.U.s to the internet giants of the United States and the biggest online companies around the world, in China most notably. The company’s quarterly revenue from data center sales tripled to $409 million over the past year.


“This is a little like being right there at the beginning of the internet,” Jen-Hsun Huang said in a recent interview. In other words, the tech landscape is changing rapidly, and Nvidia is at the heart of that change.

Creating Specialized Chips

G.P.U.s are the primary vehicles that companies use to teach their neural networks a particular task, but that is only part of the process. Once a neural network is trained for a task, it must perform it, and that requires a different kind of computing power.

After training a speech-recognition algorithm, for example, Microsoft offers it up as an online service, and it actually starts identifying commands that people speak into their smartphones. G.P.U.s are not quite as efficient during this stage of the process. So, many companies are now building chips specifically to do what the other chips have learned.
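In software terms, the hand-off from training to serving often looks like the hedged sketch below: the trained model is frozen and exported as a self-contained artifact that an inference service can load and run without any training machinery. The example uses PyTorch's TorchScript export with a toy model of my own; Google's T.P.U.s and Microsoft's F.P.G.A.s each have their own deployment toolchains, which the article does not detail.

```python
# Sketch of the train-then-serve split: freeze a trained model, export it as a
# self-contained TorchScript artifact, and reload it for inference with no
# gradients or training machinery. The model here is a toy placeholder.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(40, 64), nn.ReLU(), nn.Linear(64, 10))
# ... training would happen here ...
model.eval()                                         # switch to inference mode

example_input = torch.randn(1, 40)
scripted = torch.jit.trace(model, example_input)     # record an optimized graph
scripted.save("speech_model.pt")                     # deployable artifact

# At serving time, the exported graph is loaded and run without autograd.
served = torch.jit.load("speech_model.pt")
with torch.no_grad():
    logits = served(torch.randn(8, 40))              # a batch of requests
print(logits.shape)                                  # torch.Size([8, 10])
```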

Google built its own specialty chip, a Tensor Processing Unit, or T.P.U. Nvidia is building a similar chip. And Microsoft has reprogrammed specialized chips from Altera, which was acquired by Intel, so that it too can run neural networks more easily.

Other companies are following suit. Qualcomm, which specializes in chips for smartphones, and a number of start-ups are also working on A.I. chips, hoping to grab their piece of the rapidly expanding market. The tech research firm IDC predicts that revenue from servers equipped with alternative chips will reach $6.8 billion by 2021, about 10 percent of the overall server market.



Bart Sano, the vice president of engineering who leads hardware and software development for Google’s network, acknowledged that specialty chips were still a relatively modest part of the company’s operation. Credit Ryan Young for The New York Times
_______________________________


Across Microsoft’s global network of machines, Mr. Burger pointed out, alternative chips are still a relatively modest part of the operation. And Bart Sano, the vice president of engineering who leads hardware and software development for Google’s network, said much the same about the chips deployed at its data centers.

Mike Mayberry, who leads Intel Labs, played down the shift toward alternative processors, perhaps because Intel controls more than 90 percent of the data-center market, making it by far the largest seller of traditional chips. He said that if central processors were modified the right way, they could handle new tasks without added help.

But this new breed of silicon is spreading rapidly, and Intel is increasingly a company in conflict with itself. It is in some ways denying that the market is changing, but nonetheless shifting its business to keep up with the change.

Two years ago, Intel spent $16.7 billion to acquire Altera, which builds the programmable chips that Microsoft uses. It was Intel’s largest acquisition ever. Last year, the company paid a reported $408 million buying Nervana, a company that was exploring a chip just for executing neural networks. Now, led by the Nervana team, Intel is developing a dedicated chip for training and executing neural networks.

“They have the traditional big-company problem,” said Bill Coughran, a partner at the Silicon Valley venture capital firm Sequoia Capital who spent nearly a decade helping to oversee Google’s online infrastructure, referring to Intel. “They need to figure out how to move into the new and growing areas without damaging their traditional business.”

Intel’s internal conflict is most apparent when company officials discuss the decline of Moore’s Law. During a recent interview with The New York Times, Naveen Rao, the Nervana founder and now an Intel executive, said Intel could squeeze “a few more years” out of Moore’s Law. Officially, the company’s position is that improvements in traditional chips will continue well into the next decade.

Mr. Mayberry of Intel also argued that the use of additional chips was not new. In the past, he said, computer makers used separate chips for tasks like processing audio.

But now the scope of the trend is significantly larger. And it is changing the market in new ways. Intel is competing not only with chipmakers like Nvidia and Qualcomm, but also with companies like Google and Microsoft.

Google is designing the second generation of its T.P.U. chips. Later this year, the company said, any business or developer that is a customer of its cloud-computing service will be able to use the new chips to run its software.

While this shift is happening mostly inside the massive data centers that underpin the internet, it is probably a matter of time before it permeates the broader industry.

The hope is that this new breed of mobile chip can help devices handle more, and more complex, tasks on their own, without calling back to distant data centers: phones recognizing spoken commands without accessing the internet; driverless cars recognizing the world around them with a speed and accuracy that is not possible now.

In other words, a driverless car needs cameras and radar and lasers. But it also needs a brain.

Follow Cade Metz on Twitter: @CadeMetz

A version of this article appears in print on September 17, 2017, on Page BU1 of the New York edition with the headline: Chip Off the Old Block

nytimes.com



To: zzpat who wrote (1565)  9/18/2017 11:16:58 PM
From: Glenn Petersen
 
Another day, another upgrade:

Nvidia Jumps 4%: Merrill Lynch Ups Target to $210 on Volta, Data Center Capex

Stock price targets keep moving higher for Nvidia, with Merrill Lynch's Vivek Arya raising his today to $210, following Evercore ISI's C.J. Muse raising his target to $250 on Friday.

By Tiernan Ray
Barron's
Sept. 18, 2017 10:08 a.m. ET

Shares of GPU titan Nvidia (NVDA) are up $6.42, or almost 4%, at $186.53, after Merrill Lynch's Vivek Arya this morning reiterated his Buy rating on the shares and raised his price target to $210 from $185, citing potential for estimates to go higher in the latter half of this year and next year.

Merrill Lynch does not provide research reports to media. However, according to a summary by TheFlyontheWall, Arya notes in particular several positive potential developments, such as the actual rollout of the newer “Volta” parts, various reviews lauding the performance of that product, and an “acceleration in data center capital spending.”

Today’s boost in target at Merrill follows one on Friday from Evercore ISI’s C.J. Muse, who had raised his target to $250 from $180, based on his belief that the company is creating “the industry standard” in artificial intelligence computing.

Also late Friday, RBC Capital Markets’ Mitch Steves argued that a crackdown on bitcoin exchanges by the Chinese government may actually fuel demand for Nvidia gear in order to “mine” bitcoin and other crypto-currencies.

barrons.com

