Technology Stocks: NVIDIA Corporation (NVDA)
From: Glenn Petersen 10/2/2017 9:01:02 PM
2 Recommendations
 
To Compete With New Rivals, Chipmaker Nvidia Shares Its Secrets

Author: Tom Simonite
Wired
September 29, 2017




Five years ago, Nvidia was best known as a maker of chips to power videogame graphics in PCs. Then researchers found its graphics chips were also good at powering deep learning, the software technique behind recent enthusiasm for artificial intelligence.

The discovery made Nvidia into the preferred seller of shovels for the AI gold rush that’s propelling dreams of self-driving cars, delivery drones and software that plays doctor. The company’s stock-market value has risen 10-fold in three years, to more than $100 billion.

That’s made Nvidia, and the market it more or less stumbled into, an attractive target. Longtime chip kingpin Intel and a stampede of startups are building and offering chips to power smart machines. Further competition comes from large tech companies designing their own AI chips. Google’s voice recognition and image search now run on in-house chips dubbed “tensor processing units,” while the face-unlock feature in Apple’s new iPhone is powered by a home-grown chip with a “neural engine.”

Nvidia’s latest countermove is counterintuitive. This week the company released as open source the designs to a chip module it made to power deep learning in cars, robots, and smaller connected devices such as cameras. That module, the DLA for deep learning accelerator, is somewhat analogous to Apple’s neural engine. Nvidia plans to start shipping it next year in a chip built into a new version of its Drive PX computer for self-driving cars, which Toyota plans to use in its autonomous-vehicle program.

Why give away this valuable intellectual property for free? Deepu Talla, Nvidia’s vice president for autonomous machines, says he wants to help AI chips reach more markets than Nvidia can accommodate itself. While his unit works to put the DLA in cars, robots, and drones, he expects others to build chips that put it into diverse markets ranging from security cameras to kitchen gadgets to medical devices. “There are going to be hundreds of billions of internet of things devices in the future,” says Talla. “We cannot address all the markets out there.”




One risk of helping other companies build new businesses is that they’ll start encroaching on your own. Talla says that doesn’t concern him because greater use of AI will mean more demand for Nvidia’s other hardware, such as the powerful graphic chips used to train deep learning software before it is deployed. “There’s no good deed that goes unpunished but net-net it’s a great thing because this will increase the adoption of AI,” says Talla. “We think we can rise higher.”

Mi Zhang, a professor at Michigan State University, calls open sourcing the DLA design a “very smart move.” He guesses that while researchers, startups, and even large companies will be tempted by Nvidia’s designs, they mostly won’t change them radically. That means they are likely to maintain compatibility with Nvidia’s software tools and other hardware, boosting the company’s influence.

Zhang says it makes sense that devices beyond cars and robots have much to gain from new forms of AI chip. He points to a recent project in his research group developing hearing aids that used learning algorithms to filter out noise. Deep-learning software was the best at smartly recognizing what to tune out, but the limitations of existing hearing-aid-scale computer hardware made it too slow to be practical.

Creating a web of companies building on its chip designs would also help Nvidia undermine rivals’ efforts to market AI chips and build ecosystems around them. In a tweet this week, one Intel engineer called Nvidia’s open-source tactic a “devastating blow” to startups working on deep-learning chips.

It might also lead to new challenges for Intel. The company bought two such startups in the past year: Movidius, focused on image processing, and Mobileye, which makes chips and cameras for automated driving.

wired.com



From: Glenn Petersen 10/4/2017 9:58:23 AM
1 Recommendation
 
Despite the hype, nobody is beating Nvidia in AI

Written by Dave Gershgorn
Quartz
October 02, 2017



NVIDIA CEO Jen-Hsun Huang shows the new HGX with Tesla V100 GPU cloud computing platform as he talks about AI and gaming during the Computex Taipei exhibition at the world trade center in Taipei, Taiwan, Tuesday, May 30, 2017. AI market in hand. (AP Photo/Chiang Ying-ying)
______________________

You have to wonder whether Nvidia is going to get sick of winning all the time.

The company’s stock price is up to $178—69% more than this time last year. Nvidia is riding high on its core technology, the graphics processing unit used in the machine-learning that powers the algorithms of Facebook and Google; partnerships with nearly every company keen on building self-driving cars; and freshly announced hardware deals with three of China’s biggest internet companies. Investors say this isn’t even the top for Nvidia: William Stein at SunTrust Robinson Humphrey predicts Nvidia’s revenue from selling server-grade GPUs to internet companies, which doubled last year, will continue to increase 61% annually until 2020.

Nvidia will likely see competition in the near future. At least 15 public companies and startups are looking to capture the market for a “second wave” of AI chips, which promise faster performance with decreased energy consumption, according to James Wang of investment firm ARK. Nvidia’s GPUs were originally developed to speed up graphics for gaming; the company then pivoted to machine learning. Competitors’ chips, however, are being custom-built for the purpose.

The most well-known of these next-generation chips is Google’s Tensor Processing Unit (TPU), which the company claims is 15-30 times faster than others’ central processing units (CPUs) and GPUs. Google explicitly mentioned performance improvements over Nvidia’s tech; Nvidia says the underlying tests were conducted on Nvidia’s old hardware. Either way, Google is now offering customers the option to rent use of TPUs through its cloud.

Intel, the CPU maker recently on a shopping spree for AI hardware startups—it bought Nervana Systems in 2016 and Mobileye in March 2017—also poses a threat. The company says it will release a new set of chips called Lake Crest later in 2017 specifically focused on AI, incorporating the technology it acquired through Nervana Systems. Intel is also hedging its bets by investing in neuromorphic computing, which uses chips that don’t rely on traditional microprocessor architecture but instead try to mimic neurons in the brain.

ARK predicts Nvidia will keep its technology ahead of the competition. Even disregarding the market advantage of capturing a strong initial customer base, Wang notes that the company is also continuing to increase the efficiency of GPU architecture at a rate fast enough to be competitive with new challengers. Nvidia has improved the efficiency of its GPU chips about 10x over the past four years.
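(A back-of-envelope check, not from ARK's note: a roughly 10x efficiency gain over four years implies an annual improvement rate of close to 80 percent. A minimal Python sketch of that arithmetic:)

# Back-of-envelope check of the efficiency claim above.
# The ~10x gain and the four-year window are the article's figures;
# the implied annual rate is derived here, not quoted from ARK.
overall_gain = 10.0   # ~10x GPU efficiency improvement
years = 4

annual_rate = overall_gain ** (1 / years) - 1
print(f"Implied annual efficiency improvement: {annual_rate:.0%}")   # roughly 78% per year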

Nvidia has also been investing since the mid-aughts in research to optimize how machine-learning frameworks, the software used to build AI programs, interact with its hardware, which is critical to ensuring efficiency. It currently supports every major machine-learning framework; Intel supports four, AMD supports two, Qualcomm supports two, and Google supports only its own.
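To make the framework point concrete, here is a minimal sketch (an illustration, not Nvidia's own tooling) of what that support looks like from user code, assuming a PyTorch build with CUDA enabled: the framework discovers the GPU and routes ordinary tensor math through Nvidia's GPU libraries without the programmer touching the hardware directly.

# Minimal sketch: a machine-learning framework (PyTorch, assumed to be installed
# with CUDA support) transparently runs the same tensor math on an Nvidia GPU
# when one is present, falling back to the CPU otherwise.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
print("Running on:", device)

x = torch.randn(1024, 1024, device=device)   # allocated on the GPU if available
w = torch.randn(1024, 1024, device=device)
y = x @ w   # the matrix multiply dispatches to GPU kernels (cuBLAS) under the hood
print(y.shape)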

Since GPUs aren’t specifically built for machine learning, they can also pull double-duty in a datacenter as video- or image-processing hardware. TPUs are custom-built for AI only, which means they’re inefficient at tasks like transcoding video into different qualities or formats. Nvidia CEO Jen-Hsun Huang told investors in August that “a GPU is basically a TPU that does a lot more,” since many social networks are promoting video on their platforms.

“Until TPUs demonstrate an unambiguous lead over GPUs in independent tests, Nvidia should continue to dominate the deep-learning data center,” Wang writes, noting that AI chips for smaller devices outside of the data center are still ripe for startups to disrupt.

qz.com



From: Glenn Petersen 10/10/2017 4:08:36 PM
 
Nvidia says its new supercomputer will enable the highest level of automated driving

No steering wheels, no pedals, no mirrors

by Andrew J. Hawkins
The Verge
Oct 10, 2017, 6:00am EDT



Nvidia Founder, President and CEO Jen-Hsun Huang delivers a keynote address at CES 2017. Photo by Ethan Miller/Getty Images
________________________________

Nvidia, one of the world’s best known manufacturers of computer graphics cards, announced a new, more powerful computing platform for use in autonomous vehicles. The company claims its new system, codenamed Pegasus, can be used to power Level 5, fully driverless cars without steering wheels, pedals, or mirrors.

The new iteration of the GPU maker’s Drive PX platform will deliver over 320 trillion operations per second, which amounts to more than 10 times its predecessor’s processing power. Pegasus will be marketed to the hundreds of automakers and tech companies that are currently developing self-driving cars, with availability starting in the second half of 2018, the company says.


Nvidia’s promise of Level 5 autonomy shouldn’t be taken lightly. Most automakers and tech companies speak carefully about the levels of autonomy, avoiding claims on which they may not ultimately be able to deliver. Nothing on the road today that’s commercially available is higher than a Level 2. Audi says its new A8 sedan is Level 3 autonomous — but we have to take the company’s word for it because present regulations won’t allow the German automaker to turn it on. Most car companies have said they will probably skip Level 3 and 4 because it’s too dangerous, and go right to Level 5. So for Nvidia to state definitively it can deliver the highest level of autonomous driving starting next year is pretty staggering — and maybe a little bit reckless.

Presently, self-driving cars that don’t require any human intervention are only theoretical. This vision of the future, where the vehicle can handle every task in all possible conditions, is the one that is most appealing to futurists and tech evangelists. But it will take years, if not decades, before our roads and rules catch up to robotic cars that can roam freely without limitations.

Nvidia’s Drive PX Pegasus computing platform. Image: Nvidia

In a conference call with reporters Monday, Nvidia’s executives acknowledged that these driverless cars with their Level 5-empowering GPUs will most likely first be deployed in a ride-hailing capacity in limited settings, like college campuses or airports. But as soon as their life-saving potential is realized, they expect them to be rolled out onto more public roads. “These vehicles are going to save a lot of lives,” said Danny Shapiro, senior director of automated driving at Nvidia.

The type of computers produced by Nvidia and its competitors like Intel are arguably the most important part of the driverless car. Everything the vehicle “sees” with its sensors, all of the images, mapping data, and audio material picked up by its cameras, needs to be processed by giant PCs in order for the vehicle to make split-second decisions. All this processing must be done with multiple levels of redundancy to ensure the highest level of safety. This is why so many self-driving operators prefer SUVs, minivans, and other large wheelbase vehicles: autonomous cars need enormous space in the trunk for their big “brains.”

The trunk of a self-driving Ford Fusion. Photo: Sam Abuelsamid

But Nvidia claims to have shrunk down its GPU, making it an easier fit for production vehicles. Pegasus packs computing power equivalent to “a 100-server data center in the form-factor size of a license plate,” Shapiro said.

Nvidia began working on autonomous vehicles several years ago and has racked up partnerships with dozens of automakers and suppliers racing to develop self-driving cars, including Chinese search engine giant Baidu, Toyota, Audi, Tesla, and Volvo.

Nvidia’s original architecture for self-driving cars, introduced in 2015, is a supercomputer platform called Drive PX that can process all of the data coming from the vehicle’s cameras and sensors. The platform then uses an AI algorithm-based operating system and a cloud-based, high-definition 3D map to help the car understand its environment, know its location, and anticipate potential hazards while driving. The system’s software can be updated over the air — similar to how a smartphone’s operating system is updated — making the car become smarter over time.

A more powerful next-generation computer called Drive PX 2 — along with a suite of software tools and libraries aimed at speeding up the deployment of self-driving vehicles — followed in 2016. Nvidia has continued to push its tech further with the introduction last year of Xavier, a complete system-on-a-chip processor that is essentially an AI brain for self-driving cars. And Pegasus is the equivalent of two Xavier units, plus two next-generation discrete GPUs, Nvidia says. The new system was introduced at a GPU conference in Munich, Germany on Tuesday.

Nvidia also made two additional announcements at the conference: that it was partnering with Deutsche Post DHL Group and auto supplier ZF to deploy fully autonomous delivery trucks by 2019; and that it was offering early access to its virtual “Holodeck” technology to select designers and developers. (The Verge’s Adi Robertson wrote recently about the unlimited number of VR projects using “holodeck” terminology.)

theverge.com



From: Glenn Petersen 10/11/2017 6:40:43 AM
 
NVIDIA opens up its Holodeck VR design suite

Designers can model and interact with people, robots and objects in real time.

Steve Dent, @stevetdent
Engadget
October 10, 2017


Hardware makers have figured out that enterprises, not consumers, are the best way to make money off of VR and AR. NVIDIA, a company that does both things well but has been particularly strong on the business side lately, has just opened up its Holodeck "intelligent" VR platform to select designers and developers. First unveiled in May, it allows for photorealistic graphics, haptics, real-world physics and multi-user collaboration.

That helps engineers and designers build and interact with photorealistic people, objects and robots in a fully simulated environment. The idea is to get new hardware prototyped in as much detail as possible before building real-world models. It also allows manufacturers to start training personnel well before hardware is market-ready. For instance, NVIDIA showed how the engineers that built the Koenigsegg supercar could explore the car "at scale and in full visual fidelity" and consult in real time on design changes.

Holodeck is built on a bespoke version of Epic Games' Unreal Engine 4 and uses NVIDIA's VRWorks, DesignWorks and GameWorks. It requires some significant hardware -- a GeForce GTX 1080, Quadro P6000, GTX 1080 Ti or Titan Xp GPU -- but the firm says it will eventually lower the bar. It's not clear what kind of headsets are supported, but both of the major PC models (the HTC Vive and Oculus Rift) will likely work.

NVIDIA is already using its Holodeck as a way to train AI agents in its Isaac Simulator, a photorealistic machine-learning environment. With Holodeck, NVIDIA is taking on Microsoft and its HoloLens in the enterprise and design arena -- though the latter AR system is more about letting engineers interact with real and virtual objects at the same time. Another player in the simulation scene is Google with Glass Enterprise, a product aimed more at training and manufacturing than design.

None of this seems likely to help you game or be entertained, but there is a silver lining. Much of this very advanced tech is bound to trickle down to consumers, hopefully making VR and AR good enough to actually become popular.

engadget.com



From: Glenn Petersen 10/13/2017 11:02:00 PM
 
Nvidia Can Go to $250 on All the Data Center Opportunities, Says Needham

Nvidia's business in data centers has several avenues to tens of billions in revenue, including "inferencing," an emerging area of machine learning, but also selling chips to Uber and other "transportation as a service" companies, according to Rajvindra Gill of Needham & Co.

By Tiernan Ray
Barron's
Oct. 13, 2017 11:19 a.m. ET

Another day, another Nvidia (NVDA) price target increase, this one from Needham & Co.’s Rajvindra Gill, who reiterates a Buy rating and raises his price target to $250 from $200 after attending the company’s “GTC” conference in Munich, Germany, and coming away upbeat about the prospects for the company’s data center market.

Gill’s new target beats the $220 that RBC Capital’s Mitch Steves offered yesterday on his own enthusiasm for Nvidia’s markets.

Gill talked with Nvidia CEO Jen-Hsun Huang at the event, along with other attendees, and the discussion mostly “centered around the growth drivers in data center,” he writes.

Gill writes that the market could be worth $21 billion to $35 billion over five years, across three buckets.

One big area is the current “training” market in machine learning:

Nearly all the hyperscalers, cloud and server vendors (Google, Alibaba, Cisco, Huawei, AWS, Microsoft Azure, IBM, Lenovo, Tencent) along with several A.I. startups will train on GPUs in the cloud — both internally and for their customers.

Inference, acting on the results of training, is another, though “we are waiting to see evidence” of GPU uptake there, he writes:

The second major growth driver is inference. We estimate there are 20 million CPU nodes that will be accelerated over the next five years to support AI applications (live video on Internet, video surveillance cameras). At $500-$1,000 ASPs, we forecast the inference TAM at $10 billion to $20 billion.

And yet another part is spreading GPUs to new areas, including the “transportation as a service" companies such as Uber:

For example, Lyft or Uber could possibly deploy supercomputing GPUs to process the innumerable driving decisions needed to support AVs along with SQL databases being accelerated with AI-GPUs. Moreover, 15 of the top 500 supercomputers have GPUs. We believe over the next five years, 100% of those supercomputers will be accelerated. In a typical supercomputer node, we estimate NVDA receives $64k (8 GPUs X $8k each). This would translate to an HPC GPU TAM of ~$10BN.
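The dollar figures in these buckets are simple multiplication. The sketch below reproduces the quoted inputs and, as an added back-of-envelope step not taken from the note, derives how many accelerated HPC nodes a roughly $10 billion TAM would imply.

# Back-of-envelope reconstruction of the TAM arithmetic quoted above.
# Node counts, ASPs and per-node GPU content are Gill's figures as quoted;
# the implied HPC node count is derived here, not taken from the note.

# Inference bucket: 20 million CPU nodes accelerated at $500-$1,000 per node.
inference_nodes = 20_000_000
inference_tam_low = inference_nodes * 500      # $10 billion
inference_tam_high = inference_nodes * 1_000   # $20 billion

# HPC bucket: 8 GPUs per supercomputer node at roughly $8,000 each.
per_node_revenue = 8 * 8_000                   # $64,000 per node
hpc_tam = 10_000_000_000                       # ~$10 billion, per the note
implied_hpc_nodes = hpc_tam / per_node_revenue

print(f"Inference TAM: ${inference_tam_low/1e9:.0f}B to ${inference_tam_high/1e9:.0f}B")
print(f"HPC revenue per node: ${per_node_revenue:,}")
print(f"Accelerated nodes implied by a ~$10B HPC TAM: ~{implied_hpc_nodes:,.0f}")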

barrons.com



From: Glenn Petersen 11/1/2017 8:58:11 PM
 
NVDA reports on the 9th. A preview:

What to Watch When NVIDIA Reports Earnings

John Ballard, The Motley Fool
Motley Fool
October 31, 2017

NVIDIA (NASDAQ: NVDA) is scheduled to announce its fiscal third-quarter results on Thursday, Nov. 9, after the market closes. With the stock hitting new highs, investor optimism is high, which means NVIDIA has to deliver strong growth in key markets such as gaming and the data center to keep its momentum going.

Here's what investors need to know heading into the quarterly release.



Screenshot of Activision Blizzard's Destiny video game depicting a robotic character holding a gun against a city backdrop.
________________________________

Headline numbers

On a GAAP (unadjusted) basis, management expects revenue to grow about 17% year over year to $2.35 billion. Gross margin is expected to be slightly down year over year to 58.6%. In addition, operating expense is expected to be up year over year, which will cause earnings to grow at a slower rate than revenue.

There are a few reasons for the higher operating expense. The company is moving into a new headquarters building, and on top of that, NVIDIA normally has an increase in compensation expense in the third quarter. The graphics specialist is hiring more talent for growth initiatives in the areas of artificial intelligence, autonomous driving, and gaming, which will carry over to the third quarter.

Management doesn't provide specific guidance for earnings per share, but the average Wall Street analyst estimate is for $0.94 in earnings per share, which represents about 13% growth over last year's comparable quarter.
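As a quick sanity check (a back-of-envelope sketch using only the figures quoted above, not company disclosures), the stated growth rates imply the year-ago comparables directly:

# Back-of-envelope check of the guidance and consensus figures quoted above.
# Guided revenue (~17% growth to $2.35B) and consensus EPS ($0.94, ~13% growth)
# come from the article; the year-ago comparables are derived here.
guided_revenue = 2.35e9
revenue_growth = 0.17
consensus_eps = 0.94
eps_growth = 0.13

prior_year_revenue = guided_revenue / (1 + revenue_growth)
prior_year_eps = consensus_eps / (1 + eps_growth)

print(f"Implied year-ago revenue: ${prior_year_revenue / 1e9:.2f}B")   # about $2.01B
print(f"Implied year-ago EPS:     ${prior_year_eps:.2f}")              # about $0.83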

Gaming, data center, and cryptocurrency

The third-quarter earnings report will tell us a lot about whether NVIDIA is still in command in the gaming market. Despite Advanced Micro Devices' launch of its new Vega graphics processing units (GPUs), NVIDIA management believes its GeForce GPUs are in a "great strategic position" for the last half of the year. With several blockbuster games releasing this fall, there will be plenty of demand for new GPUs, so this will be a good quarter to see how well NVIDIA is holding up in an increasingly competitive market.

For perspective, NVIDIA started 2017 with a commanding lead in discrete graphics cards, with an estimated 72.5% share versus AMD's 27.5%. But in the second quarter, AMD picked up nearly two percentage points, reaching 29.4%. NVIDIA is still in pole position with 70.6%, so it will be important that NVIDIA continues to show strong growth in its gaming segment to hold its lead.

Gaming is NVIDIA's largest segment right now, but the company's biggest growth opportunity is in the data center. This has been a market where NVIDIA has consistently posted more than 100% year-over-year growth for several quarters in a row. The graphics specialist started shipping its new Volta GPUs for the data center last quarter, and it's in full production mode for the third quarter. With AMD announcing its own data center chips recently, it will be important for investors to watch how NVIDIA's data center segment performs, and also how management characterizes the demand it's seeing for Volta.

Investors should also listen for management's take on the cryptocurrency market, which generated just enough revenue to help NVIDIA outperform its own revenue guidance in the second quarter by about $250 million. On the last conference call, CEO Jen-Hsun Huang said: "As we go into this quarter, there's still cryptocurrency mining demand that we know is out there. And based on our analytics and understanding of the marketplace, there will be some amount of demand for the foreseeable future."

Huang expects more cryptocurrencies to be introduced over time, driving higher demand from cryptocurrency miners who use GPUs to mine new digital coins.

Foolish final thoughts

Expectations are high for NVIDIA. The stock trades at over 50 times trailing earnings and is up 175% over the past 12 months. The company has handily beaten analysts' estimates over the past several quarters, which has no doubt assisted those gains. For NVIDIA to continue hitting new highs, investors will want to see that the company's momentum is still strong in gaming and the data center -- its two biggest segments.

finance.yahoo.com



From: JakeStraw 11/10/2017 12:15:11 PM
1 Recommendation
 
Nvidia’s huge run continues as its data center business surges
techcrunch.com



From: Sr K 11/10/2017 9:58:39 PM
 
Nvidia raised its dividend by $0.01 to $0.15, a year after the last increase.



From: Glenn Petersen 11/17/2017 10:04:09 AM
4 Recommendations
 
This Man Is Leading an AI Revolution in Silicon Valley—And He’s Just Getting Started

By Andrew Nusca
6:30 AM EST

The cofounder and CEO of semiconductor and software maker Nvidia saw the future of computing more than a decade ago, and began developing products that could power the artificial intelligence era. Thanks to that vision, and relentless execution, his chipmaker today is perhaps the hottest company in Silicon Valley. And it may just be getting started.

Halfway through dinner at Evvia, a bustling Greek restaurant in downtown Palo Alto that Apple cofounder Steve Jobs used to frequent, Jen-Hsun “Jensen” Huang rolls up his shirtsleeve to show me his tattoo. It’s tribal in style, with thick curves extending across his shoulder cap. The black ink gleams in the warm glow of the restaurant’s low lights.

“So, I really want to extend it,” he says with a glint in his eye, gesturing along the length of his arm. “I actually kinda do. I would love to. But getting it really, really hurt. I was crying like a baby. My kids were with me, and they’re like, ‘Dad, you’ve gotta control yourself.’”

Huang’s two adult children, speakeasy proprietor Spencer and hospitality professional Madison, also have tattoos. But at 54, their father, cofounder and CEO of the red-hot Silicon Valley semiconductor and software company Nvidia, so far has only this one, an abstract version of the company’s logo. He got it about a decade ago.

“Every six months we have an off-site,” Huang says, leaning back in his chair to tell the story. “And at one, someone said, ‘What are we gonna do when the stock price hits $100?’ That was two splits ago. One person said they’d shave their head, or paint their hair blue, or get a mohawk, or something. And another said they’d get a nipple ring. And then by the time they come around to me, it was already at tattoo level. So I said, ‘Yeah all right, I’ll get a tattoo.’ And then the stock price hit $100.” He pauses and grimaces a little, remembering. “And it hurt so bad.”



Jen-Hsun “Jensen” Huang, photographed at Nvidia headquarters on Nov. 3, 2017.
Winni Wintermeyer
__________________________

Most Fortune 500 CEOs over 50 don’t have tattoos, let alone of the logos of the companies they run. But Huang, who was born in Taiwan, isn’t most Fortune 500 CEOs. For starters, he’s the rare cofounder still running his company 24 years later. He is both a trained electrical engineer (Oregon State; Stanford), and a formidable executive who leads employees with encouragement, inquiry, and often flurries of vacation emails. (Sent during his, not theirs.) And he is, according to many people in the industry, a visionary who foresaw a blossoming market for a new kind of computing early enough to reposition his company years in advance.

That vision and his company’s incredible financial performance make Huang the clear choice as Fortune’s Businessperson of the Year for 2017.

“Jensen is one of those rare individuals who combines incredible vision with ruthless focus on execution,” says Adobe CEO Shantanu Narayen. “Now with Nvidia’s focus on artificial intelligence, the opportunities for leadership are endless.”

“Jeff Bezos, Elon Musk—I put Jensen in that group,” says Todd Mostak, CEO of MapD, a San Francisco database company in which Nvidia has thrice invested.

If you haven’t heard of Nvidia, you can be forgiven. It doesn’t make a chat app or a search service or another kind of technology meant to appeal to the average smartphone-toting consumer. No, Nvidia makes the muscular mystery stuff that powers all of it. Its GPUs, or “graphics processing units,” crunch the complex calculations necessary for cryptocurrency markets, so-called deep neural networks, and the visual fireworks you see on the big screen. The same technology that makes brutally realistic shooter games come alive helps self-driving cars take an “S” curve without assistance—enabling computers to see, hear, understand, and learn.

Booming demand for its products has supercharged growth at Nvidia. Over the past three full fiscal years, it has increased sales by an average of 19% and profits by an astonishing 56% annually. In early November the company reported results that once again blew past Wall Street’s estimates, with earnings per share 24% higher than expected. In its past four quarters, it has generated total sales of $9 billion and profits of $2.6 billion.

Such results have made Huang’s company a darling of investors. Nvidia’s share price, just two years ago hovering around $30, was recently over $200. Its market capitalization, at about $130 billion, is approaching that of IBM.



Charts show NVIDIA stock price and breakdown of its revenues
Nic Rapp
________________________

Nvidia meanwhile has so far managed to retain its roughly 70% market share in GPUs despite competition from formidable rivals—among them Intel and AMD—who want their share of the billions in chip sales to come from this new tech revolution. “IBM dominated in the 1950s with the mainframe computer, Digital Equipment Corp. in the mid-1960s with the transition to mini-computers, Microsoft and Intel as PCs ramped, and finally Apple and Google as cellphones became ubiquitous,” wrote Jefferies equity analyst Mark Lipacis in a July note to clients. “We believe the next tectonic shift is happening now and Nvidia stands to benefit.”

Or as Jim Cramer, host of CNBC’s Mad Money, put it on air in November: “Nvidia is one of the great companies of our time.”

Battling with the world’s biggest tech companies for A.I. supremacy was far from Jensen Huang’s mind when he cofounded Nvidia with friends Chris Malachowsky and Curtis Priem in 1993. At the time, Malachowsky and Priem were engineers at Sun Microsystems, and Huang was a director at San Jose chipmaker LSI Logic. Malachowsky and Priem had lost a political battle within Sun over the direction of its technological development and were itching to leave. Huang, just 29 years old, was on firmer ground. The three men met at a Denny’s restaurant near Huang’s home to discuss what they believed was the proper direction for the next wave of computing: accelerated, or graphics-based, computing. Huang walked away from the meal with enough conviction to leave his position at LSI.

“We believed this model of computing could solve problems that general-purpose computing fundamentally couldn’t,” Huang says. “We also observed that video games were simultaneously one of the most computationally challenging problems and would have incredibly high sales volume. Those two conditions don’t happen very often. Video games was our killer app—a flywheel to reach large markets funding huge R&D to solve massive computational problems.”

With $40,000 in the bank, Nvidia was born. The company initially had no name. “We couldn’t think of one, so we named all of our files NV, as in ‘next version,’” Huang says. A need to incorporate the company prompted the cofounders to review all words with those two letters, leading them to “invidia,” the Latin word for “envy.” It stuck.

Nvidia’s early employees moved into an office in Sunnyvale, Calif., by the Lawrence Expressway. “It was a small office. We had lunch around a Ping-Pong table. We shared a bathroom with another company,” recalls Jeff Fisher, the company’s first salesman and currently an executive vice president. “The Wells Fargo bank that shared our parking lot got robbed two or three times.”

Nvidia’s first product, a multimedia card for personal computers called NV1, arrived in 1995 at a time when three-dimensional games began to gain traction. The card didn’t sell well, but the company kept tinkering with its technology over four more releases, gaining sales—and traction vs. rivals 3dfx, ATi, and S3—each time.

“We knew that in order for us to scale as a company, we had to provide more value than just a replaceable component in a PC,” says Fisher. “We had so much more value to add than just a commodity.”




A successful IPO on the Nasdaq in 1999 set in motion a flurry of milestones for Nvidia. That year it released the GeForce 256, billed as the world’s first GPU. In 2006 it introduced CUDA, a parallel computing architecture that allowed researchers to run extremely complex exercises on thousands of GPUs, taking the chips out of the sole realm of video games and making them accessible for all types of computing. In 2014, the company revived a failing bid for the smartphone business by repositioning those chips, called Tegra, for automotive use. Over time, these moves proved prescient, unlocking new revenue streams for Nvidia in industries such as defense, energy, finance, health care, manufacturing, and security.
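For readers who have never seen what CUDA actually changed, here is a minimal sketch (illustrative only, not Nvidia sample code) of the general-purpose parallelism it exposed, written in Python with the numba compiler for brevity (an assumption; CUDA itself is a C/C++ extension): thousands of GPU threads each handle one element of a large array.

# Illustrative sketch of the data-parallel model CUDA opened up beyond graphics:
# every GPU thread computes one element of a large array in parallel.
# Assumes the numba package and a CUDA-capable NVIDIA GPU are available.
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)        # this thread's global index
    if i < out.size:        # guard: the launch grid may be larger than the data
        out[i] = a * x[i] + y[i]

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](2.0, x, y, out)   # numba copies the arrays to and from the GPU
print(out[:5])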

“There were some rough years there,” says Rev Lebaredian, a Hollywood veteran who serves as vice president of Nvidia’s GameWorks and LightSpeed Studios units. “Look at our stock price, say, 10 years ago. The world didn’t quite realize what we were building. What we’re doing is foundational to humanity. This form of computing is too important for it not to be valuable.”

Key to Nvidia’s ability to endure years of market doubt, Lebaredian adds, is Huang—a leader with deep conviction in the potential of graphics technology and an ability to think in 10-year time horizons.

While Huang says he didn’t anticipate how self-driving cars would evolve or when A.I. would arrive, he had utter conviction in the superiority of graphical computing. So he invested to make sure his company was ready to capitalize on the opportunities created by a major shift in tech. “I’ve been talking about the same story for 15 years,” Huang tells me. “I’ve barely had to change my slides.”

Dozens of people patiently stand outside awaiting the grand opening of Endeavor, Nvidia’s massive new headquarters in Santa Clara, Calif. The 500,000-square-foot structure is nothing less than imposing. A triangular foil to Apple’s circular new headquarters six miles away—its shape is drawn from the building block of computer graphics, the triangle—Endeavor’s glassy facade rises up over the San Tomas Expressway like the bow of a starship coming into port.

Unofficially, Endeavor has been open for a month, allowing more than 2,000 employees to acclimate to its tree-house-like structure. (Staffers enter from an underground parking garage and ascend at its center.) Today some 8,000 people are expected to stream through the doors for an open house for employees and their families. There are stations prepped with lines of finger food and beverages. Face painters await an inevitable onslaught of children. The smell of sawdust and paint lingers in the halls.

Inside, triangles abound. Floor tiles, privacy screens, lobby couches, window decals, skylights, cafeteria counters, even cross braces for the structure itself—all in shapes with three points. In a continuation of the theme set by Endeavor’s name, the building is bursting with rooms nodding to science fiction: Altair IV, Skaro, Skynet, Vogsphere, Hoth, Mordor.

Huang doesn’t keep an office, preferring to move around the building nomad-like, setting up shop in a variety of conference rooms. When Fortune visits, he’s taken up temporary residence in one called Metropolis, after the 1927 silent film—but the CEO is not present. A container stuffed with Clif Bars rests at the center of the table. Rolls of blueprints lie across a chair to the side.

When I finally locate Huang, he is wearing his signature leather moto jacket and nibbling on breaded chicken strips from a cup as he strides across the sprawling cafeteria with at least two dozen employees and their families in tow. At Huang’s side are his wife, Lori, as well as his son and daughter, who flew in from Taipei and Paris, respectively, to surprise their father. The CEO is apparently in a bind. He is trying, but failing, to complete a design review of Endeavor that was scheduled before the doors opened. But he’s already inundated with guests seeking handshakes and selfies, and he can’t resist a single one.

Daughter Madison plays photographer as Huang moves to take a photo with a family of four. He takes one knee to get on the same level as their two kids. “You built this,” he says to the parents after the photo is taken, gesturing to the space around them. “Have a good time today.”

Huang will repeat a version of this exchange hundreds of times during the open house, sometimes with handshakes, sometimes with hugs. Indeed, over the course of four hours, the CEO sits only once, for a photo with a young girl who resists her mother’s calls for a smile. (In a fatherly feat, Huang manages to wring one out of her.) The line to greet him never subsides.

The spectacle is a vivid example of what many former and current Nvidia employees say is the company’s secret sauce: its culture. For a publicly traded technology company with more than 11,000 employees, Nvidia is surprisingly tight-knit. It’s a credit to the many long-serving staffers who remain at the company (badge numbers are issued in serial; the lower the number, the longer the tenure) and the business battles they’ve endured together. It’s also the product of a founder CEO who embraces community, strategic alignment, and a core value system that promotes the pursuit of excellence through intellectual honesty.

Rene Haas, a senior executive at British semiconductor design company ARM, recalls six-hour meetings where Nvidia’s general managers would offer the CEO status updates for their lines of business. If Huang didn’t like what he heard—a roadblock, a missed goal—he would move to solve the problem then and there. “A head of software, a mid-level engineer, it didn’t matter—he would call those people and bring them to the conference room and determine the root cause of the issue,” Haas says. “If something had to be reprioritized and rescheduled to get it back on track, he would do it in real time, and the rest of the meeting was aborted. It was incredibly liberating. And he would never do it in a way that was diminishing. It might feel like that at the beginning, but then you’d realize that he’s trying to expedite the process by getting the right people in the room.”

The scientific pursuit of truth resonates at all levels of the company, employees say, helping tamp down on the organizational politics that obstruct other companies’ progress.

As Huang explains it: “Nobody is the boss. The project is the boss.”

Nvidia’s CEO takes off his wire-rimmed glasses and rubs his bloodshot eyes, fatigued after hours of slapping backs and pumping palms. He plops down at a wooden table where his wife and two kids are seated as the last of the open house attendees exit the building. Staffers working the event begin to sweep up the area around him, picking up plastic cups, wiping surfaces, arranging chairs. His security guards stand alert.

Huang leans toward me and asks me to pose the questions I had intended to get to earlier, when he was still busy working the rope line. I ask him what he believes is the next major application of artificial intelligence technology—the next billion-dollar opportunity for Nvidia, category competitors like Intel and Qualcomm, and players like Google, Facebook, and Baidu.

“The thing that I believe is going to be really incredible that’s going to happen next is the ability for artificial intelligence to write artificial intelligence by itself,” he replies.

My eyes widen at the prospect as Huang continues. “In the future, companies will have an A.I. that is watching every single transaction—every business process—that is happening, all day long,” he says. “Certain transactions or patterns that are being repeated. The process could be very complicated. It could go through sales to engineering, supply chain, logistics, business operations, finance, customer service. And it could be observed that this pattern is happening all the time. As a result of this observation, the artificial intelligence software writes an artificial intelligence software to automate that business process. Because we won’t be able to do it. It’s too complicated.”

By now my head is spinning, lost in a bizarre vision that somehow combines the films Office Space, The Matrix, and Inception.

But Huang is still rolling. “We’re seeing early indications of it now,” he adds. “Generative adversarial networks, or GAN. I think over the next several years we’re going to see a lot of neural networks that develop neural networks. For the next couple of decades, the greatest contribution of A.I. is writing software that humans simply can’t write. Solving the unsolvable problems.”
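(For context on the technique Huang names, here is a minimal sketch of a generative adversarial network, a toy illustration assuming PyTorch and unrelated to any Nvidia product: a generator learns to produce samples while a discriminator learns to tell them from real data, and the two train against each other.)

# Toy sketch of a generative adversarial network (GAN): the generator G tries to
# produce samples that fool the discriminator D, while D learns to separate
# real samples from generated ones. Here the "real" data is just a 1-D Gaussian.
import torch
import torch.nn as nn

def real_data(n):
    return torch.randn(n, 1) * 0.5 + 2.0   # real samples ~ N(2.0, 0.5)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # noise -> sample
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # sample -> P(real)

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Train D to label real samples 1 and generated samples 0.
    real, fake = real_data(64), G(torch.randn(64, 8)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train G to make D label its samples as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())   # drifts toward ~2.0 as training succeeds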

Suddenly, a massive thud rips through the room, followed by the clatter of plastic cups. The space falls to a hush and Huang pauses, losing his train of thought. In one corner, two employees with overfilled arms had been precariously juggling the remnants of the wine and beer station. Gravity won.

“Lots and lots of perfectly good beer,” Huang says, breaking the silence. If only the humans in the room had detected the pattern; if only we were intelligent enough. “I felt that he was in an awkward position,” Huang says, hamming it up to giggles from the rest of his family. “My intelligence … I saw it coming. That’s why I was watching him. My eyes were getting bigger. It happened, exactly as I thought.”

It’s merely more evidence of Huang’s ability to see into the future.

A version of this article appears in the Dec. 1, 2017 issue of Fortune.

fortune.com



From: Glenn Petersen 11/20/2017 8:53:52 PM
1 Recommendation
 
China Challenges Nvidia's Hold on Artificial Intelligence Chips

Author: Tom Simonite
Wired
November 20, 2017



Nvidia has built a large, lucrative market supplying hardware to AI projects, propelling its stock-market value up 10-fold in the past three years.
Getty Images
________________________________________

In July, China’s government issued a sweeping new strategy with a striking aim: draw level with the US in artificial intelligence technology within three years, and become the world leader by 2030. A call for research projects from China’s Ministry of Science and Technology posted online last month fills in some detail on the government’s plans. And it puts Silicon Valley chipmaker Nvidia, the leading supplier of silicon for machine-learning projects, in the cross hairs.

The Ministry of Science and Technology document lays out 13 “transformative” technology projects where it wants to put government money in coming months, hoping for delivery by 2021. One is to invent new chips to run artificial neural networks, the form of software propelling the AI ambitions of Google and other tech companies.

One criterion for the project refers specifically to Nvidia: the ministry says it wants a chip that delivers performance and energy efficiency 20 times better than that of Nvidia’s M40 chip, branded as an “accelerator” for neural networks. Now two years old, the M40 is not Nvidia’s latest and greatest chip, but is still used in AI projects.

The Chinese government has targeted Nvidia before. An October call for research proposals from the National Development and Reform Commission included another request for high-powered AI chips. In August, an investment fund owned by China's State Development & Investment Corp. led a $100 million funding round in Cambricon, a Beijing AI chip startup. Cambricon announced two server chips early this month that might substitute for Nvidia chips in some AI projects if they live up to their billing.

Cambricon is part of a boom of Chinese companies and startups working on AI chips—mirroring one in the US that has seen startups and even Google look to challenge Nvidia. In October, Beijing’s Horizon Robotics, founded by veterans of search company Baidu, raised $100 million, and Deephi hauled in $40 million. Established gadget maker Huawei is collaborating with Cambricon on AI chips for phones and other devices.

Chinese officials and tech companies each have good reasons to target Nvidia, which has built a large, lucrative market supplying hardware to AI projects. The company’s stock-market value grew 10-fold in the past three years as more companies invested in AI. It has begun offering chips for robots, drones, and autonomous vehicles, and signed up partners like Toyota and Volvo.

On the government side, Chinese officials want a domestic supplier in part because of concerns about relying on foreign chips for military and other applications, says Elsa Kania, an adjunct fellow at think tank the Center for a New American Security.

The Chinese government faces many challenges in making its AI and hardware dreams come true. China produces more computer-science graduates and machine-learning research papers than the US. But the country still lags in the high-level expertise needed for advanced AI projects, says Kania.

China has struggled for years to make its chip industry more competitive with those from the US, Korea, and Japan. Attempts to foster alternatives to Intel and other US processors helped birth chips used in some of the country’s world-beating supercomputers, but not widely used alternatives for servers and PCs. Alibaba, China’s biggest cloud provider, relies on Intel and Nvidia chips, for example. “They may have aspirations, but when it comes to designing chips and building fabs to make them at scale the Chinese are still multiple generations behind,” says Paul Triolo, who tracks Chinese technology and related policy at Eurasia Group.

China’s chip initiatives have been hampered by wary US officials scrutinizing acquisitions of US semiconductor technology, who may soon turn warier. President Obama blocked a Chinese fund from acquiring a US chip firm in December 2016; President Trump quashed a similar deal in September. This month, a bipartisan group of lawmakers introduced bills to sharpen the teeth of the committee that advised on those decisions, in part motivated by China’s ambitions in chips and AI.

For now, few of China’s AI chip companies are directly attacking Nvidia’s core market selling chips for servers—despite the government’s apparent interest. Startups Horizon Robotics and Deephi, and the much larger Huawei, are instead focused on chips to bring AI functions such as understanding video to devices such as cars and cameras.

The healthy market for surveillance in China is one driver of that trend, says Chris Rowen, an investor who previously led Silicon Valley chip-design companies. Putting AI chips into cameras can make them capable of automatically spotting people, objects, or actions. Google recently touted the image-recognition ability of its Clips camera that boasts an AI chip, although it uses that capability to take candid family snapshots, not for surveillance.

Rowen says China’s chip startups are also eyeing the vast market potential of putting AI functions into home appliances, car parts, and other Chinese-made gadgets. “The way to make this technology proliferate is to knock down the costs,” Rowen says.

Despite the emerging trans-Pacific competition in AI, there are not clear battle lines between Chinese and American companies. State-backed Cambricon is licensing designs from Silicon Valley chip designer Arteris for the “backbone” of interconnects that move data around a chip, for example. Intel led the $100 million funding round into Horizon Robotics, which is working on AI chips for autonomous vehicles, even though it has products of its own in that market. And in September, Nvidia CEO Jensen Huang announced new deals with Chinese internet giants Alibaba, Baidu, and Tencent. “China’s AI applications are going to be running on US hardware for a while, that’s the reality at this point,” says Triolo.

wired.com
