
Technology Stocks: NVIDIA Corporation (NVDA)


From: Frank Sully 9/17/2021 8:33:35 PM
1 Recommendation   of 2485
 
842 Chips Per Second: 6.7 Billion Arm-Based Chips Produced in Q4 2020

By Anton Shilov February 13, 2021

Arm-based chips surpass x86, ARC, Power, and MIPS-powered chips, combined



(Image credit: Arm)

Being the most popular microprocessor architecture, Arm powers tens of billions of devices sold every year. The company says that in the fourth quarter of 2020 alone, the Arm ecosystem shipped a record 6.7 billion Arm-based chips, which works out to an amazing production rate of 842 chips per second. This means that Arm outsells all other popular CPU instruction set architectures — x86, ARC, Power, and MIPS — combined.
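
As a quick sanity check, dividing the quarterly shipment figure by the number of seconds in the fourth quarter reproduces that headline rate; a minimal sketch in Python (the 92-day quarter length is our own assumption for the arithmetic):

    # Rough check of Arm's "842 chips per second" claim for Q4 2020
    chips_shipped = 6.7e9               # Arm-based chips reported for the quarter
    days_in_q4 = 31 + 30 + 31           # October + November + December
    seconds_in_q4 = days_in_q4 * 24 * 60 * 60
    print(round(chips_shipped / seconds_in_q4))   # -> 843, in line with the ~842/sec figure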

6.7 Billion Arm Chips Per Quarter

Arm's Cortex-A, Cortex-R, Cortex-M, and Mali IP powers thousands of processors, controllers, microcontrollers, and graphics processing units from over 1,600 companies worldwide. As the world rapidly goes digital, demand for all types of chips is at an all-time high, giving Arm a great boost given the wide variety of applications its technologies are used in.

Arm says that as many as 842 chips featuring its IP were sold every second in the fourth quarter of 2020. Meanwhile, it is noteworthy that although Arm’s Cortex-A-series general-purpose processor cores get the most attention from the media (because they are used inside virtually all smartphones shipped these days), Arm’s most widely used cores are its Cortex-M products for microcontrollers that are virtually everywhere, from thermometers to spaceships. In Q4 alone, 4.4 billion low-power Cortex-M-based microcontrollers were sold.

"The record 6.7 billion Arm-based chip shipments we saw reported last quarter is testament to the incredible innovation of our partners: from technology inside the world’s number one supercomputer down to the tiniest ultra-low power devices," said Rene Haas, president of IP Products Group at Arm. "Looking ahead, we expect to see increased adoption of Arm IP as we signed a record 175 licenses in 2020, many of those signed by first-time Arm partners."

tomshardware.com



From: Frank Sully 9/18/2021 9:52:10 AM
   of 2485
 
Chip makers like Nvidia are set to soar as semiconductor sales reach $544 billion in 2021, Bank of America says

Carla Mozée

Sep. 18, 2021, 08:30 AM



Nvidia is a top stock pick for Bank of America.

Krystian Nawrocki/Getty Images
  • Bank of America on Friday raised its 2021 outlook for sales growth in the semiconductor industry to 24% from 21%.
  • Semiconductor companies have newfound pricing power in the ongoing global chip shortage.
  • Nvidia, ON Semiconductor and KLA-Tencor are among the investment bank's top stock picks in the sector.
Bank of America bumped up its sales outlook for the semiconductor industry as it sees growing demand for chips that make computers and cars run, and named Nvidia and auto chip supplier ON Semiconductor among its top stock picks heading into the final quarter of 2021.

The persistent global chip shortage that has dogged companies ranging from automakers to video game publishers to consumer electronics producers has contributed to strengthening sales for chip companies. BofA expects above-trend growth to last through next year and now projects total industry sales in 2021 to increase by 24% to $544 billion, up from its previous view for an increase of 21%.

"We remain firmly in the stronger-for-longer camp for semis given their critical role in the rapidly digitizing global economy and the newfound pricing power and supply discipline of this remarkably profitable industry operating with a very lean supply chain," said analysts led by Vivek Arya in a Friday research note.

BofA's semiconductor analysts outlined their playbook as investors head into the fourth quarter. The bank said that between 2010 and 2020, the fourth and first quarters were the two best quarters to own semiconductor stocks, with the PHLX Semiconductor Sector outperforming the benchmark S&P 500.

There are three hot spots in the industry: computing, which includes cloud services and AI, gaming and networking; cars; and capex, or capital spending by businesses and the government.

In the computing group, BofA raised its price target on Nvidia to $275 from $260 and said the graphics-cards maker is a top pick along with AMD and Marvell. In the car group, it increased its price target on top pick ON Semiconductor to $60 from $55.

The investment bank called KLA-Tencor its top pick in the capex segment and raised its price target by 6% to $450 from $425. The stock traded around $369 on Friday.

markets.businessinsider.com



From: Frank Sully 9/19/2021 10:20:54 AM
   of 2485
 
Latest Research From NVIDIA AI and Mass General Brigham Explains The Importance of Federated Learning in Medical AI and Other Industries

By
Amreen Bawa

September 18, 2021



Federated learning is a new way to train artificial intelligence models with data from multiple sources while maintaining anonymity. This removes many barriers and opens up the possibility for even more sharing in machine learning research.

The latest results, published in Nature Medicine, show that federated learning can build powerful AI models that generalize across healthcare institutions. While these findings come from healthcare, the approach could also play a significant role in energy, financial services, and manufacturing further down the line. Spurred by the pandemic, healthcare institutions took matters into their own hands and worked together, demonstrating that collaborating institutions in any industry can develop predictive AI models that set new standards for both accuracy and generalizability, two qualities that usually pull in opposite directions.

AI models face many data-related limitations. Data can be biased, and small organizations often lack enough examples or resources to build a representative training set. Even big datasets might not give the complete picture if they come from sources with differing demographics.

To build a robust, generalizable model, you need enough training examples. But in many cases, privacy regulations limit organizations' ability to share patient medical records or pool datasets on common supercomputers and cloud servers.

Federated learning addresses these problems. In the latest research, dubbed EXAM (for EMR CXR AI Model) and published in Nature Medicine by NVIDIA and Mass General Brigham, the researchers collected data from 20 hospitals across five continents. This data was used to train a neural network to predict how much supplemental oxygen a patient arriving at the emergency room with COVID-19 symptoms would need within 24-72 hours. According to the research team, this is one of the most extensive studies of federated learning to date.

With federated learning, the EXAM collaborators created a model that learned from every participating hospital's chest X-ray images and lab values without ever seeing the private patient data stored at each location. Each hospital trained a copy of the neural network on its own local GPUs, and only the resulting weight updates were sent back periodically to be aggregated into a single global version shared with all collaborating hospitals. It was like producing an exam answer key without sharing any of the study material used to develop the answers.
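
The article does not show the EXAM training code itself; purely to illustrate the federated-averaging idea it describes (local training at each site, with only weight updates leaving the building), here is a minimal sketch in Python with NumPy. The linear model, learning rate, and sample-count weighting are simplifying assumptions, not the actual EXAM implementation.

    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """One hospital trains on its private data; only the updated
        weights ever leave the site, never the raw records."""
        w = weights.copy()
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
            w -= lr * grad
        return w

    def federated_average(global_w, site_data):
        """Aggregate the site updates, weighting each site by its sample count."""
        updates, sizes = [], []
        for X, y in site_data:
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

    # Toy example: three "hospitals" holding differently sized private datasets
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    sites = []
    for n in (200, 50, 500):
        X = rng.normal(size=(n, 2))
        y = X @ true_w + rng.normal(scale=0.1, size=n)
        sites.append((X, y))

    w = np.zeros(2)
    for _ in range(20):                 # communication rounds
        w = federated_average(w, sites)
    print(w)                            # approaches [2, -1] without pooling any raw data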

The EXAM global model, shared with all participating sites, improved the AI model's average performance by 16%. Researchers also saw 38% greater generalizability compared with models trained at any single site.

Paper: nature.com

Source: blogs.nvidia.com

Related Paper: nature.com

marktechpost.com




From: Frank Sully 9/20/2021 4:57:54 PM
   of 2485
 
Pawsey Offers 1st Look at ‘Setonix’ $48M HPE Cray EX Supercomputer

September 20, 2021

by staff



Setonix supercomputer

Perth, Australia — The first phase of what the Pawsey Supercomputing Centre said will be the fastest public research supercomputer in the Southern Hemisphere has been unveiled at its new home at the Pawsey Centre in Western Australia, “resplendent in artwork that reflects the skies it will help researchers to unlock,” Pawsey said.

Stage 1 of the new $48 million HPE Cray EX supercomputer known as Setonix – the scientific name for Western Australia’s cheerful marsupial, the quokka – now stands in the white space at the Pawsey Centre next to its supercomputer cousins, Magnus and Galaxy.

Stage 1 delivery of Setonix will increase the computing power of the centre by 45 per cent and when fully operational, Setonix will be up to 30 times more powerful than Pawsey’s existing two systems combined.

It promises a potential peak of 50 petaFLOPS of compute power, making it the fastest research supercomputer in Australia and across the Southern Hemisphere.

Pawsey Centre Executive Director Mark Stickells says through Setonix, Australia is on the cusp of the biggest computing power advance in the nation’s history.

“This new system will accelerate discovery and enable more universities, scientific institutions and researchers — as well as our partners in industry and business — to access next-generation computing speed and data analysis,” he said.

“Setonix marks a step change in Pawsey’s supercomputing firepower, and this additional capacity will allow more researchers and industries to access next-generation computing speed and data analysis.”

Pawsey currently supports the work of more than 2600 Australian and international researchers from its Perth facility but expects to be able to support more projects and contribute additional space to the national merit pool thanks to additional capacity.

Stage 1 will see a team of early adopter researchers run code to fine-tune Setonix before it becomes available for 2022 allocations early in the year.

Stage 2 will be delivered by mid-2022 and will be operational in the second half of the year.

When fully deployed, Setonix will include an eight-cabinet HPE Cray EX system built on the same architecture used in world-leading exascale supercomputer projects.

It will include more than 200,000 AMD compute cores across 1600 nodes, over 750 next-generation AMD GPUs, and more than 548 TB of CPU and GPU RAM, connected by HPE’s Slingshot interconnect.

It will eventually deliver at least 200Gb/sec of bandwidth into every compute node, and 800Gb/sec into the GPU nodes, as well as interoperability with the Pawsey Centre ethernet networks.

The first look at Setonix reveals cabinets that continue the theme of Indigenous art casing that began with Magnus. Wajarri Yamatji visual artist Margaret Whitehurst produced the artwork for Setonix, inspired by the stars that shine over Wajarri country in Western Australia’s Mid-West.

“Margaret’s design is a beautiful representation of a tradition of Aboriginal astronomy that dates back thousands of years,” Stickells says. “Margaret and the Wajarri people are the traditional owners of CSIRO’s Murchison Radio-astronomy Observatory in Western Australia where one part of the world’s largest radio astronomy observatory, the Square Kilometre Array, will be built.

“Setonix will process vast amounts of radio telescope data from SKA-related projects, and many other projects of national and international significance that we are proud to support.”

insidehpc.com



From: Frank Sully 9/20/2021 8:16:33 PM
   of 2485
 
WHERE CHINA’S LONG ROAD TO DATACENTER COMPUTE INDEPENDENCE LEADS

September 20, 2021 Timothy Prickett Morgan



The Sunway TaihuLight machine has a peak performance of 125.4 petaflops across 10,649,600 cores. It sports 1.31 petabytes of main memory. To put the peak performance figure in some context, recall that the top supercomputer until this announcement had been Tianhe-2, with 33.86 petaflops of peak capability. One key difference, other than the clear peak potential, is that TaihuLight came out of the gate with demonstrated high performance on real-world applications, some of which are able to utilize over 8 million of the machine’s 10 million-plus cores.

While we are big fans of laissez faire capitalism like that of the United States and sometimes Europe – right up to the point where monopolies naturally form and therefore competition essentially stops, and thus monopolists need to be regulated in some fashion to promote the common good as well as their own profits – we also see the benefits that accrue from a command economy like that which China has built over the past four decades.

A recently rumored announcement of a GPU designed by Chinese chip maker Jingjia Micro and presumably etched by Semiconductor Manufacturing International Corp (SMIC), the indigenous foundry in China that is playing catch up to Taiwan Semiconductor Manufacturing Co, Intel, GlobalFoundries, and Samsung Semiconductor, got us to thinking about this and what it might mean when – and if – China ever reaches datacenter compute independence.

Taking Steps Five Years At A Time

While China has been successful in many areas, particularly in becoming the manufacturing center of the world, it has not been particularly successful in achieving independence in datacenter compute. Some of that has to do with the immaturity of its chip foundry business, and some of it has to do with its lack of experience in making big, wonking, complex CPU and GPU designs that can take on the big loads in the datacenter. China has a bit of a chicken and egg problem here, and as usual, the smartphone and tablet markets are giving the Middle Kingdom’s chip designers and foundries the experience they need to take it up another notch and take on the datacenter.

The motivations for China to achieve chip independence are certainly there, given the current supply chain issues in semiconductors as well as the messy geopolitical situation between China and the United States, which draws in Taiwan, South Korea, Japan, and Europe as well. Like every other country on Earth, China has an imbalance between semiconductor production and semiconductor consumption, and that is partly a function of the immense amount of electronics and computer manufacturing that has been moved to China over the past two decades.

According to Dauxe Consulting, which provides research into the Chinese market, back in 2003 China consumed about 18.5 percent of semiconductors (that’s revenue, not shipments), which was a little bit less than the Americas (19.4 percent), Europe (19.4 percent), or Japan (23.4 percent). SMIC was only founded in 2000 and had negligible semiconductor shipment revenue at the time. Fast forward to 2019, which is the last year for which data is publicly available, and China’s chip manufacturing accounts for about 30 percent of chip revenues in the aggregate, but the chips that Chinese companies buy to build stuff account for over 60 percent of semiconductor consumption (which is revenues going to SMIC as well as all of the other foundries, big and small, around the world). This is a huge imbalance, and it is not surprising that the Chinese government wants to achieve chip independence.

There may be strong political and economic reasons to pursue it, but Chinese chip independence might mean China’s reach outside of its own markets diminishes in proportion to how much it can take care of its own business. China can compel its own state, regional, and national governments as well as state-controlled businesses to Buy China, but it can’t do that outside of its political borders. It can make companies and governments in Africa and South America offers they probably won’t refuse. It will be a harder sell indeed in the United States and Europe and their cultural and economic satellites.

More about that in a moment.

Let’s start our Chinese datacenter compute overview with that GPU chip from Jingjia Micro that we heard about last week, because it illustrates the problem China has. We backed through all of the stories and found that a site called MyDrivers is the originator of the story, as far as we can see, and that it has this table nicked from Jingjia Micro to show how the JM9 series of GPUs stacks up against the Nvidia GeForce GTX 1050 and GTX 1080 GPUs that debuted in late 2015 and that started shipping in 2016 in volume:

There are two of these JM9 series GPUs from Jingjia, and on paper they are equal or superior to their Nvidia counterparts. The top-end JM9271 is the interesting one as far as we are concerned because it has a PCI-Express 4.0 interface and, thanks to 16 GB of HBM2 stacked memory, twice the capacity of the GTX 1080. At 512 GB/sec it has 60 percent more memory bandwidth, while burning 11.1 percent more power and delivering 9.8 percent lower performance at 8 teraflops of FP32 single precision.
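
Those percentages are consistent with the commonly cited GeForce GTX 1080 reference specs (8 GB of GDDR5X, 320 GB/sec of bandwidth, a 180 watt board power, and roughly 8.9 teraflops of FP32); a quick check in Python, with the GTX 1080 figures and the JM9271's 200 watt board power taken as assumptions rather than read from Jingjia's table:

    # JM9271 vs. GeForce GTX 1080; the GTX 1080 numbers are commonly cited
    # reference specs, and the JM9271's 200 W figure is inferred from the
    # 11.1% delta quoted above, so treat both as assumptions
    jm9271  = {"mem_gb": 16, "bw_gbs": 512, "power_w": 200, "fp32_tf": 8.0}
    gtx1080 = {"mem_gb": 8,  "bw_gbs": 320, "power_w": 180, "fp32_tf": 8.87}

    print(jm9271["mem_gb"] / gtx1080["mem_gb"])        # 2.0x the memory capacity
    print(jm9271["bw_gbs"] / gtx1080["bw_gbs"] - 1)    # ~0.60, i.e. 60% more bandwidth
    print(jm9271["power_w"] / gtx1080["power_w"] - 1)  # ~0.111, i.e. 11.1% more power
    print(1 - jm9271["fp32_tf"] / gtx1080["fp32_tf"])  # ~0.098, i.e. 9.8% lower FP32 rate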

This Jingjia card is puny compared to the top-of-the-line “Ampere” GA100 GPU engine from Nvidia, which runs at 1.41 GHz, has 40 GB or 80 GB of HBM2E stacked memory, and delivers 19.49 teraflops at single precision. The cheaper Ampere GA102 processor used in the GeForce RTX 3090 gamer GPU (as well as the slower RTX 3080) runs at 1.71 GHz, has 24 GB of GDDR6X memory, delivers an incredible 35.69 teraflops at FP32 precision, and has ray tracing accelerators that can also be used to boost machine learning inference. The Ampere A100 and RTX 3090 devices burn 400 watts and 350 watts, respectively, because the laws of physics must be obeyed. If you want to run faster these days, you also have to run hotter because Moore’s Law transistor shrinks are harder to come by.

Architecturally speaking, the JM9 series is about five years behind Nvidia, with the exception of the HBM memory and the PCI-Express 4.0 interface. The chip is implemented in SMIC’s 28 nanometer processes, which is not even close to the 14 nanometer process that SMIC has working or its follow-on, which is akin to TSMC’s 10 nanometer node and Samsung’s 8 nanometer node (the latter process being used to make the Ampere RTX GPUs). Jingjia is hanging back, getting its architecture out there and tested before it jumps to a process shrink. TSMC has had 28 nanometer in the field for a decade now.

This is not even close to China’s best effort. Tianshu Zhixin is working on a 7 nanometer GPU accelerator called “Big Island” that looks to be etched by TSMC and including its CoWoS packaging (the same one used by Nvidia for its GPU accelerator cards). The Big Island GPU is aimed squarely at HPC and AI acceleration in the datacenter, not gaming, and it will absolutely be competitive if the reports (on very thin data and a lot of big talk it looks like) pan out. Another company called Biren Technology is working on its own GPU accelerator for the datacenter, and thin reports out of China say the Biren chip, etched using TSMC 7 nanometer processes, will compete with Nvidia’s next-gen “Hopper” GPUs. We shall see when Biren ships its GPU next year.

We are skeptical of such claims, and reasonably so, if you look at the plan for the “Godson” family of MIPS-derived and X86-emulating processors that were created by the Institute of Computing Technology at the Chinese Academy of Sciences. (You know CAS, they are the largest shareholder in Chinese IT gear maker Lenovo.) We reported with great interest on the Godson processors (also known by the synonymous name Loongson) and the roadmap to span them from handhelds to supercomputers way back in February 2011. These processors made their way into the Dawning 6000 supercomputers made by Sugon, but as far as we know they did not really get any of the traction that Sugon had hoped for in the datacenter.

It remains to be seen if the Loongson 3A5000 clone of the AMD Epyc processor, which is derived from the four-core Ryzen chiplet used in the “Naples” Epyc processor from 2017 and which is said to have its own “in-house” GS464V microarchitecture (oh, give me a break. . . .), will do better in the broader Chinese datacenter market. With the licensing limited to the original Zen 1 cores and the four-core chiplets, the AMD-China joint venture, called Tianjin Haiguang Advanced Technology Investment Co, has the Chinese Academy of Sciences as a big (but not majority) shareholder, and it is expected that a variant of this processor will be at the heart of at least one of China’s exascale HPC systems.

By the way, the old VIA Technologies (the third company with an X86 license) has partnered with the Shanghai Municipal Government to create the Zhaoxin Semiconductor partnership, which makes client devices based on the X86 architecture. Zhaoxin could be tapped to make a big, bad X86 processor at some point. Why not?

Thanks to being blacklisted by the US government, Huawei Technologies, one of the dominant IT equipment suppliers on Earth, has every motivation to help create an indigenous and healthy market for CPUs, GPUs, and other kinds of ASICs in China, and has a good footing with the design efforts of its arm’s length (pun intended) fabless semiconductor division, HiSilicon. The HiSilicon Kunpeng CPUs and Kirin GPUs hew pretty close to the Arm Holdings roadmaps, which is fine, and there is every reason to believe that if properly motivated – meaning enough money is thrown at it and China takes an attitude that it is going to be very aggressive with Huawei sales outside of the United States and Europe – it could do more custom CPUs and even GPUs. It could acquire Jingjia, Tianshu Zhixin, or Biren, for that matter.

For a while there, it looked like Suzhou PowerCore, a revamped PowerPC re-implementer that joined IBM’s OpenPower Consortium and that delivered a variant of the Power8 processor for the Chinese market, might try to extend into the Power9 and Power10 eras with its own Power chip designs. But that does not seem to have happened, or if it did, it is being done secretly.

The future Sunway exascale supercomputer at the National Supercomputing Center in Wuxi is one of the three exascale systems being funded by the Chinese government. It has a custom processor, a kicker to the SW26010 processor used in the original Sunway TaihuLight supercomputer, which also dates from 2016. The SW26010 had 260 cores, 256 of them skinny cores for doing math and four of them fat cores for managing the data that feeds those cores, and we think that the Sunway exascale machine won’t have a big architectural change, but will have some tweaks, add more compute element blocks to the die, and ride the die shrink down to reach exascale. The SW26010 and its kicker, which we have jokingly called the SW52020 because it has double of everything, mix architectural elements of CPUs and math accelerators, much as Fujitsu’s A64FX Arm chips do. The A64FX is used in the “Fugaku” pre-exascale supercomputer at the RIKEN lab in Japan. Hewlett Packard Enterprise is reselling the A64FX in Apollo supercomputer clusters, but as far as we know, no one is reselling the SW26010 in any commercial machines.

Arm server chip maker Phytium made a lot of noise back in 2016 with its four-core “Earth” and 64-core “Mars” Arm server chips, but almost immediately went mostly dark thanks to the trade war between the US and China that really got going in 2018.

The most successful indigenous accelerator to be developed and manufactured in China is the Matrix2000 DSP accelerator used at the National Super Computer Center in Guangzhou. That Matrix2000 chip, which uses DSPs to do single-precision and double-precision math acceleration in an offload model from CPU hosts, just like GPUs and FPGAs, was created because Intel’s “Knights” many-core X86 accelerators were blocked for sale to China back in 2013 for supercomputers. The Matrix2000 DSP engines, along with the proprietary TH-Express 2+ interconnect, were deployed in the Tianhe-2A supercomputer with 4.8 teraflops of oomph each at FP32 single precision. That was back in 2015, mind you, when the GTX 1080 was being unveiled by Nvidia, for comparison.

As far as we know, these Matrix2000 DSP engines were not commercialized beyond this system and the upcoming Tianhe-3 exascale system, which will use a 64-core Phytium 2000+ CPU and a Matrix2000+ DSP accelerator. One-off or two-off compute engines are interesting, of course, but they don’t change the world except inasmuch as they show what can be done with a particular technology. But the real point is to bring such compute engines to the masses, thereby lowering their unit costs as volumes increase.

And China surely has masses. A lot of Chinese organizations, both in government and in industry, still have free will when it comes to architectures, but that could change. China could whittle down the choices for datacenter compute to a few architectures, all of them homegrown and all of them isolated from the rest of the world. It has enough money – and enough market of its own – to do that.

nextplatform.com



From: Frank Sully 9/21/2021 11:32:00 AM
   of 2485
 
3 Top Artificial Intelligence Stocks to Buy in September

Nvidia, Palantir, and Salesforce are all solid AI stocks.



Leo Sun

Sep 21, 2021 at 8:45AM

Key Points
  • Nvidia’s data center GPU sales will surge as the world’s software platforms process more machine learning and AI tasks.
  • Palantir’s margin expansion rate suggests its platforms have impressive pricing power.
  • Salesforce expects its annual revenue to more than double in five years as companies automate.
Many investors might think of sentient robots when tech pundits discuss the booming artificial intelligence (AI) market. However, intelligent robots only represent a tiny sliver of a worldwide AI market that is projected to grow at a compound annual growth rate (CAGR) of 35.6% from 2021 to 2026, according to Facts and Factors.
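
To put that growth rate in perspective, a 35.6% compound annual growth rate sustained from 2021 to 2026 implies the market grows to roughly 4.6 times its 2021 size; a one-line check in Python:

    # A 35.6% CAGR compounded over the five years from 2021 to 2026
    cagr, years = 0.356, 5
    print((1 + cagr) ** years)   # ~4.58x the 2021 market size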

A large portion of that market actually revolves around algorithms and software platforms that help companies make data-driven decisions, automate repetitive tasks, streamline their operations, and cut costs.

Let's examine three top AI stocks that will benefit from the market's expansion.



IMAGE SOURCE: GETTY IMAGES.

1. Nvidia

Nvidia ( NASDAQ:NVDA) is the world's top producer of discrete GPUs. It controlled 83% of the market in the second quarter of 2021, according to JPR, while its rival Advanced Micro Devices controlled the remaining 17%.

Nvidia's discrete GPUs are usually associated with high-end PC gaming, but it also sells high-end GPUs for data centers that process AI and machine learning tasks more efficiently than stand-alone CPUs.

Nvidia's main data center products include its A100 Tensor Core GPU and the DGX A100 AI system, which bundles together eight A100 GPUs. All three of the public cloud leaders -- Amazon, Microsoft, and Alphabet's Google -- currently use Nvidia's A100 GPUs to power some of their AI services.

Nvidia also acquired the data center networking equipment maker Mellanox last April to further strengthen that core business. Nvidia's data center revenue surged 124% to $6.7 billion, or 40% of its top line, in fiscal 2021 (which ended in January). Its total revenue rose 53% to $16.7 billion.

Analysts expect Nvidia's revenue and earnings to rise 54% and 65%, respectively, this year, as it sells more gaming and data center GPUs. It faces some near-term headwinds with the ongoing chip shortage and its delayed takeover of Arm Holdings, but its stock still looks reasonably valued at 47 times forward earnings.

2. Palantir

Palantir ( NYSE:PLTR) is a data mining and analytics company that operates two main platforms: Gotham for government agencies and Foundry for large enterprise customers.



IMAGE SOURCE: GETTY IMAGES.

Palantir's platforms collect data from disparate sources, process it with AI algorithms, and help organizations make informed decisions. The U.S. military uses Gotham to plan missions, while the CIA -- one of Palantir's earliest investors -- uses it to gather intel. Palantir leverages that hardened reputation to attract big enterprise customers like BP and Rio Tinto to its Foundry platform.

Palantir's revenue rose 47% to $1.1 billion in 2020, and it expects its revenue to grow at least 30% annually from 2021 to 2025. That ambitious forecast suggests it will generate more than $4 billion in revenue in 2025. Palantir isn't profitable yet, but its adjusted gross and operating margins are expanding and suggest its platforms still have impressive pricing power.

Palantir's stock isn't cheap at 37 times this year's sales, but its ambitious growth targets and its ultimate goal of becoming the "default operating system for data across the U.S. government" make it a top AI stock to buy.

3. Salesforce

Salesforce ( NYSE:CRM) is the world's largest cloud-based customer relationship management (CRM) service provider. It also provides cloud-based e-commerce, marketing, and analytics services.

Salesforce's services help companies manage their sales teams and customer relationships more efficiently, automate tasks, and reduce their overall dependence on on-site human employees. It unites all those platforms with its data visualization platform Tableau, its newly acquired enterprise communication platform Slack, and its AI-powered Einstein assistant.

Salesforce's revenue rose 24% to $21.25 billion in fiscal 2021 (which ended this January), and it expects to more than double its annual revenue to over $50 billion by fiscal 2026. It expects that growth to be buoyed by the secular expansion of all five of its main end markets -- sales, service, marketing & commerce, platform, and analytics & integration.

That's an impressive forecast for a stock that trades at 59 times forward earnings and less than 10 times this year's sales. A few concerns about Salesforce's $27.7 billion takeover of Slack have been depressing the stock's valuations lately, but it's still well-poised to profit from a growing need for cloud-based CRM services and other AI-powered data-crunching tools.

fool.com



From: Frank Sully 9/22/2021 11:20:20 AM
   of 2485
 
NVIDIA CEO Jensen Huang Special Address | NVIDIA Cambridge-1 Inauguration




From: Frank Sully 9/22/2021 11:21:14 AM
   of 2485
 
NVIDIA Calls UK AI Strategy “Important Step,” Will Open Cambridge-1 Supercomputer to UK Healthcare Startups

Sept. 22, 2021 — NVIDIA today called the U.K. government’s launch of its AI Strategy an important step forward, and announced a program to open the Cambridge-1 supercomputer to U.K. healthcare startups.

David Hogan, vice president of Enterprise EMEA at NVIDIA, said, “Today is an important step in furthering the U.K.’s strategic advantage as a global leader in AI. NVIDIA is proud to support the U.K.’s AI ecosystem with Cambridge-1, the country’s most powerful supercomputer, and our Inception program that includes more than 500 of the U.K.’s most dynamic AI startups.”

During a talk at the Wired Health: Tech conference today, Kimberly Powell, NVIDIA’s vice president of Healthcare, will also announce the next phase for Cambridge-1, in which U.K.-based startups will be able to submit applications to harness the system’s capabilities.

“AI and digital biology are reshaping the drug discovery process, and startups are by definition on the bleeding edge of innovation,” she said. “Cambridge-1 is the modern instrument for science and we look forward to opening the possibilities for discovery even wider to the U.K. startup ecosystem.”

Powell will also describe work underway with U.K. biotech company and NVIDIA Inception member Peptone, which will have access to Cambridge-1. Peptone is developing a protein engineering system that blends generative AI models and computational molecular physics to discover therapies to fight inflammatory diseases like COPD, psoriasis and asthma.

“Access to the compute power of Cambridge-1 will be a game-changer in our effort to fuse computation with laboratory experiments to change the way protein drugs are engineered,” said Dr. Kamil Tamiola, Peptone CEO and founder. “We plan to use Cambridge-1 to vastly improve the design of antibodies to help treat numerous inflammatory diseases.”

NVIDIA anticipates that giving U.K. startups the opportunity to use Cambridge-1 will accelerate their work, enabling them to bring innovative products and services to market faster, as well as ensure that the U.K. remains a compelling location in which to develop and scale up their businesses.

Startups that are selected for the new program will not only gain access to Cambridge-1. They will also be invited to meet with the system’s founding partners to amplify collaboration potential, and access membership benefits of NVIDIA Inception, a global program designed to nurture startups, which has more U.K. startups as members than from any other country in Europe.

Founding partners of Cambridge-1 are: AstraZeneca, GSK, Guy’s and St Thomas’ NHS Foundation Trust, King’s College London, and Oxford Nanopore Technologies.

NVIDIA Inception provides startups with critical go-to-market support, training, and technology. Benefits include access to hands-on, cloud-based training through the NVIDIA Deep Learning Institute, preferred pricing on hardware, invitations to exclusive networking events, opportunities to engage with venture capital partners and more. Startups in NVIDIA Inception remain supported throughout their entire life cycle, helping them accelerate both platform development and time to market.

Startup applications can be submitted here before December 30 at midnight GMT, with the announcement of those selected expected early in 2022.

About Cambridge-1

Launched in July 2021, Cambridge-1 is the U.K.’s most powerful supercomputer. It is the first NVIDIA supercomputer designed and built for external research access. NVIDIA will collaborate with researchers to make much of this work available to the greater scientific community.

Featuring 80 DGX A100 systems integrating NVIDIA A100 GPUs, BlueField-2 DPUs and NVIDIA HDR InfiniBand networking, Cambridge-1 is an NVIDIA DGX SuperPOD that delivers more than 400 petaflops of AI performance and 8 petaflops of Linpack performance. The system is located at a facility operated by NVIDIA partner Kao Data.
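
Those headline figures hang together: 80 DGX A100 systems at eight A100 GPUs each is 640 GPUs, and 400 petaflops of AI performance spread across them works out to about 625 teraflops per GPU, close to the A100's commonly quoted peak FP16 tensor-core throughput with sparsity (624 teraflops). A back-of-envelope sketch in Python; the per-GPU reading is our own arithmetic, not a figure from the announcement:

    # Back-of-envelope on the Cambridge-1 figures quoted above
    dgx_systems = 80
    gpus_per_dgx = 8           # each DGX A100 integrates eight A100 GPUs
    ai_petaflops = 400

    total_gpus = dgx_systems * gpus_per_dgx             # 640 GPUs
    per_gpu_tflops = ai_petaflops * 1000 / total_gpus   # 625 TFLOPS per GPU
    print(total_gpus, per_gpu_tflops)
    # ~625 TFLOPS/GPU is close to the A100's 624 TFLOPS peak FP16 tensor rate with sparsity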

Cambridge-1 is the first supercomputer NVIDIA has dedicated to advancing industry-specific research in the U.K. The company also intends to build an AI Center for Excellence in Cambridge featuring a new Arm-based supercomputer, which will support more industries across the country.

Cambridge-1 was launched with five founding partners: AstraZeneca, GSK, Guy’s and St Thomas’ NHS Foundation Trust, King’s College London, and Oxford Nanopore.

About NVIDIA

NVIDIA’s invention of the GPU in 1999 sparked the growth of the PC gaming market and has redefined modern computer graphics, high performance computing and artificial intelligence. The company’s pioneering work in accelerated computing and AI is reshaping trillion-dollar industries, such as transportation, healthcare and manufacturing, and fueling the growth of many others. More information at https://nvidianews.nvidia.com.

hpcwire.com



From: Frank Sully 9/22/2021 12:30:20 PM
   of 2485
 
This Catalyst Could Give Nvidia Stock a Big Boost

The graphics card specialist is making waves in a lucrative market.



Harsh Chauhan

Key Points
  • Cloud gaming adoption is increasing at a terrific pace.
  • Nvidia has already made a solid dent in the cloud gaming space with the GeForce NOW service.
  • GeForce NOW's increased coverage, affordable pricing, and big game library will be tailwinds for Nvidia in this market.
The video gaming industry has been a big catalyst for Nvidia ( NASDAQ:NVDA) in recent years, helping the company clock terrific revenue and earnings growth and boosting its stock price as gamers have lapped up its powerful graphics cards to elevate their gaming experience.

The good news for Nvidia investors is that graphics card demand is going to boom in the coming years, and the company is in a solid position to take advantage of that thanks to its dominant market share. However, there is an additional catalyst that could give Nvidia's video gaming business a big shot in the arm over the next few years: cloud gaming. Let's take a closer look at the cloud gaming market, and check how Nvidia is looking to make the most of this multibillion-dollar opportunity.



NVDA DATA BY YCHARTS

Cloud gaming adoption is growing rapidly

Newzoo, a provider of market research and analytics for video gaming and esports, estimates that the global cloud gaming market is on track to generate $1.6 billion in revenue this year, with the number of paying users jumping to 23.7 million. That may not look like a big deal for Nvidia right now given that it has generated nearly $22 billion in revenue in the trailing 12 months. However, the pace at which the cloud gaming market is growing means that it could soon reach a point where it moves the needle in a big way for Nvidia.

Newzoo estimates that the cloud gaming market could hit $6.5 billion in revenue by 2024, growing more than four-fold compared to this year's estimated revenue. The research firm also points out that the addressable user market for cloud gaming could be as big as 165 million by the end of 2021, indicating that there are millions of users out there that could buy cloud gaming subscriptions.

In fact, Newzoo points out that 94% of the gamers it surveyed have either tried cloud gaming already or are willing to try it, which means that the market could quickly expand. Nvidia is becoming a dominant player in the cloud gaming space, which could add billions of dollars to its revenue in the long run.



IMAGE SOURCE: GETTY IMAGES

Nvidia is pulling the right strings to tap this massive opportunity

Nvidia pointed out in March this year that its GeForce NOW cloud gaming service was nearing 10 million members. This is impressive considering that the service was launched in February 2020 with a subscription costing $5 per month. The company is now offering a premium subscription service priced at $9.99 per month or $99.99 a year.

The introductory $5-a-month subscription will remain available to members who were already on that plan before the new Priority membership was rolled out. This effectively means that the new GeForce NOW customers will increase Nvidia's revenue per user from the cloud gaming business. It wouldn't be surprising to see the service gain traction among gamers because of the benefits on offer.

The premium subscription will give gamers access to ray-tracing-enabled games, as well as its deep learning super sampling (DLSS) feature that upscales selected games to a higher resolution for a more immersive experience. What's more, Nvidia has a library of 1,000 PC (personal computer) games on the GeForce NOW platform, giving gamers a wide range of titles to choose from.

It is also worth noting that Nvidia is rapidly opening new data centers and upgrading the capacity of existing ones to capture more of the cloud gaming market. The company has 27 data centers that enable GeForce NOW in 75 countries.

Another important insight worth noting is that 65% of Nvidia's 10 million GeForce NOW members play games on underpowered PCs or Chromebooks. Those users wouldn't have been able to run resource-hungry games without Nvidia's data centers, which do the heavy lifting and transmit the gameplay to users' screens. Nvidia says that 80% of the gaming sessions on GeForce NOW take place on devices that wouldn't have been able to run those games locally because of weak hardware or incompatibility.

This explains why the demand for cloud gaming has spiked substantially -- consumers need not invest in expensive hardware, nor do they need to buy game titles separately. They can simply buy subscriptions from Nvidia and choose from over a thousand games that the GeForce NOW library provides.

More importantly, Nvidia is expanding into new markets such as Southeast Asia, while bolstering its presence in other areas such as Latin America and the Middle East. As such, the company's GeForce NOW subscriber count could keep growing at a fast clip in the future.

Gauging the financial impact

With paying users of cloud gaming expected to hit nearly 24 million this year and Nvidia already having scored 10 million GeForce NOW subscribers, the company has got off to a good start in this market.

The addressable market that Nvidia could tap into is also expected to hit 165 million potential subscribers by the end of 2021, as discussed earlier. If Nvidia manages to corner half of those potential paying cloud gaming subscribers in the next few years and get $100 a year from each subscriber (based on the annual GeForce NOW subscription plan), the company could be looking at substantial annual revenue from the cloud gaming business. This should give investors yet another reason to buy this growth stock that is already winning big in graphics cards and data centers.
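
Putting numbers on that scenario, half of a 165 million-subscriber addressable market paying roughly $100 a year works out to about $8 billion of annual revenue; a minimal back-of-envelope in Python (the 50% capture rate is the article's hypothetical, not a forecast):

    # Back-of-envelope for the cloud gaming scenario described above
    addressable_subs = 165e6   # Newzoo's end-of-2021 addressable-market estimate
    captured_share = 0.5       # the article's hypothetical "half of those potential subscribers"
    revenue_per_sub = 100      # roughly the $99.99/year GeForce NOW Priority plan
    annual_revenue = addressable_subs * captured_share * revenue_per_sub
    print(annual_revenue / 1e9)   # ~8.25, i.e. about $8.25 billion per year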

fool.com



From: Frank Sully 9/22/2021 2:01:11 PM
   of 2485
 
NVIDIA Extends AI Inference Performance Leadership, with Debut Results on Arm-based Servers

The latest MLPerf benchmarks show NVIDIA has extended its high watermarks in performance and energy efficiency for AI inference to Arm as well as x86 computers.

September 22, 2021 by DAVE SALVATOR

NVIDIA delivers the best results in AI inference using either x86 or Arm-based CPUs, according to benchmarks released today.

It’s the third consecutive time NVIDIA has set records in performance and energy efficiency on inference tests from MLCommons, an industry benchmarking group formed in May 2018.

And it’s the first time the data-center category tests have run on an Arm-based system, giving users more choice in how they deploy AI, the most transformative technology of our time.

Tale of the Tape

NVIDIA AI platform-powered computers topped all seven performance tests of inference in the latest round with systems from NVIDIA and nine of our ecosystem partners including Alibaba, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Inspur, Lenovo, Nettrix and Supermicro.

And NVIDIA is the only company to report results on all MLPerf tests in this and every round to date.



Inference is what happens when a computer runs AI software to recognize an object or make a prediction. It’s a process that uses a deep learning model to filter data, finding results no human could capture.

MLPerf’s inference benchmarks are based on today’s most popular AI workloads and scenarios, covering computer vision, medical imaging, natural language processing, recommendation systems, reinforcement learning and more.
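
MLPerf itself uses a purpose-built load generator, but the quantities it reports, throughput and tail latency under different serving scenarios, can be illustrated with a much simpler harness. Below is a minimal single-stream-style sketch in plain Python; the dummy model and the timing loop are illustrative assumptions, not the MLPerf LoadGen.

    import time
    import statistics

    def dummy_model(sample):
        """Stand-in for a real network; a deployment would call TensorRT,
        Triton, or another inference runtime here."""
        return sum(x * x for x in sample)

    def single_stream_benchmark(model, samples, warmup=10):
        """Send one query at a time and record per-query latency,
        loosely mirroring MLPerf's single-stream scenario."""
        for s in samples[:warmup]:
            model(s)                              # warm-up queries, not timed
        latencies = []
        start = time.perf_counter()
        for s in samples[warmup:]:
            t0 = time.perf_counter()
            model(s)
            latencies.append(time.perf_counter() - t0)
        wall = time.perf_counter() - start
        return {
            "throughput_qps": len(latencies) / wall,
            "p50_ms": statistics.median(latencies) * 1e3,
            "p99_ms": statistics.quantiles(latencies, n=100)[98] * 1e3,
        }

    samples = [[float(i % 7)] * 1024 for i in range(1000)]
    print(single_stream_benchmark(dummy_model, samples))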

So, whatever AI applications they deploy, users can set their own records with NVIDIA.

Why Performance Matters

AI models and datasets continue to grow as AI use cases expand from the data center to the edge and beyond. That’s why users need performance that’s both dependable and flexible to deploy.

MLPerf gives users the confidence to make informed buying decisions. It’s backed by dozens of industry leaders, including Alibaba, Arm, Baidu, Google, Intel and NVIDIA, so the tests are transparent and objective.

Flexing Arm for Enterprise AI

The Arm architecture is making headway into data centers around the world, in part thanks to its energy efficiency, performance increases and expanding software ecosystem.

The latest benchmarks show that as a GPU-accelerated platform, Arm-based servers using Ampere Altra CPUs deliver near-equal performance to similarly configured x86-based servers for AI inference jobs. In fact, in one of the tests, the Arm-based server out-performed a similar x86 system.

NVIDIA has a long tradition of supporting every CPU architecture, so we’re proud to see Arm prove its AI prowess in a peer-reviewed industry benchmark.

“Arm, as a founding member of MLCommons, is committed to the process of creating standards and benchmarks to better address challenges and inspire innovation in the accelerated computing industry,” said David Lecomber, a senior director of HPC and tools at Arm.

“The latest inference results demonstrate the readiness of Arm-based systems powered by Arm-based CPUs and NVIDIA GPUs for tackling a broad array of AI workloads in the data center,” he added.

Partners Show Their AI Powers

NVIDIA’s AI technology is backed by a large and growing ecosystem.

Seven OEMs submitted a total of 22 GPU-accelerated platforms in the latest benchmarks.

Most of these server models are NVIDIA-Certified, validated for running a diverse range of accelerated workloads. And many of them support NVIDIA AI Enterprise, software officially released last month.

Our partners participating in this round included Dell Technologies, Fujitsu, Hewlett Packard Enterprise, Inspur, Lenovo, Nettrix and Supermicro as well as cloud-service provider Alibaba.

The Power of Software

A key ingredient of NVIDIA’s AI success across all use cases is our full software stack.

For inference, that includes pre-trained AI models for a wide variety of use cases. The NVIDIA TAO Toolkit customizes those models for specific applications using transfer learning.

Our NVIDIA TensorRT software optimizes AI models so they make best use of memory and run faster. We routinely use it for MLPerf tests, and it’s available for both x86 and Arm-based systems.

We also employed our NVIDIA Triton Inference Server software and Multi-Instance GPU ( MIG) capability in these benchmarks. They deliver for all developers the kind of performance that usually requires expert coders.

Thanks to continuous improvements in this software stack, NVIDIA achieved gains of up to 20 percent in performance and 15 percent in energy efficiency over the previous MLPerf inference benchmarks just four months ago.

All the software we used in the latest tests is available from the MLPerf repository, so anyone can reproduce our benchmark results. We continually add this code into our deep learning frameworks and containers available on NGC, our software hub for GPU applications.

It’s part of a full-stack AI offering, supporting every major processor architecture, proven in the latest industry benchmarks and available to tackle real AI jobs today.

To learn more about the NVIDIA inference platform, check out our NVIDIA Inference Technology Overview.

blogs.nvidia.com
