From: Frank Sully | 8/6/2021 9:07:18 AM | Blue Whale Growth adds Nvidia to top-10
06 August 2021
Lead manager Stephen Yiu shares more insight into why he recently added semiconductor firm Nvidia to the £1bn LF Blue Whale Growth fund. By Abraham Darwyne, Senior reporter, Trustnet
trustnet.com
From: Frank Sully | 8/6/2021 5:51:06 PM | Nvidia has absolutely trounced AMD in this vital chip area
By Sead Fadilpašic about 7 hours ago
AI is the future and Nvidia knows it
Despite the competition coming in droves, Nvidia has managed to maintain its leading position in the global market for Artificial Intelligence (AI) chips used in cloud and data centers.
Nvidia has also managed to maintain a large gap between itself and the rest, according to a new report from technology research firm Omdia, which claims that it took 80.6% of the market share of global revenue in 2020.
Last year, the company generated $3.2 billion in revenue, up from $1.8 billion the year before. The bulk of that revenue came from GPU-derived chips, which Omdia says are the leading type of AI processor used in cloud and data center equipment.
Whether or not Nvidia keeps its dominant position in the future remains to be seen, as Omdia expects the market for AI processors to quickly grow and attract many new suppliers. Global market revenue for cloud and data center AI processors rose 79% last year, hitting $4 billion.
By 2026, the research firm expects that revenue to increase roughly ninefold, to $37.6 billion.
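As a rough sanity check on those figures, here is a quick back-of-the-envelope calculation in Python. The 2020 market base, the 2026 forecast and Nvidia's 2020 revenue are the Omdia numbers quoted above; everything else is simple arithmetic.

```python
# Quick check of the figures quoted above (assumed inputs: $4.0B 2020 market,
# $37.6B 2026 forecast, $3.2B of Nvidia revenue in 2020).
nvidia_2020, market_2020, market_2026 = 3.2, 4.0, 37.6   # $ billions

share = nvidia_2020 / market_2020
cagr = (market_2026 / market_2020) ** (1 / 6) - 1         # six years of growth

print(f"Nvidia share of 2020 market revenue: {share:.1%}")   # ~80%
print(f"Implied market CAGR, 2020-2026: {cagr:.1%}")         # ~45% per year
```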
For Jonathan Cassell, principal analyst for advanced computing at Omdia, one advantage Nvidia has over the competition is how familiar its tools already are to customers.
“NVIDIA’s Compute Unified Device Architecture (CUDA) Toolkit is used nearly universally by the AI software development community, giving the company’s GPU-derived chips a huge advantage in the market," he noted.
"However, Omdia predicts that other chip suppliers will gain significant market share in the coming years as market acceptance increases for alternative GPU-based chips and other types of AI processors.”
Growing competition
Omdia sees Xilinx, Google, Intel and AMD as the biggest contenders for at least a larger piece of the AI pie. Xilinx offers field-programmable gate array (FPGA) products, Google’s Tensor Processing Unit (TPU) AI ASIC is employed extensively in its own hyperscale cloud operations, while Intel competes with its Habana proprietary-core AI ASSPs and its FPGA products for AI cloud and data center servers.
AMD, currently ranked fifth, offers GPU-derived AI ASSPs for cloud and data center servers.
From: Frank Sully | 8/7/2021 1:28:32 PM | NVIDIA, Qualcomm Will Use TSMC’s 5nm And 3nm Nodes For 2022 – Pokde.Net
By Financial News | Aug 7, 2021
TSMC is reportedly looking at another great year in 2022, with both NVIDIA and Qualcomm returning to TSMC’s advanced processes. Both companies had recently jumped to Samsung, which manufactured NVIDIA’s Ampere GPUs and Qualcomm’s flagship Snapdragon 888.
Business is apparently going so well at TSMC that not only have they fully booked their existing 7nm and 5nm nodes, but the production capacity for the upcoming 3nm node is also completely exhausted. TSMC recently increased the target capacity for their 5nm and 3nm nodes, with the queue for their 7nm process also full.
According to the report, TSMC is looking quite optimistic, with large orders from Apple, Qualcomm, NVIDIA and even Intel. They are also expanding their capacity, with fabs being set up in several countries outside of Taiwan. Their main competition remains Samsung, but the Korean foundry is apparently viewed less favorably.
Reports have suggested that Samsung’s advanced process nodes are rather lackluster in terms of yield and stability. But as we’ve always emphasized, if the price is right, chipmakers will still use Samsung’s foundries. Samsung has traditionally offered a cost advantage, though its recent price hike could erode that edge somewhat.
fishinvestment.com
From: Frank Sully | 8/7/2021 2:55:41 PM | Cattle-ist for the Future: Plainsight Revolutionizes Livestock Management with AI
Vision AI leader helps cattle industry improve livestock health and enhance operations — from farming through protein production.
August 6, 2021 by Clarissa Garza
Computer vision and edge AI are looking beyond the pasture.
Plainsight, a San Francisco-based startup and NVIDIA Metropolis partner, is helping the meat processing industry improve its operations — from farms to forks. By pairing Plainsight’s vision AI platform and NVIDIA GPUs to develop video analytics applications, the company’s system performs precision livestock counting and health monitoring.
Because animals such as cows look so similar, stand shoulder-to-shoulder and move quickly, inaccuracies in livestock counts are common in the cattle industry, and often costly.
On average, the cost of a cow in the U.S. is between $980 and $1,200, and facilities process anywhere between 1,000 and 5,000 cows per day. At this scale, even a small percentage of inaccurate counts equates to hundreds of millions of dollars in financial losses, nationally.
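A rough, illustrative calculation shows how quickly small count errors add up. The per-head value and daily throughput below come from the figures quoted above; the error rate, facility count and operating days are assumptions made purely for illustration, not industry data.

```python
# Back-of-the-envelope estimate of nationwide exposure to miscounting.
cow_value = 1_100       # $ per head, midpoint of the quoted $980-$1,200 range
head_per_day = 3_000    # midpoint of the quoted 1,000-5,000 head per day
error_rate = 0.02       # assumed 2% miscount, roughly the manual-count gap cited below
facilities = 50         # assumed number of large processing facilities
days_per_year = 250     # assumed operating days

annual_exposure = cow_value * head_per_day * error_rate * facilities * days_per_year
print(f"Illustrative annual exposure: ${annual_exposure / 1e6:,.0f} million")   # ~$825 million
```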
“By applying computer vision powered by edge AI and NVIDIA Metropolis, we’re able to automate what has traditionally been a very manual process and remove the uncertainty that comes with human counting,” said Carlos Anchia, CEO of Plainsight. “Organizations begin to optimize existing revenue streams when accuracy can be operationally efficient.”
Plainsight is working with JBS USA, one of the world’s largest food companies, to integrate vision AI into its operational processes. Vision AI-powered cattle counting was among the first solutions to be implemented.
At JBS, cows are counted by fixed-position cameras, connected via a secured private network to Plainsight’s vision AI edge application, which detects, tracks and counts the cows as they move past.
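To make the counting idea concrete, here is a minimal, hypothetical sketch of a line-crossing counter: detections are matched frame to frame by nearest centroid, and each track is counted once when it crosses a virtual line. The `detect()` stub stands in for a real GPU-accelerated detector (for example, one deployed through an NVIDIA Metropolis/DeepStream pipeline); none of this is Plainsight's actual implementation.

```python
COUNT_LINE_X = 100  # virtual counting line, in pixel coordinates

def detect(frame):
    """Stand-in for a real detector; here a 'frame' is already a list of (x, y) centroids."""
    return frame

def count_crossings(frames, max_match_dist=40):
    tracks, counted, next_id, total = {}, set(), 0, 0
    for frame in frames:
        new_tracks = {}
        for (x, y) in detect(frame):
            # Greedy nearest-neighbour match against the previous frame's tracks
            match = min(tracks, default=None,
                        key=lambda t: (tracks[t][0] - x) ** 2 + (tracks[t][1] - y) ** 2)
            if match is not None and \
               (tracks[match][0] - x) ** 2 + (tracks[match][1] - y) ** 2 <= max_match_dist ** 2:
                tid, (px, py) = match, tracks.pop(match)
            else:
                tid, (px, py) = next_id, (x, y)
                next_id += 1
            if px < COUNT_LINE_X <= x and tid not in counted:
                counted.add(tid)   # count each animal exactly once, on left-to-right crossing
                total += 1
            new_tracks[tid] = (x, y)
        tracks = new_tracks
    return total

# Two animals walking right; only the first crosses the line in these frames
frames = [[(80, 50), (60, 120)], [(95, 52), (70, 118)], [(110, 55), (85, 121)]]
print(count_crossings(frames))  # -> 1
```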
Plainsight’s models count livestock with over 99.5 percent accuracy — about two percentage points better than manual livestock counting by humans in the same conditions, according to the company.
For a vision AI solution to be widely adopted by an organization, its accuracy must exceed that of humans performing the same task. By monitoring and tracking each individual animal, the models simplify an otherwise complex process.
Highly robust and accurate computer vision models are only a portion of the cattle counting solution. Through continued collaboration with JBS’s operations and innovation teams, Plainsight provided a path to the digital transformation required to account more accurately for livestock received at scale, helping ensure that payment for that livestock is as accurate as possible.
Higher Accuracy with GPMoos
For JBS, the initial proof of value involved building models and deploying on an NVIDIA Jetson AGX Xavier Developer Kit.
After quickly achieving nearly 100 percent accuracy levels with their models, the teams moved into a full pilot phase. To augment the model to handle new and often challenging environmental conditions, Plainsight’s AI platform was used to quickly and easily annotate, build and deploy model improvements in preparation for a nationwide rollout.
As a member partner of NVIDIA Metropolis, an application framework that brings visual data and AI together, Plainsight continues to develop and improve models and AI pipelines to enable a national rollout with the U.S. division of JBS.
There, Plainsight uses a technology stack built on the NVIDIA EGX platform, incorporating NVIDIA-Certified Systems with NVIDIA T4 GPUs. Plainsight’s application processes multiple video streams per GPU in real time to count and monitor livestock as part of accounting for animals as they are received.
“Innovation is fundamental to the JBS culture, and the application of AI technology to improve efficiencies for daily work routines is important,” said Frederico Scarin do Amaral, Senior Manager Business Solutions of JBS USA. “Our partnership with Plainsight enhances the work of our team members and ensures greater accuracy of livestock count, improving our operations and efficiency, as well as allowing for continual improvements of animal welfare.”
Milking It
Inaccurate counting is only part of the problem the industry faces, however. Visual inspection of livestock is arduous and error-prone, causing late detection of diseases and increasing health risks to other animals.
The same symptoms humans can identify by looking at an animal, such as gait and abnormal behavior, can be approximated by computer vision models built, trained and managed through Plainsight’s vision AI platform.
The models identify symptoms of particular diseases, based on the gait and anomalies in how livestock behave when exiting transport vehicles, in a pen or in feeding areas.
“The cameras are an unblinking source of truth that can be very useful in identifying and alerting to problems otherwise gone unnoticed,” said Anchia. “The combination of vision AI, cameras and Plainsight’s AI platform can help enhance the vigilance of all participants in the cattle supply chain so they can focus more on their business operations and animal welfare improvements as opposed to error-prone manual counting.”
Legen-Dairy
In addition to a variety of other smart agriculture applications, Plainsight is using its vision AI platform to monitor and track cattle on the blockchain as digital assets.
The company is engaged in a first-of-its-kind co-innovation project with CattlePass, a platform that generates a secure and unique digital record of individual livestock, also known as a non-fungible token, or NFT.
Plainsight is applying its advanced vision AI models and platform for monitoring cattle health. The suite of advanced technologies, including genomics, health and proof-of-life records, will provide 100 percent verifiable proof of ownership and transparency into a complete living record of the animal throughout its lifecycle. Cattle ranchers will be able to store the NFTs in a private digital wallet while collecting and adding metadata: feed, heartbeat, health, etc. This data can then be shared with permissioned viewers such as inspectors, buyers, vets and processors.
The data will remain with each animal throughout its life through harvest, and data will be provided with a unique QR code printed on the beef packaging. This will allow for visibility into the proof of life and quality of each cow, giving consumers unprecedented knowledge about its origin.
From: Frank Sully | 8/8/2021 3:19:15 PM | These 2 Things Make NVIDIA the Best Semiconductor Stock For the 2020s
Nvidia’s chip licensing and software make it the best chip stock for the next decade.
Key Points
- Nvidia is trying to acquire leading chip licensor ARM, but has its own extensive chip licensing business already established.
- A number of in-house cloud-based software services have been developed on Nvidia's hardware platform.
- More than just a semiconductor company, Nvidia is increasingly looking like a tech platform.
Demand for high-end computing is booming in the wake of the pandemic, and showing no signs of letting up. And yet at the same time, the semiconductor industry -- the provider of all technology's basic building blocks -- is consolidating to address the economy's tech needs.
There are good reasons for this, and NVIDIA's (NASDAQ:NVDA) bid for chip architecture licensing leader ARM Holdings embodies the issue. At the same time, Nvidia is pouring vast resources into research and development, and coming up with an expanding suite of cloud-based software as a result. The rulebook is changing for semiconductor industry success, and Nvidia's combo of tech hardware licensing and software makes it the best bet for the 2020s.
A new battle looming with tech giants
Cloud computing is reshaping the economy. The pandemic and the remote work movement it spawned have shoved the world further down the cloud path, cementing the data center (from which cloud services are built and delivered to users) as a critical computing unit for the decades ahead.
This has of course been a boon for semiconductor companies, but it's also presented a potential problem. Chip companies' biggest customers could eventually become their biggest competitors. You see, massive secular growth trends have turned multiple companies into tech titans with vast resources at their disposal. And several of them -- including Apple (NASDAQ:AAPL), Alphabet's (NASDAQ:GOOGL)(NASDAQ:GOOG) Google, and Amazon (NASDAQ:AMZN) -- have started developing their own semiconductors to best suit their needs. All three have licensed ARM's extensive portfolio of chip designs to help get the ball rolling.
To be fair, tech giants represent a tiny share of the silicon market at this point. Even with the help of ARM's advanced blueprints, it takes incredible scale to engineer circuitry in-house and then partner with a fab (like Taiwan Semiconductor Manufacturing (NYSE:TSM)) to start production. But that's the point. Companies like Apple, Google, and Amazon are large enough and have enough spare cash that it's beginning to make financial sense for them to journey down this path. The potential of this is concerning for the semiconductor industry.
That's why Nvidia's bid for ARM is such an incredible move. Granted, Nvidia has promised to keep ARM independent and won't deny anyone access to its designs if the merger is approved (there are still lots of question marks over whether regulators in the U.K., where ARM is based, as well as those in Europe and China, will sign off on the deal). Nevertheless, if Nvidia does get ARM, it says it will devote more research dollars to the firm and add its own extensive tech licensing know-how -- especially in the artificial intelligence department. Rather than diminish the competitive landscape, this could give purpose-built semiconductor firms a fighting chance to continue developing best-in-class components for a world that is increasingly reliant on digital systems.
And if Nvidia doesn't get to acquire ARM? There's nothing stopping it from accessing ARM's portfolio and adding its own flair to the design. In fact, even ahead of the merger decision, Nvidia has announced a slew of new products aimed at the data center market. And if it can't redirect some of its research budget to ARM, it can continue to develop on its own. Indeed, Nvidia is one of the top spenders on research and development as a percentage of revenue, doling out nearly one-quarter of its $19.3 billion in trailing-12-month sales on R&D.
With or without ARM, Nvidia is in prime position to dominate the tech hardware market in the decade ahead as data centers and AI grow in importance in the global economy.
Becoming a partner on the cloud software front
Of course, when it comes to designing semiconductors, the real end goal is to build a killer product or software service. Once chip companies do their job, that process has historically been out of their hands, and in the realm of engineers and software developers.
Historically, Nvidia has followed the same playbook -- but that's changed in recent years. The company has been planting all sorts of seeds for its future cloud software and service library. It has its own video game streaming platform, GeForce Now; Nvidia DRIVE has partnered with dozens of automakers and start-ups to advance autonomous vehicle software and system technology; and the creative collaboration tool Omniverse, which builds on Nvidia's digital communications capabilities, is in beta testing.
New cloud services like AI Enterprise and the Base Command Platform demonstrate the power of Nvidia's hardware, as well as Nvidia's scale to be able to build business tools it can directly go to market with. While public cloud computing firms like Amazon, Microsoft (NASDAQ:MSFT), and Google get all the attention, don't ignore Nvidia. It's going after the massive and still fast-expanding software world as secular trends like the cloud and AI are forcing the transformation of the tech world.
Between its top-notch tech hardware licensing business and newfound software prowess, it's clear Nvidia is no normal semiconductor company. It may not be the most timely purchase ever -- shares currently value the company at 95 times trailing 12-month free cash flow, partially reflecting the massive growth this year from elevated demand for its chips. The stock price could also get volatile later this year and next, especially as a more definitive answer on the ARM acquisition emerges. However, if you're looking for a top semiconductor investment for the next decade, look no further than Nvidia, as it's poised to rival the scale of the biggest of the tech giants.
From: Frank Sully | 8/9/2021 12:40:28 AM | Soar into the Hybrid-Cloud: Project Monterey Early Access Program Now Available to Enterprises
Dell Technologies, VMware and NVIDIA partner to help organizations boost data center performance, manageability and security.
August 3, 2021 by MOTTI BECK
Modern workloads such as AI and machine learning are putting tremendous pressure on traditional IT infrastructure.
Enterprises that want to stay ahead of these changes can now register to participate in an early access program for Project Monterey, an initiative to dramatically improve the performance, manageability and security of their environments.
VMware, Dell Technologies and NVIDIA are collaborating on this project to evolve the architecture for the data center, cloud and edge to one that is software-defined and hardware-accelerated to address the changing application requirements.
AI and other compute-intensive workloads require real-time data streaming analysis, which, along with growing security threats, puts a heavy load on server CPUs. The increased load significantly increases the percentage of processing power required to run tasks that aren’t an integral part of application workloads. This reduces data center efficiency and can prevent IT from meeting its service-level agreements.
Project Monterey is leading the shift to advanced hybrid-cloud data center architectures, which benefit from hypervisor and accelerated software-defined networking, security and storage.
Project Monterey – Next-Generation VMware Cloud Foundation Architecture
With access to Project Monterey’s preconfigured clusters, enterprises can explore the evolution of VMware Cloud Foundation and take advantage of the disruptive hardware capabilities of the Dell EMC PowerEdge R750 server equipped with NVIDIA BlueField-2 DPU (data processing unit). Selected functions that used to run on the core CPU are offloaded, isolated and accelerated on the DPU to support new possibilities, including:
- Improved performance for application and infrastructure services
- Enhanced visibility, application security and observability
- Offloaded firewall capabilities
- Improved data center efficiency and cost for enterprise, edge and cloud.
Interested organizations can register for the NVIDIA Project Monterey early access program. Learn more about NVIDIA and VMware’s collaboration to modernize the data center.
blogs.nvidia.com
From: Frank Sully | 8/9/2021 10:29:23 AM | Can You Trust The Solutions That AI Technologies Deliver? With Mathematical Optimization, You Can
Edward Rothberg, Forbes Councils Member, Forbes Technology Council
Edward Rothberg is CEO and Co-Founder of Gurobi Optimization, which produces the world’s fastest mathematical optimization solver.
Every day, more and more enterprises are using AI technologies — such as machine learning, mathematical optimization and heuristics — to make high-stakes decisions in industries like healthcare, electric power, logistics, financial services and in public sector areas like the military, infrastructure and criminal justice.
But as our reliance on these AI technologies to make critical decisions has increased, concerns over the reliability of the solutions delivered by these technologies have grown.
Numerous high-profile incidents — like self-driving cars failing to recognize slightly modified stop signs, machine learning-based scoring systems demonstrating racial bias when predicting the likelihood of criminals committing future crimes, Google Trends wrongly predicting flu outbreaks based on search data and the algorithms used by Apple to determine credit-worthiness apparently discriminating against women — have shone a spotlight on some of the inherent shortcomings and unintended biases of AI technologies, shaking our confidence in the accuracy of these solutions.
Indeed, these and other incidents have left many wondering: Can we really trust the solutions delivered by AI technologies?
The Importance Of Interpretability
The root of the problem is that many of these AI tools are black boxes – meaning that users have little or no understanding of their inner workings and how they arrive at a given solution or make a particular automated decision.
The opaqueness of many AI technologies — which are based on sophisticated algorithms and complex mathematical models — has fueled concerns that AI may be producing inaccurate solutions and perpetuating bias in decision-making — and sowed distrust in AI-based solutions and decisions. This has spurred demands for greater transparency and accountability (with some formally calling for a “right to explanation” for decisions made by algorithms) and illuminated the importance of interpretability in AI.
Interpretability — the capability to understand how an AI system works and explain why it generated a solution or made a decision — is a hot topic in the business world and an area of active research, with developers across the AI software space striving to make technologies such as machine learning more interpretable. There has been significant progress with the introduction of new approaches that help improve the interpretability of machine learning and other AI tools.
There are, however, some AI technologies that are inherently interpretable — and mathematical optimization is, without a doubt, one of these technologies.
Indeed, interpretability is (and always has been) an essential characteristic and a key strength of mathematical optimization. As the CEO of a mathematical optimization software firm, I witness every day how organizations across the business world depend on this prescriptive analytics technology to deliver solutions they can understand, trust and use to make pivotal decisions.
Assessing Interpretability
How can we gauge the interpretability of AI technologies?
The US National Institute of Standards and Technology (NIST) has developed four principles that encompass the “core concepts of explainable AI” — and this framework provides a useful lens through which we can explore and evaluate the interpretability of AI technologies.
Let’s take a look at how mathematical optimization stacks up against these four NIST principles:
1. Knowledge Limits
AI systems should only operate within the “limits” and “under conditions” they were designed for.
Mathematical optimization systems consist of two components: a mathematical optimization solver (an algorithmic engine) and a mathematical model (a detailed, customized representation of your business problem, which encapsulates all of your decision-making processes, business objectives and constraints). Users of mathematical optimization can design their models and thereby define what the “limits” and “conditions” of their systems are.
2. Explanation
AI systems should “supply evidence, support or reasoning” for each output or solution.
A mathematical optimization model is essentially a digital twin of your business environment, and the constraints (or business rules) that must be satisfied are embedded into that model. Any solution generated by your mathematical optimization system can easily be checked against those constraints.
3. Explanation Accuracy
The explanations provided by AI systems should “accurately describe how the system came to its conclusion.”
Mathematical optimization is a problem-solving technology that can rapidly and comprehensively comb through trillions or more possible solutions to incredibly complex business problems and find the optimal one. Since mathematical optimization conducts this search in a systematic fashion, the solutions delivered by this AI technology come with a mathematically backed guarantee of quality — and this fact can be audited and validated.
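As a concrete, deliberately tiny illustration of principles 2 and 3, here is a sketch assuming the gurobipy Python API (from Gurobi, the author's own company). The products, coefficients and limits are invented; the point is that the business rules are stated explicitly in the model, and the returned plan can be audited against them, along with the solver's proven optimality gap.

```python
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("production_plan")
m.Params.OutputFlag = 0                      # silence solver logging

# Decision variables: how many units of each (hypothetical) product to build
widgets = m.addVar(vtype=GRB.INTEGER, name="widgets")
gadgets = m.addVar(vtype=GRB.INTEGER, name="gadgets")

# Business rules, embedded as constraints
m.addConstr(2 * widgets + 4 * gadgets <= 100, name="machine_hours")
m.addConstr(3 * widgets + 1 * gadgets <= 90, name="labor_hours")

# Business objective: maximize profit
m.setObjective(25 * widgets + 40 * gadgets, GRB.MAXIMIZE)
m.optimize()

# Interpretability in practice: audit the answer rule by rule
print(f"Plan: {widgets.X:.0f} widgets, {gadgets.X:.0f} gadgets, profit ${m.ObjVal:.0f}")
for c in m.getConstrs():
    print(f"  {c.ConstrName}: slack = {c.Slack:.1f}")   # zero slack means the rule is binding
print(f"  Proven optimality gap: {m.MIPGap:.2%}")        # the 'guarantee of quality'
```

Running this toy model returns a plan that uses both hour budgets exactly, and the reported gap of effectively zero is the solver's mathematical certificate that no better plan exists under these rules.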
4. Meaningful
AI systems should “provide explanations that are understandable to individual users.”
Most AI tools — like neural networks and random forests — run on black box models. You feed them data, they work their magic and automatically spit out a solution. It’s essentially impossible (even for many developers) to gain insight into how these systems actually work or why they are making specific predictions or decisions. Mathematical optimization models, in contrast, are transparent and interpretable and are meaningful by design (as they capture the fundamental features of your real-world operating environment). The models themselves (and the solutions they deliver) reflect reality — and are thus understandable for users.
As you can see, mathematical optimization fulfills all four NIST principles and excels in the area of interpretability. With mathematical optimization, you can attain a deep understanding of how and why the AI system makes certain decisions — and thereby gain trust in those decisions.
It’s important to note that mathematical optimization is not the only AI tool that can deliver interpretable solutions — there are a number of other AI technologies that have this capability (and other technologies that are developing in this area).
When you’re deciding whether or not to invest in one of these AI technologies, one critical factor to consider is their interpretability — and the NIST principles provide a good framework through which to assess this.
Understanding “The Why” Of AI
The issue of interpretability in AI continues to captivate the business world, with “explainable AI” trends and technological breakthroughs grabbing news headlines and “explainable AI” initiatives topping the agendas of IT executives. While many companies in the AI space grapple with questions of how to make their technologies more transparent and trustworthy, there are already AI tools out there — like mathematical optimization — that are innately equipped to deliver interpretable, reliable and optimal solutions to today’s problems.
From: Frank Sully | 8/9/2021 11:14:38 AM | Synopsys, Cadence, Google And NVIDIA All Agree: Use AI To Help Design Chips
Karl Freund, Contributor, Enterprise Tech
Synopsys created a buzz in 2020, and now Google, NVIDIA, and Cadence Design have joined the party. What lies ahead?
Introduction
Designing modern semiconductors can take years and scores of engineers armed with state-of-the-art EDA design tools. But the semiconductor landscape and the world around us are being revolutionized by hundreds of new chips, primarily driven by AI. Some entrepreneurial thought leaders believe that the expensive and lengthy chip design process could shrink from 2-3 years to 2-3 months if hardware development were to become more agile, more autonomous. And chief among a new breed of agile design tools is AI itself.
The Semiconductor Design Landscape
This discussion began in earnest when the EDA leader Synopsys announced DSO.ai (Design Space Optimization AI), a software product that could more autonomously identify optimal ways to arrange silicon components (layouts) on a chip to reduce area and power consumption, even while increasing performance. Using reinforcement learning, DSO.ai could evaluate billions of alternatives against design goals and produce a design that was significantly better than one produced by talented engineers. The size of the problem/solution space DSO.ai addresses is staggering: there are something like 10^90,000 possible ways to place components on a chip. That compares to 10^360 possible moves in the game of Go, which was mastered by Google AI in 2016. Since reinforcement learning can play Go better than the world champion, one could conceivably design a better chip if one is willing to spend the compute time to do it.
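To get a feel for those exponents, here is a small Python calculation under a deliberately simplified model of placement: ordered positions for k macros on an n-site grid, ignoring cell sizes, orientation, routing and legality. Even this toy instance dwarfs the Go figure quoted above.

```python
import math

def log10_placements(n_sites, k_macros):
    # log10( n! / (n - k)! ): ordered placements of k distinct macros on n sites,
    # computed via lgamma to avoid overflowing ordinary floats
    return (math.lgamma(n_sites + 1) - math.lgamma(n_sites - k_macros + 1)) / math.log(10)

print(f"1,000 macros on a 100x100 grid: ~10^{log10_placements(100 * 100, 1000):.0f} placements")
print("Go, for comparison: ~10^360 possible moves")
```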
Results are quite impressive, realizing 18% faster operating frequency at 21% lower power, while reducing engineering time from six months to as little as one month. In a recent interview, Synopsys’ Founder and Co-CEO Aart de Geus disclosed that Samsung has a working chip in-house today that was designed with DSO.ai. This would indeed be the world’s first use of AI to create a chip layout in production – from RTL to tapeout.
Recently Google published results of doing something similar, as has NVIDIA. And Cadence Design Systems just announced an AI-based optimization platform similar to Synopsys DSO.ai. Before we take a look at these efforts, let’s back up a little and look at the entire semiconductor design space. A good place to start is the Gajski-Kuhn chart, which outlines all the steps of chip design along three axes: the Behavioral level, where architects define what the chip is supposed to do; the Structural level, where they determine how the chip is organized; and the Geometry level, where engineers define how the chip is laid out.
Based on this model, each step towards the center (which is when the team “tapes out” the chip to the fabrication partner) feeds the work in the next phase in a clockwise direction. To date, all the application of AI has been in the geometry space, or physical design, to address the waning of Moore’s Law.
Synopsys DSO.ai
As I covered at launch, Synopsys DSO.ai was the first entrant to apply AI to the physical design process, producing floor plans that consumed less power, ran at higher frequencies and occupied less space than the best an experienced design team could produce. What really attracted my attention was the profound effect of AI on productivity; DSO.ai users were able to achieve in a few days what used to take teams of experts many weeks.
Google Research and NVIDIA Research
Both companies have produced research papers that describe the use of reinforcement learning to assist in the physical design of the floor plan. In Google’s case, AI is being used to lay out the floor plan of the next-generation TPU chip, and the company is investigating additional uses of AI, such as architectural optimization.
NVIDIA has similarly focused on that same low-hanging fruit, floorplanning, and with all the compute capacity it has in-house, I’d expect NVIDIA to continue to eat its own dogfood and use AI to design better AI chips.
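For intuition about what searching the placement space means mechanically, here is a toy floorplanning loop. It uses simulated annealing rather than the reinforcement learning described in the Google and NVIDIA papers, and it optimizes nothing but Manhattan wirelength on a random netlist, so it is purely illustrative.

```python
import math
import random

random.seed(0)
N_CELLS, GRID = 12, 6
# Toy "netlist": 20 random two-pin nets between cells
nets = [tuple(random.sample(range(N_CELLS), 2)) for _ in range(20)]

def wirelength(pos):
    # Total Manhattan distance over all nets
    return sum(abs(pos[a][0] - pos[b][0]) + abs(pos[a][1] - pos[b][1]) for a, b in nets)

# Start from a random legal placement: one cell per grid slot
slots = random.sample([(x, y) for x in range(GRID) for y in range(GRID)], N_CELLS)
pos = {i: slots[i] for i in range(N_CELLS)}

cost, temp = wirelength(pos), 5.0
for _ in range(20000):
    a, b = random.sample(range(N_CELLS), 2)
    pos[a], pos[b] = pos[b], pos[a]               # propose swapping two cells
    new_cost = wirelength(pos)
    if new_cost <= cost or random.random() < math.exp((cost - new_cost) / temp):
        cost = new_cost                           # accept the move
    else:
        pos[a], pos[b] = pos[b], pos[a]           # reject: undo the swap
    temp *= 0.9997                                # cool down gradually

print("Final total wirelength:", cost)
```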
google.com