From: Frank Sully | 8/5/2021 11:04:57 AM | Interview With Murali Gopalakrishna, GM, Robotics @ NVIDIA
05/08/2021
NVIDIA created the Isaac robotics platform, including the Isaac Sim application on the NVIDIA Omniverse platform for simulation and training of robots in virtual environments
For this week’s practitioners series, Analytics India Magazine (AIM) got in touch with Murali Gopalakrishna, Head of Product Management, Autonomous Machines and General Manager for Robotics. He also leads the business development team focusing on robots, drones, industrial IoT and enterprise collaboration products at NVIDIA. In this interview, we discuss in detail the robotics solutions developed by NVIDIA and their significance.
AIM: Can you tell us about how NVIDIA is building robotics solutions to be used at scale?
Murali: Robotics algorithms can be broadly classified into (1) sensing/perception, (2) mobility (motion/path planning), and (3) robot control. All of these fields have seen significant innovation recently, with AI/deep learning playing an important role. With NVIDIA GPU-accelerated AI-at-the-edge computing platforms, manufacturers will be able to develop complex algorithms and deploy robotics at scale.
Robots have to sense, plan and act. To develop robots that are autonomous and efficient, developers have to accelerate algorithms for the complete stack. Algorithms such as object detection, pose estimation and depth estimation are used to perceive the environment, create a map of it and localise the robot within it. Algorithms such as free-space segmentation are used to plan an efficient path for the robot, while control algorithms determine the commands for the robot to follow the planned path. Advances in AI and GPU-accelerated computing are making all of these algorithms more accurate and faster, creating robots that are more capable and safer.
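To make the sense-plan-act loop concrete, here is a minimal Python sketch of such a control loop; the camera, perception, planner and controller objects are hypothetical placeholders for illustration, not NVIDIA APIs.

    # Minimal sense-plan-act loop (illustrative; all component classes are hypothetical placeholders).
    import time

    class Robot:
        def __init__(self, camera, perception, planner, controller):
            self.camera = camera          # e.g. an RGB-D sensor wrapper
            self.perception = perception  # object detection / depth estimation models
            self.planner = planner        # path planner over the perceived map
            self.controller = controller  # converts a path into velocity commands

        def step(self, goal):
            frame = self.camera.read()                 # sense
            obstacles = self.perception.detect(frame)  # perceive: objects, free space, robot pose
            path = self.planner.plan(obstacles, goal)  # plan: efficient, collision-free path
            return self.controller.follow(path)        # act: wheel / joint commands

    def run(robot, goal, hz=30):
        # Run at a fixed rate; a real system would also handle timeouts and safety stops.
        while True:
            robot.step(goal)
            time.sleep(1.0 / hz)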
Ease of use and deployment have made the NVIDIA Jetson platform a logical choice for over half a million developers, researchers, and manufacturers building and deploying robots worldwide. We provide a full suite of tools and SDKs for developers and companies scaling robotics and automation applications:
- Open-source packages for ROS/ROS2 (Human Pose Estimation, Accelerated AprilTags), Docker containers, CUDA library support and more.
- For training: the NVIDIA Transfer Learning Toolkit (TLT) helps reduce the costs associated with large-scale data collection and labeling, and eliminates the burden of training AI/ML models from the ground up. This enables developers to build and scale production-quality models faster from pre-trained models, with no code. Automatic Mixed Precision allows developers to train with half precision while maintaining the network accuracy achieved with single precision, significantly reducing training time.
- For real-time inference: NVIDIA TensorRT is a high-performance deep learning inference SDK that includes an inference optimizer and runtime delivering low latency and high throughput. NVIDIA Triton Inference Server simplifies the deployment of AI models at scale in production. It is open-source inference-serving software that lets teams deploy trained AI models from any framework on any GPU- or CPU-based infrastructure (cloud, data center or edge); a minimal client sketch follows this list.
- For perception: NVIDIA DeepStream SDK helps developers build and scale AI-powered Intelligent Video Analytics apps and services. DeepStream offers a multi-platform scalable framework with TLS security to deploy on the edge and connect to any cloud.
- NVIDIA Fleet Command is a hybrid-cloud platform for managing and scaling AI at the edge. From one control plane, anyone with a browser and internet connection can deploy applications, update software over the air, and monitor location health.
- NVIDIA Jarvis is an application framework for multimodal conversational AI services that delivers real-time performance on GPUs.
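As a rough illustration of the real-time inference piece above, the sketch below sends one image tensor to a running Triton Inference Server using its Python HTTP client. The server address, model name ("resnet50") and tensor names ("input", "output") are assumptions made for the example, not values from the interview.

    # Hedged sketch: query a Triton Inference Server over HTTP.
    # Assumes a server at localhost:8000 serving a model named "resnet50" with one
    # FP32 input "input" of shape (1, 3, 224, 224) and one output named "output".
    import numpy as np
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url="localhost:8000")

    image = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in for a preprocessed frame

    inputs = [httpclient.InferInput("input", list(image.shape), "FP32")]
    inputs[0].set_data_from_numpy(image)
    outputs = [httpclient.InferRequestedOutput("output")]

    result = client.infer(model_name="resnet50", inputs=inputs, outputs=outputs)
    scores = result.as_numpy("output")
    print("top class:", int(scores.argmax()))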
AIM: What is the scope of these solutions?
Murali: Powerful GPU-based AI-at-the-edge computing, along with a full spectrum of sensors, is widely implemented in the field today. Fueled by AI and DL, sensor technologies that power perception for real-time decision-making have revolutionised several areas of robotics, including navigation, visual recognition and object manipulation.
Today’s AI-enabled robots perform myriad tasks and functions, allowing them to work as “cobots” in close collaboration with humans in complex environments including warehouses, retail stores, hospitals and industrial facilities, as well as in our homes. AI and DL continue to play a significant role in the programming of robots, speeding development time for roboticists and helping advance these systems from single-functionality to multi-functionality.
And there’s no arguing the pandemic accelerated the need and urgency for robotics deployment, especially in healthcare, logistics, manufacturing and retail.
- Healthcare: To minimise contact and address shortages of staff and resources, robots have found invaluable use in the delivery of medicine and supplies, patient monitoring, medical procedures, temperature detection, and UV disinfection in public and private spaces.
- Logistics: From pick-n-place to last mile delivery, robots have clearly become indispensable with the ever-increasing need for efficiencies across the supply chain and e-commerce.
- Manufacturing: Using AI/DL to create the factory of the future, leveraging robots and cobots for no-touch manufacturing and enabling zero downtime to increase productivity and efficiency.
- Retail: From cleaning, inventory and safety (temperature detection, mask detection, social distancing) to shelf-scanning and self-checkout, robots are transforming the shopping experience.
We have a large customer base in a diverse set of industries such as agriculture, manufacturing, healthcare and logistics (e.g., John Deere in agriculture and Komatsu in construction). Most last-mile delivery robots are using NVIDIA technology (Postmates, JD-X, Cainiao, etc.).
AIM: Tell us about NVIDIA Isaac Sim.
Murali: NVIDIA created the Isaac robotics platform, including the Isaac Sim application on the NVIDIA Omniverse platform, for simulating and training robots in virtual environments before deploying them in the real world. NVIDIA Omniverse is the underlying foundation for all our simulators, including the Isaac platform. We’ve added many features in our latest Isaac Sim open-beta release, including ROS/ROS2 compatibility and multi-camera support, as well as enhanced synthetic data generation and domain randomization capabilities, which are important for generating datasets to train perception models for AI-based robots.
Simulation technology like Isaac Sim on Omniverse can be used across the entire workflow: from design and development of the mechanical robot, to training the robot in navigation and behavior, to testing it in a “digital twin”, an accurate, photorealistic simulated environment, before it is deployed in the real world.
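Domain randomization, mentioned above, boils down to randomly varying scene parameters (lighting, textures, object poses, camera placement) every time a synthetic frame is rendered, so that perception models trained on the data generalize to the real world. Below is a plain-Python sketch of that idea; it does not use the actual Isaac Sim or Omniverse APIs, and render_frame is a hypothetical callback supplied by whatever simulator is in use.

    # Conceptual domain-randomization loop (not the Isaac Sim API; render_frame is hypothetical).
    import random

    def sample_scene_params():
        return {
            "light_intensity": random.uniform(200.0, 2000.0),    # arbitrary illustrative range
            "light_color": [random.uniform(0.7, 1.0) for _ in range(3)],
            "floor_texture": random.choice(["concrete", "wood", "tile", "carpet"]),
            "object_pose": {
                "xy": (random.uniform(-2.0, 2.0), random.uniform(-2.0, 2.0)),
                "yaw_deg": random.uniform(0.0, 360.0),
            },
            "camera_height_m": random.uniform(0.4, 1.8),
        }

    def generate_dataset(num_frames, render_frame):
        # render_frame(params) -> (image, labels) is provided by the simulator.
        dataset = []
        for _ in range(num_frames):
            params = sample_scene_params()
            image, labels = render_frame(params)  # labels: bounding boxes, poses, depth, etc.
            dataset.append((image, labels, params))
        return dataset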
AIM: What are the current challenges and what does the future hold for robotics?
Murali: One of the most interesting areas of development is cobots, which can be deployed in areas where robots have not been used thus far. Traditionally, robots on factory floors posed safety risks and were deemed too dangerous to work alongside humans, so these machines were typically caged or placed in isolated environments. Enter cobots. Though designed to work in close proximity with humans, cobots faced several challenges, such as limited capabilities and an inability to think, which put a damper on their widespread adoption.
But now, thanks to advancements in AI that bring intelligence to cobots, we’re seeing these systems make real-time decisions that ensure safety in the factory of the future while maintaining and optimizing productivity. This includes training a cobot to perceive the environment around it and adapt accordingly, allowing it to reduce its speed, adjust its force, detect changing working conditions, or even shut down safely before it interferes with a human in its proximity. By leveraging the power of AI, coupled with changes in cobot design (softer materials, new types of joints, removal of sharp edges, etc.), we’re seeing the emergence of applications and use cases that were not previously feasible, such as robots in commercial kitchens.
Robots are being taught what to do, and how to improve upon complex tasks, as quickly as within a few hours or overnight (versus what used to take weeks or even months)! AI techniques such as one-shot learning, transfer learning, imitation learning, reinforcement learning, etc. are no longer confined to research papers; many of these methods are in practical use today for real-world robotics deployments.
AIM: How do you see the Robotics landscape evolving in India?
Murali: Manufacturing, the automotive industry in particular, is increasingly reliant on robotic production. Our collaboration with BMW, for example, begins with creating a digital twin of a future factory in Omniverse and laying out the entire robot-managed production line digitally before committing to physical construction. Other industries benefiting from robotics are the industrial and nuclear power sectors: warehouse and inventory management, materials transportation, quality inspection and predictive maintenance in the former, and internal reactor inspection and emergency response in the latter.
From: Frank Sully | 8/6/2021 9:07:18 AM | Blue Whale Growth adds Nvidia to top-10
06 August 2021
Lead manager Stephen Yiu shares more insight into why he recently added semiconductor firm Nvidia to the £1bn LF Blue Whale Growth fund. By Abraham Darwyne, Senior reporter, Trustnet
trustnet.com
From: Frank Sully | 8/6/2021 5:51:06 PM | Nvidia has absolutely trounced AMD in this vital chip area
By Sead Fadilpašic
AI is the future and Nvidia knows it
Despite the competition coming in droves, Nvidia has managed to maintain its leading position in the global market for Artificial Intelligence (AI) chips used in cloud and data centers.
Nvidia has also managed to maintain a large gap between itself and the rest, according to a new report from technology research firm Omdia, which claims that Nvidia took 80.6% of global market revenue in 2020.
Last year, the company generated $3.2 billion in revenue in this market, up from $1.8 billion the year before. The bulk of its earnings came from GPU-derived chips, which Omdia says are the leading type of AI processors used in cloud and data center equipment.
Whether or not Nvidia keeps its dominant position in the future remains to be seen, as Omdia expects the market for AI processors to quickly grow and attract many new suppliers. Global market revenue for cloud and data center AI processors rose 79% last year, hitting $4 billion.
By 2026, Omdia expects that revenue to increase roughly ninefold, to $37.6 billion.
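The growth rate implied by those two figures is easy to check; the only inputs are the $4 billion 2020 base and the $37.6 billion 2026 forecast quoted above.

    # Back-of-the-envelope check of Omdia's forecast: $4B (2020) -> $37.6B (2026).
    base_2020 = 4.0        # $ billions
    forecast_2026 = 37.6   # $ billions
    years = 6

    multiple = forecast_2026 / base_2020                   # ~9.4x, i.e. roughly "ninefold"
    cagr = (forecast_2026 / base_2020) ** (1 / years) - 1  # ~45% compound annual growth
    print(f"growth multiple: {multiple:.1f}x, implied CAGR: {cagr:.0%}")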
For Jonathan Cassell, principal analyst for advanced computing at Omdia, one advantage Nvidia has over the competition is clients’ familiarity with its tools.
“NVIDIA’s Compute Unified Device Architecture (CUDA) Toolkit is used nearly universally by the AI software development community, giving the company’s GPU-derived chips a huge advantage in the market," he noted.
"However, Omdia predicts that other chip suppliers will gain significant market share in the coming years as market acceptance increases for alternative GPU-based chips and other types of AI processors.”
Growing competition
Omdia sees Xilinx, Google, Intel and AMD as the biggest contenders for at least a larger piece of the AI pie. Xilinx offers field-programmable gate array (FPGA) products, Google’s Tensor Processing Unit (TPU) AI ASIC is employed extensively in its own hyperscale cloud operations, while Intel’s racehorse comes in the form of its Habana proprietary-core AI ASSPs and its FPGA products for AI cloud and data center servers.
AMD, currently ranked fifth, offers GPU-derived AI ASSPs for cloud and data center servers.
From: Frank Sully | 8/7/2021 1:28:32 PM | NVIDIA, Qualcomm Will Use TSMC’s 5nm And 3nm Nodes For 2022 – Pokde.Net
By Financial News on Aug 7, 2021
TSMC is reportedly looking at another great year in 2022, with both NVIDIA and Qualcomm returning to TSMC’s advanced processes. Both companies recently made the jump to Samsung, which manufactured NVIDIA’s Ampere GPUs and Qualcomm’s flagship Snapdragon 888.
Business is apparently going so well at TSMC that not only have they fully booked their existing 7nm and 5nm nodes, but the production capacity for the upcoming 3nm node is also completely exhausted. TSMC recently increased the target capacity for their 5nm and 3nm nodes, with the queue for their 7nm process also full.
According to the report, TSMC’s outlook is quite optimistic, with large orders from Apple, Qualcomm, NVIDIA and even Intel. The company is also expanding capacity, with fabs being set up in several countries outside Taiwan. Its main competition remains Samsung, but the Korean foundry is apparently less favored.
Reports have suggested that Samsung’s advanced process nodes are rather lackluster in terms of yield and stability. But as we’ve always emphasized, if the price is right, you’d definitely see chipmakers using Samsung’s foundries. Samsung has traditionally offered a cost advantage, but its recent price hike could erode that edge somewhat.
fishinvestment.com
From: Frank Sully | 8/7/2021 2:55:41 PM | Cattle-ist for the Future: Plainsight Revolutionizes Livestock Management with AI
Vision AI leader helps cattle industry improve livestock health and enhance operations — from farming through protein production.
August 6, 2021 by Clarissa Garza
Computer vision and edge AI are looking beyond the pasture.
Plainsight, a San Francisco-based startup and NVIDIA Metropolis partner, is helping the meat processing industry improve its operations — from farms to forks. By pairing Plainsight’s vision AI platform and NVIDIA GPUs to develop video analytics applications, the company’s system performs precision livestock counting and health monitoring.
With animals such as cows that look so similar, frequently shoulder-to-shoulder and moving quickly, inaccuracies in livestock counts are common in the cattle industry, and often costly.
On average, the cost of a cow in the U.S. is between $980 and $1,200, and facilities process anywhere between 1,000 and 5,000 cows per day. At this scale, even a small percentage of inaccurate counts equates to hundreds of millions of dollars in financial losses, nationally.
“By applying computer vision powered by edge AI and NVIDIA Metropolis, we’re able to automate what has traditionally been a very manual process and remove the uncertainty that comes with human counting,” said Carlos Anchia, CEO of Plainsight. “Organizations begin to optimize existing revenue streams when accuracy can be operationally efficient.”
Plainsight is working with JBS USA, one of the world’s largest food companies, to integrate vision AI into its operational processes. Vision AI-powered cattle counting was among the first solutions to be implemented.
At JBS, cows are counted by fixed-position cameras, connected via a secured private network to Plainsight’s vision AI edge application, which detects, tracks and counts the cows as they move past.
Plainsight’s models count livestock with over 99.5 percent accuracy — about two percentage points better than manual livestock counting by humans in the same conditions, according to the company.
For a vision AI solution to be widely adopted by an organization, its accuracy must be higher than that of humans performing the same activity. By monitoring and tracking each individual animal, the models simplify an otherwise complex process.
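In outline, a detect-track-count pipeline like the one described here can be reduced to: run an object detector on each frame, assign stable IDs with a tracker, and count an animal once when its track crosses a virtual line. The sketch below is a deliberate simplification for illustration, not Plainsight's implementation; the counting-line position and frame width are assumed values.

    # Illustrative line-crossing counter over tracked detections (not Plainsight's actual pipeline).
    # Each frame supplies {track_id: (x_center, y_center)} from an upstream detector + tracker.

    COUNT_LINE_X = 640  # vertical counting line in pixels, assuming a 1280-wide frame

    def update_count(frame_tracks, last_positions, counted_ids, count):
        for track_id, (x, y) in frame_tracks.items():
            prev = last_positions.get(track_id)
            # Count once when a track's center crosses the line left-to-right.
            if prev is not None and prev[0] < COUNT_LINE_X <= x and track_id not in counted_ids:
                counted_ids.add(track_id)
                count += 1
            last_positions[track_id] = (x, y)
        return count

    # Usage: feed frames in order.
    last_positions, counted_ids, count = {}, set(), 0
    frames = [{1: (600, 300)}, {1: (650, 305), 2: (100, 400)}, {2: (700, 410)}]
    for frame_tracks in frames:
        count = update_count(frame_tracks, last_positions, counted_ids, count)
    print("animals counted:", count)  # -> 2 (both tracks crossed the line)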
Highly robust and accurate computer vision models are only a portion of the cattle-counting solution. Through continued collaboration with JBS’s operations and innovation teams, Plainsight provided a path to the digital transformation required to account more accurately for livestock received at scale, and thus ensure that payment for the livestock received is as accurate as possible.
Higher Accuracy with GPMoos
For JBS, the initial proof of value involved building models and deploying on an NVIDIA Jetson AGX Xavier Developer Kit.
After quickly achieving nearly 100 percent accuracy levels with their models, the teams moved into a full pilot phase. To augment the model to handle new and often challenging environmental conditions, Plainsight’s AI platform was used to quickly and easily annotate, build and deploy model improvements in preparation for a nationwide rollout.
As a member partner of NVIDIA Metropolis, an application framework that brings visual data and AI together, Plainsight continues to develop and improve models and AI pipelines to enable a national rollout with the U.S. division of JBS.
There, Plainsight uses a technology stack built on the NVIDIA EGX platform, incorporating NVIDIA-Certified Systems with NVIDIA T4 GPUs. Plainsight’s application processes multiple video streams per GPU in real time to count and monitor livestock as part of managing the accounting of livestock when received.
“Innovation is fundamental to the JBS culture, and the application of AI technology to improve efficiencies for daily work routines is important,” said Frederico Scarin do Amaral, Senior Manager Business Solutions of JBS USA. “Our partnership with Plainsight enhances the work of our team members and ensures greater accuracy of livestock count, improving our operations and efficiency, as well as allowing for continual improvements of animal welfare.”
Milking It
Inaccurate counting is only part of the problem the industry faces, however. Visual inspection of livestock is arduous and error-prone, causing late detection of diseases and increasing health risks to other animals.
The same symptoms humans can identify by looking at an animal, such as gait and abnormal behavior, can be approximated by computer vision models built, trained and managed through Plainsight’s vision AI platform.
The models identify symptoms of particular diseases, based on the gait and anomalies in how livestock behave when exiting transport vehicles, in a pen or in feeding areas.
“The cameras are an unblinking source of truth that can be very useful in identifying and alerting to problems otherwise gone unnoticed,” said Anchia. “The combination of vision AI, cameras and Plainsight’s AI platform can help enhance the vigilance of all participants in the cattle supply chain so they can focus more on their business operations and animal welfare improvements as opposed to error-prone manual counting.”
Legen-Dairy
In addition to a variety of other smart agriculture applications, Plainsight is using its vision AI platform to monitor and track cattle on the blockchain as digital assets.
The company is engaged in a first-of-its-kind co-innovation project with CattlePass, a platform that generates a secure and unique digital record of individual livestock, also known as a non-fungible token, or NFT.
Plainsight is applying its advanced vision AI models and platform for monitoring cattle health. The suite of advanced technologies, including genomics, health and proof-of-life records, will provide 100 percent verifiable proof of ownership and transparency into a complete living record of the animal throughout its lifecycle. Cattle ranchers will be able to store the NFTs in a private digital wallet while collecting and adding metadata: feed, heartbeat, health, etc. This data can then be shared with permissioned viewers such as inspectors, buyers, vets and processors.
The data will remain with each animal throughout its life through harvest, and data will be provided with a unique QR code printed on the beef packaging. This will allow for visibility into the proof of life and quality of each cow, giving consumers unprecedented knowledge about its origin.
From: Frank Sully | 8/8/2021 3:19:15 PM | These 2 Things Make NVIDIA the Best Semiconductor Stock For the 2020s
Nvidia’s chip licensing and software make it the best chip stock for the next decade.
Key Points
- Nvidia is trying to acquire leading chip licensor ARM, but has its own extensive chip licensing business already established.
- A number of in-house cloud-based software services have been developed on Nvidia's hardware platform.
- More than just a semiconductor company, Nvidia is increasingly looking like a tech platform.
Demand for high-end computing is booming in the wake of the pandemic, and showing no signs of letting up. And yet at the same time, the semiconductor industry -- the provider of all technology's basic building blocks -- is consolidating to address the economy's tech needs.
There are good reasons for this, and NVIDIA's (NASDAQ:NVDA) bid for chip architecture licensing leader ARM Holdings embodies the issue. At the same time, Nvidia is pouring vast resources into research and development, and coming up with an expanding suite of cloud-based software as a result. The rulebook is changing for semiconductor industry success, and Nvidia's combo of tech hardware licensing and software makes it the best bet for the 2020s.
A new battle looming with tech giants
Cloud computing is reshaping the economy. The pandemic and the remote work movement it spawned have shoved the world further down the cloud path, cementing the data center (from which cloud services are built and delivered to users) as a critical computing unit for the decades ahead.
This has of course been a boon for semiconductor companies, but it's also presented a potential problem: chip companies' biggest customers could eventually become their biggest competitors. You see, massive secular growth trends have turned multiple companies into tech titans with vast resources at their disposal. And several of them -- including Apple (NASDAQ:AAPL), Alphabet's (NASDAQ:GOOGL)(NASDAQ:GOOG) Google, and Amazon (NASDAQ:AMZN) -- have started developing their own semiconductors to best suit their needs. All three have licensed ARM's extensive portfolio of chip designs to help get the ball rolling.
To be fair, tech giants represent a tiny share of the silicon market at this point. Even with the help of ARM's advanced blueprints, it takes incredible scale to engineer circuitry in-house and then partner with a fab (like Taiwan Semiconductor Manufacturing (NYSE:TSM)) to start production. But that's the point. Companies like Apple, Google, and Amazon are large enough and have enough spare cash that it's beginning to make financial sense for them to journey down this path. That potential is concerning for the semiconductor industry.
That's why Nvidia's bid for ARM is such an incredible move. Granted, Nvidia has promised to keep ARM independent and won't deny anyone access to its designs if the merger is approved (there are still lots of question marks on whether regulators in the U.K. where ARM is based, as well as those in Europe and China, will sign off on the deal). Nevertheless, if Nvidia does get ARM, it says it will devote more research dollars to the firm and add its own extensive tech licensing know-how -- especially in the artificial intelligence department. Rather than diminish the competitive landscape, this could give purpose-built semiconductor firms a fighting chance to continue developing best-in-class components for a world that is increasingly reliant on digital systems.
And if Nvidia doesn't get to acquire ARM? There's nothing stopping it from licensing ARM's portfolio and adding its own flair to the designs. Even ahead of the merger decision, Nvidia has announced a slew of new products aimed at the data center market. And if it can't redirect some of its research budget to ARM, it can continue to develop on its own. In fact, Nvidia is one of the top spenders on research and development as a percentage of revenue, doling out nearly one-quarter of its $19.3 billion in trailing-12-month sales on R&D.
With or without ARM, Nvidia is in prime position to dominate the tech hardware market in the decade ahead as data centers and AI grow in importance in the global economy.
Becoming a partner on the cloud software front
Of course, when it comes to designing semiconductors, the real end goal is to build a killer product or software service. Once chip companies do their job, that process has historically been out of their hands, and in the realm of engineers and software developers.
Historically, Nvidia has played by the same playbook -- but that's changed in recent years. The company has been planting all sorts of seeds for its future cloud software and service library. It has its own video game streaming platform, GeForce Now; Nvidia DRIVE has partnered with dozens of automakers and start-ups to advance autonomous vehicle software and system technology; and the creative collaboration tool Omniverse, which builds on Nvidia's digital communications capabilities, is in beta testing.
New cloud services like AI Enterprise and the Base Command Platform demonstrate the power of Nvidia's hardware, as well as its scale to build business tools it can take directly to market. While public cloud computing firms like Amazon, Microsoft (NASDAQ:MSFT), and Google get all the attention, don't ignore Nvidia. It's going after the massive and still fast-expanding software world as secular trends like the cloud and AI force the transformation of the tech world.
Between its top-notch tech hardware licensing business and newfound software prowess, it's clear Nvidia is no normal semiconductor company. It may not be the most timely purchase ever -- shares currently value the company at 95 times trailing 12-month free cash flow, partially reflecting the massive growth this year from elevated demand for its chips. The stock price could also get volatile later this year and next, especially as a more definitive answer on the ARM acquisition emerges. However, if you're looking for a top semiconductor investment for the next decade, look no further than Nvidia, as it's poised to rival the scale of the biggest of the tech giants.
From: Frank Sully | 8/9/2021 12:40:28 AM | Soar into the Hybrid-Cloud: Project Monterey Early Access Program Now Available to Enterprises
Dell Technologies, VMware and NVIDIA partner to help organizations boost data center performance, manageability and security.
August 3, 2021 by MOTTI BECK
Modern workloads such as AI and machine learning are putting tremendous pressure on traditional IT infrastructure.
Enterprises that want to stay ahead of these changes can now register to participate in early access to Project Monterey, an initiative to dramatically improve the performance, manageability and security of their environments.
VMware, Dell Technologies and NVIDIA are collaborating on this project to evolve the architecture for the data center, cloud and edge to one that is software-defined and hardware-accelerated to address the changing application requirements.
AI and other compute-intensive workloads require real-time data streaming analysis, which, along with growing security threats, puts a heavy load on server CPUs. The increased load significantly increases the percentage of processing power required to run tasks that aren’t an integral part of application workloads. This reduces data center efficiency and can prevent IT from meeting its service-level agreements.
Project Monterey is leading the shift to advanced hybrid-cloud data center architectures, which benefit from hypervisor and accelerated software-defined networking, security and storage.
Project Monterey – Next-Generation VMware Cloud Foundation Architecture
With access to Project Monterey’s preconfigured clusters, enterprises can explore the evolution of VMware Cloud Foundation and take advantage of the disruptive hardware capabilities of the Dell EMC PowerEdge R750 server equipped with NVIDIA BlueField-2 DPU (data processing unit). Selected functions that used to run on the core CPU are offloaded, isolated and accelerated on the DPU to support new possibilities, including:
- Improved performance for application and infrastructure services
- Enhanced visibility, application security and observability
- Offloaded firewall capabilities
- Improved data center efficiency and cost for enterprise, edge and cloud.
Interested organizations can register for the NVIDIA Project Monterey early access program. Learn more about NVIDIA and VMware’s collaboration to modernize the data center.
blogs.nvidia.com
From: Frank Sully | 8/9/2021 10:29:23 AM | Can You Trust The Solutions That AI Technologies Deliver? With Mathematical Optimization, You Can
Edward Rothberg, Forbes Councils Member, Forbes Technology Council
Edward Rothberg is CEO and Co-Founder of Gurobi Optimization, which produces the world’s fastest mathematical optimization solver.
Every day, more and more enterprises are using AI technologies — such as machine learning, mathematical optimization and heuristics — to make high-stakes decisions in industries like healthcare, electric power, logistics, financial services and in public sector areas like the military, infrastructure and criminal justice.
But as our reliance on these AI technologies to make critical decisions has increased, concerns over the reliability of the solutions delivered by these technologies have grown.
Numerous high-profile incidents — like self-driving cars failing to recognize slightly modified stop signs, machine learning-based scoring systems demonstrating racial bias when predicting the likelihood of criminals committing future crimes, Google Trends wrongly predicting flu outbreaks based on search data and the algorithms used by Apple to determine credit-worthiness apparently discriminating against women — have shone a spotlight on some of the inherent shortcomings and unintended biases of AI technologies, shaking our confidence in the accuracy of these solutions.
Indeed, these and other incidents have left many wondering: Can we really trust the solutions delivered by AI technologies?
The Importance Of Interpretability
The root of the problem is that many of these AI tools are black boxes – meaning that users have little or no understanding of their inner workings and how they arrive at a given solution or make a particular automated decision.
The opaqueness of many AI technologies — which are based on sophisticated algorithms and complex mathematical models — has fueled concerns that AI may be producing inaccurate solutions and perpetuating bias in decision-making — and sowed distrust in AI-based solutions and decisions. This has spurred demands for greater transparency and accountability (with some formally calling for a “right to explanation” for decisions made by algorithms) and illuminated the importance of interpretability in AI.
Interpretability — the capability to understand how an AI system works and explain why it generated a solution or made a decision — is a hot topic in the business world and an area of active research, with developers across the AI software space striving to make technologies such as machine learning more interpretable. There has been significant progress, with the introduction of new approaches that help improve the interpretability of machine learning and other AI tools.
There are, however, some AI technologies that are inherently interpretable — and mathematical optimization is, without a doubt, one of these technologies.
Indeed, interpretability is (and always has been) an essential characteristic and a key strength of mathematical optimization. As the CEO of a mathematical optimization software firm, I witness every day how organizations across the business world depend on this prescriptive analytics technology to deliver solutions they can understand, trust and use to make pivotal decisions.
Assessing Interpretability
How can we gauge the interpretability of AI technologies?
The US National Institute of Standards and Technology (NIST) has developed four principles that encompass the “core concepts of explainable AI” — and this framework provides a useful lens through which we can explore and evaluate the interpretability of AI technologies.
Let’s take a look at how mathematical optimization stacks up against these four NIST principles:
1. Knowledge Limits
AI systems should only operate within the “limits” and “under conditions” they were designed for.
Mathematical optimization systems consist of two components: a mathematical optimization solver (an algorithmic engine) and a mathematical model (a detailed, customized representation of your business problem, which encapsulates all of your decision-making processes, business objectives and constraints). Users of mathematical optimization can design their models and thereby define what the “limits” and “conditions” of their systems are.
2. Explanation
AI systems should “supply evidence, support or reasoning” for each output or solution.
A mathematical optimization model is essentially a digital twin of your business environment, and the constraints (or business rules) that must be satisfied are embedded into that model. Any solution generated by your mathematical optimization system can easily be checked against those constraints.
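To make the "checkable constraints" point concrete, here is a small, made-up production-planning model written with the gurobipy API; the products, coefficients and limits are invented for illustration. After solving, every business rule can be audited directly against the returned solution.

    # Tiny illustrative production-planning model (all numbers are made up).
    import gurobipy as gp
    from gurobipy import GRB

    m = gp.Model("production")

    # Decision variables: units of product A and product B to make.
    a = m.addVar(lb=0, name="product_a")
    b = m.addVar(lb=0, name="product_b")

    # Business rules, stated explicitly as constraints.
    m.addConstr(2 * a + 3 * b <= 100, name="machine_hours")
    m.addConstr(4 * a + 1 * b <= 80, name="raw_material")

    # Objective: maximize profit ($30 per unit of A, $20 per unit of B).
    m.setObjective(30 * a + 20 * b, GRB.MAXIMIZE)
    m.optimize()

    # Audit the solution against every constraint: interpretability in practice.
    print(f"make {a.X:.1f} of A and {b.X:.1f} of B, profit ${m.ObjVal:.0f}")
    for c in m.getConstrs():
        used = m.getRow(c).getValue()
        print(f"{c.ConstrName}: used {used:.1f} of {c.RHS:.1f} (slack {c.Slack:.1f})")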
3. Explanation Accuracy
The explanations provided by AI systems should “accurately describe how the system came to its conclusion.”
Mathematical optimization is a problem-solving technology that can rapidly and comprehensively comb through trillions or more possible solutions to incredibly complex business problems and find the optimal one. Since mathematical optimization conducts this search in a systematic fashion, the solutions delivered by this AI technology come with a mathematically backed guarantee of quality — and this fact can be audited and validated.
4. Meaningful
AI systems should “provide explanations that are understandable to individual users.”
Most AI tools — like neural networks and random forests — run on black box models. You feed them data, they work their magic and automatically spit out a solution. It’s essentially impossible (even for many developers) to gain insight into how these systems actually work or why they are making specific predictions or decisions. Mathematical optimization models, in contrast, are transparent and interpretable and are meaningful by design (as they capture the fundamental features of your real-world operating environment). The models themselves (and the solutions they deliver) reflect reality — and are thus understandable for users.
As you can see, mathematical optimization fulfills all four NIST principles and excels in the area of interpretability. With mathematical optimization, you can attain a deep understanding of how and why the AI system makes certain decisions — and thereby gain trust in those decisions.
It’s important to note that mathematical optimization is not the only AI tool that can deliver interpretable solutions — there are a number of other AI technologies that have this capability (and other technologies that are developing in this area).
When you’re deciding whether or not to invest in one of these AI technologies, one critical factor to consider is their interpretability — and the NIST principles provide a good framework through which to assess this.
Understanding “The Why” Of AI
The issue of interpretability in AI continues to captivate the business world, with “explainable AI” trends and technological breakthroughs grabbing news headlines and “explainable AI” initiatives topping the agendas of IT executives. While many companies in the AI space grapple with questions of how to make their technologies more transparent and trustworthy, there are already AI tools out there — like mathematical optimization — that are innately equipped to deliver interpretable, reliable and optimal solutions to today’s problems.
From: Frank Sully | 8/9/2021 11:14:38 AM | Synopsys, Cadence, Google And NVIDIA All Agree: Use AI To Help Design Chips
Karl Freund, Contributor, Enterprise Tech
Synopsys created a buzz in 2020, and now Google, NVIDIA, and Cadence Design have joined the party. What lies ahead?
Introduction
Designing modern semiconductors can take years and scores of engineers armed with state-of-the-art EDA design tools. But the semiconductor landscape and the world around us is being revolutionized by hundreds of new chips, primarily driven by AI. Some entrepreneurial thought leaders believe that the expensive and lengthy chip design process could shrink from 2-3 years to 2-3 months if hardware development was to become more agile, more autonomous. And chief among a new breed of agile design tools is AI itself.
The Semiconductor Design Landscape
This discussion began in earnest when EDA leader Synopsys announced DSO.ai (Design Space Optimization AI), a software product that could more autonomously identify optimal ways to arrange silicon components (layouts) on a chip to reduce area and power consumption, even while increasing performance. Using reinforcement learning, DSO.ai could evaluate billions of alternatives against design goals and produce a design that was significantly better than one produced by talented engineers. The size of the problem/solution space DSO.ai addresses is staggering: there are something like 10^90,000 possible ways to place components on a chip. That compares to roughly 10^360 possible moves in the game of Go, which was mastered by Google AI in 2016. Since reinforcement learning can play Go better than the world champion, one could conceivably design a better chip if one is willing to spend the compute time to do it.
The results are quite impressive: 18% faster operating frequency at 21% lower power, while reducing engineering time from six months to as little as one. In a recent interview, Synopsys’ Founder and Co-CEO Aart de Geus disclosed that Samsung has a working chip in-house today that was designed with DSO.ai. This would indeed be the world’s first use of AI to create a chip layout in production – from RTL to tapeout.
Google recently published results of doing something similar, as has NVIDIA. And Cadence Design Systems just announced an AI-based optimization platform similar to Synopsys DSO.ai. Before we take a look at these efforts, let’s back up a little and look at the entire semiconductor design space. A good place to start is the Gajski-Kuhn chart, which outlines all the steps of chip design along three axes: the Behavioral level, where architects define what the chip is supposed to do; the Structural level, where they determine how the chip is organized; and the Geometry level, where engineers define how the chip is laid out.
Based on this model, each step towards the center (which is when the team “tapes out” the chip to the fabrication partner) feeds the work in the next phase in a clockwise direction. To date, all the application of AI has been in the geometry space, or physical design, to address the waning of Moore’s Law.
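As a toy illustration of what searching the physical-design space means (and emphatically not how DSO.ai or the Google and NVIDIA research systems work), the sketch below places a handful of blocks on a small grid and uses random hill climbing to shrink the total Manhattan wirelength between connected blocks. Real tools optimize power, timing and area over astronomically larger spaces, which is why reinforcement learning is attractive there.

    # Toy floorplanning search: place blocks on a grid to minimize total Manhattan wirelength.
    # Purely illustrative; not DSO.ai and not Google's or NVIDIA's reinforcement-learning approach.
    import random

    BLOCKS = ["cpu", "cache", "dma", "io", "npu"]
    NETS = [("cpu", "cache"), ("cpu", "npu"), ("dma", "io"), ("cache", "npu")]
    GRID = 8  # 8x8 placement grid

    def wirelength(placement):
        return sum(abs(placement[a][0] - placement[b][0]) +
                   abs(placement[a][1] - placement[b][1]) for a, b in NETS)

    def random_placement():
        cells = random.sample([(x, y) for x in range(GRID) for y in range(GRID)], len(BLOCKS))
        return dict(zip(BLOCKS, cells))

    def hill_climb(steps=5000):
        best = random_placement()
        best_cost = wirelength(best)
        for _ in range(steps):
            cand = dict(best)
            cand[random.choice(BLOCKS)] = (random.randrange(GRID), random.randrange(GRID))
            if len(set(cand.values())) < len(BLOCKS):
                continue  # skip moves that land on an occupied cell
            cost = wirelength(cand)
            if cost < best_cost:  # greedy accept; RL or annealing would explore more broadly
                best, best_cost = cand, cost
        return best, best_cost

    placement, cost = hill_climb()
    print("wirelength:", cost, placement)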
Synopsys DSO.ai
As I covered at launch, Synopsys DSO.ai was the first entrant to apply AI to the physical design process, producing floor plans that consumed less power, ran at higher frequencies, and occupied less space than the best an experienced designer could produce. What really attracted my attention was the profound effect of AI on productivity: DSO.ai users were able to achieve in a few days what used to take teams of experts many weeks.
Google Research and NVIDIA Research
Both companies have produced research papers that describe the use of reinforcement learning to assist in the physical design of the floor plan. In Google’s case, AI is being used to lay out the floor plan of the next-generation TPU chip, and the company is investigating additional uses of AI, such as architectural optimization.
NVIDIA has similarly focused on the same low-hanging fruit, floorplanning, and with all the compute capacity it has in-house, I’d expect NVIDIA to continue to eat its own dogfood and use AI to design better AI chips.
google.com