|NVIDIA Corporation's (NVDA) Management Presents at Piper Sandler 2021 Virtual Global Technology Conference (Transcript)|
Sep. 14, 2021 8:32 PM ET
NVIDIA Corporation (NASDAQ: NVDA) Piper Sandler 2021 Virtual Global Technology Conference Call September 14, 2021 2:00 PM ET
Company Participants
Manuvir Das - Vice President of Enterprise Computing
Conference Call Participants
Harsh Kumar - Piper Sandler & Co.
Thanks, everybody, for joining us for a very exciting session that's coming up now. We are very fortunate to be joined by Manuvir Das, who is the Vice President of Enterprise Computing at NVIDIA. NVIDIA is, of course, the single largest market cap company in its space, doing some extremely exciting things through all of its businesses, but I think the most exciting things, and no one will argue with this, are happening within the data center, where Manuvir is deep into it.
So with that, I'm going to turn it over to Manuvir. He's got a short slide deck that he wants to talk about. And Manuvir, the floor is yours.
Thank you so much, Harsh, for having me and for giving NVIDIA this opportunity to talk to the audience. It's a real privilege. I thought what I'd do at the outset is just share with you the big-picture view of what NVIDIA is doing and where we are headed in the data center and with artificial intelligence before we do some Q&A here. So I'll start with a statement about what we are sharing in the slides as we always do.
So the first picture I have here is something we've shared before, when we announced a new software product from NVIDIA called NVIDIA AI Enterprise. And I thought I would start with this just to level set. This is news we've shared previously, and why we did this work, right? So if you think about the state of the union for artificial intelligence in the enterprise, for enterprise customers at large, we are at a state today where we've had a lot of success with early adopters.
There are a few thousand companies across the world that have had great success improving their business and the experience of their customers with artificial intelligence, but the broad base of enterprise customers has yet to adopt AI, right? And what is the fundamental reason for this?
The fundamental reason is that there are two very different sets of people within every enterprise company. On the one hand, you have the data scientists. These are the people who understand AI, who understand the tools, Jupyter notebooks, all these kinds of things. They do the development of new AI capabilities, they move fast and they are pretty agile, and they are at the state-of-the-art cutting edge, doing new things every day.
On the other hand, you've got IT administrators, who are accountable and responsible for making sure that the actual applications running in the enterprise data center are safe, secure, and stable, because the business of the company depends on it, right, and the experience of the customer depends on it. And these two personas, these two worlds, are pretty far apart, because the world of the data scientist wants to use the tools and frameworks that they are comfortable with, whereas the IT administrator is used to a different model for how to deploy applications. And there is a disconnect, because IT does not know how to pick up what the data scientists produce, and the data scientists don't know how to operate in the world where IT lives.
And so we created NVIDIA AI Enterprise to address this gap. And what we did is we took NVIDIA's AI software for training, for inference, for data science, and we made it work on top of VMware vSphere, which is sort of the de facto platform in the data center. If you look at any enterprise data center today, you will find virtualized servers running VMware vSphere. And so that's what this picture shows, right? And it achieves two things at the same time.
On the one hand, for the data scientists, they see all the tools and frameworks that they are comfortable and experienced with to do their work. That's the layer in green provided by NVIDIA. On the other hand, for the IT administrator, it's the same VMware vSphere environment; they are used to the same tooling, how do I provision, how do I give access to people, but now with these new workloads for AI. And so this is really a way of bringing these two worlds together. So this is what we announced earlier this year in conjunction with VMware, which is NVIDIA AI Enterprise, really NVIDIA's way of making AI a mainstream workload for enterprise customers.
Now, this is actually just the beginning. And so what I really wanted to share with you today is that this is something NVIDIA has been thinking about and working on for many years, right? And what we realized is that mainstream artificial intelligence in enterprise data centers is a full stack problem. Of course, you need the right hardware; that is the layer I've shown you in green. But then you also need all of these pieces of software, sort of the operating system of AI, all the essential tools, so that you can run your different AI workloads.
And then finally, if you think about it, there are just different use cases, whether it's Vision AI detecting interesting things that are going on in video feeds or cybersecurity, finding attacks that are happening in your data center. And so you would love to have pieces of software that are customized frameworks for each of these use cases that are easy to adopt.
And so I've drawn this abstract picture for you that is representative, you can think of it as a brick wall, right, that if you really want to solve the artificial intelligence problem end-to-end, you need to fully construct this brick wall of all these different boxes to get a complete solution. And at NVIDIA, that is exactly what we've done.
This is the same picture, but I've replaced every one of those abstract concepts with an NVIDIA product: at the bottom, hardware products, and in the middle and at the top, all software products that NVIDIA has produced over the last few years, and especially over the last year, to really complete this brick wall. This is not a vision slide. This is an execution slide. All of these things I'm showing you on this slide today already exist and are already usable by customers.
The fact of the matter is that today NVIDIA is much more a software company than a hardware company. We have thousands of software engineers within NVIDIA who work on all of these things every day. And so we built this entire stack: a set of frameworks for these different use cases, the essential software that allows all of this to run on mainstream servers, as I said, in conjunction with folks like VMware, Cloudera, et cetera, and all of the hardware.
And then what we announced recently was we have a partnership with Equinix to put all of this technology, the hardware and the software into Equinix data centers around the world. So that for customers as they get going, it's very easy for them to start their journey where NVIDIA has pre-deployed all of these things for them. And then as they proceed in their journey, of course, they can procure and deploy these things for themselves in their own data centers or in a colocation facility.
Before I come back to you, Harsh, the final point I wanted to make was that NVIDIA is a pretty fast-moving company, right? This is our general philosophy. And so I did this exercise for myself: if I were to show you this same picture from last year, but only show you the things that were in execution mode, that we had actually produced, what would this slide look like? And this is what it would look like. Whereas today it looks like this, right?
And so I just wanted to end by making this point that NVIDIA is an R&D-first, innovation-first company. The business results we have today are based on the work we've done in the last few years. And what our teams are working on every day today, all of these software stacks that we have been producing and are putting out are to unlock the opportunity in the years ahead. And that's what we are really focused on as a company.
So Harsh, that's what I had as a bit of an opening context setting statement, if you will. Artificial intelligence and the enterprise data center is a full stack problem. It's an end-to-end problem. It requires a broad ecosystem. This is where NVIDIA is focused. We've built the hardware. We've built the software. We've created an ecosystem. We have more than 2.5 million developers who use different parts of our stack to develop their own applications and solutions.
And that's our contribution to make AI feasible for enterprise customers. And with that, we have a go-to-market motion that is in conjunction with established partners, the OEMs who produce servers, folks like VMware who produce software stacks for the data center. And we are really looking forward to this journey of democratizing AI for enterprise customers over the next few years.
And so with that, I'll stop sharing my slide deck and hand back to you, Harsh.
Q - Harsh Kumar
Manuvir, that is simply incredible to see the number of products you guys have introduced in just the last 12 months to fill in the gaps between where you were and where you're trying to go. And that brings us to an interesting topic. There have been a lot of changes here, not just with COVID; the data center is always morphing, always changing. Can you talk about how it's changing, and what are the large changes happening in the industry that you wake up and think about and say, this is the kind of direction that NVIDIA maybe needs to think about going in?
Yes. That's a great question, Harsh. And it's amazing how much the landscape of data centers has changed in the last decade. You know, you'll hear some of these buzzwords these days, like cloud, Kubernetes, containers, all these things. What's the common thread to all of that, right? The common thread to all of it is that for quite some time, computing in the data center was done in a scale-up manner. You take one server, you run your application on it, and as the applications get more demanding, you make your server bigger and bigger and more capable, right? And then you buy a few of these servers and they're super expensive.
And then what happened with the advent of the public cloud was the proliferation of a different model, which is scale-out rather than scale-up. Instead of having one giant server, let me have many small servers that cooperate to run a workload, right? This is what computer science has for decades referred to as distributed computing, which the public cloud already did. And Kubernetes and containers are just a mechanism for building your application as a distributed computing workload, right? And this is how data centers have really evolved in the last decade. So what does this mean? This means now that when you run an application, instead of running on one server, you're running on a set of servers that are working in conjunction to run your workload.
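The scale-out idea described here, many small workers cooperating on one job instead of one ever-bigger server, can be sketched in a few lines of Python. This is a toy illustration of the pattern (the function names are mine, and worker processes stand in for servers), not anything from NVIDIA's or Kubernetes' actual stack:

```python
from multiprocessing import Pool

def process_chunk(chunk):
    """One 'server' handles its share of the workload."""
    return sum(x * x for x in chunk)

def scale_out(data, workers=4):
    """Split one large job across many small workers, then combine results."""
    # Partition the data, one slice per worker.
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        partial_results = pool.map(process_chunk, chunks)
    # Aggregate the partial results, as a coordinator node would.
    return sum(partial_results)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # The answer matches what one giant server would compute alone.
    print(scale_out(data))
```

The same shape shows up in real distributed systems: partition the work, run the pieces in parallel on separate machines, then move data across the network to combine results, which is exactly why networking, security, and storage become part of the computing problem.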
So when you think about computing now, you're not just thinking about building the best server. You have to think about the networking, because the data is flowing across all these servers. You have to think about security, because if a malicious actor compromises one server, they have access to all the other servers. You have to think about how you store your data so it's accessible to all the servers, right?
So computing is really evolving to data center scale. Every workload runs within a complete data center rather than a single server. And so because of that, you have to solve this as a full stack problem. You have to think about what are the right servers, what is the right networking gear, what is the right networking software so that it goes fast, what is the software stack for orchestrating and running the workload. You have to put it all together, right? And this is how NVIDIA has really evolved; we've become a full stack company for that reason.
Now, the other thing I would say, Harsh, as you know, our genesis at NVIDIA was as a hardware company, right, with the GPU. So the other insight we had at NVIDIA was that in order to make this full stack go, you're going to need three essential components in every server. Of course, you need a CPU, which is what applications have traditionally run on. You need a GPU, which is the way of accelerating the workload, so you can do more in every server. And then you need this new form factor that we call a DPU, a data processing unit, which sits on the network interface and really runs not the workload, but the infrastructure of the data center itself, okay? So every server needs a CPU, a GPU, and a DPU in conjunction; this is our vision of the data center. And this is why we, of course, have GPUs, and we have the BlueField DPU from NVIDIA.
We also recently announced that we are working on a CPU optimized for artificial intelligence called the Grace CPU based on Arm technology. And we really see this as the future direction of the data center where every server will have a CPU, a GPU and a DPU inside, right?
So just to summarize all that, I would say, because I know I said a lot there, Harsh. We really think that computing going forward in the data center becomes a data center scale problem, a full stack problem. We believe every server needs to have a CPU, GPU and a DPU inside of it as the essential hardware components and then you need the right layers of software that I showed on my slides to bring it all together within the data center.
Amazing, Manuvir. It seems like the opportunity is getting bigger and bigger as data center compute gets distributed and flattens out, if you will. So you guys, I'm sure, talk to a lot of customers, and I'm sure the highest-end customers actually come to you with their problems and say, this is what we need to solve. What are you seeing in terms of what's actually strategically important to the customers? And what areas are these customers emphasizing versus deemphasizing, particularly as a result of, for example, COVID-19 that we're caught up in right now?
Right. You mentioned the pandemic, and that's had two profound impacts, Harsh, that we have seen from talking to customers, right? And they are two sides of the same coin, which is namely that the amount of in-person connection has gone down dramatically, right?
One side of the coin is for the companies doing their own work and their own business across the employees, et cetera. The employees are not able to sit in a room together, right? So the question is how could the company remain as productive as before, even though the employees are all in different places and working from home, right? That's one consideration.
The second consideration is that the company's engagement with their customer base has also changed because of the pandemic, right? It's become much more online and digital, even more so than before. And so with that change in how they are interacting with customers, what should they do, right? So let me just take a minute to break down each of these, right?
So if I take the first one, which is that employees are not all sitting in the same room together: our approach at NVIDIA with our customers is, instead of looking at this as a loss, to see it as a forcing function for a new opportunity, that technology can allow companies to be far more productive by leveraging people all over the world, rather than just the people in the room. And this is why we created a platform called NVIDIA Omniverse, which we now make available to enterprise customers.
And the way to think about NVIDIA Omniverse is that it is a digital, real-time, remote collaboration environment for people working on the same project. It could be engineers designing a building together. It could be designers creating the facade of a display somewhere together. And with Omniverse, all of these people can essentially log into the same place and collaborate in real time; one person makes a change and another person can see the change, right? So it creates a whole new model for collaboration and working together, right? And this is why we put so much emphasis on Omniverse. It's a big, big initiative for NVIDIA. And of course, there's a bit of a bias here, because for such a model to work well, one of the core technologies you need is really good graphics, and that's something NVIDIA knows a thing or two about, right? So it's a natural pathway, but it's also a distributed computing problem, a data center scale problem, because you're running this giant sort of thing that different people can connect to and work on, right? So that's one change, Harsh. That is within the company's own work. That's why we did Omniverse.
And of course, there have been technologies for remote work, like VDI, that NVIDIA has been working on for quite some time with our GPU technology, and we continue to do that, right? And we see a lot of adoption, for example, of workstations now, because if you think about it, if you're an employee working from home, you need a proper workstation in your home if you're going to do all your work there, right? And it changes the dynamic there, right? So that's the one side.
The other side is the company's engagement with their customer base, which is now much more digital and online than it was even two years ago, right? Look at how we're doing this conference right now. We're on a video conferencing technology. These have proliferated, right? But you also see things like the need to converse with your customers, what is called conversational AI. There are so many more customer conversations that you don't have enough humans in your company to do them all, so you need some automation, you need AI, you need a chatbot that can interact with your customers on your website, right, so you can handle more requests and more inquiries.
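The chatbot idea described here can be sketched with a toy intent matcher. Production conversational AI uses trained language models rather than keyword rules, and the intents and responses below are invented for illustration, but the request-to-intent-to-response flow looks like this:

```python
import re

# Hypothetical intents: keywords that trigger them, and a canned response.
INTENTS = {
    "order_status": (["order", "shipping", "delivery"],
                     "Your order is on its way."),
    "returns": (["return", "refund", "exchange"],
                "You can start a return from your account page."),
}

def reply(message):
    """Match a customer message to an intent and return its response."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    for keywords, response in INTENTS.values():
        if words & set(keywords):
            return response
    # No intent matched: escalate, since automation can't handle everything.
    return "Let me connect you with a human agent."
```

The payoff Manuvir describes is in the fallback line: every message the automation handles is one a human agent doesn't have to, so the same staff can cover far more inquiries.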
So on that side of the coin, we've seen a lot more companies now interested in adopting AI because they see it as a way to greatly enhance their communication with their customer base in this new era where their customer is relatively disconnected from them physically, right? So those are the two things I would point out.
And what about the other part? I mentioned something that's become more important versus something that's become less important to the customer. Can you talk about, if you have an example, I would appreciate it, something that customers are not as focused on today as they were before?
Yes. I know you'd love an answer to that one, Harsh, but I'm going to pass on that because, for better or worse, I happen to be in a position where, when I talk to customers, it's mostly about the things they want to do now, and that's where they are really focused.
That's fair. Let's build off of that first question as well. Big companies that want complex things done come to NVIDIA, and I suspect that you're probably more of a partner today, and increasingly in the future, than you were before, because they're actually coming to you saying, we need X, Y, Z, help us with that. And you're involved earlier on. Am I correct in thinking about it that way? Is it really happening? And are you seeing enhanced interaction with your customers on a daily basis around the requirements that they want to fulfill?
I think it's a great observation, Harsh. It is true. I will put it to you this way, right? AI is actually hard for any customer to implement. That's the truth of it, right? And we sort of went through this phase where we were just proving it out. The technology was complex, and we were working with a small number of customers who really needed it. Like, for example, I'm an online shopping site and I need to recommend to a customer what they should buy next. I know that AI will help me, and as painful or difficult as it might be, I'll just jump in and do it. And so those are the people we worked with earlier, right? But that has now evolved.
And as we've broadened our reach across enterprise customers, they're not looking for point pieces of technology, deploy this software or put in this piece of hardware. They want a solution, right? They want to solve a business problem. And so more and more, we find our conversations to be of that nature. Hey, this is my use case. This is the problem I'm trying to solve. Tell me, what is the recipe? What hardware do I need? What software do I need? What ISV application vendor do I need to work with? What data sets do I need to acquire in order to do my training? It's a complete discussion.
We believe in this so much, Harsh, that if you look inside NVIDIA, we have a very large, dedicated organization of what we call solution architects. These are people who are not sellers, not sales reps, not product engineers; they sit in the middle. And what they do is, every customer conversation begins with, what's the problem you're trying to solve? These solution architects will sit down with you almost as consultants, right, and as partners, and design the solution with you. And as we design that solution with you, maybe you'll use our technology, maybe you won't. That's fine. Either way, if you adopt AI, we at NVIDIA are super excited about that, right?
And we do think we have good pieces of technology. Even for folks like myself, Harsh, when we go and have conversations at the executive level with a customer, right, I never have a conversation as a vendor. I never have a conversation about, here is a product I want to sell you, right? My conversation is, what's the problem you're trying to solve? How have you architected things so far? Here is how we think you might want to architect your infrastructure and data center to go solve this problem. And if we align on that, maybe we can be of help to you with some parts of that architecture, right? That's how we go about it.
It's an amazing way to think about customer interaction, because the customer in this situation will more than likely feel you're there to help them with their issues, rather than that you're just trying to sell them a product, like you put it.
Yes. Harsh, if you don't mind, I might get into trouble for saying this, but my boss, Jensen, who is the CEO of NVIDIA, you know, I would say the one word he uses the most in meetings with folks like myself at NVIDIA is empathy. That's sort of the most important word in his dictionary. And it starts with that: have some empathy for the customer, right, understand the situation they're in, what problem they're trying to solve, what opportunity they're trying to take advantage of, and then go from there.
And I bet it allows NVIDIA to connect at a completely different level versus the rest of the vendors. Let's move on to software; you mentioned software earlier on. NVIDIA, I've noticed, over the last three years has been bringing more and more software to the marketplace, specifically as it relates to AI, which is, I would say, a core competency of NVIDIA. Can you talk about NVIDIA's AI software? What is the differentiating factor here? Where are we in the adoption curve? And, if I dreamed the dream, it's a long question, but if I dreamed the dream, what is the opportunity for NVIDIA here?
Yes. So I will do this in reverse order, with the punch line, Harsh. We believe that if we execute well on our plans, there is at least a multi-billion dollar incremental software opportunity here on top of what we are already doing, right, because today our revenue in enterprise AI is primarily based on the hardware that we provide, the GPUs and the networking gear, et cetera. But a simple way to think about it is, if you look inside an enterprise data center, there are certain layers of software, for example VMware or SAP, et cetera, that are deployed across servers, and there's a commercial model for that software. And the reason is that the software solves a very important problem for the customer, which is, how do I run my workloads? And the software is almost more important than the hardware, because the software is what the customer is experiencing.
And the customer has an expectation that the software is supported, that it has a certain level of quality and performance, that it is updated regularly, those sorts of things, right? And that's why there's a commercial model for the software. And we are now entering that world for the first time at NVIDIA, right? To date, we have produced software, but it's been made available to the community, to other people, to make their lives easier. But now, for the first time with NVIDIA AI Enterprise, we really have a similar kind of product that can be sold, and thereby a customer can rely on it, right? And there's simple math you can do: how many servers there are in the world, how many servers we expect would be used for AI, what sort of licensing you could do for the software on every server that would be fair to the customer. Then you multiply those things out, and it's at least multiple billions of dollars of incremental revenue for that layer of software, right? So I'll stop with that.
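The "simple math" described here is a per-server licensing calculation. The numbers below are hypothetical placeholders (NVIDIA did not disclose figures in this conversation); the point is only the shape of the multiplication:

```python
# Hypothetical placeholder inputs -- assumed for illustration only.
servers_worldwide = 25_000_000        # installed enterprise servers (assumed)
ai_capable_share = 0.10               # fraction used for AI workloads (assumed)
license_per_server_per_year = 2_000   # annual software license, USD (assumed)

# Multiply the three factors out, as described in the transcript.
tam = servers_worldwide * ai_capable_share * license_per_server_per_year
print(f"${tam / 1e9:.1f}B annual software opportunity")  # prints "$5.0B ..."
```

Even with conservative placeholder inputs, the product lands in the billions, which is the point being made: a modest per-server fee across a large installed base compounds into a multi-billion dollar software layer.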
Now, to your first question, let's call it, where are we with this, right? The truth of the matter is we are in the early days, right? This year is when we have rolled out the software; in fact, NVIDIA AI Enterprise went to general availability just last month, right? And we are just beginning the rollout now. There's a new version of VMware that supports it, which has been rolling out. So we are at the beginning of this journey, but we certainly expect that there will be broad adoption. And you can think of that adoption on two fronts, Harsh. One is the software itself being adopted.
But the other thing is what the software is really doing: it's making it possible for you to take the regular mainstream servers you have in your data center today, servers you would normally not think of using for AI, and now use them for AI. So it expands the balloon in two different ways. One way is there's this new thing, the software, which is a commercial proposition. The second is that the software brings a lot more servers into the picture to be used for AI, and so it expands the balloon and it expands the reach, right? So that's kind of what I would say. We see it as a big opportunity. We're early in the adoption curve. We are in the steep part of the S-curve, if you will.
And then just one thing I might say about the piece parts themselves. The simplest way I would describe this is, when you adopt AI, you need to do two things. Number one, with your data scientists and other people, you need to develop the AI. And then once you've developed it and you've got these great models, you need to deploy the AI within applications so that you can actually use it, for example, to see what's going on in your video feeds, okay?
And so essentially we produced two platforms. We have something called NVIDIA Base Command, which is what the enterprise uses to develop the AI. And we've got a platform called NVIDIA Fleet Command, which you use to then deploy your AI out to all the places where you need it, right? So that's the highest level, the simplest way of thinking about our platform. There's Base Command, there's Fleet Command, and we're very excited about these, but as I said, we are very much in the early stages.
Manuvir, just one more thing on that. Do you think there's anybody in the space that's even close to the level of work you guys are doing? I know, historically, NVIDIA has been just a pioneer in AI on the hardware side, and now I see this focus on the software side. Is there anyone even in the zip code of where NVIDIA operates in bringing the complete package together?
Yes. Of course, I am biased on this, but I would say we do not think so, right? But I will elaborate on that, right? If you think about the picture shown in my slide, the point we made was that this is really a full stack problem, from the piece parts of the hardware to the systems, to the low-level software, to the frameworks on top. We're the only company on the planet that has been working on all of these levels.
And as I said, my slide was not a vision slide. My slide was a reality slide of the things we've built. Now, we operate at all these different levels and we are big believers in the ecosystem, whether it's cloud service providers or server manufacturers or whatever, right? So our model is, we are happy to partner with anybody at any level. For example, you might be a company that focuses on building frameworks, the top level. But then we have APIs, so you can use the middle layer of our software as the basis for developing your frameworks.
You might be a system manufacturer like a Dell or HPE; you can incorporate our GPUs and our DPUs into your servers, right? So there are certainly companies at every layer. In fact, we fostered that ecosystem very intentionally, but we believe we are really the only company on the planet, Harsh, that has focused on the entire stack, right? And that's why we're able to really optimize it and tailor it for these businesses.
Well, absolutely. No question about it. You guys have been at the forefront with compute and with AI for a very long time already. Well, you brought up something fascinating earlier on, Manuvir: Omniverse. How does Omniverse fit into your software strategy? You've talked in terms of collaboration, but obviously there's got to be a longer game plan, I would think, if NVIDIA is putting so much upfront into it. What is the opportunity for adoption in the next couple of years? So maybe I'll hit you with that first and then go from there.
Yes. I'll do this one backwards too, with the punch line first, Harsh. Our math, basically, when we look at the target audience for Omniverse and the work that we've done, is that we think there are about 20 million designers and engineers out there for whom Omniverse would be a great platform for their day-to-day work. And if you just do some simple math on the subscription-based model that we've already put out, one that fits the norms and standards of the industry, if you will, this is again definitely a multi-billion dollar net incremental market opportunity from the use of Omniverse, right? So that's one way of answering the question.
The other way of answering is, as you pointed out, I talked about collaboration, and that's certainly a use case, remote collaboration, but do we see a bigger opportunity, right? The bigger opportunity we see, Harsh, is that one way of tying together everything that NVIDIA has done from its inception as a company, whether it is graphics or AI or robotics or self-driving cars or anything else, is that fundamentally we're a simulation company, okay? We build technologies in different domains that allow you to simulate something without actually having to do it. That's the core of our technology. Like, for example, think about our platform for self-driving cars. Yes, you can drive cars around and you can capture what's happening on the roads and make your cars better. Of course, we do that. But we also have a complete simulation platform that you could use to do miles and miles of driving without actually driving, right? So you can learn a lot more.
So we really believe that going forward, no matter what industry you're in, as the world evolves, simulation will become more and more routine as the basis for how you're productive. And really, what Omniverse has done is dramatically change the state-of-the-art in terms of being a platform for simulation, for real-time simulation, so you can actually model things and see what's happening, right? And we think that is a massive opportunity that goes beyond just the real-time collaboration.
Thank you. Thank you for that. On the most recent earnings call, I think Jensen focused a lot on software, for one reason or another. And in your presentation, you're talking a lot about software, so we see the change happening. My question is, looking into the future, how far out are we before you start generating a meaningful amount of revenue from this software stack that NVIDIA is bringing to the table?
Yes. I'll answer that for you, and I'll apologize in advance for answering in a relatively generic way, Harsh, instead of putting out specific numbers, right? I think this is definitely a journey that we are just beginning. We are on the steep part of the curve. We are seeing massive interest, so we know we're heading in the right direction. But certainly right now, our revenue is primarily driven by the things we have been working on over many years, right? And these new things will begin to pay off as we go forward.
But as I said, the numbers I quoted to you for both NVIDIA AI Enterprise as well as for Omniverse Enterprise are multi-billion dollar opportunities. We see these as very real opportunities, right? And I would also say, Harsh, that I want to paint the picture accurately for the audience. There's in fact a next level of software opportunity for NVIDIA that is in some ways more powerful than what I described, right? What I've been talking about here is sort of the essential software for artificial intelligence, or for collaboration and simulation with Omniverse. But if you think of the real AI journey, what is it about? It's about saying that in every walk of life, no matter what industry your company is in, there are certain functions that humans are performing, right? And you can go through those functions one by one.
Then, if we can figure out a way to automate a function with AI, you can do it much more cost-effectively, and you can free up your humans to focus on other things. A good example is that you can use NVIDIA's software frameworks to look at x-rays and detect whether there's a fracture in a person's bone, right? That's something that today a radiologist has to do, but you can take that function and automate it, right?
In the space of retail, you can look at the camera feeds from across the store and determine who's shopping for what, and what they're walking out of the store with, right? Instead of having humans in a backroom who have to sit there and look at the videos with weary eyes all the time, right? So one by one, you can take each of these human functions and replace them with some NVIDIA software.
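The pattern being described, wrapping an automated model around one narrow human task and routing only the hard cases to a person, can be sketched generically. This is a toy stand-in with hypothetical names and a trivial rule in place of a trained network; it is not NVIDIA's actual medical-imaging or retail software:

```python
# Toy sketch of automating a single human function with a model.
# detect_anomaly stands in for a trained network (e.g. fracture
# detection on x-rays); here it is just a brightness threshold.
from typing import Dict, List

def detect_anomaly(pixels: List[float], threshold: float = 0.8) -> bool:
    """Flag an image if any region looks suspicious."""
    return any(p > threshold for p in pixels)

def triage(images: Dict[str, List[float]]) -> Dict[str, str]:
    """Route only flagged cases to the human expert; clear the rest."""
    return {name: ("needs radiologist" if detect_anomaly(px) else "clear")
            for name, px in images.items()}

scans = {"scan_a": [0.1, 0.95, 0.2], "scan_b": [0.1, 0.2, 0.3]}
print(triage(scans))  # {'scan_a': 'needs radiologist', 'scan_b': 'clear'}
```

The economic point from the transcript is visible even in the toy: the human now reviews one scan instead of two, and the automated path scales with compute rather than headcount.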
So now the question you ask is, what is the potential business value of that software? The business value is not a function of how much it cost NVIDIA, in terms of engineers, to develop the software. It's a function of how valuable it is to that enterprise customer to replace or augment that human function with this automated software, right? And so we see a rich landscape of business opportunity from software there that we are yet to unlock, right? That's a whole other domain of opportunities.
So I wanted to shift gears a little bit, Manuvir. We're at the last seven or eight minutes, and I wanted to hit upon this. Of late, since maybe the acquisition of Mellanox, we hear NVIDIA talk a lot about SmartNICs and DPUs. And I guess connectivity is a core theme now, which you touched upon with the distributed computing methodology. Can you update us on, A, how you feel about the importance of things like SmartNICs and DPUs, and maybe what's the difference between the two? And then, where are you on the roadmap as a company with these two particular connectivity products?
Yes. Let me do that, Harsh. So firstly, let's just disambiguate these things, SmartNICs and DPUs, because there are a number of companies out there that work on SmartNICs, right? I think the best way to think about it is that SmartNICs are sort of step one, which is to say: I've got a network interface card, the data is flowing through there, and if I put a little bit of computing power on it, maybe some Arm CPU cores, there's some more processing I can do on the data as it flows through the network.
Now, we took this to the next level and created the concept of the DPU. Our DPU product family is called BlueField. And the idea of the DPU is that it has so much horsepower in that processor that it can take over the functions of the data center itself. We've heard a lot in the last decade about software-defined data centers. What does that really mean? It means that all these functions in your data center, firewalls and so on, for which you used to have dedicated hardware, turned into software running on the server itself. But as this happened, more and more of this load went onto the server, which meant there was less and less room for the applications themselves to actually run. So whereas you would have needed five servers to run an application, you now need 10, because the servers are being consumed by this infrastructure work.
And what our DPU really does is say: offload all of that work onto this other processor. Move it there. You free up the CPU in the main server to run your workload. And the way we built the DPU, it actually accelerates that work, just like the GPU does. If you take the firewall software and move it from the CPU to the DPU, it's not just shifting the problem; it runs 100x faster, and so you need much less silicon in the DPU to do the job than you would have on the CPU, right? So it actually saves money in the data center, right? This is why we are so high on the DPU, because it can dramatically change the way data centers are architected. So our view is every server needs a DPU.
Now, there are two specific things we have done here, Harsh, that we think distinguish NVIDIA. The first is that we learned a great lesson from when we did GPUs. We created a software SDK interface called CUDA, which was a simple way for developers to interact with the GPU. We said, no matter what GPU you use, CUDA is CUDA, right? So it makes your work portable. We've done the same thing here with DPUs. We've created an SDK called DOCA, and it's a consistent SDK across our DPU family. So again, what we say to the ecosystem is: program to this API, this SDK, and your work will carry over as we make better and better DPUs, and your software will just get better.
And the proof point of this, the second point I want to make, is that we have a roadmap. We are already working on BlueField-3, the third generation. We've already announced the architecture of BlueField-4, right? And it's not just making that processor better; we are now working on versions of that processor where we've actually got GPU capabilities inside the DPU as well, right? So you can do AI inside the network, right? Think about what that enables. So that's how I'd summarize it, Harsh.
On the one hand, we have a rich hardware roadmap for how much more powerful DPUs are becoming. On the other, we've created an interface called DOCA that rides along with it. So for the ecosystem, you just develop once, and as the processor gets better, your software just gets better along the way.
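The design choice behind DOCA (and CUDA before it), a stable programming interface over successive generations of improving hardware, is a classic software pattern. Here is a minimal sketch of that pattern with hypothetical names; the real DOCA SDK is a C library with an entirely different API, so this only illustrates the idea, not the product:

```python
# Sketch of the "stable SDK over evolving hardware" pattern.
# All names are hypothetical; this is not the real DOCA API.
from abc import ABC, abstractmethod

class OffloadDevice(ABC):
    """The interface applications program against -- it never changes."""
    @abstractmethod
    def filter_packets(self, packets: list) -> list: ...

class Gen2Device(OffloadDevice):
    def filter_packets(self, packets):
        return [p for p in packets if p.get("allowed")]

class Gen3Device(OffloadDevice):
    # A newer generation: same interface, extra capability inside.
    def filter_packets(self, packets):
        return [p for p in packets
                if p.get("allowed") and not p.get("flagged")]

def firewall(dev: OffloadDevice, traffic: list) -> list:
    # Application code written once against the interface;
    # swapping in newer hardware requires no changes here.
    return dev.filter_packets(traffic)

traffic = [{"allowed": True},
           {"allowed": False},
           {"allowed": True, "flagged": True}]
print(len(firewall(Gen2Device(), traffic)))  # 2
print(len(firewall(Gen3Device(), traffic)))  # 1
```

The `firewall` function is the analogue of ecosystem software written against DOCA: it runs unmodified on either device generation, and simply gets better behavior when the underlying device improves.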
Manuvir, it's amazing. You described it so well. I think maybe 15, 17 years ago, maybe even 20 years ago, when the first NICs were coming out, I was trying to understand what they did. And the point was, they take some of the complex functionality off of the CPU and do it for the CPU. It seems like the same thing is happening, except the functions are getting more complex and more software-rich. The basic idea is the same, but we're moving up the stack, which is great for companies like you and actually makes the data center simpler in some ways because, like you said, it's more cost-effective. So anyway, fantastic stuff. A lot to think about there, a lot to unpack.
Manuvir, as always, pleasure to have you. Thank you so much for your time. Thank you, anybody that joined in and listened to this presentation, and we really appreciate your time. Thank you, Manuvir.
Thank you, Harsh. It was my pleasure. And on behalf of Jensen and the entire team at NVIDIA, really appreciate the opportunity to be with you today.
Thank you so much. Take care.