Last year, Nvidia (NASDAQ: NVDA) agreed to acquire SoftBank's (OTCPK: SFTBF, OTCPK: SFTBY) Arm chip unit for $40B. While that deal works its way through regulatory review, Arm is rolling out its first new chip architecture in a decade.
The Armv9 architecture was designed to address demands for stronger security and artificial intelligence processing. Arm says the architecture should deliver a 30% performance improvement over the next two generations of mobile and data center processors; the latter market is currently dominated by Intel (NASDAQ: INTC). The architecture includes the new Arm Confidential Compute Architecture, a security feature that places applications in a hardware-protected area of memory that is accessible to, but not fully controlled by, the operating system. If an app became infected with a virus, the infection wouldn't spread to the rest of the system.
The first-gen Armv9-based processors will include Arm Memory Tagging, which lets developers lock strings of data with a "tag." The data can then be accessed only with the correct key, which is held by the code that calls the data from memory. The setup aims to cut off memory corruption as a hacking tool. Scalable Vector Extension 2 (SVE2) lets developers choose a vector length in multiples of 128 bits, up to 2048 bits. And SVE moves beyond its high-performance computing focus to support a broader range of workloads, from augmented and virtual reality to genomics and computer vision.
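The tag-and-key idea behind Arm Memory Tagging can be illustrated with a toy sketch. This is a hypothetical Python model of the concept only, not Arm's actual hardware interface; the class, addresses, and tag width are assumptions for illustration (real MTE uses small hardware tags checked on every load and store).

```python
# Toy model of the Memory Tagging concept: every allocation gets a tag,
# the owning code holds the matching key, and any access with the wrong
# key is rejected instead of silently reading corrupted memory.
# Hypothetical sketch -- not Arm's API.
import secrets

class TaggedMemory:
    def __init__(self):
        self._store = {}  # address -> (tag, data)

    def allocate(self, address, data):
        tag = secrets.randbelow(16)  # assume a small 4-bit tag per allocation
        self._store[address] = (tag, data)
        return tag  # the "key" held by the code that owns this data

    def load(self, address, key):
        tag, data = self._store[address]
        if key != tag:
            raise MemoryError("tag mismatch: stale pointer or corruption")
        return data

mem = TaggedMemory()
key = mem.allocate(0x1000, b"secret")
assert mem.load(0x1000, key) == b"secret"
```

A use-after-free or buffer overflow would show up here as a load with a stale or wrong key, which raises instead of returning attacker-controlled data.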
Arm sells its processor designs and licenses its instruction set architecture to a wide range of tech companies, including Qualcomm, Samsung, NXP Semiconductors, and Google. “As we look toward a future that will be defined by AI, we must lay a foundation of leading-edge compute that will be ready to address the unique challenges to come,” says Arm CEO Simon Segars. “Armv9 is the answer. It will be at the forefront of the next 300 billion Arm-based chips driven by the demand for pervasive specialized, secure and powerful processing built on the economics, design freedom and accessibility of general-purpose compute.”
“Arm’s next-generation Armv9 architecture offers a substantial improvement in security and machine learning, the two areas that will be further emphasized in tomorrow's mobile communications devices. As we work together with Arm, we expect to see the new architecture usher in a wider range of innovations to the next generation of Samsung’s Exynos mobile processors,” says Min Goo Kim, executive vice president of SoC development at Samsung Electronics.
My old clunky Toshiba laptop was seven years old and getting temperamental. My upgrade is a bit of overkill since I don't do gaming, but I got one of those powerful gaming laptops with an Intel i7 CPU and an NVIDIA RTX 3070 GPU for $2,400. It's really fast and the graphics refresh is almost instantaneous. Now I can do crypto-mining for Ethereum. <LOL>
CPU: Intel i7 Comet Lake with 8 cores running at 2.6 GHz.
The Lightmatter photonic computer is 10 times faster than the fastest NVIDIA artificial intelligence GPU while using far less energy. And it has a runway for boosting that massive advantage by a factor of 100, according to CEO Nicholas Harris. In the process, it may just restart a moribund Moore’s Law.
Or completely blow it up.
“On typical workloads we’re up to 10 times faster than existing technologies like NVIDIA’s A100 chip,” Harris told me on a recent episode of the TechFirst podcast. “If you look at ResNet-50, which is a neural network that a lot of people operate; or BERT, which is a natural language processing neural network; or DLRM, which is a network that people use to recommend products to you ... we’re typically more than 10 times faster.”
10X faster than an NVIDIA A100 is a big deal.
The Lightmatter photonic computer core
NVIDIA markets the A100 as a component of “the most powerful accelerated server platform for AI and high performance computing,” saying it’s the “world’s first 5 petaFLOPS AI system.”
A petaflop is one thousand trillion, or one quadrillion, floating point operations per second. In comparison — and at serious risk of comparing apples to oranges — Apple’s new M1 chip reportedly delivers 2.6 teraflops. One petaflop is 1,000 teraflops, so the A100 is screaming fast.
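To make the apples-to-oranges comparison concrete, here is the back-of-envelope arithmetic using the article's own figures (marketing numbers, not measured benchmarks):

```python
# Compare the "5 petaFLOPS AI system" figure for the A100 platform
# against the ~2.6 teraFLOPS reported for Apple's M1, using the
# numbers quoted in the article.
TERA = 10**12
PETA = 10**15

a100_system_flops = 5 * PETA   # NVIDIA's system-level marketing figure
m1_flops = 2.6 * TERA          # reported single-chip figure

ratio = a100_system_flops / m1_flops
print(f"{ratio:.0f}x")  # roughly 1923x
```

Note this compares a multi-chip server platform against a single laptop chip, which is exactly why the article hedges the comparison.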
But Harris says the Lightmatter photonic computer is 10 times faster.
That’s impressive, to say the least, because it suggests a compute capacity of 50 petaflops or more per chip. In comparison, supercomputers run at performance levels of hundreds of petaflops, often using hundreds or thousands of chips to do so. Again, take this with a grain of salt: comparing vastly different computing infrastructures with a single number is probably not an apt comparison. Lightmatter is not building a general-purpose computer, and one-to-one speed comparisons may not completely make sense.
The point, however, is that it’s fast. Blazing fast.
But computing isn’t just about speed. It’s also about energy use — and heat. As everyone in technology knows, heat is a major problem impacting server farms all over the globe and limiting the speed CPUs can run at.
“Every time we shrink transistors they’re supposed to decrease how much energy they use, and that hasn’t been the case for the past 15 years,” Harris says. “And it’s turned into a really big energy problem and a challenge in cooling computer chips.”
Moore’s Law, named for Fairchild Semiconductor and Intel co-founder Gordon Moore, says that the number of transistors in chips doubles about every two years. The problem is that Moore’s Law has petered out: as we’ve shrunk transistors, they’ve approached the scale of the electron, Harris says ... and now they’re getting leaky and less reliable. We’re no longer fitting more and more transistors on a chip; instead we’re adding additional cores to chips.
A photonic computer, as the name implies, uses photons, not electrons. They’re not magic, and they’re not as good as electrons for some kinds of computing, like logic operations, control flows, and if/then statements.
As Harris says, a photonic computer is not going to run Windows.
But there are some things photonic computers like Lightmatter’s are really good at. And they turn out to be the sorts of things that are growing at exponential rates in today’s server farms and cloud computing centers:
AI. Machine learning. Neural nets.
Lightmatter will be shipping its photonic computers in a product it calls Envise by the end of the year, Harris says. Envise packages photonic processing cores with traditional transistor-based systems to offer the best of both worlds. An Envise blade includes 16 Envise chips in a 4U server configuration that uses a miserly three kilowatts of power.
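From the blade spec quoted above, a rough per-chip power figure follows directly. This is simple arithmetic on the article's numbers, not a vendor specification:

```python
# Per-chip power implied by the article's Envise blade spec:
# 16 Envise chips in a 4U server drawing 3 kW total.
blade_power_w = 3000
chips_per_blade = 16

per_chip_w = blade_power_w / chips_per_blade
print(per_chip_w)  # 187.5 W per chip, including shared server overhead
```

For context, that budget covers the whole blade (chips plus host electronics), so the photonic cores themselves would draw less.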
“Envise is really the first photonic computer, period, that you can buy, and it addresses any kind of neural net,” says Harris. “So if you want to run algorithms behind Alexa or Siri or any of the voice assistants, Envise can run those. If you want to do translation, Envise can run that. If you want to identify things in images for your self-driving car, Envise can do that too.”
According to Lightmatter, engineers can use PyTorch, TensorFlow, or ONNX — all the frameworks and formats they’re used to — to build neural networks on Envise. Lightmatter's Idiom software stack includes a compiler that translates those programs to native code for photonic processing.
Perhaps the most exciting part of photonic computing, however, is a quality of photons that is totally impossible for electrons to duplicate: color.
Because light of different colors occupies different places on the electromagnetic spectrum, you can run photonic computers on multiple colors. Simultaneously. Using the same hardware.
And that’s where Lightmatter’s photonic computers get scary fast.
“For every color we add, we increase the throughput by that number,” Harris says. “So two colors is twice as fast. Three colors is three times as fast, and the efficiency scales about the same way. So we think you can probably do 64 colors in the future. We’re not there yet, but we think that’s possible. Imagine having 64 virtual processors on a chip, and it’s just the area of one.”
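Harris's claim is that throughput scales linearly with the number of wavelengths run in parallel on the same silicon. A minimal sketch of that scaling, taking the claim at face value:

```python
# The claimed wavelength-multiplexing scaling: each added color
# (wavelength) runs in parallel on the same hardware, so throughput
# grows linearly with the color count.
def throughput(base, colors):
    """Effective throughput with `colors` wavelengths, per the claim."""
    return base * colors

assert throughput(1.0, 2) == 2.0    # two colors: twice as fast
assert throughput(1.0, 64) == 64.0  # the hoped-for 64-color future
```

Harris also says efficiency scales "about the same way," which is the basis for the 64-virtual-processors-in-one-chip-area framing.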
Normal processors do one job at a time, even if they appear to human senses and timeframes to be multitasking. Photonic processors would run multiple jobs in multiple colors at the same time.
Now you’re getting scary fast.
“I think that we have a roadmap that extends beyond 100X the current speed of accelerators,” Harris told me.
That essentially means you’d have the power of a room-sized supercomputer in a package you could fit in a large piece of carry-on luggage, running at up to 20 GHz or more.
Eventually, you might get small photonic systems in a laptop or even a smartphone. Much sooner, they’re going to turn up in cloud-based systems.
“It’d be a dream of mine to eventually power a Google search,” says Harris. “A lot of that is run on neural networks.”
Lightmatter is pretty confident about shipping product in 2021. That said, everything is experimental until it’s not, and there are likely some manufacturing and scaling challenges in the company’s way.
Assuming it all works out, however, we need photonic computers sooner rather than later. Already, data centers consume significant portions of the world’s total electricity supply: easily 1% but perhaps as much as 5% of all the electricity we generate. By 2025, some estimates say global computing could suck as much as 20% of all the world’s power ... with all the environmental damage that entails.
Photonic computing is low-power and doesn’t need the cooling that existing CPUs and GPUs require. And with far greater throughput for exactly the kinds of computing that are growing fastest, it could be the technology that reverses that power trend.
And, of course, enables continued growth in our use of AI and machine learning.
NVIDIA Corporation (NASDAQ: NVDA) shares have recovered from the market-wide tech sell-off. The company has an immediate catalyst in the form of its Analyst Day, scheduled for Monday, April 12, 1 p.m. to 3 p.m. ET.
The Nvidia Analyst: Credit Suisse analyst John Pitzer maintained an Outperform rating and $620 price target for Nvidia shares.
The Nvidia Thesis: The key near-term issues the company has to address, Pitzer said in a note, are whether Gaming is over-earning, the timeline for reaccelerating year-over-year growth in the core Data Center Group, and the regulatory process around the Arm Holdings acquisition.
The event, the analyst said, is likely to underscore key long-term EPS drivers, which continue to increase.
The company will likely highlight growing proof points of a $100 billion-plus total addressable market for the DCG, including $45 billion for Cloud, $30 billion for Enterprise and $15 billion for Edge, Pitzer said. Nvidia could also shed light on growing software monetization through its AI Application Frameworks, the analyst added.
The company is also likely to emphasize the still-robust Gaming market, with or without crypto, and the growing momentum in autonomous driving, the analyst said.
The opportunity, according to the analyst, clearly supports a long-term gross margin of 70%, an operating margin of 50% and a free cash flow margin of 30%. For the calendar year 2020, these metrics are expected at 66%, 41% and 24%, respectively, supporting the calendar year 2021 EPS of $13.35, roughly in line with the consensus, the analyst said.
NVDA Price Action: Nvidia shares were down 0.58% at $576 at market close Friday.