|From: FJB||3/8/2019 1:50:40 PM|
|THE BUGATTI 'LA VOITURE NOIRE' IS THE WORLD'S MOST EXPENSIVE NEW CAR|
The Batmobile-like hypercar just sold for an astounding $18.9 million.
BRANDON FRIEDERICH | MAR 5, 2019
Bugatti just blew the minds of gearheads and luxury aficionados everywhere by unveiling the world's most expensive new car at the 2019 Geneva Motor Show.
The Bugatti "La Voiture Noire"—French for "the black car"—is a one-off hypercar created to commemorate the 110th anniversary of the French marque's founding.
A Bugatti enthusiast picked it up for a truly astounding $18.9 million, according to Business Insider.
That's nearly $6 million more than the Rolls-Royce "Sweptail" sold for when it set the previous record back in 2017.
And if we're being honest, La Voiture Noire looks way cooler than the Rolls. Basing the design on company founder Jean Bugatti's Type 57 SC Atlantic, engineers aimed to sculpt an exterior that's "all of a piece" by integrating the bumpers into the body and creating a uniform windshield that flows into the side windows.
“Every single component has been handcrafted and the carbon fiber body has a deep black gloss only interrupted by the ultra-fine fiber structure,” said Bugatti designer Etienne Salome.
“We worked long and hard on this design until there was nothing left that we could improve. For us, the coupe represents the perfect form with a perfect finish.”
For that astronomical price tag, the anonymous buyer also got the same ludicrous, 1,500-horsepower W16 that powers the 236-mph Divo and the 260-mph Chiron, along with six freakin' tailpipes.
Enjoy viewing it now, because you'll almost certainly never see the Bugatti La Voiture Noire in real life.
|RecommendKeepReplyMark as Last Read|
|From: FJB||6/11/2019 8:29:58 PM|
|Cray, AMD to Extend DOE's Exascale Frontier|
By Tiffany Trader
Cray and AMD are coming back to Oak Ridge National Laboratory to partner on the world’s largest and most expensive supercomputer. The Department of Energy’s Oak Ridge National Laboratory has selected American HPC company Cray–and its technology partner AMD–to provide the lab with its first exascale supercomputer for 2021 deployment.
The $600 million award marks the first system announcement to come out of the second CORAL (Collaboration of Oak Ridge, Argonne and Livermore) procurement process (CORAL-2). Poised to deliver “greater than 1.5 exaflops of HPC and AI processing performance,” Frontier (ORNL-5) will be based on Cray’s new Shasta architecture and Slingshot interconnect and will feature future-generation AMD Epyc CPUs and Radeon Instinct GPUs.
In a media briefing ahead of today’s announcement at Oak Ridge, the partners revealed that Frontier will span more than 100 Shasta supercomputer cabinets, each supporting 300 kilowatts of computing. Single-socket nodes will consist of one CPU and four GPUs, connected by AMD’s custom high bandwidth, low latency coherent Infinity fabric.
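The cabinet and power figures above permit a quick sanity check. A back-of-the-envelope sketch, using only the numbers quoted in this article (the cabinet count is a stated lower bound):

```python
# Back-of-the-envelope check of Frontier's compute power draw
# against the CORAL-2 power cap, using the figures quoted above.
cabinets = 100           # "more than 100 Shasta supercomputer cabinets"
kw_per_cabinet = 300     # each supporting 300 kW of computing
power_cap_mw = 40        # maximum power draw set out in the CORAL-2 RFP

compute_power_mw = cabinets * kw_per_cabinet / 1000
print(compute_power_mw)                  # 30.0 MW for the compute cabinets alone
print(compute_power_mw <= power_cap_mw)  # True: fits inside the 40 MW envelope
```

The roughly 10 MW of headroom would have to cover cooling, storage, and any cabinets beyond the "more than 100" lower bound.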
Oak Ridge Director Thomas Zacharia indicated that 40 MW of power, the maximum power draw set out in the CORAL-2 RFP, would be available for Frontier.
“Cray’s Slingshot system interconnect ties together this massive supercomputer and a new system software stack fuses the best of high performance computing and cloud capabilities,” said Cray CEO Pete Ungaro. “We worked together with AMD to design a new high density heterogeneous computing blade for Shasta and new programming environment for this new CPU-GPU node.”
Frontier will use a custom AMD Epyc processor based on a future generation of AMD’s Zen cores (beyond Rome and Milan). “[The future-gen Epycs] will have additional instructions in the microarchitecture as well as in the architecture itself for both optimization of AI as well as supercomputing workloads,” said AMD CEO Lisa Su, adding that the new Radeon Instinct GPU incorporates “extensive optimization for the AI and the computing performance, [with] mixed-precision operations for optimum deep learning performance, and high bandwidth memory for the best latency.”
The CPU and GPUs will be linked by AMD’s new coherent Infinity fabric and each GPU will be able to talk directly to the Slingshot network, enabling each node “to get the optimum performance for both supercomputing as well as AI,” said Su. All these components were designed for Frontier but will be available to enterprise applications after the system debuts, according to AMD.
Frontier marks a return for Cray and AMD to Oak Ridge, home to another Cray-AMD system, Titan. Benchmarked at 17.6 Linpack petaflops, Titan was the number one system in the world when it debuted (as an upgrade to Jaguar) in 2012. With Titan set to be decommissioned on August 1, 2019, and Frontier scheduled to be deployed in the back half of 2021 and accepted in 2022, Oak Ridge won’t be without a Cray-AMD machine for too long. While Titan used AMD (Opteron) CPUs and Nvidia (K20X) GPUs, Frontier will rely on AMD for all its in-node processing elements.
Frontier is Oak Ridge’s third machine to use a heterogeneous design. In addition to the aforementioned Titan, Oak Ridge is of course home to Summit, which became the world’s fastest supercomputer in June 2018. Its 143.5 GPU-accelerated Linpack petaflops are owed to 9,216 Power9 22-core CPUs and 27,648 Nvidia Tesla V100 GPUs.
“Since Titan, Oak Ridge has pioneered this idea of having GPU accelerators along with CPUs,” said Zacharia. “Frontier will be the third generation of supercomputing system built around this architecture and it will be the second generation AI machine.”
Frontier will be used for future application simulations for quantum computers, nuclear energy systems, fusion reactors, and precision medicines, said Zacharia, adding “Frontier finally gets us to the point where we can actually design new materials.”
“We are approaching a revolution in how we can design and analyze materials,” said Tom Evans, Oak Ridge National Laboratory technical lead for the Energy Applications Focus Area, Exascale Computing Project. “We can look and carefully characterize the electronic structure of fairly simple atoms and very simple molecules right now. But with exascale computing on Frontier, we’re trying to stretch that to molecules that consist of thousands of atoms. The more we understand about the electronic structure, the more we’re able to actually manufacture and use exotic materials for things like very small, high tensile strength materials and buildings to make them more energy efficient. At the end of the day, everything in some sense comes down to materials.”
AMD’s Forrest Norrod and Cray’s Pete Ungaro on stage at AMD’s Next Horizon event in November 2018.
In terms of number-one system bragging rights, the DOE has previously stated, and recently confirmed, that Aurora (aka Aurora21, the revised CORAL-1 system that Intel is contracted to deliver to Argonne) is on track to be the United States’, and possibly the world’s, first exascale system in 2021; since that messaging has not changed, we believe the DOE intends to deliver on that goal. However, even if Intel keeps to its timeline and Aurora is deployed and benchmarked first, Frontier is slated to be stood up on a very similar timeline and, according to publicly stated performance goals, will provide roughly 50 percent more flops capability.
Asked to comment on the “competitive” timelines for Frontier and Aurora, Zacharia said he could only comment on Frontier.
“I don’t know all the details of Aurora procurement because that information has not been publicly released, but we do know that Frontier will be the largest system by far that the DOE has procured,” he said.
“We know that Oak Ridge has experience with Summit and Titan previously in using CPU-GPU systems. We also know that the pre-exascale system that the scientific community is using today to develop all their applications and system software is on our system Summit, which is the largest machine available to anybody…. If there is any competition between the labs, it’s just competition for ideas, which is what scientists should do, but otherwise this is truly a DOE lab system effort to ensure the United States maintains the forefront of this important technology, not only because it drives technology innovation in the IT computing space but it also drives economic competition and creates jobs.”
Zacharia further cited that the goals for Frontier are aligned and consistent with the White House AI initiative as well as the National Council on American Workers, which is creating new jobs using AI and scientific computing in manufacturing and other spaces.
As for that $600-million-plus price tag, it is “by far the most expensive single machine that [the DOE has] ever procured,” said Zacharia. It’s also Cray’s largest contract ever.
The total amount includes the system build contract for “over $500 million,” as well as the development contract for “over $100 million” that will, according to Ungaro, be used to develop some of the core technologies for the machine, as well as a new programming environment that will enhance GPU programmability via extensions for Radeon Open Compute Platform (ROCm).
“The Cray Programming Environment (Cray PE)…will see a number of enhancements for increased functionality and scale,” said Cray. “This will start with Cray working with AMD to enhance these tools for optimized GPU scaling with extensions for the Radeon Open Compute Platform (ROCm). These software enhancements will leverage low-level integrations of AMD ROCm RDMA technology with Cray Slingshot to enable the Slingshot NIC to read and write data directly to GPU memory for higher application performance.”
To support the converged use of analytics, AI, and HPC at extreme scale, “Cray PE will be integrated with a full machine learning software stack with support for the most popular tools and frameworks.”
Shasta cabinet detail
Frontier marks Cray’s third major contract award for the Shasta architecture and Slingshot interconnect. Previous awards were for the National Energy Research Scientific Computing Center’s NERSC-9 pre-exascale Perlmutter system (with partners AMD and Nvidia) and the Argonne National Laboratory’s Aurora exascale system (with Intel as the prime).
Frontier is the first CORAL-2 award, announced nearly 13 months after the RFP was released. As laid out in the program’s RFP, CORAL-2 seeks to fund up to three exascale-class systems: Frontier at Oak Ridge, El Capitan at Livermore and a potential third system at Argonne if the lab chooses to make an award under the RFP and if funding is available. Like the original CORAL program, which kicked off in 2012, CORAL-2 has a mandate to field architecturally diverse machines in a way that manages risk during a period of rapid technological evolution. The stipulation indicates that “the systems residing at or planned to reside at ORNL and ANL must be diverse from one another”; however, the program allows Oak Ridge and Livermore to employ the same architecture if they choose to do so, as in the case of Summit and Sierra, which employ very similar IBM-Nvidia architectures.
The CORAL-2 effort is part of the U.S. Exascale Computing Initiative. The ECI has two components: one is the hardware delivery and the other is application readiness. The latter is the domain of the Exascale Computing Project (see HPCwire’s recent coverage to read about the latest progress), which is investing $1.7 billion to ensure there’s an exascale-ready software ecosystem to get the most from exascale hardware when it arrives.
“ECP Software Technology is excited to be a part of preparing the software stack for Frontier,” said Sandia’s Mike Heroux, director of software technology for the Exascale Computing Project. “We are already on our way, using Summit and Sierra as launching pads. Working with [Oak Ridge Leadership Computing Facility], Cray, and AMD, we look forward to providing the programming environments and tools, and math, data and visualization libraries that will unlock the potential of Frontier for producing the countless scientific achievements we expect from such a powerful system. We are privileged to be part of the effort.”
ORNL’s Center for Accelerated Application Readiness is accepting proposals from scientists to prepare their codes to run on Frontier. Check with the Frontier website for additional information.
|RecommendKeepReplyMark as Last Read|
|From: FJB||7/21/2019 12:22:38 PM|
|Google expected to achieve quantum supremacy in 2019: Here’s what that means|
Google‘s reportedly on the verge of demonstrating a quantum computer capable of feats no ordinary classical computer could perform. The term for this is quantum supremacy, and experts believe the Mountain View company could be mere months from achieving it. This may be the biggest scientific breakthrough for humanity since we figured out how to harness the power of fire. Here’s what you need to know before it happens.
Functional quantum computers currently exist – IBM, D-Wave, Google, Microsoft, Rigetti, and dozens of other companies and universities are working tirelessly to develop them – but none of them can yet do anything we can’t already do with a regular, old-fashioned computer. They’re proofs of concept. The big news right now has to do with a new “rule” called Neven’s Law, named after Google‘s quantum guru Hartmut Neven, who observed that quantum computing technology is currently snowballing at a double-exponential rate. We’ll get to that later. First, let’s talk about what quantum supremacy would actually mean for you and me.
For a basic primer on quantum computers, click here.
Why you should care
Experts predict the advent of quantum supremacy – useful quantum computers – will herald revolutionary advances in nearly every scientific field. We’re talking breakthroughs in chemistry, astrophysics, medicine, security, communications and more. It may sound like a lot of hype, but these are the grounded predictions. Others think quantum computers will help scientists unlock some of the greater mysteries of the cosmos such as how the universe came to be and whether life exists outside of our own planet.
But quantum computing is an edge technology: there’s no blueprint for wrangling subatomic particles into performing computations. Some folks believe quantum computers will never stack up to modern supercomputers. While this is a minority view, there is a valid point to be gleaned from it: quantum computers will never replace classical ones. And they’re not meant to.
You can’t replace your iPhone or PC with a quantum computer any more than you can replace your tennis shoes with a nuclear aircraft carrier. The two are designed for entirely different jobs, despite the fact they’re both related to transportation in some way.
Classical computers allow you to play games, check your emails, surf the web, and run programs. Quantum computers will, for the most part, perform simulations too complex for binary systems that run on computer bits. In other words, individual consumers will have almost no use for a quantum computer of their own, but NASA and MIT, for example, absolutely will.
What’s Google actually doing?
While quantum supremacy would be a giant breakthrough, let’s not get ahead of ourselves: The world probably isn’t going to catapult into some sort of far-future scientific utopia just because Google shows off a quantum system that can do things impossible for a binary computer. The reason experts in the field are excited right now is because of Neven’s Law – something that’s not really a law at all.
Neven’s Law is currently more of an affectionate term for a rule coined by Google‘s Hartmut Neven. At the company’s Spring Quantum Symposium this May, Neven made the claim that quantum systems are increasing in performance at a doubly-exponential rate. This means, rather than doubling in performance with successive iterations as was the case with classical computers and Moore’s Law, quantum technology is improving at a far more dramatic rate. It took 50 years to go from punch-card systems to iPhones; if Neven’s Law holds, quantum systems will make a comparable leap in a fraction of that time.
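What double-exponential growth means is easiest to see numerically. A toy comparison (the exponents here are illustrative only, not actual benchmark figures from Google):

```python
# Toy illustration: single-exponential growth (doubling each step,
# Moore's-Law-style) versus double-exponential growth, where the
# exponent itself doubles with each step.
def single_exponential(n):
    return 2 ** n

def double_exponential(n):
    return 2 ** (2 ** n)

for n in range(1, 6):
    print(n, single_exponential(n), double_exponential(n))
# After 5 steps, the single-exponential curve reaches 32,
# while the double-exponential curve reaches 2**32 = 4294967296.
```

That gap is why "it looks like nothing is happening" for a while, and then the curve leaves every familiar benchmark behind.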
Most of this improvement can be attributed to amazing new feats in error-correction – filtering out noise in quantum systems is among the biggest challenges faced by physicists. Some of the improvement has to do with the simple fact that a rising tide lifts all ships. Google‘s invested as much time, money, and personnel as any other organization involved in quantum technology (if not more). If quantum supremacy is possible, Google‘s as likely a candidate to achieve it as any other company.
Well, except IBM, but only because it takes a somewhat different view on the subject. Arguably, IBM is at the forefront of quantum technology. And there’s no reason to believe it won’t reach Google’s definition of quantum supremacy soon as well, but its leadership has been reticent to talk about goalposts like quantum supremacy.
TNW reached out to IBM to get its take. Here’s what Dr. Jay Gambetta, IBM Fellow and Global Lead of Quantum Computing, Theory, Software, IBM Q, told us:
Supremacy isn’t something you shoot for. As has been proven, it’s a moving target, and something we’ll recognize once we’ve moved to bigger things – namely demonstrating a significant performance advantage over what classical computers can do, alone. This means developing a quantum computation that’s either hundreds or thousands of times faster than a classical computation, or needs a smaller fraction of the memory required by a classical computer, or makes something possible that simply isn’t possible now with a classical computer.
We must also measure that progress beyond simple qubit counts or just coherence times. Which is why we developed quantum volume, a full-system performance metric that accounts for gate and measurement errors as well as device crosstalk and connectivity, and circuit software compiler efficiency. It’s an agnostic metric that others, including Rigetti, have benchmarked their systems against. You can read more about the Quantum Volume of our systems, and how it’s calculated, here.
What’s next
Time will tell whether IBM or Google’s approach makes more sense, but according to Neven’s Law we’re mere months away from seeing a full-fledged demonstration of quantum supremacy from one team or another. Quanta Magazine reports that Google‘s had to crib computational power from systems outside of its quantum labs just to keep up with the hand-over-fist improvements in performance. Neven told Quanta’s Kevin Hartnett:
Somewhere in February I had to make calls to say, ‘Hey, we need more quota.’ We were running jobs comprised of a million processors.
He went on to explain that, with double-exponential growth, the proof isn’t always front-and-center at first:
It looks like nothing is happening, nothing is happening, and then whoops, suddenly you’re in a different world. That’s what we’re experiencing here.
We’ve been talking about quantum computing for years, but this is the first time quantum supremacy’s been dangled in front of our faces as a near-term eventuality. Of course, with any bold claim, it’s prudent to maintain a modicum of cynicism. But Neven’s Law tells us our quantum dreams could come true before the end of the year.
All we want for Christmas is an entirely new computing paradigm capable of making classical computers look like punch-card systems. Oh, and a drone with a flamethrower attachment — but that’s unrelated.
|RecommendKeepReplyMark as Last ReadRead Replies (1)|
|From: FJB||9/14/2019 6:26:28 PM|
This Letter proposes a realistic implementation of the curved relativistic mirror concept to reach unprecedented light intensities in experiments. The scheme is based on relativistic plasma mirrors that are optically curved by laser radiation pressure. Its validity is supported by cutting-edge three-dimensional particle-in-cell simulations and a theoretical model, which show that intensities above 10²⁵ W cm⁻² could be reached with a 3 petawatt (PW) laser. Its very high robustness to laser and plasma imperfections is shown to surpass all previous schemes and should enable its implementation on existing PW laser facilities.
Received 28 November 2018
Revised 21 March 2019
DOI: https://doi.org/10.1103/PhysRevLett.123.105001
© 2019 American Physical Society
|RecommendKeepReplyMark as Last Read|
|From: FJB||10/20/2019 6:18:32 AM|
|Math Breakthrough Speeds Supercomputer Simulations |
A breakthrough by UC Davis mathematicians could help scientists get three or four times the performance from supercomputers used to model protein folding, turbulence and other complex atomic scale problems.
“This is a big deal,” said Niels Gronbech-Jensen, professor of mathematics and of mechanical and aerospace engineering at UC Davis. “We are now able to do a broad class of simulations several times faster than what has been possible before.”
Simulation of a virus particle created with LAMMPS molecular dynamics software. New work from UC Davis will allow faster and more accurate simulations of atoms and molecules. (Image by Eindhoven University of Technology via Sandia National Lab.)
One of the new algorithms has been incorporated into the Sandia National Laboratory molecular dynamics suite, LAMMPS, which is used worldwide for studies in biochemistry, materials science and other fields.
Newton’s equations describe how systems change over time. In the early twentieth century, physicist Paul Langevin developed equations that add friction and noise to Newton’s equations in order to describe a system in thermal balance. But it was only with the development of computers that it became practical to use these equations to study how large ensembles of atoms and molecules behave. That methodology, called molecular dynamics, was pioneered by, among others, Edward Teller and Bernie Alder of the Lawrence Livermore National Laboratory and the UC Davis Department of Applied Science.
Molecular dynamics simulations are now widely used in applications such as materials science and pharmaceutical research.
The timestep problem
In adapting Newton’s and Langevin’s equations to run on digital computers, scientists had to make an important change. They had to break the equations into discrete timesteps.
“The time step makes the system behave differently,” Gronbech-Jensen said.
The shorter the timesteps, the closer the simulation will be to reality, where systems change continuously. But with short timesteps it takes longer to complete a simulation. With larger timesteps, however, results can start to deviate from reality.
“We are in a squeeze between getting somewhere and being accurate,” Gronbech-Jensen said.
Molecular dynamics simulations essentially describe the movements and interactions of a lot of particles. A few years ago, Gronbech-Jensen’s research group found a way to accurately calculate the thermal distributions of positions of particles in a simulation regardless of the timestep. Over the past year, they have figured out that they can obtain accurate thermal distributions for the particle velocities as well, thereby getting a complete and accurate statistical description of a molecular ensemble simulated at large time steps.
The new algorithm allows scientists to run simulations with bigger timesteps without losing statistical accuracy. This could effectively increase computing power three- to four-fold or more, Gronbech-Jensen said – a feature that is particularly impactful for simulations that are currently challenging the most powerful supercomputers in the world.
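To make the timestep tradeoff concrete, here is a minimal sketch of a naive discrete-time Langevin integrator (a simple Euler-Maruyama scheme) for a single harmonic oscillator. This is deliberately the basic scheme whose statistics drift at large timesteps; the Gronbech-Jensen group's thermostats, defined in the Molecular Physics papers cited below, are designed to remove exactly that bias:

```python
import math
import random

def langevin_step(x, v, dt, m=1.0, k=1.0, gamma=1.0, kT=1.0, rng=random):
    """One naive Euler-Maruyama step of Langevin dynamics for a
    harmonic oscillator (force f(x) = -k*x), with friction gamma
    and Gaussian thermal noise of variance 2*gamma*kT*dt/m."""
    noise = math.sqrt(2.0 * gamma * kT * dt / m) * rng.gauss(0.0, 1.0)
    v = v + dt * (-k * x / m - gamma * v) + noise
    x = x + dt * v
    return x, v

rng = random.Random(0)   # fixed seed for reproducibility
x, v = 1.0, 0.0
steps = 20000
kinetic = 0.0
for _ in range(steps):
    x, v = langevin_step(x, v, dt=0.01, rng=rng)
    kinetic += 0.5 * v * v

# By equipartition, the mean kinetic energy should approach kT/2 = 0.5
# at small dt; at larger dt this naive scheme develops a systematic bias.
print(kinetic / steps)
```

Running the same loop at progressively larger dt shows the measured temperature drifting away from the target, which is the "squeeze" between speed and accuracy described above.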
The new capability is now freely available to use through the LAMMPS molecular dynamics suite.
Gronbech-Jensen said it’s gratifying to see his team’s work go into wide use.
“It’s great to see our work getting to the point of having tangible impact,” he said.
More information Accurate configurational and kinetic statistics in discrete-time Langevin systems (Molecular Physics)
Complete set of stochastic Verlet-type thermostats for correct Langevin simulations (Molecular Physics)
|RecommendKeepReplyMark as Last Read|
|From: FJB||10/23/2019 11:45:35 AM|
|IBM DENIES GOOGLE QUANTUM SUPREMACY|
On “Quantum Supremacy” | IBM Research Blog
October 21, 2019 | Written by: Edwin Pednault, John Gunnels, and Jay Gambetta
Quantum computers are starting to approach the limit of classical simulation and it is important that we continue to benchmark progress and to ask how difficult they are to simulate. This is a fascinating scientific question.
Recent advances in quantum computing have resulted in two 53-qubit processors: one from our group at IBM and a device described by Google in a paper published in the journal Nature. In the paper, it is argued that their device reached “quantum supremacy” and that “a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task.” We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity. This is in fact a conservative, worst-case estimate, and we expect that with additional refinements the classical cost of the simulation can be further reduced.
The original meaning of the term “quantum supremacy,” as proposed by John Preskill in 2012, was to describe the point where quantum computers can do things that classical computers can’t. By that definition, this threshold has not been met.
This particular notion of “quantum supremacy” is based on executing a random quantum circuit of a size infeasible for simulation with any available classical computer. Specifically, the paper shows a computational experiment over a 53-qubit quantum processor that implements an impressively large two-qubit gate quantum circuit of depth 20, with 430 two-qubit and 1,113 single-qubit gates, and with predicted total fidelity of 0.2%. Their classical simulation estimate of 10,000 years is based on the observation that the RAM memory requirement to store the full state vector in a Schrödinger-type simulation would be prohibitive, and thus one needs to resort to a Schrödinger-Feynman simulation that trades off space for time.
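The RAM argument is easy to make concrete: a Schrödinger-type simulation stores one complex amplitude per basis state, i.e. 2^n amplitudes for n qubits. A quick sketch, assuming 16 bytes per double-precision complex amplitude:

```python
# Memory required to hold a full n-qubit state vector in a
# Schrödinger-type simulation, assuming 16 bytes per complex
# amplitude (double-precision real and imaginary parts).
def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

pib = state_vector_bytes(53) / 2 ** 50  # bytes -> pebibytes
print(pib)  # 128.0 PiB: far beyond any machine's RAM, which is
            # why the approach described below also leans on disk.
```

That 128 PiB figure is what forces the choice between a Schrödinger-Feynman simulation (trading space for time) and, as IBM proposes, spilling the state vector to secondary storage.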
The concept of “quantum supremacy” showcases the resources unique to quantum computers, such as direct access to entanglement and superposition. However, classical computers have resources of their own such as a hierarchy of memories and high-precision computations in hardware, various software assets, and a vast knowledge base of algorithms, and it is important to leverage all such capabilities when comparing quantum to classical.
When Google made its comparison to classical computing, it relied on an advanced simulation that leverages parallelism, fast and error-free computation, and large aggregate RAM, but failed to fully account for plentiful disk storage. In contrast, our Schrödinger-style classical simulation approach uses both RAM and hard drive space to store and manipulate the state vector. Performance-enhancing techniques employed by our simulation methodology include circuit partitioning, tensor contraction deferral, gate aggregation and batching, careful orchestration of collective communication, and well-known optimization methods such as cache-blocking and double-buffering to overlap communication and computation across the CPU and GPU components of the hybrid nodes. Further details may be found in Leveraging Secondary Storage to Simulate Deep 54-qubit Sycamore Circuits.
Figure 1. Analysis of expected classical computing runtime vs circuit depth of “Google Sycamore Circuits”. The bottom (blue) line estimates the classical runtime for a 53-qubit processor (2.5 days for a circuit depth 20), and the upper line (orange) does so for a 54-qubit processor.
Our simulation approach features a number of nice properties that do not directly transfer from the classical to quantum worlds. For instance, once computed classically, the full state vector can be accessed arbitrarily many times. The runtime of our simulation method scales approximately linearly with the circuit depth (see Figure 1 above), imposing no limits such as those owing to the limited coherence times. New and better classical hardware, code optimizations to more efficiently utilize the classical hardware, not to mention the potential of leveraging GPU-direct communications to run the kind of supremacy simulations of interest, could substantially accelerate our simulation.
Building quantum systems is a feat of science and engineering and benchmarking them is a formidable challenge. Google’s experiment is an excellent demonstration of the progress in superconducting-based quantum computing, showing state-of-the-art gate fidelities on a 53-qubit device, but it should not be viewed as proof that quantum computers are “supreme” over classical computers.
It is well known in the quantum community that we at IBM are concerned about where the term “quantum supremacy” has gone. The origins of the term, including both a reasoned defense and a candid reflection on some of its controversial dimensions, were recently discussed by John Preskill in a thoughtful article in Quanta Magazine. Professor Preskill summarized the two main objections to the term that have arisen from the community by explaining that the “word exacerbates the already overhyped reporting on the status of quantum technology” and that “through its association with white supremacy, evokes a repugnant political stance.”
Both are sensible objections. And we would further add that the “supremacy” term is being misunderstood by nearly everyone outside the rarefied world of quantum computing experts who can put it in the appropriate context. A headline that includes some variation of “Quantum Supremacy Achieved” is almost irresistible to print, but it will inevitably mislead the general public. First because, as we argue above, by its strictest definition the goal has not been met. But more fundamentally, because quantum computers will never reign “supreme” over classical computers; rather, they will work in concert with them, since each has its unique strengths.
For the reasons stated above, and since we already have ample evidence that the term “quantum supremacy” is being broadly misinterpreted and causing ever-growing confusion, we urge the community to treat with a large dose of skepticism any claim that, for the first time, a quantum computer has done something a classical computer cannot; benchmarking against an appropriate metric is a complicated matter.
For quantum to positively impact society, the task ahead is to continue to build and make widely accessible ever more powerful programmable quantum computing systems that can implement, reproducibly and reliably, a broad array of quantum demonstrations, algorithms and programs. This is the only path forward for practical solutions to be realized in quantum computers.
A final thought. The concept of quantum computing is inspiring a whole new generation of scientists, including physicists, engineers, and computer scientists, to fundamentally change the landscape of information technology. If you are already pushing the frontiers of quantum computing forward, let’s keep the momentum going. And if you are new to the field, come and join the community. Go ahead and run your first program on a real quantum computer today.
The best is yet to come.
Chief Architect for IBM Q Dmitri Maslov also contributed to this article.
|RecommendKeepReplyMark as Last Read|