From: FJB  10/20/2019 6:18:32 AM     Math Breakthrough Speeds Supercomputer Simulations Andy Fell egghead.ucdavis.edu/2019/10/18/math-breakthrough-speeds-supercomputer-simulations/
A breakthrough by UC Davis mathematicians could help scientists get three or four times the performance from supercomputers used to model protein folding, turbulence and other complex atomic-scale problems.
“This is a big deal,” said Niels Grønbech-Jensen, professor of mathematics and of mechanical and aerospace engineering at UC Davis. “We are now able to do a broad class of simulations several times faster than what has been possible before.”
Simulation of a virus particle created with LAMMPS molecular dynamics software. New work from UC Davis will allow faster and more accurate simulations of atoms and molecules. (Image by Eindhoven University of Technology via Sandia National Lab.)
One of the new algorithms has been incorporated into the Sandia National Laboratory molecular dynamics suite, LAMMPS, which is used worldwide for studies in biochemistry, materials science and other fields.
Newton’s equations describe how systems change over time. In the early twentieth century, physicist Paul Langevin developed equations that add friction and noise to Newton’s equations in order to describe a system in thermal balance. But it was only with the development of computers that it became practical to use these equations to study how large ensembles of atoms and molecules behave. That methodology, called molecular dynamics, was pioneered by, among others, Edward Teller and Bernie Alder of the Lawrence Livermore National Laboratory and the UC Davis Department of Applied Science.
Molecular dynamics simulations are now widely used in applications such as materials science and pharmaceutical research.
The timestep problem
In adapting Newton’s and Langevin’s equations to run on digital computers, scientists had to make an important change. They had to break the equations into discrete timesteps.
“The timestep makes the system behave differently,” Grønbech-Jensen said.
The shorter the timesteps, the closer the simulation will be to reality, where systems change continuously. But with short timesteps it takes longer to complete a simulation. With larger timesteps, however, results can start to deviate from reality.
“We are in a squeeze between getting somewhere and being accurate,” Grønbech-Jensen said.
Molecular dynamics simulations essentially describe the movements and interactions of a lot of particles. A few years ago, Grønbech-Jensen’s research group found a way to accurately calculate the thermal distributions of positions of particles in a simulation regardless of the timestep. Over the past year, they have figured out that they can obtain accurate thermal distributions for the particle velocities as well, thereby getting a complete and accurate statistical description of a molecular ensemble simulated at large timesteps.
The new algorithm allows scientists to run simulations with bigger timesteps without losing statistical accuracy. This could effectively increase computing power three- to fourfold or more, Grønbech-Jensen said – a feature that is particularly impactful for simulations that are currently challenging the most powerful supercomputers in the world.
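The published GJF update (since folded into LAMMPS) can be sketched in a few lines. Below is a minimal, hedged implementation of the position/velocity update for a single particle; the variable names and the harmonic-oscillator demo are my own choices, but the update follows the published equations:

```python
import numpy as np

def gjf_step(x, v, force, m, gamma, kT, dt, rng):
    """One step of a Gronbech-Jensen/Farago (GJF) Langevin integrator.

    Sketch of the published update equations; names are mine.
    gamma = friction coefficient, kT = thermal energy, dt = timestep.
    """
    b = 1.0 / (1.0 + gamma * dt / (2.0 * m))                 # GJF prefactor
    beta = rng.normal(0.0, np.sqrt(2.0 * gamma * kT * dt))   # thermal noise impulse
    f_old = force(x)
    x_new = (x + b * dt * v + b * dt**2 / (2 * m) * f_old
             + b * dt / (2 * m) * beta)
    v_new = (v + dt / (2 * m) * (f_old + force(x_new))
             - gamma / m * (x_new - x) + beta / m)
    return x_new, v_new

# Demo: harmonic oscillator, force = -k*x.  The configurational statistics
# should satisfy equipartition, <x^2> = kT/k, even at a fairly large timestep.
rng = np.random.default_rng(0)
k = m = gamma = kT = 1.0
dt = 0.1
x, v = 0.0, 0.0
samples = []
for step in range(100_000):
    x, v = gjf_step(x, v, lambda q: -k * q, m, gamma, kT, dt, rng)
    if step > 1_000:                      # discard equilibration
        samples.append(x * x)
msd = float(np.mean(samples))
print(msd)                                # should be close to kT/k = 1.0
```

The point of the method is exactly what the demo checks: the sampled position statistics stay correct even though dt is large.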
The new capability is now freely available to use through the LAMMPS molecular dynamics suite.
Grønbech-Jensen said it’s gratifying to see his team’s work go into wide use.
“It’s great to see our work getting to the point of having tangible impact,” he said.
More information
Accurate configurational and kinetic statistics in discrete-time Langevin systems (Molecular Physics)
Complete set of stochastic Verlet-type thermostats for correct Langevin simulations (Molecular Physics)

From: FJB  10/23/2019 11:45:35 AM     IBM DENIES GOOGLE QUANTUM SUPREMACY
ibm.com
On “Quantum Supremacy” | IBM Research Blog
October 21, 2019 | Written by: Edwin Pednault, John Gunnels, and Jay Gambetta
Categorized: Quantum Computing
Quantum computers are starting to approach the limit of classical simulation and it is important that we continue to benchmark progress and to ask how difficult they are to simulate. This is a fascinating scientific question.
Recent advances in quantum computing have resulted in two 53-qubit processors: one from our group in IBM and a device described by Google in a paper published in the journal Nature. In the paper, it is argued that their device reached “quantum supremacy” and that “a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task.” We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity. This is in fact a conservative, worst-case estimate, and we expect that with additional refinements the classical cost of the simulation can be further reduced.
The original meaning of the term “quantum supremacy,” as proposed by John Preskill in 2012, was to describe the point where quantum computers can do things that classical computers can’t. By that definition, this threshold has not been met.
This particular notion of “quantum supremacy” is based on executing a random quantum circuit of a size infeasible for simulation with any available classical computer. Specifically, the paper shows a computational experiment over a 53-qubit quantum processor that implements an impressively large two-qubit gate quantum circuit of depth 20, with 430 two-qubit and 1,113 single-qubit gates, and with predicted total fidelity of 0.2%. Their classical simulation estimate of 10,000 years is based on the observation that the RAM memory requirement to store the full state vector in a Schrödinger-type simulation would be prohibitive, and thus one needs to resort to a Schrödinger-Feynman simulation that trades off space for time.
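To see why the full state vector strains RAM, a quick back-of-the-envelope check (assuming the usual 16 bytes per double-precision complex amplitude):

```python
# A full Schrodinger-type simulation of 53 qubits stores one complex
# amplitude per basis state: 2**53 amplitudes, 16 bytes each.
amplitudes = 2 ** 53
bytes_needed = amplitudes * 16            # = 2**57 bytes
petabytes = bytes_needed / 1e15
print(f"{petabytes:.0f} PB")              # ~144 PB of state vector
```

No single machine has that much RAM, which is why IBM's approach below spills the vector to disk instead of abandoning the Schrödinger-style simulation.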
The concept of “quantum supremacy” showcases the resources unique to quantum computers, such as direct access to entanglement and superposition. However, classical computers have resources of their own such as a hierarchy of memories and highprecision computations in hardware, various software assets, and a vast knowledge base of algorithms, and it is important to leverage all such capabilities when comparing quantum to classical.
When their comparison to classical was made, they relied on an advanced simulation that leverages parallelism, fast and error-free computation, and large aggregate RAM, but failed to fully account for plentiful disk storage. In contrast, our Schrödinger-style classical simulation approach uses both RAM and hard drive space to store and manipulate the state vector. Performance-enhancing techniques employed by our simulation methodology include circuit partitioning, tensor contraction deferral, gate aggregation and batching, careful orchestration of collective communication, and well-known optimization methods such as cache-blocking and double-buffering, in order to overlap communication with the computation taking place on the CPU and GPU components of the hybrid nodes. Further details may be found in Leveraging Secondary Storage to Simulate Deep 54-qubit Sycamore Circuits.
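For intuition, a Schrödinger-style simulation at toy scale is just a sequence of small tensor contractions against an in-memory state vector. This sketch (function and variable names are mine, not from IBM's code) applies a Hadamard gate to each of three qubits:

```python
import numpy as np

def apply_1q(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` of an n-qubit state vector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(psi, target, 0)        # bring target axis to the front
    psi = np.tensordot(gate, psi, axes=1)    # contract gate with that axis
    psi = np.moveaxis(psi, 0, target)        # restore the axis order
    return psi.reshape(-1)

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                  # start in |000>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
for q in range(n):
    state = apply_1q(state, H, q, n)
probs = np.abs(state) ** 2
print(np.round(probs, 3))                       # uniform: eight entries of 0.125
```

The production problem is the same loop with 2^53 amplitudes, which is where the partitioning, batching, and disk orchestration listed above come in.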
Figure 1. Analysis of expected classical computing runtime vs. circuit depth of “Google Sycamore Circuits.” The bottom (blue) line estimates the classical runtime for a 53-qubit processor (2.5 days at circuit depth 20), and the upper (orange) line does so for a 54-qubit processor.
Our simulation approach features a number of nice properties that do not directly transfer from the classical to the quantum world. For instance, once computed classically, the full state vector can be accessed arbitrarily many times. The runtime of our simulation method scales approximately linearly with the circuit depth (see Figure 1 above), imposing no limits such as those owing to limited coherence times. New and better classical hardware, code optimizations to more efficiently utilize the classical hardware, and the potential of leveraging GPUDirect communications to run the kind of supremacy simulations of interest could substantially accelerate our simulation.
Building quantum systems is a feat of science and engineering, and benchmarking them is a formidable challenge. Google’s experiment is an excellent demonstration of the progress in superconducting-based quantum computing, showing state-of-the-art gate fidelities on a 53-qubit device, but it should not be viewed as proof that quantum computers are “supreme” over classical computers.
It is well known in the quantum community that we at IBM are concerned about where the term “quantum supremacy” has gone. The origins of the term, including both a reasoned defense and a candid reflection on some of its controversial dimensions, were recently discussed by John Preskill in a thoughtful article in Quanta Magazine. Professor Preskill summarized the two main objections to the term that have arisen from the community by explaining that the “word exacerbates the already overhyped reporting on the status of quantum technology” and that “through its association with white supremacy, evokes a repugnant political stance.”
Both are sensible objections. And we would further add that the “supremacy” term is being misunderstood by nearly everyone outside the rarefied world of quantum computing experts who can put it in the appropriate context. A headline that includes some variation of “Quantum Supremacy Achieved” is almost irresistible to print, but it will inevitably mislead the general public. First because, as we argue above, by its strictest definition the goal has not been met. But more fundamentally, because quantum computers will never reign “supreme” over classical computers; rather, they will work in concert with them, since each has its unique strengths.
For the reasons stated above, and because we already have ample evidence that the term “quantum supremacy” is being broadly misinterpreted and is causing ever-growing confusion, we urge the community to treat with a large dose of skepticism any claim that, for the first time, a quantum computer has done something a classical computer cannot, given the complicated nature of benchmarking with an appropriate metric.
For quantum to positively impact society, the task ahead is to continue to build and make widely accessible ever more powerful programmable quantum computing systems that can implement, reproducibly and reliably, a broad array of quantum demonstrations, algorithms and programs. This is the only path forward for practical solutions to be realized in quantum computers.
A final thought. The concept of quantum computing is inspiring a whole new generation of scientists, including physicists, engineers, and computer scientists, to fundamentally change the landscape of information technology. If you are already pushing the frontiers of quantum computing forward, let’s keep the momentum going. And if you are new to the field, come and join the community. Go ahead and run your first program on a real quantum computer today.
The best is yet to come.
Dmitri Maslov, Chief Architect for IBM Q, also contributed to this article.

To: FJB who wrote (373)  11/1/2019 7:33:40 PM  From: trickydick    FUBHO: This is fascinating stuff, almost sounds like science fiction, only, as you said, it's going to become commercialized soon. I'm new to some of these boards and I'm always looking for new opportunities for investments.
I may be stepping in here when huge volumes of information may have already been shared, over long periods, so my apologies if my inquiry is burdensome and amateurish.
So, my question is: Are you invested in this technology through the stock market? Do you see this specific type of computer as a field to invest in? Has the best time already come and gone? Is this technology so finite that investing in it wouldn't be worth the time to research it?
Just trying to expand my horizons.
I would appreciate any help and/or direction you can give us. And, thank you. 

From: FJB  4/8/2020 11:16:53 AM     ‘Amazing’ Math Bridge Extended Beyond Fermat’s Last Theorem By Erica Klarreich
April 6, 2020
quantamagazine.org
Robert Langlands, who conjectured the influential Langlands correspondence about 50 years ago, giving a talk at the Institute for Advanced Study in Princeton, New Jersey, in 2016.
Dan Komoda/Institute for Advanced Study
Namely, for both Diophantine equations and automorphic forms, there’s a natural way to generate an infinite sequence of numbers. For a Diophantine equation, you can count how many solutions the equation has in each clock-style arithmetic system (for example, in the usual 12-hour clock, 10 + 4 = 2). And for the kind of automorphic form that appears in the Langlands correspondence, you can compute an infinite list of numbers analogous to quantum energy levels.
If you include only the clock arithmetics that have a prime number of hours, Langlands conjectured that these two number sequences match up in an astonishingly broad array of circumstances. In other words, given an automorphic form, its energy levels govern the clock sequence of some Diophantine equation, and vice versa.
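The “clock sequence” on the Diophantine side is concrete enough to compute by brute force. A small sketch for the elliptic curve y² = x³ − x (the choice of curve and the names are mine, for illustration):

```python
def count_points(a, b, p):
    """Count solutions of y^2 = x^3 + a*x + b in clock arithmetic with p hours."""
    return sum(1 for x in range(p) for y in range(p)
               if (y * y - (x * x * x + a * x + b)) % p == 0)

# Clock sequence for y^2 = x^3 - x over the first few primes.
# a_p = p - (number of solutions) is the quantity the matching
# automorphic form is expected to reproduce.
for p in [3, 5, 7, 11, 13]:
    n_p = count_points(-1, 0, p)
    print(p, n_p, p - n_p)
```

The Langlands correspondence asserts that this sequence of a_p values, prime by prime, matches the “energy levels” of some automorphic form.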
This connection is “weirder than telepathy,” Emerton said. “How these two sides communicate with each other … for me it seems incredible and amazing, even though I have been studying it for over 20 years.”
In the 1950s and 1960s, mathematicians figured out the beginnings of this bridge in one direction: how to go from certain automorphic forms to elliptic curves with coefficients that are rational numbers (ratios of whole numbers). Then in the 1990s, Wiles, with contributions from Taylor, figured out the opposite direction for a certain family of elliptic curves. Their result gave an instant proof of Fermat’s Last Theorem, since mathematicians had already shown that if Fermat’s Last Theorem were false, at least one of those elliptic curves would not have a matching automorphic form.
Fermat’s Last Theorem was far from the only discovery to emerge from the construction of this bridge. Mathematicians have used it, for instance, to prove the Sato-Tate conjecture, a decades-old problem about the statistical distribution of the number of clock solutions to an elliptic curve, as well as a conjecture about the energy levels of automorphic forms that originated with the legendary early 20th-century mathematician Srinivasa Ramanujan.
After Wiles and Taylor published their findings, it became clear that their method still had some juice. Soon mathematicians figured out how to extend the method to all elliptic curves with rational coefficients. More recently, mathematicians figured out how to cover coefficients that include simple irrational numbers, such as 3 + √2.
What they couldn’t do, however, was extend the Taylor-Wiles method to elliptic curves whose coefficients include complex numbers such as i (the square root of −1) or 3 + i or √2 i. Nor could they handle Diophantine equations with exponents much higher than those in elliptic curves. Equations where the highest exponent on the right-hand side is 4 instead of 3 come along for free with the Taylor-Wiles method, but as soon as the exponent rises to 5, the method no longer works.
Mathematicians gradually realized that for these two next natural extensions of the Langlands bridge, it wasn’t simply a matter of finding some small adjustment to the Taylor-Wiles method. Instead, there seemed to be a fundamental obstruction.
They’re “the next examples you’d think of,” Gee said. “But you’re told, ‘No, these things are hopelessly out of reach.’”
The problem was that the Taylor-Wiles method finds the matching automorphic form for a Diophantine equation by successively approximating it with other automorphic forms. But in the situations where the equation’s coefficients include complex numbers or the exponent is 5 or higher, automorphic forms become exceedingly rare — so rare that a given automorphic form will usually have no nearby automorphic forms to use for approximation purposes.
In Wiles’ setting, the automorphic form you’re seeking “is like a needle in a haystack, but the haystack exists,” Emerton said. “And it’s almost as if it’s like a haystack of iron filings, and you’re putting in this magnet so it lines them up to point to your needle.”
But when it comes to complexnumber coefficients or higher exponents, he said, “it’s like a needle in a vacuum.”
Going to the Moon
Many of today’s number theorists came of age in the era of Wiles’ proof. “It was the only piece of mathematics I ever saw on the front page of a newspaper,” recalled Gee, who was 13 at the time. “For many people, it’s something that seemed exciting, that they wanted to understand, and then they ended up working in this area because of that.”
So when in 2012, two mathematicians — Frank Calegari of the University of Chicago and David Geraghty (now a research scientist at Facebook) — proposed a way to overcome the obstruction to extending the TaylorWiles method, their idea sent ripples of excitement through the new generation of number theorists.
Their work showed that “this fundamental obstruction to going any further is not really an obstruction at all,” Gee said. Instead, he said, the seeming limitations of the Taylor-Wiles method are telling you “that in fact you’ve only got the shadow of the actual, more general method that [Calegari and Geraghty] introduced.”
In the cases where the obstruction crops up, the automorphic forms live on higher-dimensional tilings than the two-dimensional Escher-style tilings Wiles studied. In these higher-dimensional worlds, automorphic forms are inconveniently rare. But on the plus side, higher-dimensional tilings often have a much richer structure than two-dimensional tilings do. Calegari and Geraghty’s insight was to tap into this rich structure to make up for the shortage of automorphic forms.
More specifically, whenever you have an automorphic form, you can use its “coloring” of the tiling as a sort of measuring tool that can calculate the average color on any chunk of the tiling you choose. In the two-dimensional setting, automorphic forms are essentially the only such measuring tools available. But for higher-dimensional tilings, new measuring tools crop up, called torsion classes, which assign to each chunk of the tiling not an average color but a number from a clock arithmetic. There’s an abundance of these torsion classes.
For some Diophantine equations, Calegari and Geraghty proposed, it might be possible to find the matching automorphic form by approximating it not with other automorphic forms but with torsion classes. “The insight they had was fantastic,” Caraiani said.
Calegari and Geraghty provided the blueprint for a much broader bridge from Diophantine equations to automorphic forms than the one Wiles and Taylor built. Yet their idea was far from a complete bridge. For it to work, mathematicians would first have to prove three major conjectures. It was, Calegari said, as if his paper with Geraghty explained how you could get to the moon — provided someone would obligingly whip up a spaceship, rocket fuel and spacesuits. The three conjectures “were completely beyond us,” Calegari said.
In particular, Calegari and Geraghty’s method required that there already be a bridge going in the other direction, from automorphic forms to the Diophantine equations side. And that bridge would have to transport not just automorphic forms but also torsion classes. “I think a lot of people thought this was a hopeless problem when Calegari and Geraghty first outlined their program,” said Taylor, who is now at Stanford University.
Yet less than a year after Calegari and Geraghty posted their paper online, Peter Scholze — a mathematician at the University of Bonn who went on to win the Fields Medal, mathematics’ highest honor — astonished number theorists by figuring out how to go from torsion classes to the Diophantine equations side in the case of elliptic curves whose coefficients are simple complex numbers such as 3 + 2i or 4 − √5 i. “He’s done a lot of exciting things, but that’s perhaps his most exciting achievement,” Taylor said.
Scholze had proved the first of Calegari and Geraghty’s three conjectures. And a pair of subsequent papers by Scholze and Caraiani came close to proving the second conjecture, which involves showing that Scholze’s bridge has the right properties.
It started to feel as if the program was within reach, so in the fall of 2016, to try to make further progress, Caraiani and Taylor organized what Calegari called a “secret” workshop at the Institute for Advanced Study. “We took over the lecture room — no one else was allowed in,” Calegari said.
After a couple of days of expository talks, the workshop participants started realizing how to both polish off the second conjecture and sidestep the third conjecture. “Maybe within a day of having actually stated all the problems, they were all solved,” said Gee, another participant.
The participants spent the rest of the week elaborating various aspects of the proof, and over the next two years they wrote up their findings into a 10-author paper — an almost unheard-of number of authors for a number theory paper. Their paper essentially establishes the Langlands bridge for elliptic curves with coefficients drawn from any number system made up of rational numbers plus simple irrational and complex numbers.
“The plan in advance [of the workshop] was just to see how close one could get to proving things,” Gee said. “I don’t think anyone really expected to prove the result.”
Extending the Bridge
Meanwhile, a parallel story was unfolding for extending the bridge beyond elliptic curves. Calegari and Gee had been working with George Boxer (now at the École Normale Supérieure in Lyon, France) to tackle the case where the highest exponent in the Diophantine equation is 5 or 6 (instead of 3 or 4, the cases that were already known). But the three mathematicians were stuck on a key part of their argument.
Then, the very weekend after the “secret” workshop, Vincent Pilloni of the École Normale Supérieure put out a paper that showed how to circumvent that very obstacle. “We have to stop what we’re doing now and work with Pilloni!” the other three researchers immediately told each other, according to Calegari.
Within a few weeks, the four mathematicians had solved this problem too, though it took a couple of years and nearly 300 pages for them to fully flesh out their ideas. Their paper and the 10-author paper were both posted online in late December 2018, within four days of each other.
Soon after the secret workshop at the IAS, Frank Calegari (left), Toby Gee (center) and Vincent Pilloni, working with George Boxer (not pictured), found a way to extend the Langlands bridge beyond elliptic curves.
Frank Calegari, University of Chicago; Courtesy of Toby Gee; Arnold Nipoli
“I think they’re pretty huge,” Emerton said of the two papers. Those papers and the preceding building blocks are all “state of the art,” he said.
While these two papers essentially prove that the mysterious telepathy between Diophantine equations and automorphic forms carries over to these new settings, there’s one caveat: They don’t quite build a perfect bridge between the two sides. Instead, both papers establish “potential automorphy.” This means that each Diophantine equation has a matching automorphic form, but we don’t know for sure that the automorphic form lives in the patch of its continent that mathematicians would expect. But potential automorphy is enough for many applications — for instance, the Sato-Tate conjecture about the statistics of clock solutions to Diophantine equations, which the 10-author paper succeeded in proving in much broader contexts than before.
And mathematicians are already starting to figure out how to improve on these potential automorphy results. In October, for instance, three mathematicians — Patrick Allen of the University of Illinois, Urbana-Champaign, Chandrashekhar Khare of the University of California, Los Angeles and Jack Thorne of the University of Cambridge — proved that a substantial proportion of the elliptic curves studied in the 10-author paper do have bridges that land in exactly the right place.
Bridges with this higher level of precision may eventually allow mathematicians to prove a host of new theorems, including a century-old generalization of Fermat’s Last Theorem. This conjecture holds that the equation at the heart of the theorem continues to have no solutions even when x, y and z are drawn not just from whole numbers but from combinations of whole numbers and the imaginary number i.
The two papers carrying out the Calegari-Geraghty program form an important proof of principle, said Michael Harris of Columbia University. They’re “a demonstration that the method does have wide scope,” he said.
While the new papers connect much wider regions of the two Langlands continents than before, they still leave vast territories uncharted. On the Diophantine equations side, there are still all the equations with exponents higher than 6, as well as equations with more than two variables. On the other side are automorphic forms that live on more complicated symmetric spaces than the ones that have been studied so far.
“These papers, right now, are kind of the pinnacle of achievement,” Emerton said. But “at some point, they will just be looked back at as one more step on the way.”
Langlands himself never considered torsion when he thought about automorphic forms, so one challenge for mathematicians is to come up with a unifying vision of these different threads. “The envelope is being expanded,” Taylor said. “We’ve to some degree left the path laid out by Langlands, and we don’t quite know where we’re going.”


To: FJB who wrote (382)  4/22/2020 3:57:35 PM  From: trickydick    FUBHO: Fascinating material, of which I probably understood about 1%, if I'm lucky. BUT, what's this have to do with IMMU and its stock activity? Do you understand this material? If you do, can you give us a very short summary of what this means? The only question I can think of is: Is this theorem work associated with subatomic physics? Just curious, thank you. 

From: FJB  6/15/2020 6:09:28 AM    
Aging Problems At 5nm And Below
semiengineering.com
The mechanisms that cause aging in semiconductors have been known for a long time, but the concept did not concern most people because the expected lifetime of parts was far longer than their intended deployment in the field. In a short period of time, all of that has changed.
As device geometries have become smaller, the issue has become more significant. At 5nm, it becomes an essential part of the development flow with tools and flows evolving rapidly as new problems are discovered, understood and modeled.
“We have seen it move from being a boutique technology, used by specific design groups, into something that’s much more of a regular part of the signoff process,” says Art Schaldenbrand, senior product manager at Cadence. “As we go down into these more advanced nodes, the number of issues you have to deal with increases. At half micron you might only have to worry about hot carrier injection (HCI) if you’re doing something like a power chip. As you go down below 180nm you start seeing things like negative-bias temperature instability (NBTI). Further down you get into other phenomena like self-heating, which becomes a significant reliability problem.”
The ways of dealing with it in the past are no longer viable. “Until recently, designers very conservatively dealt with the aging problem by overdesign, leaving plenty of margin on the table,” says Ahmed Ramadan, senior product engineering manager for Mentor, a Siemens Business. “However, pushing designs to the limit is needed not only to achieve competitive advantage, but also to fulfill new application requirements, given the diminishing benefits of transistor scaling. All of this calls for accurate aging analysis.”
While new phenomena are being discovered, the old ones continue to get worse. “The drivers of aging, such as temperature and electrical stress, have not really changed,” says André Lange, group manager for quality and reliability at Fraunhofer IIS’ Engineering of Adaptive Systems Division. “However, densely packed active devices with minimum safety margins are required to realize advanced functionality requirements. This makes them more susceptible to reliability issues caused by selfheating and increasing field strengths. Considering advanced packaging techniques with 2.5D and 3D integration, the drivers for reliability issues, especially temperature, will gain importance.”
Contributing factors
The biggest factor is heat. “Higher speeds tend to produce higher temperatures, and temperature is the biggest killer,” says Rita Horner, senior product marketing manager for 3D-IC at Synopsys. “Temperature exacerbates electromigration. The expected life can change exponentially from a tiny delta in temperature.”
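The exponential temperature dependence Horner describes is conventionally modeled with Black's equation for electromigration lifetime. The sketch below uses illustrative constants; the activation energy, current-density exponent, and operating points are assumptions of mine, not values from the article or any foundry:

```python
import math

K_BOLTZMANN_EV = 8.617e-5   # Boltzmann constant, eV/K

def em_mttf(j, temp_k, ea_ev=0.9, n=2.0, a=1.0):
    """Black's equation: MTTF = A * j**-n * exp(Ea / (k*T)).

    j = current density, temp_k = temperature in kelvin.
    A, n, and Ea are placeholder constants for illustration.
    """
    return a * j ** -n * math.exp(ea_ev / (K_BOLTZMANN_EV * temp_k))

j = 1e6                           # arbitrary fixed current density
base = em_mttf(j, 358.0)          # 85 C junction temperature
hot = em_mttf(j, 368.0)           # just 10 C hotter
print(base / hot)                 # roughly a 2x loss of expected lifetime
```

With these placeholder numbers, a 10 °C rise roughly halves the expected lifetime, which is the "tiny delta in temperature" effect in the quote.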
This became a much bigger concern with finFETs. “In a planar CMOS process, heat can escape through the bulk of the device into the substrate fairly easily,” says Cadence’s Schaldenbrand. “But when you stand the transistor on its side and wrap it in a blanket, which is effectively what the gate oxide and gate acts like, the channel experiences greater temperature rise, so the stress that a device is experiencing increases significantly.”
An increasing amount of electronics is being deployed in hostile environments. “Semiconductor chips that operate in extreme conditions, such as automotive (150°C) or high elevation (data servers in Mexico City), have the highest risk of reliability and aging-related constraints,” says Milind Weling, senior vice president of programs and operations at Intermolecular. “2.5D and 3D designs could see additional mechanical stress on the underlying silicon chips, and this could induce additional mechanical-stress aging.”
Devices’ attributes get progressively worse. “Over time, the threshold voltage of a device degrades, which means that it takes more time to turn the device on,” says Haran Thanikasalam, senior applications engineer for AMS at Synopsys. “One reason for this is negative-bias temperature instability. But as devices scale down, voltage scaling has been slower than geometry scaling. Today, we are reaching the limits of physics. Devices are operating somewhere around 0.6 to 0.7 volts at 3nm, compared to 1.2V at 40nm or 28nm. Because of this, the electric fields have increased. A large electric field over a very tiny device area can cause severe breakdown.”
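The threshold-voltage drift Thanikasalam describes is often fit with an empirical power law in stress time plus an Arrhenius temperature term. A hedged sketch, with placeholder constants that are mine rather than measured data:

```python
import math

def delta_vth(t_sec, temp_k, v_ov, a=5e-3, n=0.16, ea_ev=0.1, gamma=2.0):
    """Empirical NBTI drift: dVth = A * Vov**gamma * exp(-Ea/kT) * t**n (volts).

    All constants are illustrative placeholders; real models are fit to
    per-process measurements.  Drift grows with time, temperature, and
    gate overdrive voltage.
    """
    k_ev = 8.617e-5                               # Boltzmann constant, eV/K
    return a * v_ov ** gamma * math.exp(-ea_ev / (k_ev * temp_k)) * t_sec ** n

ten_years = 10 * 365 * 24 * 3600.0
shift = delta_vth(ten_years, 398.0, 0.7)          # 125 C, 0.7 V overdrive
print(f"{shift * 1e3:.1f} mV threshold shift after ten years")
```

The small fractional exponent on time is why aging is slow but never stops, and the Arrhenius factor is why hot parts age faster.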
This is new. “The way we capture this phenomenon is something called time-dependent dielectric breakdown (TDDB),” says Schaldenbrand. “You’re looking at how that field density causes devices to break down, and making sure the devices are not experiencing too much field density.”
The other primary cause of aging is electromigration (EM). “If you perform reliability simulation, like EM or IR drop simulation, not only do the devices degrade but you also have electromigration happening on the interconnects,” adds Thanikasalam. “You have to consider not only the devices, but also the interconnects between the devices.”
Analog and digital
When it comes to aging, digital is a subset of analog. “In digital, you’re most worried about drive, because that changes the rise and fall delays,” says Schaldenbrand. “That covers a variety of sins. But analog is a lot more subtle, and gain is something you worry about. Just knowing that Vt changed by this much isn’t going to tell you how much your gain will degrade. That’s only one part of the equation.”
Fig 1: Failure of analog components over time. Source: Synopsys
Aging can be masked in digital. “Depending on the application, a system may just degrade, or it may fail from the same amount of aging,” says Mentor’s Ramadan. “For example, microprocessor degradation may lead to lower performance, necessitating a slowdown, but not necessarily failures. In mission-critical AI applications, such as ADAS, sensor degradation may directly lead to AI failures and hence system failure.”
That simpler notion of degradation for digital often can be hidden. “A lot of this is captured at the cell characterization level,” adds Schaldenbrand. “So the system designer doesn’t worry about it much. If he runs the right libraries, the problem is covered for him.”
Duty cycle
In order to get an accurate picture of aging, you have to consider activity in the design, but often not in the expected manner. “Negative bias temperature instability (NBTI) is affecting some devices,” says Synopsys’ Horner. “But the devices do not have to be actively running. Aging can be happening while the device is shut off.”
In the past the analysis was done without simulation. “You can only get a certain amount of reliability data from doing static, vector-independent analysis,” says Synopsys’ Thanikasalam. “This analysis does not care about the stimuli you give to your system. It takes a broader look and identifies where the problems are happening without simulating the design. But that is proving to be a very inaccurate way of doing things, especially at smaller nodes, because everything is activity-dependent.”
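Thanikasalam's point about activity dependence can be illustrated with a toy calculation: a static, vector-independent check implicitly assumes a device is stressed for its whole lifetime, while a stimulus-driven estimate weights the aging by how often each node actually sits in the biased state. The traces and numbers here are hypothetical:

```python
# Sketch: activity-dependent stress accumulation from a stimulus trace.
# A static (vector-independent) check assumes 100% stress time; a
# vector-driven estimate weights aging by time spent in the biased state.
def stress_fraction(trace):
    """Fraction of cycles a node spends in the stressed state ('1')."""
    return trace.count(1) / len(trace)

# Hypothetical 10-cycle stimulus for two nodes in the same design.
clock_node  = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]   # toggles constantly
enable_node = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]   # mostly idle

lifetime_h = 87600  # ~10 years
for name, trace in [("clock", clock_node), ("enable", enable_node)]:
    print(f"{name}: effective stress ~ {stress_fraction(trace) * lifetime_h:.0f} h "
          f"(static analysis would assume {lifetime_h} h)")
```

Real flows derive these weights from simulation waveforms rather than hand-written traces, but the gap between the static assumption and the activity-weighted estimate is the inaccuracy being described.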
That can be troublesome for IP blocks. “The problem is that if somebody is doing their own chip, their own software in their own device, they have all the information they need to know, down to the transistor level, what that duty cycle is,” says Kurt Shuler, vice president of marketing at Arteris IP. “But if you are creating a chip that other people will create software for, or if you’re providing a whole SDK and they’re modifying it, then you don’t really know. Those chip vendors have to provide to their customers some means to do that analysis.”
For some parts of the design, duty cycles can be estimated. “You never want to find a block level problem at the system level,” says Schaldenbrand. “People can do the analysis at the block level, and it’s fairly inexpensive to do there. For an analog block, such as an ADC or a SerDes or a PLL, you have a good idea of what its operation is going to be within the system. You know what kind of stresses it will experience. That is not true for a large digital design, where you might have several operating modes. That will change digital activity a lot.”
This is the fundamental reason why it has turned into a user issue. “It puts the onus on the user to make sure that you pick stimulus that will activate the parts of the design that you think are going to be more vulnerable to aging and electromigration, and you have to do that yourself,” says Thanikasalam. “This has created a big warning sign among the end users because the foundries won’t be able to provide you with stimulus. They have no clue what your design does.”
Monitoring and testing
The industry’s approaches are changing at multiple levels. “To properly assess aging in a chip, manufacturers have relied on a process called burn-in testing, where the wafer is cooked to artificially age it, after which it can be tested for reliability,” says Syed Alam, global semiconductor lead for Accenture. “Heat is the primary factor for aging in chips, with usage a close second, especially for flash, as there are only so many rewrites available on a drive.”
And this is still a technique that many rely on. “AEC-Q100, an important standard for automotive electronics, contains multiple tests that do not reveal true reliability information,” says Fraunhofer’s Lange. “For example, in high-temperature operating life (HTOL) testing, 3×77 devices have to be stressed for 1,000 hours, with functional tests before and after stress. Even when all devices pass, you cannot tell whether they will fail after 1,001 hours or whether they will last 10X longer. This information can only be obtained by extended testing or simulations.”
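Lange's caveat has a standard statistical form. If n devices all survive a stress test with zero failures, the data supports only an upper bound on the per-device failure probability, p ≤ 1 − (1 − C)^(1/n) at confidence level C; it says nothing about when failures would begin. A quick sketch for the 3×77 HTOL sample size:

```python
# Zero-failure bound: if n devices all pass, the per-device failure
# probability p satisfies (1 - p)**n >= 1 - C at confidence C, i.e.
# p <= 1 - (1 - C)**(1/n). Confidence levels chosen for illustration.
def p_upper(n_pass, confidence):
    """One-sided upper bound on per-device failure probability."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_pass)

n = 3 * 77  # the 3x77 AEC-Q100 HTOL sample mentioned above
for c in (0.60, 0.90):
    print(f"{c:.0%} confidence: failure prob <= {p_upper(n, c):.2%} per device")
```

Even a clean pass on 231 parts only bounds the failure probability at a fraction of a percent, which is why extended testing or simulation is needed to say anything about margin beyond the stress duration.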
An emerging alternative is to build aging sensors into the chip. “There are sensors, which usually contain a timing loop, and they will warn you when it takes longer for the electrons to go around a loop,” says Arteris IP’s Shuler. “There is also a concept called canary cells, which are meant to die prematurely compared to a standard transistor. This can tell you that aging is impacting the chip. What you are trying to do is get predictive information that the chip is going to die. In some cases, they are taking the information from those sensors, getting it off-chip, throwing it into a big database, and running AI algorithms to try to do predictive work.”
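The warning logic of such a timing-loop sensor reduces to comparing each reading against its fresh baseline and flagging drift beyond a guard band. A minimal sketch (the thresholds and telemetry values are invented for illustration):

```python
# Sketch of the aging-sensor logic described above: a timing loop is
# measured periodically and compared against its time-zero reading;
# drift beyond a guard band raises a predictive warning.
def aging_alarm(baseline_ns, reading_ns, guard_band=0.05):
    """Return (drift_fraction, alarm) for one sensor reading."""
    drift = (reading_ns - baseline_ns) / baseline_ns
    return drift, drift > guard_band

baseline = 10.00  # fresh loop delay in ns (assumed)
for reading in (10.02, 10.31, 10.72):  # telemetry over the product's life
    drift, alarm = aging_alarm(baseline, reading)
    print(f"delay {reading:.2f} ns: drift {drift:+.1%}, alarm={alarm}")
```

The database-plus-AI approach Shuler mentions would consume streams of exactly these (timestamp, drift) pairs across a fleet of chips to fit failure predictors.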
Additional 3D issues
Many of the same problems exist in 2D, 2.5D and 3D designs, except that thermal issues may be amplified in some architectures. But there also may be a whole set of new issues that are not yet fully understood. “When you’re stacking devices on top of each other, you have to backgrind them to thin them,” says Horner. “The stresses on the thinner die could be a concern, and that needs to be understood, studied and addressed in terms of the analysis. In addition, various types of silicon age differently. You’re talking about a heterogeneous environment where you are potentially stacking DRAM, which tends to be more of a specific technology — or CPUs and GPUs, which may utilize different technology process nodes. You may have different types of TSVs or bumps that have been used in this particular silicon. How do they interact with each other?”
Those interfaces are a concern. “There is stress on the die, and that changes the device characteristics,” says Schaldenbrand. “But if different dies heat up to different temperatures, then the places where they interface are going to have a lot of mechanical stress. That’s a big problem, and system interconnect is going to be a big challenge going forward.”
Models and analysis
It all starts with the foundries. “The TSMCs and the Samsungs of the world have to start providing that information,” says Shuler. “As you get to 5nm and below, even 7nm, there is a lot of variability in these processes, and that makes everything worse.”
“The foundries worried about this because they realized that devices subjected to higher electric fields were degrading much faster than before,” says Thanikasalam. “They started using the MOS Reliability Analysis (MOSRA) solution, which applies to the device aging part of it. Recently, we have seen a shift toward the end customers, who are starting to use the aging models. Some customers will only do a simple run using degraded models so that the simulation accounts for the degradation of the threshold voltage.”
High-volume chips will need much more extensive analysis. “For high-volume production, multi-PVT simulations are becoming a useless way of verifying this,” adds Thanikasalam. “Everybody has to run Monte Carlo at this level. Monte Carlo simulation with the variation models is the key at 5nm and below.”
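What "Monte Carlo with the variation models" means in practice is sampling device parameters from their statistical distributions many times and examining the tails of the resulting performance spread, rather than checking a handful of PVT corners. A toy version, with an invented first-order delay model standing in for a real circuit simulator and foundry variation models:

```python
import random
import statistics

random.seed(1)

# Toy Monte Carlo over device variation: sample threshold voltage from
# a normal distribution and map it to a stage delay with an invented
# first-order model. All numbers are illustrative, not foundry data.
VT_MEAN, VT_SIGMA = 0.30, 0.015   # volts (assumed)
VDD = 0.65                         # volts (assumed, advanced-node range)

def stage_delay_ps(vt):
    return 12.0 / (VDD - vt)       # invented delay-vs-Vt relationship

samples = sorted(stage_delay_ps(random.gauss(VT_MEAN, VT_SIGMA))
                 for _ in range(100_000))
mean = statistics.fmean(samples)
p999 = samples[int(0.999 * len(samples))]
print(f"mean delay {mean:.1f} ps, 99.9th percentile {p999:.1f} ps")
```

The point of the exercise is the gap between the mean and the far percentile: corner simulations can miss exactly the tail devices that dominate failure rates in high-volume production.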
More models are needed. “There are more models being created and optimized,” says Horner. “In terms of 3D stacking, we have knowledge of the concern about electromigration, IR, thermal, and power. Those are the key things that are understood and modeled. For the mechanical aspects — even the materials that we put between the layers and their effect in terms of heat, and also the stability structures — while there are models out there, they are not as enhanced because we haven’t seen enough of these yet.”
Schaldenbrand agrees. “We are constantly working on the models and updating them, adding new phenomena as people become aware of them. There have been a lot of changes required to get ready for the advanced nodes. For the nominal device we can describe aging very well, but the interaction between process variation and its effect on reliability is still a research topic. That is a very challenging subject.”
With finFETs, the entire methodology changed. “The rules have become so complicated that you need a tool that can actually interpret the rules, apply the rules, and tell us where there could be problems two or three years down the line,” says Thanikasalam. “FinFETs can be multi-threshold devices, so when you have the entire gamut of threshold voltages being used in a single IP, we have so many problems, because every single device will go in a different direction.”
Conclusion
Still, progress is being made. “Recently, we have seen many foundries, IDMs, fabless and IP companies rushing to find a solution,” says Ramadan. “They cover a wide range of applications and technology processes. Whereas a standard aging model can be helpful as a starting point for new players, further customization is expected depending on the target application and the technology process. The Compact Model Coalition (CMC), under the Silicon Integration Initiative (Si2), is currently working on developing a standard aging model to help the industry. In 2018, the CMC released the first standard Open Model Interface (OMI), which enables aging simulation for different circuit simulators through a unified standard interface.”
That’s an important piece, but there is still a long road ahead. “Standardization activities within the CMC have started to solve some of these issues,” says Lange. “But there is quite a lot of work ahead in terms of model complexity, characterization effort, application scenario, and tool support.”

From: FJB  12/25/2020 10:08:54 PM     Fermilab and partners achieve sustained, high-fidelity quantum teleportation
December 15, 2020
news.fnal.gov
A viable quantum internet — a network in which information stored in qubits is shared over long distances through entanglement — would transform the fields of data storage, precision sensing and computing, ushering in a new era of communication.
This month, scientists at Fermilab, a U.S. Department of Energy Office of Science national laboratory, and their partners took a significant step in the direction of realizing a quantum internet.
In a paper published in PRX Quantum, the team presents for the first time a demonstration of sustained, long-distance (44 kilometers of fiber) teleportation of qubits of photons (quanta of light) with fidelity greater than 90%. The qubits were teleported over a fiber-optic network using state-of-the-art single-photon detectors and off-the-shelf equipment.
“We’re thrilled by these results,” said Fermilab scientist Panagiotis Spentzouris, head of the Fermilab quantum science program and one of the paper’s co-authors. “This is a key achievement on the way to building a technology that will redefine how we conduct global communication.”
In a demonstration of high-fidelity quantum teleportation at the Fermilab Quantum Network, fiber-optic cables connect off-the-shelf devices (shown above), as well as state-of-the-art R&D devices. Photo: Fermilab
Quantum teleportation is a “disembodied” transfer of quantum states from one location to another. The quantum teleportation of a qubit is achieved using quantum entanglement, in which two or more particles are inextricably linked to each other. If an entangled pair of particles is shared between two separate locations, no matter the distance between them, the encoded information is teleported.
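The protocol described in this paragraph can be reproduced exactly with a small state-vector simulation. The sketch below (pure Python, idealized with no fiber loss or detector noise, so the fidelity comes out as 1.0 rather than the experiment's >90%) teleports a qubit from qubit 0 to qubit 2 via a shared Bell pair, a Bell-basis measurement, and classical corrections:

```python
import math

# Single-qubit gates as 2x2 matrices.
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

def apply_1q(state, g, q, n=3):
    """Apply 2x2 gate g to qubit q of an n-qubit state vector."""
    out = state[:]
    m = 1 << (n - 1 - q)
    for i in range(len(state)):
        if not i & m:
            a0, a1 = state[i], state[i | m]
            out[i] = g[0][0] * a0 + g[0][1] * a1
            out[i | m] = g[1][0] * a0 + g[1][1] * a1
    return out

def apply_cnot(state, c, t, n=3):
    """Flip target qubit t wherever control qubit c is 1."""
    cm, tm = 1 << (n - 1 - c), 1 << (n - 1 - t)
    return [state[i ^ tm] if i & cm else state[i] for i in range(len(state))]

def mat2(g, v):
    return [g[0][0] * v[0] + g[0][1] * v[1],
            g[1][0] * v[0] + g[1][1] * v[1]]

def teleport(a, b, m0, m1):
    """Teleport a|0>+b|1> given Bell-measurement outcome (m0, m1)."""
    psi = [0j] * 8
    psi[0b000], psi[0b100] = a, b        # qubit 0 carries the message
    psi = apply_1q(psi, H, 1)            # entangle qubits 1 and 2
    psi = apply_cnot(psi, 1, 2)          #   -> shared Bell pair
    psi = apply_cnot(psi, 0, 1)          # rotate qubits 0,1 into the
    psi = apply_1q(psi, H, 0)            #   Bell measurement basis
    # Project onto the measurement outcome and renormalize.
    keep = [i for i in range(8)
            if (i >> 2) & 1 == m0 and (i >> 1) & 1 == m1]
    norm = math.sqrt(sum(abs(psi[i]) ** 2 for i in keep))
    out = [psi[i] / norm for i in keep]  # remaining qubit-2 amplitudes
    if m1: out = mat2(X, out)            # classical corrections sent
    if m0: out = mat2(Z, out)            #   over an ordinary channel
    return out

a, b = 0.6, 0.8j                         # arbitrary normalized message
for m0 in (0, 1):
    for m1 in (0, 1):
        out = teleport(a, b, m0, m1)
        fidelity = abs(a.conjugate() * out[0] + b.conjugate() * out[1]) ** 2
        print(f"outcome ({m0},{m1}): fidelity = {fidelity:.3f}")
```

All four measurement outcomes recover the original state after correction, which is the "disembodied" transfer: only two classical bits travel, never the state itself.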
The joint team — researchers at Fermilab, AT&T, Caltech, Harvard University, NASA Jet Propulsion Laboratory and University of Calgary — successfully teleported qubits on two systems: the Caltech Quantum Network, or CQNET, and the Fermilab Quantum Network, or FQNET. The systems were designed, built, commissioned and deployed by Caltech’s public-private research program on Intelligent Quantum Networks and Technologies, or INQNET.
“We are very proud to have achieved this milestone on sustainable, high-performing and scalable quantum teleportation systems,” said Maria Spiropulu, Shang-Yi Ch’en professor of physics at Caltech and director of the INQNET research program. “The results will be further improved with system upgrades we are expecting to complete by Q2 2021.”
CQNET and FQNET, which feature near-autonomous data processing, are compatible both with existing telecommunication infrastructure and with emerging quantum processing and storage devices. Researchers are using them to improve the fidelity and rate of entanglement distribution, with an emphasis on complex quantum communication protocols and fundamental science.
The achievement comes just a few months after the U.S. Department of Energy unveiled its blueprint for a national quantum internet at a press conference in Chicago.
“With this demonstration we’re beginning to lay the foundation for the construction of a Chicagoarea metropolitan quantum network,” Spentzouris said. The Chicagoland network, called the Illinois Express Quantum Network, is being designed by Fermilab in collaboration with Argonne National Laboratory, Caltech, Northwestern University and industry partners.
This research was supported by DOE’s Office of Science through the Quantum Information Science-Enabled Discovery (QuantISED) program.
“The feat is a testament to the success of collaboration across disciplines and institutions, which drives so much of what we accomplish in science,” said Fermilab Deputy Director of Research Joe Lykken. “I commend the INQNET team and our partners in academia and industry on this first-of-its-kind achievement in quantum teleportation.”
Learn more about the result.
Fermilab is America’s premier national laboratory for particle physics and accelerator research. A U.S. Department of Energy Office of Science laboratory, Fermilab is located near Chicago, Illinois, and operated under contract by the Fermi Research Alliance LLC, a joint partnership between the University of Chicago and the Universities Research Association, Inc. Visit Fermilab’s website at www.fnal.gov and follow us on Twitter at @Fermilab.
The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.

From: FJB  12/26/2020 7:30:12 AM    
Photocatalyst That Can Split Water into Hydrogen and Oxygen at a Quantum Efficiency Close to 100%
By FuelCellsWorks, fuelcellsworks.com
A research team led by Shinshu University’s Tsuyoshi Takata, Takashi Hisatomi and Kazunari Domen succeeded in developing a photocatalyst that can split water into hydrogen and oxygen at a quantum efficiency close to 100%.
The team consisted of their colleagues from Yamaguchi University, The University of Tokyo and National Institute of Advanced Industrial Science and Technology (AIST).
The team produced an ideal photocatalyst structure composed of semiconductor particles and cocatalysts. H2- and O2-evolution cocatalysts were photodeposited selectively on different facets of crystalline Al-doped SrTiO3 particles due to anisotropic charge transport. This photocatalyst structure effectively prevented charge recombination losses, reaching the upper limit of quantum efficiency.
Figure 1 – Schematic structure (a) and scanning electron microscope image (b) of Al-doped SrTiO3 site-selectively co-loaded with a hydrogen evolution cocatalyst (Rh/Cr2O3) and an oxygen evolution cocatalyst (CoOOH).
The water splitting reaction driven by solar energy is a technology for producing renewable solar hydrogen on a large scale. To put such technology to practical use, the production cost of solar hydrogen must be reduced significantly [1]. This requires a reaction system that can split water efficiently and can be scaled up easily. A system consisting of particulate semiconductor photocatalysts can be expanded over a large area by relatively simple processes. Therefore, the development of photocatalysts that drive the sunlight-driven water splitting reaction with high efficiency would be a great stride toward large-scale solar hydrogen production.
To improve the solar energy conversion efficiency of photocatalytic water splitting, two factors must be improved: widening the wavelength range of light the photocatalyst can use for the reaction, and increasing the quantum yield at each wavelength. The former is determined by the bandgap of the photocatalyst material used, and the latter by the quality of the photocatalyst material and the functionality of the cocatalyst used to promote the reaction. However, photocatalytic water splitting is an endergonic reaction involving multi-electron transfer occurring in a non-equilibrium state.
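The quantum yield at a given wavelength is computed from measured gas evolution and photon flux: in overall water splitting, each H2 molecule consumes two photoexcited electrons, so the apparent quantum yield is AQY = 2·n(H2)/n(photons). A back-of-the-envelope sketch with invented (not measured) rates:

```python
# Apparent quantum yield (AQY) of photocatalytic water splitting:
# each H2 molecule consumes two photoexcited electrons, so
#   AQY = 2 * (H2 molecules evolved per second) / (photons per second).
# The evolution rate and photon flux below are invented for illustration.
AVOGADRO = 6.022e23

def aqy(h2_umol_per_h, photon_flux_per_s):
    """Apparent quantum yield (dimensionless fraction)."""
    h2_per_s = h2_umol_per_h * 1e-6 * AVOGADRO / 3600.0
    return 2.0 * h2_per_s / photon_flux_per_s

# Hypothetical run: 1300 umol/h of H2 under 4.5e17 photons/s of UV light.
print(f"AQY ~ {aqy(1300, 4.5e17):.0%}")
```

An AQY approaching 1 at the absorbed wavelengths means essentially every photoexcited carrier ends up in the water splitting reaction rather than recombining, which is the sense in which the reported efficiency is "close to 100%."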
This study refined the design and operating principles for advancing water splitting methods with high quantum efficiency. The knowledge obtained in this study will propel the field of photocatalytic water splitting further toward scalable solar hydrogen production.
The project was made possible through the support of NEDO (New Energy and Industrial Technology Development Organization) under the “Artificial photosynthesis project”.
Title: Photocatalytic water splitting with a quantum efficiency of almost unity
Authors: Tsuyoshi Takata, Junzhe Jiang, Yoshihisa Sakata, Mamiko Nakabayashi, Naoya Shibata, Vikas Nandal, Kazuhiko Seki, Takashi Hisatomi, Kazunari Domen
Journal: Nature 581, 411–414 (2020)
DOI: 10.1038/s41586-020-2278-9
