From: FJB  4/8/2020 11:16:53 AM     ‘Amazing’ Math Bridge Extended Beyond Fermat’s Last Theorem By Erica Klarreich
April 6, 2020
quantamagazine.org
Robert Langlands, who conjectured the influential Langlands correspondence about 50 years ago, giving a talk at the Institute for Advanced Study in Princeton, New Jersey, in 2016.
Dan Komoda/Institute for Advanced Study
Namely, for both Diophantine equations and automorphic forms, there’s a natural way to generate an infinite sequence of numbers. For a Diophantine equation, you can count how many solutions the equation has in each clock-style arithmetic system (for example, in the usual 12-hour clock, 10 + 4 = 2). And for the kind of automorphic form that appears in the Langlands correspondence, you can compute an infinite list of numbers analogous to quantum energy levels.
If you include only the clock arithmetics that have a prime number of hours, Langlands conjectured that these two number sequences match up in an astonishingly broad array of circumstances. In other words, given an automorphic form, its energy levels govern the clock sequence of some Diophantine equation, and vice versa.
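The clock-arithmetic counts described above are easy to compute for a concrete example. The sketch below is my own illustration, not from the article: it brute-forces the number of solutions to one arbitrarily chosen elliptic curve, y² = x³ + x + 1, in clock arithmetic with a prime number of hours. The helper name `count_clock_solutions` is hypothetical.

```python
# Count solutions of y^2 = x^3 + x + 1 in "clock arithmetic" with p hours,
# i.e. pairs (x, y) with y^2 ≡ x^3 + x + 1 (mod p).
# The curve is an arbitrary toy example, not one from the article.

def count_clock_solutions(p):
    return sum(1 for x in range(p) for y in range(p)
               if (y * y - (x ** 3 + x + 1)) % p == 0)

for p in [2, 3, 5, 7, 11, 13]:
    n = count_clock_solutions(p)
    # For an elliptic curve, n stays within 2*sqrt(p) of p (Hasse's bound);
    # the sequence of differences p - n is the one conjecturally matched
    # by the energy levels of an automorphic form.
    print(p, n, p - n)
```

The Langlands correspondence predicts that this whole infinite sequence of differences, one per prime, is reproduced by a single automorphic form.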
This connection is “weirder than telepathy,” Emerton said. “How these two sides communicate with each other … for me it seems incredible and amazing, even though I have been studying it for over 20 years.”
In the 1950s and 1960s, mathematicians figured out the beginnings of this bridge in one direction: how to go from certain automorphic forms to elliptic curves with coefficients that are rational numbers (ratios of whole numbers). Then in the 1990s, Wiles, with contributions from Taylor, figured out the opposite direction for a certain family of elliptic curves. Their result gave an instant proof of Fermat’s Last Theorem, since mathematicians had already shown that if Fermat’s Last Theorem were false, at least one of those elliptic curves would not have a matching automorphic form.
Fermat’s Last Theorem was far from the only discovery to emerge from the construction of this bridge. Mathematicians have used it, for instance, to prove the Sato-Tate conjecture, a decades-old problem about the statistical distribution of the number of clock solutions to an elliptic curve, as well as a conjecture about the energy levels of automorphic forms that originated with the legendary early 20th-century mathematician Srinivasa Ramanujan.
After Wiles and Taylor published their findings, it became clear that their method still had some juice. Soon mathematicians figured out how to extend the method to all elliptic curves with rational coefficients. More recently, mathematicians figured out how to cover coefficients that include simple irrational numbers, such as 3 + √2.
What they couldn’t do, however, was extend the Taylor-Wiles method to elliptic curves whose coefficients include complex numbers such as i (the square root of −1) or 3 + i or √2·i. Nor could they handle Diophantine equations with exponents much higher than those in elliptic curves. Equations where the highest exponent on the right-hand side is 4 instead of 3 come along for free with the Taylor-Wiles method, but as soon as the exponent rises to 5, the method no longer works.
Mathematicians gradually realized that for these two next natural extensions of the Langlands bridge, it wasn’t simply a matter of finding some small adjustment to the Taylor-Wiles method. Instead, there seemed to be a fundamental obstruction.
They’re “the next examples you’d think of,” Gee said. “But you’re told, ‘No, these things are hopelessly out of reach.’”
The problem was that the Taylor-Wiles method finds the matching automorphic form for a Diophantine equation by successively approximating it with other automorphic forms. But in the situations where the equation’s coefficients include complex numbers or the exponent is 5 or higher, automorphic forms become exceedingly rare — so rare that a given automorphic form will usually have no nearby automorphic forms to use for approximation purposes.
In Wiles’ setting, the automorphic form you’re seeking “is like a needle in a haystack, but the haystack exists,” Emerton said. “And it’s almost as if it’s like a haystack of iron filings, and you’re putting in this magnet so it lines them up to point to your needle.”
But when it comes to complex-number coefficients or higher exponents, he said, “it’s like a needle in a vacuum.”
Going to the Moon

Many of today’s number theorists came of age in the era of Wiles’ proof. “It was the only piece of mathematics I ever saw on the front page of a newspaper,” recalled Gee, who was 13 at the time. “For many people, it’s something that seemed exciting, that they wanted to understand, and then they ended up working in this area because of that.”
So when, in 2012, two mathematicians — Frank Calegari of the University of Chicago and David Geraghty (now a research scientist at Facebook) — proposed a way to overcome the obstruction to extending the Taylor-Wiles method, their idea sent ripples of excitement through the new generation of number theorists.
Their work showed that “this fundamental obstruction to going any further is not really an obstruction at all,” Gee said. Instead, he said, the seeming limitations of the Taylor-Wiles method are telling you “that in fact you’ve only got the shadow of the actual, more general method that [Calegari and Geraghty] introduced.”
In the cases where the obstruction crops up, the automorphic forms live on higher-dimensional tilings than the two-dimensional Escher-style tilings Wiles studied. In these higher-dimensional worlds, automorphic forms are inconveniently rare. But on the plus side, higher-dimensional tilings often have a much richer structure than two-dimensional tilings do. Calegari and Geraghty’s insight was to tap into this rich structure to make up for the shortage of automorphic forms.
More specifically, whenever you have an automorphic form, you can use its “coloring” of the tiling as a sort of measuring tool that can calculate the average color on any chunk of the tiling you choose. In the two-dimensional setting, automorphic forms are essentially the only such measuring tools available. But for higher-dimensional tilings, new measuring tools crop up, called torsion classes, which assign to each chunk of the tiling not an average color but a number from a clock arithmetic. There’s an abundance of these torsion classes.
For some Diophantine equations, Calegari and Geraghty proposed, it might be possible to find the matching automorphic form by approximating it not with other automorphic forms but with torsion classes. “The insight they had was fantastic,” Caraiani said.
Calegari and Geraghty provided the blueprint for a much broader bridge from Diophantine equations to automorphic forms than the one Wiles and Taylor built. Yet their idea was far from a complete bridge. For it to work, mathematicians would first have to prove three major conjectures. It was, Calegari said, as if his paper with Geraghty explained how you could get to the moon — provided someone would obligingly whip up a spaceship, rocket fuel and spacesuits. The three conjectures “were completely beyond us,” Calegari said.
In particular, Calegari and Geraghty’s method required that there already be a bridge going in the other direction, from automorphic forms to the Diophantine equations side. And that bridge would have to transport not just automorphic forms but also torsion classes. “I think a lot of people thought this was a hopeless problem when Calegari and Geraghty first outlined their program,” said Taylor, who is now at Stanford University.
Yet less than a year after Calegari and Geraghty posted their paper online, Peter Scholze — a mathematician at the University of Bonn who went on to win the Fields Medal, mathematics’ highest honor — astonished number theorists by figuring out how to go from torsion classes to the Diophantine equations side in the case of elliptic curves whose coefficients are simple complex numbers such as 3 + 2i or 4 − √5·i. “He’s done a lot of exciting things, but that’s perhaps his most exciting achievement,” Taylor said.
Scholze had proved the first of Calegari and Geraghty’s three conjectures. And a pair of subsequent papers by Scholze and Caraiani came close to proving the second conjecture, which involves showing that Scholze’s bridge has the right properties.
It started to feel as if the program was within reach, so in the fall of 2016, to try to make further progress, Caraiani and Taylor organized what Calegari called a “secret” workshop at the Institute for Advanced Study. “We took over the lecture room — no one else was allowed in,” Calegari said.
After a couple of days of expository talks, the workshop participants started realizing how to both polish off the second conjecture and sidestep the third conjecture. “Maybe within a day of having actually stated all the problems, they were all solved,” said Gee, another participant.
The participants spent the rest of the week elaborating various aspects of the proof, and over the next two years they wrote up their findings into a 10-author paper — an almost unheard-of number of authors for a number theory paper. Their paper essentially establishes the Langlands bridge for elliptic curves with coefficients drawn from any number system made up of rational numbers plus simple irrational and complex numbers.
“The plan in advance [of the workshop] was just to see how close one could get to proving things,” Gee said. “I don’t think anyone really expected to prove the result.”
Extending the Bridge

Meanwhile, a parallel story was unfolding for extending the bridge beyond elliptic curves. Calegari and Gee had been working with George Boxer (now at the École Normale Supérieure in Lyon, France) to tackle the case where the highest exponent in the Diophantine equation is 5 or 6 (instead of 3 or 4, the cases that were already known). But the three mathematicians were stuck on a key part of their argument.
Then, the very weekend after the “secret” workshop, Vincent Pilloni of the École Normale Supérieure put out a paper that showed how to circumvent that very obstacle. “We have to stop what we’re doing now and work with Pilloni!” the other three researchers immediately told each other, according to Calegari.
Within a few weeks, the four mathematicians had solved this problem too, though it took a couple of years and nearly 300 pages for them to fully flesh out their ideas. Their paper and the 10author paper were both posted online in late December 2018, within four days of each other.
Soon after the secret workshop at the IAS, Frank Calegari (left), Toby Gee (center) and Vincent Pilloni, working with George Boxer (not pictured), found a way to extend the Langlands bridge beyond elliptic curves.
Frank Calegari, University of Chicago; Courtesy of Toby Gee; Arnold Nipoli
“I think they’re pretty huge,” Emerton said of the two papers. Those papers and the preceding building blocks are all “state of the art,” he said.
While these two papers essentially prove that the mysterious telepathy between Diophantine equations and automorphic forms carries over to these new settings, there’s one caveat: They don’t quite build a perfect bridge between the two sides. Instead, both papers establish “potential automorphy.” This means that each Diophantine equation has a matching automorphic form, but we don’t know for sure that the automorphic form lives in the patch of its continent that mathematicians would expect. But potential automorphy is enough for many applications — for instance, the Sato-Tate conjecture about the statistics of clock solutions to Diophantine equations, which the 10-author paper succeeded in proving in much broader contexts than before.
And mathematicians are already starting to figure out how to improve on these potential automorphy results. In October, for instance, three mathematicians — Patrick Allen of the University of Illinois, Urbana-Champaign, Chandrashekhar Khare of the University of California, Los Angeles and Jack Thorne of the University of Cambridge — proved that a substantial proportion of the elliptic curves studied in the 10-author paper do have bridges that land in exactly the right place.
Bridges with this higher level of precision may eventually allow mathematicians to prove a host of new theorems, including a century-old generalization of Fermat’s Last Theorem. That conjecture states that the equation at the heart of the theorem continues to have no solutions even when x, y and z are drawn not just from whole numbers but from combinations of whole numbers and the imaginary number i.
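This Gaussian-integer version of Fermat’s equation can at least be probed numerically. The toy search below is my own illustration, not from any of the papers: it looks for nontrivial solutions of x³ + y³ = z³ where each variable is a Gaussian integer a + bi with small coordinates, using exact integer arithmetic on (a, b) pairs to avoid floating-point error. Consistent with the conjecture, it finds none in this small box.

```python
# Search for nontrivial solutions of x^3 + y^3 = z^3 over the Gaussian
# integers a + b*i, representing each number exactly as a pair (a, b).
from itertools import product

def g_cube(g):
    a, b = g
    # (a + bi)^3 = (a^3 - 3ab^2) + (3a^2b - b^3) i
    return (a ** 3 - 3 * a * b * b, 3 * a * a * b - b ** 3)

def g_add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def search(bound):
    """Return any (x, y, z), all nonzero, with x^3 + y^3 = z^3 and
    coordinates in [-bound, bound]."""
    gaussians = [g for g in product(range(-bound, bound + 1), repeat=2)
                 if g != (0, 0)]
    cubes = {g_cube(z): z for z in gaussians}  # cube -> one cube root
    hits = []
    for x in gaussians:
        for y in gaussians:
            s = g_add(g_cube(x), g_cube(y))
            if s in cubes:
                hits.append((x, y, cubes[s]))
    return hits

print(search(5))  # empty: no nontrivial solutions in this box
```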
The two papers carrying out the Calegari-Geraghty program form an important proof of principle, said Michael Harris of Columbia University. They’re “a demonstration that the method does have wide scope,” he said.
While the new papers connect much wider regions of the two Langlands continents than before, they still leave vast territories uncharted. On the Diophantine equations side, there are still all the equations with exponents higher than 6, as well as equations with more than two variables. On the other side are automorphic forms that live on more complicated symmetric spaces than the ones that have been studied so far.
“These papers, right now, are kind of the pinnacle of achievement,” Emerton said. But “at some point, they will just be looked back at as one more step on the way.”
Langlands himself never considered torsion when he thought about automorphic forms, so one challenge for mathematicians is to come up with a unifying vision of these different threads. “The envelope is being expanded,” Taylor said. “We’ve to some degree left the path laid out by Langlands, and we don’t quite know where we’re going.”


To: FJB who wrote (382)  4/22/2020 3:57:35 PM  From: trickydick    FUBHO: Fascinating material, of which I probably understood about 1%, if I'm lucky. BUT, what does this have to do with IMMU and its stock activity? Do you understand this material? If you do, can you give us a very short summary of what this means? The only question I can think of is: Is this theorem work associated with subatomic physics? Just curious, thank you.

From: FJB  6/15/2020 6:09:28 AM    
Aging Problems At 5nm And Below
semiengineering.com
The mechanisms that cause aging in semiconductors have been known for a long time, but the concept did not concern most people because the expected lifetime of parts was far longer than their intended deployment in the field. In a short period of time, all of that has changed.
As device geometries have become smaller, the issue has become more significant. At 5nm, it becomes an essential part of the development flow with tools and flows evolving rapidly as new problems are discovered, understood and modeled.
“We have seen it move from being a boutique technology, used by specific design groups, into something that’s much more of a regular part of the sign-off process,” says Art Schaldenbrand, senior product manager at Cadence. “As we go down into these more advanced nodes, the number of issues you have to deal with increases. At half-micron you might only have to worry about hot carrier injection (HCI) if you’re doing something like a power chip. As you go down below 180nm you start seeing things like negative-bias temperature instability (NBTI). Further down you get into other phenomena like self-heating, which becomes a significant reliability problem.”
The ways of dealing with it in the past are no longer viable. “Until recently, designers very conservatively dealt with the aging problem by overdesign, leaving plenty of margin on the table,” says Ahmed Ramadan, senior product engineering manager for Mentor, a Siemens Business. “However, pushing designs to the limit is not only needed to achieve competitive advantage, it is also needed to fulfill new application requirements, given the diminishing benefits of transistor scaling. All of this calls for accurate aging analysis.”
While new phenomena are being discovered, the old ones continue to get worse. “The drivers of aging, such as temperature and electrical stress, have not really changed,” says André Lange, group manager for quality and reliability at Fraunhofer IIS’ Engineering of Adaptive Systems Division. “However, densely packed active devices with minimum safety margins are required to realize advanced functionality requirements. This makes them more susceptible to reliability issues caused by self-heating and increasing field strengths. Considering advanced packaging techniques with 2.5D and 3D integration, the drivers for reliability issues, especially temperature, will gain importance.”
Contributing factors

The biggest factor is heat. “Higher speeds tend to produce higher temperatures, and temperature is the biggest killer,” says Rita Horner, senior product marketing manager for 3DIC at Synopsys. “Temperature exacerbates electromigration. The expected life can change exponentially from a tiny delta in temperature.”
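The exponential sensitivity to temperature comes from the Arrhenius term in standard lifetime models such as Black’s equation for electromigration, MTTF = A·J⁻ⁿ·exp(Ea/kT). The sketch below is my own illustration; the 0.7 eV activation energy and the temperatures are assumed values for the example, not numbers from the article.

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_factor(t_celsius, ea_ev=0.7):
    """Relative Arrhenius lifetime term exp(Ea / kT). Ea = 0.7 eV is a
    commonly assumed activation energy for electromigration, used here
    purely for illustration."""
    t_kelvin = t_celsius + 273.15
    return math.exp(ea_ev / (K_BOLTZMANN_EV * t_kelvin))

# Projected lifetime ratio between a junction at 125°C and one 10°C hotter.
ratio = arrhenius_factor(125.0) / arrhenius_factor(135.0)
print(f"10°C hotter => roughly {ratio:.2f}x shorter projected life")
```

With these assumed numbers, a mere 10°C rise cuts the projected lifetime by more than a third, which is why small thermal deltas matter so much at advanced nodes.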
This became a much bigger concern with finFETs. “In a planar CMOS process, heat can escape through the bulk of the device into the substrate fairly easily,” says Cadence’s Schaldenbrand. “But when you stand the transistor on its side and wrap it in a blanket, which is effectively what the gate oxide and gate act like, the channel experiences a greater temperature rise, so the stress that a device is experiencing increases significantly.”
More and more electronics are being deployed in hostile environments. “Semiconductor chips that operate in extreme conditions, such as automotive (150°C) or high elevation (data servers in Mexico City), have the highest risk of reliability- and aging-related constraints,” says Milind Weling, senior vice president of programs and operations at Intermolecular. “2.5D and 3D designs could see additional mechanical stress on the underlying silicon chips, and this could induce additional mechanical-stress aging.”
Devices’ attributes get progressively worse. “Over time, the threshold voltage of a device degrades, which means that it takes more time to turn the device on,” says Haran Thanikasalam, senior applications engineer for AMS at Synopsys. “One reason for this is negative-bias temperature instability. But as devices scale down, voltage scaling has been slower than geometry scaling. Today, we are reaching the limits of physics. Devices are operating somewhere around 0.6 to 0.7 volts at 3nm, compared to 1.2V at 40nm or 28nm. Because of this, the electric fields have increased. A large electric field across a very tiny device area can cause severe breakdown.”
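Threshold-voltage degradation from bias temperature instability is commonly fitted with an empirical power law in stress time, ΔVth ≈ A·tⁿ with n typically in the 0.1–0.25 range. The coefficients below are assumed for illustration only, not taken from any foundry model.

```python
def delta_vth_mv(t_hours, a_mv=5.0, n=0.18):
    """Empirical BTI power-law threshold shift in millivolts.
    a_mv and n are illustrative fitting coefficients, not real data."""
    return a_mv * t_hours ** n

# A device that shifts 5 mV after 1 hour of stress keeps degrading,
# but sub-linearly: most of the damage accrues early in life.
for t in [1, 10, 100, 1000, 10 * 365 * 24]:  # out to roughly 10 years
    print(f"{t:>8} h: {delta_vth_mv(t):5.1f} mV")
```

The sub-linear shape is the reason burn-in and early-life stress dominate: another decade of hours adds only about 50% more shift with these assumed coefficients.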
This is new. “The way we capture this phenomenon is something called time-dependent dielectric breakdown (TDDB),” says Schaldenbrand. “You’re looking at how that field density causes devices to break down, and making sure the devices are not experiencing too much field density.”
The other primary cause of aging is electromigration (EM). “If you perform reliability simulation, like EM or IR drop simulation, not only do the devices degrade but you also have electromigration happening on the interconnects,” adds Thanikasalam. “You have to consider not only the devices, but also the interconnects between the devices.”
Analog and digital

When it comes to aging, digital is a subset of analog. “In digital, you’re most worried about drive, because that changes the rise and fall delays,” says Schaldenbrand. “That covers a variety of sins. But analog is a lot more subtle, and gain is something you worry about. Just knowing that Vt changed by this much isn’t going to tell you how much your gain will degrade. That’s only one part of the equation.”
Fig 1: Failure of analog components over time. Source: Synopsys
Aging can be masked in digital. “Depending on the application, a system may just degrade, or it may fail, from the same amount of aging,” says Mentor’s Ramadan. “For example, microprocessor degradation may lead to lower performance, necessitating a slowdown, but not necessarily failures. In mission-critical AI applications, such as ADAS, sensor degradation may directly lead to AI failures and hence system failure.”
That simpler notion of degradation for digital often can be hidden. “A lot of this is captured at the cell characterization level,” adds Schaldenbrand. “So the system designer doesn’t worry about it much. If he runs the right libraries, the problem is covered for him.”
Duty cycle

In order to get an accurate picture of aging, you have to consider activity in the design, but often not in the expected manner. “Negative-bias temperature instability (NBTI) is affecting some devices,” says Synopsys’ Horner. “But the devices do not have to be actively running. Aging can be happening while the device is shut off.”
In the past, the analysis was done without simulation. “You can only get a certain amount of reliability data from doing static, vector-independent analysis,” says Synopsys’ Thanikasalam. “This analysis does not care about the stimuli you give to your system. It takes a broader look and identifies where the problems are happening without simulating the design. But that is proving to be a very inaccurate way of doing things, especially at smaller nodes, because everything is activity-dependent.”
That can be troublesome for IP blocks. “The problem is that if somebody is doing their own chip, their own software in their own device, they have all the information they need to know, down to the transistor level, what that duty cycle is,” says Kurt Shuler, vice president of marketing at Arteris IP. “But if you are creating a chip that other people will create software for, or if you’re providing a whole SDK and they’re modifying it, then you don’t really know. Those chip vendors have to provide to their customers some means to do that analysis.”
For some parts of the design, duty cycles can be estimated. “You never want to find a block level problem at the system level,” says Schaldenbrand. “People can do the analysis at the block level, and it’s fairly inexpensive to do there. For an analog block, such as an ADC or a SerDes or a PLL, you have a good idea of what its operation is going to be within the system. You know what kind of stresses it will experience. That is not true for a large digital design, where you might have several operating modes. That will change digital activity a lot.”
This is the fundamental reason why it has turned into a user issue. “It puts the onus on the user to make sure that you pick stimulus that will activate the parts of the design that you think are going to be more vulnerable to aging and electromigration, and you have to do that yourself,” says Thanikasalam. “This has created a big warning sign among the end users because the foundries won’t be able to provide you with stimulus. They have no clue what your design does.”
Monitoring and testing

The industry approaches are changing at multiple levels. “To properly assess aging in a chip, manufacturers have relied on a technique called burn-in testing, where the wafer is cooked to artificially age it, after which it can be tested for reliability,” says Syed Alam, global semiconductor lead for Accenture. “Heat is the primary factor for aging in chips, with usage a close second, especially for flash, as there are only so many rewrites available on a drive.”
And this is still a technique that many rely on. “AEC-Q100, an important standard for automotive electronics, contains multiple tests that do not reveal true reliability information,” says Fraunhofer’s Lange. “For example, in high-temperature operating life (HTOL) testing, 3×77 devices have to be stressed for 1,000 hours with functional tests before and after stress. Even when all devices pass, you cannot tell whether they will fail after 1,001 hours or whether they will last 10X longer. This information can only be obtained by extended testing or simulations.”
An emerging alternative is to build aging sensors into the chip. “There are sensors, which usually contain a timing loop, and they will warn you when it takes longer for the electrons to go around a loop,” says Arteris IP’s Shuler. “There is also a concept called canary cells, which are meant to die prematurely compared to a standard transistor. This can tell you that aging is impacting the chip. What you are trying to do is get predictive information that the chip is going to die. In some cases, they are taking the information from those sensors, getting it off-chip, throwing it into a big database and running AI algorithms to try to do predictive work.”
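The timing-loop monitor idea can be sketched in a few lines: compare a ring-oscillator-style loop period against its fresh, time-zero baseline and raise a flag when the slowdown crosses a guard-band threshold. The function name, the 5% guard band and the readings below are all hypothetical; real sensors and thresholds are product-specific.

```python
def aging_alarm(baseline_period_ps, measured_period_ps, guardband=0.05):
    """Flag a die when its on-chip timing loop has slowed by more than the
    guard band (5% by default, an assumed threshold) versus time-zero.
    Returns (alarm, fractional_slowdown)."""
    slowdown = (measured_period_ps - baseline_period_ps) / baseline_period_ps
    return slowdown > guardband, slowdown

# Fresh part: 100 ps loop period. Field readings drift upward as devices age.
for reading in [100.4, 102.0, 104.9, 106.2]:
    alarm, slowdown = aging_alarm(100.0, reading)
    print(f"{reading:6.1f} ps  slowdown {slowdown:5.1%}  alarm={alarm}")
```

A fleet of such readings, streamed off-chip, is exactly the kind of data the predictive AI pipelines mentioned in the quote would consume.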
Additional 3D issues

Many of the same problems exist in 2D, 2.5D and 3D designs, except that thermal issues may become more amplified with some architectures. But there also may be a whole bunch of new issues that are not yet fully understood. “When you’re stacking devices on top of each other, you have to backgrind them to thin them,” says Horner. “The stresses on the thinner die could be a concern, and that needs to be understood and studied and addressed in terms of the analysis. In addition, various types of silicon age differently. You’re talking about a heterogeneous environment where you are potentially stacking DRAM, which tends to be more of a specific technology — or CPUs and GPUs, which may utilize different technology process nodes. You may have different types of TSVs or bumps that have been used in this particular silicon. How do they interact with each other?”
Those interfaces are a concern. “There is stress on the die, and that changes the device characteristics,” says Schaldenbrand. “But if different dies heat up to different temperatures, then the places where they interface are going to have a lot of mechanical stress. That’s a big problem, and system interconnect is going to be a big challenge going forward.”
Models and analysis

It all starts with the foundries. “The TSMCs and the Samsungs of the world have to start providing that information,” says Shuler. “As you get to 5nm and below, even 7nm, there is a lot of variability in these processes and that makes everything worse.”
“The foundries worried about this because they realized that the devices being subjected to higher electric fields were degrading much faster than before,” says Thanikasalam. “They started using the MOS reliability and analysis solution (MOSRA) that applies to the device aging part of it. Recently, we have seen that shift toward the end customers, who are starting to use the aging models. Some customers will only do a simple run using degraded models so that the simulation accounts for the degradation of the threshold voltage.”
High-volume chips will need much more extensive analysis. “For high-volume production, multi-PVT simulations are becoming a useless way of verifying this,” adds Thanikasalam. “Everybody has to run Monte Carlo at this level. Monte Carlo simulation with the variation models is the key at 5nm and below.”
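The Monte Carlo flow the quote describes can be sketched in miniature: sample per-device threshold voltages from a variation model, push each sample through a delay model, and look at the statistical tail rather than a handful of PVT corners. Everything below (the alpha-power delay model, the sigma, the sample count) is an assumed toy setup, not a real 5nm variation model.

```python
import random
import statistics

random.seed(2)  # reproducible toy run

VDD, VTH_NOM, SIGMA_VTH, ALPHA = 0.65, 0.30, 0.02, 1.3  # assumed values

def gate_delay(vth, k=1.0):
    """Alpha-power-law delay model in arbitrary units; k and ALPHA are
    illustrative constants, not extracted from any process."""
    return k * VDD / (VDD - vth) ** ALPHA

# Monte Carlo: sample threshold voltages, collect the path-delay population.
delays = [gate_delay(random.gauss(VTH_NOM, SIGMA_VTH)) for _ in range(20000)]
mean = statistics.mean(delays)
p999 = sorted(delays)[int(0.999 * len(delays))]
print(f"mean delay {mean:.3f}, 99.9th percentile {p999:.3f} "
      f"({p999 / mean:.2f}x mean)")
```

A corner-based flow would check a few fixed Vth values; the Monte Carlo tail shows how far real high-volume outliers sit from the mean, which is what matters for yield and aging margin.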
More models are needed. “There are more models being created and optimized,” says Horner. “In terms of 3D stacking, we have knowledge of the concern about electromigration, IR, thermal, and power. Those are the key things that are understood and modeled. For the mechanical aspects — even the materials that we put between the layers and their effect in terms of heat, and also the stability structures — while there are models out there, they are not as enhanced because we haven’t seen enough of these yet.”
Schaldenbrand agrees. “We are constantly working on the models and updating them, adding new phenomena as people become aware of them. There have been a lot of changes required to get ready for the advanced nodes. For the nominal device we can describe aging very well, but the interaction between process variation and its effect on reliability is something that’s still a research topic. That is a very challenging subject.”
With finFETs, the entire methodology changed. “The rules have become so complicated that you need to have a tool that can actually interpret the rules, apply the rules, and tell us where there could be problems two, three years down the line,” says Thanikasalam. “FinFETs can be multi-threshold devices, so when you have the entire gamut of threshold voltages being used in a single IP, we have so many problems, because every single device will go in a different direction.”
Conclusion

Still, progress is being made. “Recently, we have seen many foundries, IDMs, fabless and IP companies rushing to find a solution,” says Ramadan. “They cover a wide range of applications and technology processes. Whereas a standard aging model can be helpful as a starting point for new players, further customizations are expected depending on the target application and the technology process. The Compact Modeling Coalition (CMC), under the Silicon Integration Initiative (Si2), currently is working on developing a standard aging model to help the industry. In 2018, the CMC released the first standard Open Model Interface (OMI) that enables aging simulation for different circuit simulators using the unified standard OMI interface.”
That’s an important piece, but there is still a long road ahead. “Standardization activities within the CMC have started to solve some of these issues,” says Lange. “But there is quite a lot of work ahead in terms of model complexity, characterization effort, application scenario, and tool support.”
Related Stories
- Circuit Aging Becoming A Critical Consideration: As reliability demands soar in automotive and other safety-related markets, tools vendors are focusing on an area often ignored in the past.
- How Chips Age: Are current methodologies sufficient for ensuring that chips will function as expected throughout their expected lifetimes?
- Different Ways To Improve Chip Reliability: Push toward zero defects requires more and different kinds of test in new places.
- Taming NBTI To Improve Device Reliability: Negative-bias temperature instability can cause an array of problems at advanced nodes and reduced voltages.

From: FJB  12/25/2020 10:08:54 PM     Fermilab and partners achieve sustained, high-fidelity quantum teleportation
December 15, 2020
news.fnal.gov
A viable quantum internet — a network in which information stored in qubits is shared over long distances through entanglement — would transform the fields of data storage, precision sensing and computing, ushering in a new era of communication.
This month, scientists at Fermilab, a U.S. Department of Energy Office of Science national laboratory, and their partners took a significant step in the direction of realizing a quantum internet.
In a paper published in PRX Quantum, the team presents for the first time a demonstration of a sustained, long-distance (44 kilometers of fiber) teleportation of qubits of photons (quanta of light) with fidelity greater than 90%. The qubits were teleported over a fiber-optic network using state-of-the-art single-photon detectors and off-the-shelf equipment.
“We’re thrilled by these results,” said Fermilab scientist Panagiotis Spentzouris, head of the Fermilab quantum science program and one of the paper’s coauthors. “This is a key achievement on the way to building a technology that will redefine how we conduct global communication.”
In a demonstration of high-fidelity quantum teleportation at the Fermilab Quantum Network, fiber-optic cables connect off-the-shelf devices (shown above), as well as state-of-the-art R&D devices. Photo: Fermilab
Quantum teleportation is a “disembodied” transfer of quantum states from one location to another. The quantum teleportation of a qubit is achieved using quantum entanglement, in which two or more particles are inextricably linked to each other. If an entangled pair of particles is shared between two separate locations, no matter the distance between them, the encoded information is teleported.
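The protocol described here can be checked numerically with a three-qubit statevector simulation: create a Bell pair, perform a Bell-basis measurement on the sender’s two qubits, and apply the classically controlled X/Z corrections on the receiver’s qubit. This is a generic textbook simulation, my own sketch rather than anything from the Fermilab experiment; all four measurement outcomes recover the input state with fidelity 1.

```python
import numpy as np

# Single-qubit gates and measurement projectors.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P = [np.diag([1, 0]).astype(complex), np.diag([0, 1]).astype(complex)]

def op3(g0, g1, g2):
    """Tensor one gate onto each of the three qubits (order q0 ⊗ q1 ⊗ q2)."""
    return np.kron(np.kron(g0, g1), g2)

def cnot(c, t):
    """CNOT on 3 qubits: identity when control is |0>, X on target when |1>."""
    gates0, gates1 = [I, I, I], [I, I, I]
    gates0[c], gates1[c], gates1[t] = P[0], P[1], X
    return op3(*gates0) + op3(*gates1)

def teleport(psi):
    """Teleport single-qubit state psi from q0 to q2 via a Bell pair on q1,q2.
    Returns the corrected q2 state for each of the 4 measurement outcomes."""
    state = np.kron(np.asarray(psi, complex), np.kron([1, 0], [1, 0]))
    state = cnot(1, 2) @ (op3(I, H, I) @ state)   # entangle q1 and q2
    state = op3(H, I, I) @ (cnot(0, 1) @ state)   # rotate into Bell basis
    outcomes = []
    for m0 in (0, 1):
        for m1 in (0, 1):
            branch = op3(P[m0], P[m1], I) @ state  # measure q0, q1
            branch /= np.linalg.norm(branch)
            if m1:
                branch = op3(I, I, X) @ branch     # classical X correction
            if m0:
                branch = op3(I, I, Z) @ branch     # classical Z correction
            outcomes.append(branch.reshape(2, 2, 2)[m0, m1, :])
    return outcomes

psi = np.array([0.6, 0.8j])                        # arbitrary input qubit
for phi in teleport(psi):
    print(abs(np.vdot(psi, phi)))                  # fidelity for each outcome
```

Note that only two classical bits (m0, m1) travel from sender to receiver; the quantum state itself is never transmitted, which is what “disembodied” transfer means.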
The joint team — researchers at Fermilab, AT&T, Caltech, Harvard University, NASA Jet Propulsion Laboratory and the University of Calgary — successfully teleported qubits on two systems: the Caltech Quantum Network, or CQNET, and the Fermilab Quantum Network, or FQNET. The systems were designed, built, commissioned and deployed by Caltech’s public-private research program on Intelligent Quantum Networks and Technologies, or INQNET.
“We are very proud to have achieved this milestone on sustainable, high-performing and scalable quantum teleportation systems,” said Maria Spiropulu, Shang-Yi Ch’en Professor of Physics at Caltech and director of the INQNET research program. “The results will be further improved with system upgrades we are expecting to complete by Q2 2021.”
CQNET and FQNET, which feature near-autonomous data processing, are compatible both with existing telecommunication infrastructure and with emerging quantum processing and storage devices. Researchers are using them to improve the fidelity and rate of entanglement distribution, with an emphasis on complex quantum communication protocols and fundamental science.
The achievement comes just a few months after the U.S. Department of Energy unveiled its blueprint for a national quantum internet at a press conference in Chicago.
“With this demonstration we’re beginning to lay the foundation for the construction of a Chicago-area metropolitan quantum network,” Spentzouris said. The Chicagoland network, called the Illinois Express Quantum Network, is being designed by Fermilab in collaboration with Argonne National Laboratory, Caltech, Northwestern University and industry partners.
This research was supported by DOE’s Office of Science through the Quantum Information Science-Enabled Discovery (QuantISED) program.
“The feat is a testament to the success of collaboration across disciplines and institutions, which drives so much of what we accomplish in science,” said Fermilab Deputy Director of Research Joe Lykken. “I commend the INQNET team and our partners in academia and industry on this first-of-its-kind achievement in quantum teleportation.”
Learn more about the result.
Fermilab is America’s premier national laboratory for particle physics and accelerator research. A U.S. Department of Energy Office of Science laboratory, Fermilab is located near Chicago, Illinois, and operated under contract by the Fermi Research Alliance LLC, a joint partnership between the University of Chicago and the Universities Research Association, Inc. Visit Fermilab’s website at www.fnal.gov and follow us on Twitter at @Fermilab.
The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.
Tagged: California, Caltech, INQNET, quantum communication, quantum information science, quantum science, quantum teleportation 

From: FJB  12/26/2020 7:30:12 AM    
Photocatalyst that can split water into hydrogen and oxygen at a quantum efficiency close to 100% | FuelCellsWorks fuelcellsworks.com
A research team led by Shinshu University’s Tsuyoshi Takata, Takashi Hisatomi and Kazunari Domen succeeded in developing a photocatalyst that can split water into hydrogen and oxygen at a quantum efficiency close to 100%.
The team consisted of their colleagues from Yamaguchi University, The University of Tokyo and National Institute of Advanced Industrial Science and Technology (AIST).
The team produced an ideal photocatalyst structure composed of semiconductor particles and co-catalysts. H2 and O2 evolution co-catalysts were selectively photodeposited on different facets of crystalline Al-doped SrTiO3 particles due to anisotropic charge transport. This photocatalyst structure effectively prevented charge-recombination losses, reaching the upper limit of quantum efficiency.
Figure 1 – Schematic structure (a) and scanning electron microscope image (b) of Al-doped SrTiO3 site-selectively co-loaded with a hydrogen evolution co-catalyst (Rh/Cr2O3) and an oxygen evolution co-catalyst (CoOOH).
Water splitting driven by solar energy is a technology for producing renewable solar hydrogen on a large scale. To put such technology to practical use, the production cost of solar hydrogen must be significantly reduced [1]. That requires a reaction system that can split water efficiently and be scaled up easily. A system consisting of particulate semiconductor photocatalysts can be expanded over a large area with relatively simple processes, so developing photocatalysts that drive the sunlight-driven water-splitting reaction with high efficiency would be a great stride toward large-scale solar hydrogen production.
To raise the solar energy conversion efficiency of photocatalytic water splitting, two factors must be improved: widening the wavelength range of light the photocatalyst can use for the reaction, and increasing the quantum yield at each wavelength. The former is determined by the bandgap of the photocatalyst material; the latter by the quality of the material and the functionality of the co-catalyst used to promote the reaction. Neither is easy, because photocatalytic water splitting is an uphill (endergonic) reaction involving multi-electron transfer under non-equilibrium conditions.
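The quantum yield discussed above has a concrete operational meaning: for overall water splitting, two photoexcited electrons are consumed per evolved H2 molecule, so the apparent quantum yield (AQY) compares twice the H2 evolution rate against the incident photon flux. A minimal sketch of that bookkeeping follows; the numbers in the example are illustrative, not taken from the paper.

```python
# Physical constants (CODATA exact values).
H_PLANCK = 6.62607015e-34   # Planck constant, J*s
C_LIGHT = 2.99792458e8      # speed of light, m/s
N_AVOGADRO = 6.02214076e23  # Avogadro constant, 1/mol

def apparent_quantum_yield(h2_umol_per_s, light_power_w, wavelength_nm):
    """AQY for overall water splitting: two photoexcited electrons
    (hence two absorbed photons in a one-step-excitation photocatalyst)
    are consumed per evolved H2 molecule."""
    photon_energy = H_PLANCK * C_LIGHT / (wavelength_nm * 1e-9)  # J/photon
    photons_per_s = light_power_w / photon_energy                # photon flux
    h2_per_s = h2_umol_per_s * 1e-6 * N_AVOGADRO                 # molecules/s
    return 2 * h2_per_s / photons_per_s

# Illustrative: roughly 1.5 umol H2/s under 1 W of 365 nm light
# corresponds to an AQY near unity.
print(apparent_quantum_yield(1.525, 1.0, 365))
```

The same arithmetic explains why both levers matter: a wider absorption range increases the usable photon flux, while a higher AQY converts more of that flux into hydrogen.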
This study refined the design and operating principle for advancing water-splitting methods with a high quantum efficiency. The knowledge obtained here will propel the field of photocatalytic water splitting further and enable scalable solar hydrogen production.
The project was made possible through the support of NEDO (New Energy and Industrial Technology Development Organization) under the “Artificial photosynthesis project”.
Title: Photocatalytic water splitting with a quantum efficiency of almost unity. Authors: Tsuyoshi Takata, Junzhe Jiang, Yoshihisa Sakata, Mamiko Nakabayashi, Naoya Shibata, Vikas Nandal, Kazuhiko Seki, Takashi Hisatomi, Kazunari Domen. Journal: Nature 581, 411–414 (2020). DOI: 10.1038/s41586-020-2278-9


To: FJB who wrote (385)  2/17/2021 8:14:12 PM  From: sense    Hmmm. The new future... just like the old future ?
At least, after decades of hearing about how germanium was going to replace silicon in the next generation, and then, the next, ad infinitum....
There seems to be a broad consensus now that germanium transistors are definitively better... in fuzz pedals.
Otherwise <crickets chirping>.
The incremental improvements delivered to us by the academic-corporate-research-industrial standards-coordinating complex... do seem to ensure you can earn a degree in a related engineering field and have it not be made obsolete for an entire career... /s 

To: FJB who wrote (387)  2/17/2021 9:01:12 PM  From: sense    That one was pretty exciting....
Right up to the point where, first, it mentions being made of Strontiummm and Rhodiummm... and then the bit about achieving "almost" unity...
How "almost" is it ?
Others might well have an ability to tweak some aspects to get "almost" close enough to over the hump to offer an economic reason to get it out of the lab ?
Thanks for providing the brain floss. 

From: FJB  2/24/2021 8:09:46 PM     Imec demonstrates 20-nm pitch line/space resist imaging with high-NA EUV interference lithography Science X staff techxplore.com/news/202102imecnmpitchlinespaceresist.html
Schematic representations (not to scale) of the Lloyd’s Mirror setup for high-NA EUV interference coupon experiments. Credit: IMEC Imec reports for the first time the use of a 13.5-nm, high-harmonic-generation source for the printing of 20-nm pitch line/spaces using interference lithographic imaging of an Inpria metal-oxide resist under high-numerical-aperture (high-NA) conditions. The demonstrated high-NA capability of EUV interference lithography using this EUV source presents an important milestone for the AttoLab, a research facility initiated by imec and KMLabs to accelerate the development of the high-NA patterning ecosystem on 300-mm wafers. The interference tool will be used to explore the fundamental dynamics of photoresist imaging and provide patterned 300-mm wafers for process development before the first 0.55-NA EXE:5000 prototype from ASML becomes available.
The high-NA exposure at 13.5 nm was emulated with a coherent high-flux laser source from KMLabs in a Lloyd’s-Mirror-based interference setup for coupon experiments on imec’s spectroscopy beamline. This apparatus supplies critical learning for the next step: expansion to 300-mm wafer interference exposures. In this arrangement, light reflected from a mirror interferes with light emitted directly by the 13.5-nm laser source, generating a finely detailed interference pattern suited for resist imaging. The pitch of the imaged resist pattern can be tuned by changing the angle between the interfering light beams. With this setup, 20-nm line/spaces were successfully patterned for the first time at imec in an Inpria metal-oxide resist coated on coupon samples, using a single exposure (dose range ~54–64 mJ/cm2, interference angle 20 degrees).
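The pitch tuning described above follows directly from two-beam interference geometry: the fringe period is lambda / (2 sin theta). Taking the quoted 20-degree "interference angle" as the half-angle of each beam from the sample normal (an interpretation assumed here because it is what makes the numbers agree), lambda = 13.5 nm lands almost exactly on the reported 20-nm pitch:

```python
import math

def interference_pitch(wavelength_nm, half_angle_deg):
    """Fringe period of a two-beam interference pattern:
    pitch = lambda / (2 * sin(theta)), with theta the half-angle
    between each beam and the sample normal."""
    theta = math.radians(half_angle_deg)
    return wavelength_nm / (2 * math.sin(theta))

print(round(interference_pitch(13.5, 20), 1))  # -> 19.7, i.e. ~20 nm pitch

# The tool's stated 8-nm pitch target would need sin(theta) = 13.5/16,
# i.e. a half-angle of roughly 57.5 degrees.
print(math.degrees(math.asin(13.5 / 16)))
```

The formula also makes clear why a shorter wavelength or a larger angle (higher effective NA) is the only route to finer pitch in a single exposure.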
"The highflux laser source of KMLabs was used at a record small wavelength of 13.5 nm, emitting a series of attosecond (1018s) pulses that reaches the photoresist with a pulse duration that is a few femtoseconds (1015s) in width. This imposed challenging requirements on the temporal coherence of the interfering waves," explains John Petersen, Principal Scientist at imec and SPIE Fellow. "The demonstrated capability of this setup for emulating highNA EUV lithography exposures is an important AttoLab milestone. It demonstrates that we can synchronize femtosecond wide pulses, that we have excellent vibration control, and excellent beam pointing stability. The 13.5 nm femtosecond enveloped attosecond laser pulses allow us to study EUV photon absorption and ultrafast radiative processes that are subsequently induced in the photoresist material. For these studies, we will couple the beamline with spectroscopy techniques, such as timeresolved infrared and photoelectron spectroscopy, that we earlier installed within the laboratory facility. The fundamental learnings from this spectroscopy beamline will contribute to developing the lithographic materials required for the nextgeneration (i.e., 0.55 NA) EUV lithography scanners, before the first 0.55 EXE5000 prototype becomes available."
Interference chamber for full-wafer experiments. Credit: IMEC Next up, the learnings from this first proof of concept will be transferred to a second, 300-mm-wafer-compatible EUV interference lithography beamline that is currently under installation. This beamline is designed for screening various resist materials under high-NA conditions at a few seconds per single exposure, and for supporting the development of optimized pattern, etch and metrology technologies viable for high-NA EUV lithography. "The lab's capabilities are instrumental for fundamental investigations to accelerate material development toward high-NA EUV," said Andrew Grenville, CEO of Inpria. "We are looking forward to deeper collaboration with the AttoLab."
(Left) Cross-section SEM image of a 20-nm L/S pattern imaged in an Inpria metal-oxide resist, exposed in a Lloyd’s mirror interference setup at a dose of 64 mJ/cm2 and interference angle 20°. (Right) Fourier transform analysis, where 0.05 = 20-nm pitch. Credit: IMEC "Our interference tools are designed to go from 32-nm pitch to an unprecedented 8-nm pitch on 300-mm wafers, as well as smaller coupons," says John Petersen. "They will offer insights complementary to what is already gained from 0.33-NA EUV lithography scanners—which are currently being pushed to their ultimate single-exposure resolution limits. In addition to patterning, many other materials research areas will benefit from this state-of-the-art AttoLab research facility. For example, the ultrafast analytic capability will accelerate materials development for next-generation logic, memory and quantum devices, and for next-generation metrology and inspection techniques."
More information: Introduction to imec's AttoLab for ultrafast kinetics of EUV exposure processes and ultrasmall pitch lithography, Paper 1161046
Citation: Imec demonstrates 20nm pitch line/space resist imaging with highNA EUV interference lithography (2021, February 23) retrieved 24 February 2021 from techxplore.com
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only. 

 