
Technology Stocks: New Technology


From: FJB, 7/21/2019 12:22:38 PM
 
Google expected to achieve quantum supremacy in 2019: Here’s what that means
Tristan Greene
thenextweb.com

Google‘s reportedly on the verge of demonstrating a quantum computer capable of feats no ordinary classical computer could perform. The term for this is quantum supremacy, and experts believe the Mountain View company could be mere months from achieving it. This may be the biggest scientific breakthrough for humanity since we figured out how to harness the power of fire. Here’s what you need to know before it happens.

Functional quantum computers currently exist – IBM, D-Wave, Google, Microsoft, Rigetti, and dozens of other companies and universities are working tirelessly to develop them – but none of them yet does anything we can’t already do with a regular, old-fashioned computer. They’re proofs of concept. The big news right now has to do with a new “rule” called Neven’s Law, named after one of Google‘s quantum gurus, Hartmut Neven, who stated that quantum computing technology is currently snowballing at a double-exponential rate. We’ll get to that later. First let’s talk about what quantum supremacy would actually mean for you and me.


Why you should care


Experts predict the advent of quantum supremacy – useful quantum computers – will herald revolutionary advances in nearly every scientific field. We’re talking breakthroughs in chemistry, astrophysics, medicine, security, communications and more. It may sound like a lot of hype, but these are the grounded predictions. Others think quantum computers will help scientists unlock some of the greater mysteries of the cosmos such as how the universe came to be and whether life exists outside of our own planet.

But quantum computing is an edge technology: there’s no blueprint for wrangling subatomic particles into performing computations. Some folks believe quantum computers will never stack up to modern supercomputers. While this is a minority view, there is a valid point to be gleaned from it: quantum computers will never replace classical ones. And they’re not meant to.

You can’t replace your iPhone or PC with a quantum computer any more than you can replace your tennis shoes with a nuclear aircraft carrier. The two things are designed to do different things, despite the fact they’re both related to transportation in some way.

Classical computers allow you to play games, check your emails, surf the web, and run programs. Quantum computers will, for the most part, perform simulations too complex for binary systems that run on computer bits. In other words, individual consumers will have almost no use for a quantum computer of their own, but NASA and MIT, for example, absolutely will.

What’s Google actually doing?

While quantum supremacy would be a giant breakthrough, let’s not get ahead of ourselves: The world probably isn’t going to catapult into some sort of far-future scientific utopia just because Google shows off a quantum system that can do things impossible for a binary computer. The reason experts in the field are excited right now is Neven’s Law – something that’s not really a law at all.

Neven’s Law is currently more of an affectionate term for a rule coined by Google‘s Hartmut Neven. At the company’s Spring Quantum Symposium this May, Neven made the claim that quantum systems are increasing in performance at a doubly-exponential rate. This means that, rather than doubling in performance with successive iterations as was the case with classical computers and Moore’s Law, quantum technology is improving at a far more dramatic rate. It took 50 years to go from punch-card systems to iPhones; if Neven’s Law holds, we’ll see quantum systems make a comparable leap in a fraction of that time.
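To get a feel for how quickly doubly-exponential growth runs away from ordinary exponential growth, here is a small illustration (the numbers are purely schematic, not Neven’s actual measurements):

    # Compare Moore's-Law-style exponential growth with doubly-exponential
    # growth over a handful of "generations". The base of 2 and the number
    # of steps are arbitrary; only the shape of the two curves matters.
    for step in range(1, 7):
        exponential = 2 ** step            # 2, 4, 8, 16, 32, 64
        double_exp = 2 ** (2 ** step)      # 4, 16, 256, 65536, ...
        print(f"step {step}: exponential {exponential:>3}   double-exponential {double_exp}")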


Most of this improvement can be attributed to amazing new feats in error-correction – filtering out noise in quantum systems is among the biggest challenges faced by physicists. Some of the improvement has to do with the simple fact that a rising tide lifts all ships. Google‘s invested as much time, money, and personnel as any other organization involved in quantum technology (if not more). If quantum supremacy is possible, Google‘s as likely a candidate to achieve it as any other company.

Well, except IBM, but only because it takes a somewhat different view on the subject. Arguably, IBM is at the forefront of quantum technology. And there’s no reason to believe it won’t reach Google’s definition of quantum supremacy soon as well, but its leadership has been reticent to talk about goalposts like quantum supremacy.

TNW reached out to IBM to get its take. Here’s what Dr. Jay Gambetta, IBM Fellow and Global Lead of Quantum Computing, Theory, Software, IBM Q, told us:

Supremacy isn’t something you shoot for. As has been proven, it’s a moving target, and something we’ll recognize once we’ve moved to bigger things – namely demonstrating a significant performance advantage over what classical computers can do, alone. This means developing a quantum computation that’s either hundreds or thousands of times faster than a classical computation, or needs a smaller fraction of the memory required by a classical computer, or makes something possible that simply isn’t possible now with a classical computer.

We must also measure that progress beyond simple qubit counts or just coherence times, which is why we developed Quantum Volume, a full-system performance metric that accounts for gate and measurement errors as well as device cross talk and connectivity, and circuit software compiler efficiency. It’s an agnostic metric that others, including Rigetti, have benchmarked their systems against. You can read more about the Quantum Volume of our systems, and how it’s calculated, here.
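For readers curious how a single Quantum Volume number gets reported, here is a deliberately simplified sketch (the real protocol runs random square model circuits and applies a statistical confidence test, not the bare two-thirds threshold used below):

    def quantum_volume(heavy_prob):
        """heavy_prob maps a circuit width m to the measured heavy-output
        probability for random m-qubit, m-layer model circuits."""
        qv = 1
        for m in sorted(heavy_prob):
            if heavy_prob[m] > 2.0 / 3.0:   # simplified pass criterion
                qv = 2 ** m                 # QV = 2**m for the largest passing width
            else:
                break
        return qv

    # Example: a device that passes at widths 1-4 but fails at width 5 reports QV = 16.
    print(quantum_volume({1: 0.84, 2: 0.80, 3: 0.76, 4: 0.71, 5: 0.62}))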

What’s next

Time will tell whether IBM’s or Google’s approach makes more sense, but according to Neven’s Law we’re mere months away from seeing a full-fledged demonstration of quantum supremacy from one team or another. Quanta Magazine reports that Google‘s had to crib computational power from systems outside of its quantum labs just to keep up with the hand-over-fist improvements in performance. Neven told Quanta’s Kevin Hartnett:

Somewhere in February I had to make calls to say, ‘Hey, we need more quota.’ We were running jobs comprised of a million processors.

He went on to explain that, with double-exponential growth, the proof isn’t always front-and-center at first:

It looks like nothing is happening, nothing is happening, and then whoops, suddenly you’re in a different world. That’s what we’re experiencing here.

We’ve been talking about quantum computing for years, but this is the first time quantum supremacy’s been dangled in front of our faces as a near-term eventuality. Of course, with any bold claim, it’s prudent to maintain a modicum of cynicism. But Neven’s Law tells us our quantum dreams could come true before the end of the year.

All we want for Christmas is an entirely new computing paradigm capable of making classical computers look like punch-card systems. Oh, and a drone with a flamethrower attachment — but that’s unrelated.




From: FJB, 9/14/2019 6:26:28 PM
 
Abstract
This Letter proposes a realistic implementation of the curved relativistic mirror concept to reach unprecedented light intensities in experiments. The scheme is based on relativistic plasma mirrors that are optically curved by laser radiation pressure. Its validity is supported by cutting-edge three-dimensional particle-in-cell simulations and a theoretical model, which show that intensities above 10^25 W cm^-2 could be reached with a 3 petawatt (PW) laser. Its very high robustness to laser and plasma imperfections is shown to surpass all previous schemes and should enable its implementation on existing PW laser facilities.
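For a rough sense of scale (a back-of-envelope estimate assuming a uniform focal spot, not the paper’s actual focusing analysis), sustaining 10^25 W cm^-2 with a 3 PW pulse means concentrating the light onto a spot roughly a tenth of a micron across, well below the micron-scale diffraction limit of the driving laser, which is what the optically curved plasma mirror is meant to overcome:

    import math

    power_watts = 3e15        # a 3 PW laser pulse
    intensity = 1e25          # target intensity in W / cm^2

    spot_area_cm2 = power_watts / intensity
    spot_radius_um = math.sqrt(spot_area_cm2 / math.pi) * 1e4   # cm -> micrometers
    print(f"required focal-spot radius ~ {spot_radius_um:.2f} micrometers")   # ~0.1 um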


  • Revised 21 March 2019
  • Received 28 November 2018
  • DOI: https://doi.org/10.1103/PhysRevLett.123.105001

    © 2019 American Physical Society




From: DinoNavarre, 9/18/2019 10:44:19 PM (1 Recommendation)
     
I like Israeli tech... Anybody familiar with this outfit / technology / competitors?

Opinions?

    Audio Pixels Limited

    Investor Video Presentation

    Chart

    Digital Speaker Development Update

    ASX Quote



From: DinoNavarre, 9/19/2019 12:46:13 PM
     
Are there any more listed outfits like this? Thanks for any help!

    HUT.V



To: DinoNavarre who wrote (376), 9/19/2019 5:39:11 PM
From: FJB
     
There are many, many companies like that in China. You should ask about miners here ...
    Subject 59919



To: FJB who wrote (377), 9/19/2019 6:54:07 PM
From: DinoNavarre
     
    Will do.....Thanks.



From: FJB, 10/20/2019 6:18:32 AM
     
    Math Breakthrough Speeds Supercomputer Simulations
    Andy Fell
egghead.ucdavis.edu/2019/10/18/math-breakthrough-speeds-supercomputer-simulations/

    A breakthrough by UC Davis mathematicians could help scientists get three or four times the performance from supercomputers used to model protein folding, turbulence and other complex atomic scale problems.

    “This is a big deal,” said Niels Gronbech-Jensen, professor of mathematics and of mechanical and aerospace engineering at UC Davis. “We are now able to do a broad class of simulations several times faster than what has been possible before.”




    Simulation of a virus particle created with LAMMPS molecular dynamics software. New work from UC Davis will allow faster and more accurate simulations of atoms and molecules. (Image by Eindhoven University of Technology via Sandia National Lab.)

    One of the new algorithms has been incorporated into the Sandia National Laboratory molecular dynamics suite, LAMMPS, which is used worldwide for studies in biochemistry, materials science and other fields.

Newton’s equations describe how systems change over time. In the early twentieth century, physicist Paul Langevin developed equations that add friction and noise to Newton’s equations in order to describe a system in thermal balance. But it was only with the development of computers that it became practical to use these equations to study how large ensembles of atoms and molecules behave. That methodology, called molecular dynamics, was pioneered by, among others, Edward Teller and Berni Alder of the Lawrence Livermore National Laboratory and the UC Davis Department of Applied Science.
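In its simplest one-dimensional form (a standard textbook statement, not the specific formulation used in the UC Davis papers), the Langevin equation reads

    m \frac{dv}{dt} = f(x) - \alpha v + \beta(t), \qquad
    \langle \beta(t) \rangle = 0, \qquad
    \langle \beta(t)\,\beta(t') \rangle = 2 \alpha k_B T\, \delta(t - t'),

where f(x) is the Newtonian force, α is the friction coefficient, and β(t) is a random force whose strength is fixed by the temperature T through the fluctuation-dissipation relation.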

    Molecular dynamics simulations are now widely used in applications such as materials science and pharmaceutical research.

The timestep problem

In adapting Newton’s and Langevin’s equations to run on digital computers, scientists had to make an important change. They had to break the equations into discrete timesteps.

    “The time step makes the system behave differently,” Gronbech-Jensen said.

    The shorter the timesteps, the closer the simulation will be to reality, where systems change continuously. But with short timesteps it takes longer to complete a simulation. With larger timesteps, however, results can start to deviate from reality.

    “We are in a squeeze between getting somewhere and being accurate,” Gronbech-Jensen said.

    Molecular dynamics simulations essentially describe the movements and interactions of a lot of particles. A few years ago, Gronbech-Jensen’s research group found a way to accurately calculate the thermal distributions of positions of particles in a simulation regardless of the timestep. Over the past year, they have figured out that they can obtain accurate thermal distributions for the particle velocities as well, thereby getting a complete and accurate statistical description of a molecular ensemble simulated at large time steps.
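To make the timestep issue concrete, here is a minimal discrete-time Langevin integrator in a simple explicit form. This is a generic textbook-style scheme used purely for illustration, not the Gronbech-Jensen algorithm itself; the point is to show where the timestep dt enters and how it can bias the sampled temperature:

    import numpy as np

    def langevin_step(x, v, force, dt, mass=1.0, friction=1.0, kT=1.0,
                      rng=np.random.default_rng(0)):
        """One simple discrete-time step of  m dv = f(x) dt - friction*v dt + noise,
        with the random kick drawn as sqrt(2*friction*kT*dt) * N(0, 1)."""
        noise = np.sqrt(2.0 * friction * kT * dt) * rng.standard_normal()
        v = v + (force(x) - friction * v) * dt / mass + noise / mass
        x = x + v * dt
        return x, v

    # Example: a particle in a harmonic well, f(x) = -x. With a large dt the
    # sampled kinetic temperature drifts away from the target kT; removing that
    # discretization bias at large timesteps is what the new algorithms achieve.
    x, v, vels = 0.0, 0.0, []
    for _ in range(200_000):
        x, v = langevin_step(x, v, lambda q: -q, dt=0.1)
        vels.append(v)
    print("measured <v^2> =", round(float(np.var(vels)), 3), "(target kT = 1.0)")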

    The new algorithm allows scientists to run simulations with bigger timesteps without losing statistical accuracy. This could effectively increase computing power three- to four-fold or more, Gronbech-Jensen said – a feature that is particularly impactful for simulations that are currently challenging the most powerful supercomputers in the world.

    The new capability is now freely available to use through the LAMMPS molecular dynamics suite.

    Gronbech-Jensen said it’s gratifying to see his team’s work go into wide use.

    “It’s great to see our work getting to the point of having tangible impact,” he said.

More information:

Accurate configurational and kinetic statistics in discrete-time Langevin systems (Molecular Physics)

    Complete set of stochastic Verlet-type thermostats for correct Langevin simulations (Molecular Physics)



From: FJB, 10/23/2019 11:45:35 AM
     
    IBM DENIES GOOGLE QUANTUM SUPREMACY

    ibm.com

    On “Quantum Supremacy” | IBM Research Blog
    Edwin Pednault



October 21, 2019 | Written by: Edwin Pednault, John Gunnels, and Jay Gambetta


    Quantum computers are starting to approach the limit of classical simulation and it is important that we continue to benchmark progress and to ask how difficult they are to simulate. This is a fascinating scientific question.

    Recent advances in quantum computing have resulted in two 53-qubit processors: one from our group in IBM and a device described by Google in a paper published in the journal Nature. In the paper, it is argued that their device reached “quantum supremacy” and that “a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task.” We argue that an ideal simulation of the same task can be performed on a classical system in 2.5 days and with far greater fidelity. This is in fact a conservative, worst-case estimate, and we expect that with additional refinements the classical cost of the simulation can be further reduced.

    Because the original meaning of the term “quantum supremacy,” as proposed by John Preskill in 2012, was to describe the point where quantum computers can do things that classical computers can’t, this threshold has not been met.

    This particular notion of “quantum supremacy” is based on executing a random quantum circuit of a size infeasible for simulation with any available classical computer. Specifically, the paper shows a computational experiment over a 53-qubit quantum processor that implements an impressively large two-qubit gate quantum circuit of depth 20, with 430 two-qubit and 1,113 single-qubit gates, and with predicted total fidelity of 0.2%. Their classical simulation estimate of 10,000 years is based on the observation that the RAM memory requirement to store the full state vector in a Schrödinger-type simulation would be prohibitive, and thus one needs to resort to a Schrödinger-Feynman simulation that trades off space for time.
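The memory figure behind that argument is easy to reproduce (a back-of-envelope sketch assuming double-precision complex amplitudes; Google’s and IBM’s detailed accounting differs):

    qubits = 53
    amplitudes = 2 ** qubits          # about 9.0e15 complex amplitudes
    bytes_per_amplitude = 16          # complex128: two 8-byte doubles
    total_bytes = amplitudes * bytes_per_amplitude
    print(f"full 53-qubit state vector: ~{total_bytes / 1e15:.0f} petabytes")

That is far more than any machine’s RAM, but comparable to the disk capacity of a leadership-class supercomputer, which is why IBM’s approach leans on secondary storage.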

    The concept of “quantum supremacy” showcases the resources unique to quantum computers, such as direct access to entanglement and superposition. However, classical computers have resources of their own such as a hierarchy of memories and high-precision computations in hardware, various software assets, and a vast knowledge base of algorithms, and it is important to leverage all such capabilities when comparing quantum to classical.

    When their comparison to classical was made, they relied on an advanced simulation that leverages parallelism, fast and error-free computation, and large aggregate RAM, but failed to fully account for plentiful disk storage. In contrast, our Schrödinger-style classical simulation approach uses both RAM and hard drive space to store and manipulate the state vector. Performance-enhancing techniques employed by our simulation methodology include circuit partitioning, tensor contraction deferral, gate aggregation and batching, careful orchestration of collective communication, and well-known optimization methods such as cache-blocking and double-buffering in order to overlap the communication transpiring between and computation taking place on the CPU and GPU components of the hybrid nodes. Further details may be found in Leveraging Secondary Storage to Simulate Deep 54-qubit Sycamore Circuits.
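To make “Schrödinger-style simulation” concrete, the core operation is simply holding the full state vector in memory (or on disk) and updating it gate by gate. Below is a toy single-qubit example; IBM’s simulator layers the partitioning, batching, and communication techniques listed above on top of this basic step:

    import numpy as np

    def apply_single_qubit_gate(state, gate, target, n_qubits):
        """Apply a 2x2 unitary `gate` to qubit `target` of an n-qubit state vector."""
        state = state.reshape([2] * n_qubits)                      # one axis per qubit
        state = np.tensordot(gate, state, axes=([1], [target]))    # contract target axis
        state = np.moveaxis(state, 0, target)                      # restore axis order
        return state.reshape(-1)

    # Example: put one qubit of a 3-qubit register into superposition with a Hadamard.
    n = 3
    psi = np.zeros(2 ** n, dtype=np.complex128)
    psi[0] = 1.0                                                    # start in |000>
    H = np.array([[1, 1], [1, -1]], dtype=np.complex128) / np.sqrt(2)
    psi = apply_single_qubit_gate(psi, H, target=0, n_qubits=n)
    print(np.round(psi, 3))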




    Figure 1. Analysis of expected classical computing runtime vs circuit depth of “Google Sycamore Circuits”. The bottom (blue) line estimates the classical runtime for a 53-qubit processor (2.5 days for a circuit depth 20), and the upper line (orange) does so for a 54-qubit processor.

    Our simulation approach features a number of nice properties that do not directly transfer from the classical to quantum worlds. For instance, once computed classically, the full state vector can be accessed arbitrarily many times. The runtime of our simulation method scales approximately linearly with the circuit depth (see Figure 1 above), imposing no limits such as those owing to the limited coherence times. New and better classical hardware, code optimizations to more efficiently utilize the classical hardware, not to mention the potential of leveraging GPU-direct communications to run the kind of supremacy simulations of interest, could substantially accelerate our simulation.

    Building quantum systems is a feat of science and engineering and benchmarking them is a formidable challenge. Google’s experiment is an excellent demonstration of the progress in superconducting-based quantum computing, showing state-of-the-art gate fidelities on a 53-qubit device, but it should not be viewed as proof that quantum computers are “supreme” over classical computers.

It is well known in the quantum community that we at IBM are concerned about where the term “quantum supremacy” has gone. The origins of the term, including both a reasoned defense and a candid reflection on some of its controversial dimensions, were recently discussed by John Preskill in a thoughtful article in Quanta Magazine. Professor Preskill summarized the two main objections to the term that have arisen from the community by explaining that the “word exacerbates the already overhyped reporting on the status of quantum technology” and that “through its association with white supremacy, evokes a repugnant political stance.”

Both are sensible objections. And we would further add that the “supremacy” term is being misunderstood by nearly all (outside of the rarified world of quantum computing experts who can put it in the appropriate context). A headline that includes some variation of “Quantum Supremacy Achieved” is almost irresistible to print, but it will inevitably mislead the general public. First because, as we argue above, by its strictest definition the goal has not been met. But more fundamentally, because quantum computers will never reign “supreme” over classical computers, but will rather work in concert with them, since each has its unique strengths.

    For the reasons stated above, and since we already have ample evidence that the term “quantum supremacy” is being broadly misinterpreted and causing ever growing amounts of confusion, we urge the community to treat claims that, for the first time, a quantum computer did something that a classical computer cannot with a large dose of skepticism due to the complicated nature of benchmarking an appropriate metric.

    For quantum to positively impact society, the task ahead is to continue to build and make widely accessible ever more powerful programmable quantum computing systems that can implement, reproducibly and reliably, a broad array of quantum demonstrations, algorithms and programs. This is the only path forward for practical solutions to be realized in quantum computers.

    A final thought. The concept of quantum computing is inspiring a whole new generation of scientists, including physicists, engineers, and computer scientists, to fundamentally change the landscape of information technology. If you are already pushing the frontiers of quantum computing forward, let’s keep the momentum going. And if you are new to the field, come and join the community. Go ahead and run your first program on a real quantum computer today.

    The best is yet to come.

    Chief Architect for IBM Q Dmitri Maslov also contributed to this article.



To: FJB who wrote (373), 11/1/2019 7:33:40 PM
From: trickydick
     
FUBHO: This is fascinating stuff; it almost sounds like science fiction, only, as you said, it's going to become commercialized soon. I'm new to some of these boards and I'm always looking for new opportunities for investments.

    I may be stepping in here when huge volumes of information may have already been shared, over long periods, so my apologies if my inquiry is burdensome and amateurish.

So, my question is: Are you invested in this technology through the stock market? Do you see this specific type of computer as a field worth investing in? Has the best time already come and gone? Is this technology so limited that investing in it wouldn't be worth the time to research it?

    Just trying to expand my horizons.

I would appreciate any help and/or direction you can give us. And, thank you.



From: FJB, 4/8/2020 11:16:53 AM
     
    ‘Amazing’ Math Bridge Extended Beyond Fermat’s Last Theorem
    By Erica Klarreich

    April 6, 2020




    quantamagazine.org

    Robert Langlands, who conjectured the influential Langlands correspondence about 50 years ago, giving a talk at the Institute for Advanced Study in Princeton, New Jersey, in 2016.

    Dan Komoda/Institute for Advanced Study

    Namely, for both Diophantine equations and automorphic forms, there’s a natural way to generate an infinite sequence of numbers. For a Diophantine equation, you can count how many solutions the equation has in each clock-style arithmetic system (for example, in the usual 12-hour clock, 10 + 4 = 2). And for the kind of automorphic form that appears in the Langlands correspondence, you can compute an infinite list of numbers analogous to quantum energy levels.

    If you include only the clock arithmetics that have a prime number of hours, Langlands conjectured that these two number sequences match up in an astonishingly broad array of circumstances. In other words, given an automorphic form, its energy levels govern the clock sequence of some Diophantine equation, and vice versa.
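Here is what that “clock sequence” looks like in practice for an elliptic curve. The curve y^2 = x^3 - x + 1 below is an arbitrary example chosen for illustration, not one taken from the papers discussed here:

    def count_clock_solutions(a, b, p):
        """Count pairs (x, y) with y^2 = x^3 + a*x + b in clock arithmetic mod p."""
        return sum(1 for x in range(p) for y in range(p)
                   if (y * y - (x ** 3 + a * x + b)) % p == 0)

    # The sequence of counts over prime-sized clocks is the data that the
    # Langlands correspondence matches against an automorphic form's "energy levels".
    for p in [2, 3, 5, 7, 11, 13]:
        print(p, count_clock_solutions(a=-1, b=1, p=p))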

    This connection is “weirder than telepathy,” Emerton said. “How these two sides communicate with each other … for me it seems incredible and amazing, even though I have been studying it for over 20 years.”

    In the 1950s and 1960s, mathematicians figured out the beginnings of this bridge in one direction: how to go from certain automorphic forms to elliptic curves with coefficients that are rational numbers (ratios of whole numbers). Then in the 1990s, Wiles, with contributions from Taylor, figured out the opposite direction for a certain family of elliptic curves. Their result gave an instant proof of Fermat’s Last Theorem, since mathematicians had already shown that if Fermat’s Last Theorem were false, at least one of those elliptic curves would not have a matching automorphic form.

    Fermat’s Last Theorem was far from the only discovery to emerge from the construction of this bridge. Mathematicians have used it, for instance, to prove the Sato-Tate conjecture, a decades-old problem about the statistical distribution of the number of clock solutions to an elliptic curve, as well as a conjecture about the energy levels of automorphic forms that originated with the legendary early 20th-century mathematician Srinivasa Ramanujan.

After Wiles and Taylor published their findings, it became clear that their method still had some juice. Soon mathematicians figured out how to extend the method to all elliptic curves with rational coefficients. More recently, mathematicians figured out how to cover coefficients that include simple irrational numbers, such as 3 + √2.

What they couldn’t do, however, was extend the Taylor-Wiles method to elliptic curves whose coefficients include complex numbers such as i (the square root of -1), 3 + i, or √2·i. Nor could they handle Diophantine equations with exponents much higher than those in elliptic curves. Equations where the highest exponent on the right-hand side is 4 instead of 3 come along for free with the Taylor-Wiles method, but as soon as the exponent rises to 5, the method no longer works.

    Mathematicians gradually realized that for these two next natural extensions of the Langlands bridge, it wasn’t simply a matter of finding some small adjustment to the Taylor-Wiles method. Instead, there seemed to be a fundamental obstruction.

    They’re “the next examples you’d think of,” Gee said. “But you’re told, ‘No, these things are hopelessly out of reach.’”

    The problem was that the Taylor-Wiles method finds the matching automorphic form for a Diophantine equation by successively approximating it with other automorphic forms. But in the situations where the equation’s coefficients include complex numbers or the exponent is 5 or higher, automorphic forms become exceedingly rare — so rare that a given automorphic form will usually have no nearby automorphic forms to use for approximation purposes.

    In Wiles’ setting, the automorphic form you’re seeking “is like a needle in a haystack, but the haystack exists,” Emerton said. “And it’s almost as if it’s like a haystack of iron filings, and you’re putting in this magnet so it lines them up to point to your needle.”

    But when it comes to complex-number coefficients or higher exponents, he said, “it’s like a needle in a vacuum.”

Going to the Moon

Many of today’s number theorists came of age in the era of Wiles’ proof. “It was the only piece of mathematics I ever saw on the front page of a newspaper,” recalled Gee, who was 13 at the time. “For many people, it’s something that seemed exciting, that they wanted to understand, and then they ended up working in this area because of that.”

    So when in 2012, two mathematicians — Frank Calegari of the University of Chicago and David Geraghty (now a research scientist at Facebook) — proposed a way to overcome the obstruction to extending the Taylor-Wiles method, their idea sent ripples of excitement through the new generation of number theorists.

    Their work showed that “this fundamental obstruction to going any further is not really an obstruction at all,” Gee said. Instead, he said, the seeming limitations of the Taylor-Wiles method are telling you “that in fact you’ve only got the shadow of the actual, more general method that [Calegari and Geraghty] introduced.”

    In the cases where the obstruction crops up, the automorphic forms live on higher-dimensional tilings than the two-dimensional Escher-style tilings Wiles studied. In these higher-dimensional worlds, automorphic forms are inconveniently rare. But on the plus side, higher-dimensional tilings often have a much richer structure than two-dimensional tilings do. Calegari and Geraghty’s insight was to tap into this rich structure to make up for the shortage of automorphic forms.

    More specifically, whenever you have an automorphic form, you can use its “coloring” of the tiling as a sort of measuring tool that can calculate the average color on any chunk of the tiling you choose. In the two-dimensional setting, automorphic forms are essentially the only such measuring tools available. But for higher-dimensional tilings, new measuring tools crop up called torsion classes, which assign to each chunk of the tiling not an average color but a number from a clock arithmetic. There’s an abundance of these torsion classes.

    For some Diophantine equations, Calegari and Geraghty proposed, it might be possible to find the matching automorphic form by approximating it not with other automorphic forms but with torsion classes. “The insight they had was fantastic,” Caraiani said.

    Calegari and Geraghty provided the blueprint for a much broader bridge from Diophantine equations to automorphic forms than the one Wiles and Taylor built. Yet their idea was far from a complete bridge. For it to work, mathematicians would first have to prove three major conjectures. It was, Calegari said, as if his paper with Geraghty explained how you could get to the moon — provided someone would obligingly whip up a spaceship, rocket fuel and spacesuits. The three conjectures “were completely beyond us,” Calegari said.

    In particular, Calegari and Geraghty’s method required that there already be a bridge going in the other direction, from automorphic forms to the Diophantine equations side. And that bridge would have to transport not just automorphic forms but also torsion classes. “I think a lot of people thought this was a hopeless problem when Calegari and Geraghty first outlined their program,” said Taylor, who is now at Stanford University.

Yet less than a year after Calegari and Geraghty posted their paper online, Peter Scholze — a mathematician at the University of Bonn who went on to win the Fields Medal, mathematics’ highest honor — astonished number theorists by figuring out how to go from torsion classes to the Diophantine equations side in the case of elliptic curves whose coefficients are simple complex numbers such as 3 + 2i or 4 - √5·i. “He’s done a lot of exciting things, but that’s perhaps his most exciting achievement,” Taylor said.

    Scholze had proved the first of Calegari and Geraghty’s three conjectures. And a pair of subsequent papers by Scholze and Caraiani came close to proving the second conjecture, which involves showing that Scholze’s bridge has the right properties.

    It started to feel as if the program was within reach, so in the fall of 2016, to try to make further progress, Caraiani and Taylor organized what Calegari called a “secret” workshop at the Institute for Advanced Study. “We took over the lecture room — no one else was allowed in,” Calegari said.

    After a couple of days of expository talks, the workshop participants started realizing how to both polish off the second conjecture and sidestep the third conjecture. “Maybe within a day of having actually stated all the problems, they were all solved,” said Gee, another participant.

    The participants spent the rest of the week elaborating various aspects of the proof, and over the next two years they wrote up their findings into a 10-author paper — an almost unheard-of number of authors for a number theory paper. Their paper essentially establishes the Langlands bridge for elliptic curves with coefficients drawn from any number system made up of rational numbers plus simple irrational and complex numbers.

    “The plan in advance [of the workshop] was just to see how close one could get to proving things,” Gee said. “I don’t think anyone really expected to prove the result.”

Extending the Bridge

Meanwhile, a parallel story was unfolding for extending the bridge beyond elliptic curves. Calegari and Gee had been working with George Boxer (now at the École Normale Supérieure in Lyon, France) to tackle the case where the highest exponent in the Diophantine equation is 5 or 6 (instead of 3 or 4, the cases that were already known). But the three mathematicians were stuck on a key part of their argument.

    Then, the very weekend after the “secret” workshop, Vincent Pilloni of the École Normale Supérieure put out a paper that showed how to circumvent that very obstacle. “We have to stop what we’re doing now and work with Pilloni!” the other three researchers immediately told each other, according to Calegari.

    Within a few weeks, the four mathematicians had solved this problem too, though it took a couple of years and nearly 300 pages for them to fully flesh out their ideas. Their paper and the 10-author paper were both posted online in late December 2018, within four days of each other.





    Soon after the secret workshop at the IAS, Frank Calegari (left), Toby Gee (center) and Vincent Pilloni, working with George Boxer (not pictured), found a way to extend the Langlands bridge beyond elliptic curves.

    Frank Calegari, University of Chicago; Courtesy of Toby Gee; Arnold Nipoli

    “I think they’re pretty huge,” Emerton said of the two papers. Those papers and the preceding building blocks are all “state of the art,” he said.

    While these two papers essentially prove that the mysterious telepathy between Diophantine equations and automorphic forms carries over to these new settings, there’s one caveat: They don’t quite build a perfect bridge between the two sides. Instead, both papers establish “potential automorphy.” This means that each Diophantine equation has a matching automorphic form, but we don’t know for sure that the automorphic form lives in the patch of its continent that mathematicians would expect. But potential automorphy is enough for many applications — for instance, the Sato-Tate conjecture about the statistics of clock solutions to Diophantine equations, which the 10-author paper succeeded in proving in much broader contexts than before.

    And mathematicians are already starting to figure out how to improve on these potential automorphy results. In October, for instance, three mathematicians — Patrick Allen of the University of Illinois, Urbana-Champaign, Chandrashekhar Khare of the University of California, Los Angeles and Jack Thorne of the University of Cambridge — proved that a substantial proportion of the elliptic curves studied in the 10-author paper do have bridges that land in exactly the right place.

Bridges with this higher level of precision may eventually allow mathematicians to prove a host of new theorems, including a century-old generalization of Fermat’s Last Theorem. This conjecture holds that the equation at the heart of the theorem continues to have no solutions even when x, y and z are drawn not just from whole numbers but from combinations of whole numbers and the imaginary number i.

    The two papers carrying out the Calegari-Geraghty program form an important proof of principle, said Michael Harris of Columbia University. They’re “a demonstration that the method does have wide scope,” he said.

    While the new papers connect much wider regions of the two Langlands continents than before, they still leave vast territories uncharted. On the Diophantine equations side, there are still all the equations with exponents higher than 6, as well as equations with more than two variables. On the other side are automorphic forms that live on more complicated symmetric spaces than the ones that have been studied so far.

    “These papers, right now, are kind of the pinnacle of achievement,” Emerton said. But “at some point, they will just be looked back at as one more step on the way.”

    Langlands himself never considered torsion when he thought about automorphic forms, so one challenge for mathematicians is to come up with a unifying vision of these different threads. “The envelope is being expanded,” Taylor said. “We’ve to some degree left the path laid out by Langlands, and we don’t quite know where we’re going.”

