
Technology Stocks : New Technology


From: FJB 2/28/2019 8:33:12 AM
   of 421
 
Chip Roadmap Looks Dark, Bumpy

eetimes.com

SAN JOSE, Calif. — The semiconductor roadmap could extend a decade to a 1-nm node or it could falter before the 3-nm node for lack of new resist chemistries. Those were some of the hopes and fears that engineers expressed at an evening panel session at an annual lithography conference here.

The session was intended as a lighthearted send-up of the long-predicted death of Moore’s Law. It also showed the disturbing uncertainties that are natural outgrowths of the many challenges perpetually appearing on the path to next-generation chips.

Today, Samsung has started production of 7-nm devices using extreme ultraviolet lithography. TSMC expects to ramp a 7+ nm node using EUV by June. ASML aims to serve both with a 2019 upgrade of its EUV system, the 3400C, promising throughput of 170 wafers/hour and more than 90% availability.
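A rough back-of-envelope Python sketch of what those quoted tool figures imply, assuming round-the-clock operation and treating the availability number as a simple duty-cycle factor (both assumptions ours, not EE Times'):

# Back-of-envelope wafer throughput for the quoted 3400C figures.
# Assumptions (ours): 24 h/day operation, availability applied as a duty cycle.
wafers_per_hour = 170
availability = 0.90

wafers_per_day = wafers_per_hour * 24 * availability
print(f"~{wafers_per_day:,.0f} wafers/day, ~{wafers_per_day * 30:,.0f} wafers/month per tool")
# -> ~3,672 wafers/day, ~110,160 wafers/month per tool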

One of the next big challenges is brewing more sensitive resist materials for the 3-nm node. Today’s chemically amplified resists (CARs) “are OK for the current and maybe next generation, but we’d like new platforms,” said Tony Yen, a vice president at ASML.

Yen pointed to the long history of CARs dating back to the 1980s and 248-nm lithography. “It’s about time we put more emphasis in new platforms like molecular resists,” Yen said.

With a total market for the crucial chemicals valued at less than a billion dollars a year, “the model needs to change,” he added. “Development could be done in a pre-competitive place and then licensed to commercial resist vendors.”

Ryan Callahan from resist maker FujiFilm disagreed. “There is great competition to secure the business because those who are first will succeed and others will be gone … [but with the] market getting smaller as some [such as GlobalFoundries] abandon EUV, resist suppliers won’t do consortia for developing together,” he said.


ASML plans to release an upgrade of its current EUV system this year. (Source: ASML)
In an effort to jumpstart work on resists for next-generation EUV systems, imec and laser specialist KMLabs announced that they will form a so-called AttoLab. It will try to characterize how resists absorb and ionize photons in time frames measured in pico- and attoseconds.

“We will learn how to see the fine detail of radiation chemistry, working with suppliers to find new materials to take us to the next level … We will also look at quantum phenomena … it is pure science, but new technologies may come from this work,” said John Petersen, a principal scientist at imec who co-authored papers describing the new lab.

The resists are one way to reduce random errors known as stochastics, an old problem but one raising its head aggressively as engineers push toward the 5-nm node. Yen was bullish that ASML will deal with the defects that threaten yields.

“Stochastics are more severe now than they were with 193-nm lithography, but they can be countered by higher [light] doses,” Yen said. “Our roadmap goes to 500-W systems, so we are going up in power, and High NA systems will deliver a better image quality, so we are well-prepared to combat stochastics.”
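Yen's dose argument rests on photon shot noise: the number of EUV photons absorbed in a given feature is roughly Poissonian, so its relative fluctuation falls as 1/sqrt(N) as the dose rises. A minimal Python illustration of that scaling (the photon counts below are arbitrary, not ASML figures):

import numpy as np

# Photon absorption per feature is roughly Poissonian, so the relative
# fluctuation scales as 1/sqrt(N): quadrupling the dose halves the noise.
rng = np.random.default_rng(0)

def relative_noise(mean_photons, n_features=100_000):
    counts = rng.poisson(mean_photons, size=n_features)
    return counts.std() / counts.mean()

for mean_photons in (10, 40, 160):
    print(f"mean photons {mean_photons:4d} -> relative noise {relative_noise(mean_photons):.3f}")
# Roughly 0.316, 0.158, 0.079 -- the 1/sqrt(N) trend behind pushing
# sources toward higher power.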

Phillipe Leray, a metrology specialist at imec, was less optimistic. “We have to tackle the defect challenge in the near future,” he said. “Time is running out, and I don’t see any solution around the corner.”



From: FJB 3/6/2019 6:35:29 PM
   of 421
 
IBM announces that its System Q One quantum computer has reached its 'highest quantum volume to date'
phys.org

March 5, 2019 by Bob Yirka, Phys.org report


Credit: IBM

IBM has announced at this year's American Physical Society meeting that its System Q One quantum computer has reached its "highest quantum volume to date"—a measure showing that the machine has doubled in performance in each of the past two years, the company reports.

Quantum computers are, as their name implies, computers based on quantum bits. Many physicists and computer scientists believe they will soon outperform traditional computers. Unfortunately, reaching that goal has proven to be a difficult challenge. Several big-name companies have built quantum computers, but none are ready to compete with traditional hardware just yet. These companies have, over time, come to use the number of qubits that a given quantum computer uses as a means of measuring its performance—but most in the field agree that such a number is not really a good way to compare two very different quantum computers.

IBM is one of the big-name companies working to create a truly useful quantum computer and, as part of that effort, has built models that it sells or leases to other companies looking to jump on the quantum bandwagon as soon as the technology becomes viable. In its announcement, IBM focused specifically on the term "quantum volume"—a metric that has not previously been used in the quantum computing field. IBM claims that it is a better measure of true performance and is using it to show that the performance of the company's System Q One quantum computer has been following Moore's Law.



Credit: IBM

As part of its announcement, IBM published on its corporate blog an overview of the results of testing several models of its System Q One machine. Notable among those results was "quantum volume," a metric created by a team at IBM and described as accounting for "gate and measurement errors as well as device cross talk and connectivity, and circuit software compiler efficiency." The team that created the metric uploaded a paper describing how it is calculated to the arXiv preprint server last November. In that paper, they noted that the new metric "quantifies the largest random circuit of equal width and depth that the computer successfully implements," and pointed out that it is also strongly tied to error rates.
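For concreteness, here is a simplified Python sketch (not IBM's code) of how a quantum volume number is read off the benchmark: log2(QV) is the largest width n at which random circuits of equal width and depth still pass the heavy-output test, and the pass/fail function below is a hypothetical stand-in for that whole statistical procedure:

# log2(QV) is the largest n for which random model circuits of width n and
# depth n pass the heavy-output test (success probability > 2/3 with
# confidence). 'passes_heavy_output_test' is a hypothetical stand-in for
# the full benchmarking protocol described in the arXiv paper.

def quantum_volume(passes_heavy_output_test, max_width=16):
    """Return 2**n for the largest passing width n (1 if none pass)."""
    best = 0
    for n in range(1, max_width + 1):
        if passes_heavy_output_test(n):
            best = n
        else:
            break  # the protocol stops at the first width that fails
    return 2 ** best

# Toy example: a device that passes the test up to width 4 has QV = 16.
print(quantum_volume(lambda n: n <= 4))  # -> 16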




More information: www.ibm.com/blogs/research/201 … ower-quantum-device/



From: FJB 3/8/2019 1:50:40 PM
1 Recommendation   of 421
 
THE BUGATTI 'LA VOITURE NOIRE' IS THE WORLD'S MOST EXPENSIVE NEW CAR
The Batmobile-like hypercar just sold for an astounding $18.9 million.
BRANDON FRIEDERICH | MAR 5, 2019





(Image: Bugatti)

Bugatti just blew the minds of gearheads and luxury aficionados everywhere by unveiling the world's most expensive new car at the 2019 Geneva Motor Show.



The Bugatti "La Voiture Noire"—French for "the black car"—is a one-off hypercar created to commemorate the 110th anniversary of the French marque's founding.

A Bugatti enthusiast picked it up for a truly astounding $18.9 million, according to Business Insider.

That's nearly $6 million more than the Rolls-Royce "Sweptail" sold for when it set the previous record back in 2017.



And if we're being honest, La Voiture Noire looks way cooler than the Rolls. Basing the design on Jean Bugatti's Type 57 SC Atlantic, engineers aimed to sculpt an exterior that's "all of a piece" by integrating the bumpers into the body and creating a uniform windshield that flows into the side windows.



“Every single component has been handcrafted and the carbon fiber body has a deep black gloss only interrupted by the ultra-fine fiber structure," said Bugatti designer Etienne Salome.

“We worked long and hard on this design until there was nothing left that we could improve. For us, the coupe represents the perfect form with a perfect finish."



For that astronomical price tag, the anonymous buyer also got the same ludicrous, 1,500-horsepower W16 that powers the 236-mph Divo and the 260-mph Chiron, along with six freakin' tailpipes.



Enjoy viewing it now, because you'll almost certainly never see the Bugatti La Voiture Noire in real life.



From: FJB 6/11/2019 8:29:58 PM
   of 421
 
Cray, AMD to Extend DOE's Exascale Frontier
By Tiffany Trader
www.hpcwire.com

Cray and AMD are coming back to Oak Ridge National Laboratory to partner on the world’s largest and most expensive supercomputer. The Department of Energy’s Oak Ridge National Laboratory has selected American HPC company Cray–and its technology partner AMD–to provide the lab with its first exascale supercomputer for 2021 deployment.

The $600 million award marks the first system announcement to come out of the second CORAL (Collaboration of Oak Ridge, Argonne and Livermore) procurement process (CORAL-2). Poised to deliver “greater than 1.5 exaflops of HPC and AI processing performance,” Frontier (ORNL-5) will be based on Cray’s new Shasta architecture and Slingshot interconnect and will feature future-generation AMD Epyc CPUs and Radeon Instinct GPUs.

In a media briefing ahead of today’s announcement at Oak Ridge, the partners revealed that Frontier will span more than 100 Shasta supercomputer cabinets, each supporting 300 kilowatts of computing. Single-socket nodes will consist of one CPU and four GPUs, connected by AMD’s custom high bandwidth, low latency coherent Infinity fabric.

Oak Ridge Director Thomas Zacharia indicated that 40 MW of power, the maximum power draw set out in the CORAL-2 RFP, would be available for Frontier.
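Those figures are self-consistent on the back of an envelope; a quick Python check, taking 100 cabinets as a lower bound (our rounding of the briefing's "more than 100"):

# Rough power check. Assumption (ours): exactly 100 cabinets as a lower bound.
cabinets = 100
kw_per_cabinet = 300
facility_cap_mw = 40

compute_load_mw = cabinets * kw_per_cabinet / 1_000
print(f"compute load >= {compute_load_mw:.0f} MW of the {facility_cap_mw} MW cap")
# -> compute load >= 30 MW of the 40 MW cap, leaving headroom for cooling,
#    storage, and cabinets beyond the first hundred.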

“Cray’s Slingshot system interconnect ties together this massive supercomputer and a new system software stack fuses the best of high performance computing and cloud capabilities,” said Cray CEO Pete Ungaro. “We worked together with AMD to design a new high density heterogeneous computing blade for Shasta and new programming environment for this new CPU-GPU node.”

Frontier will use a custom AMD Epyc processor based on a future generation of AMD’s Zen cores (beyond Rome and Milan). “[The future-gen Epycs] will have additional instructions in the microarchitecture as well as in the architecture itself for both optimization of AI as well as supercomputing workloads,” said AMD CEO Lisa Su, adding that the new Radeon Instinct GPU incorporates “extensive optimization for the AI and the computing performance, [with] mixed-precision operations for optimum deep learning performance, and high bandwidth memory for the best latency.”

The CPU and GPUs will be linked by AMD’s new coherent Infinity fabric and each GPU will be able to talk directly to the Slingshot network, enabling each node “to get the optimum performance for both supercomputing as well as AI,” said Su. All these components were designed for Frontier but will be available to enterprise applications after the system debuts, according to AMD.

Frontier marks a return for Cray and AMD to Oak Ridge, home to another Cray-AMD system, Titan. Benchmarked at 17.6 Linpack petaflops, Titan was the number one system in the world when it debuted (as an upgrade to Jaguar) in 2012. With Titan set to be decommissioned on August 1, 2019, and Frontier scheduled to be deployed in the back half of 2021 and accepted in 2022, Oak Ridge won’t be without a Cray-AMD machine for too long. While Titan used AMD (Opteron) CPUs and Nvidia (K20X) GPUs, Frontier will rely on AMD for all its in-node processing elements.

Frontier is Oak Ridge’s third machine to use a heterogeneous design. In addition to the aforementioned Titan, Oak Ridge is of course home to Summit, which became the world’s fastest supercomputer in June 2018. Its 143.5 GPU-accelerated Linpack petaflops are owed to 9,216 Power9 22-core CPUs and 27,648 Nvidia Tesla V100 GPUs.
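As a rough sanity check on those Summit numbers, here is a crude Python estimate of the per-GPU Linpack contribution; it ignores the CPUs' share and assumes the V100's nominal ~7.8 teraflops FP64 peak, neither of which comes from the article:

# Crude per-GPU estimate for Summit's Linpack run.
# Assumptions (ours): CPUs contribute nothing; ~7.8 TF FP64 peak per V100.
linpack_pf = 143.5          # petaflops
gpus = 27_648
v100_peak_tf = 7.8          # assumed FP64 peak per GPU, teraflops

per_gpu_tf = linpack_pf * 1_000 / gpus
print(f"~{per_gpu_tf:.1f} TF per GPU, ~{per_gpu_tf / v100_peak_tf:.0%} of assumed peak")
# -> ~5.2 TF per GPU, roughly two-thirds of the assumed peak under these
#    simplifications.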

“Since Titan, Oak Ridge has pioneered this idea of having GPU accelerators along with CPUs,” said Zacharia. “Frontier will be the third generation of supercomputing system built around this architecture and it will be the second generation AI machine.”

Frontier will be used for future application simulations for quantum computers, nuclear energy systems, fusion reactors, and precision medicines, said Zacharia, adding “Frontier finally gets us to the point where we can actually design new materials.”

“We are approaching a revolution in how we can design and analyze materials,” said Tom Evans, Oak Ridge National Laboratory technical lead for the Energy Applications Focus Area, Exascale Computing Project. “We can look and carefully characterize the electronic structure of fairly simple atoms and very simple molecules right now. But with exascale computing on Frontier, we’re trying to stretch that to molecules that consist of thousands of atoms. The more we understand about the electronic structure, the more we’re able to actually manufacture and use exotic materials for things like very small, high tensile strength materials and buildings to make them more energy efficient. At the end of the day, everything in some sense comes down to materials.”

AMD’s Forrest Norrod and Cray’s Pete Ungaro on stage at AMD’s Next Horizon event in November 2018.

In terms of number-one system bragging rights, the DOE has previously stated, and recently confirmed, that Aurora (aka Aurora21, the revised CORAL-1 system that Intel is contracted to deliver to Argonne) is on track to be the United States’, and possibly the world’s, first exascale system in 2021; and since that messaging has not changed, we believe it is the intention of the DOE to deliver on that goal. However, even if it is the case that Intel keeps to its timeline and Aurora is deployed and benchmarked first, Frontier is slated to be stood up on a very similar timeline and according to publicly stated performance goals will provide roughly 50 percent more flops capability.

Asked to comment on the “competitive” timelines for Frontier and Aurora, Zacharia said he could only comment on Frontier.

“I don’t know all the details of Aurora procurement because that information has not been publicly released, but we do know that Frontier will be the largest system by far that the DOE has procured,” he said.

“We know that Oak Ridge has experience with Summit and Titan previously in using CPU-GPU systems. We also know that the pre-exascale system that the scientific community is using today to develop all their applications and system software is on our system Summit, which is the largest machine available to anybody…. If there is any competition between the labs, it’s just competition for ideas, which is what scientists should do, but otherwise this is truly a DOE lab system effort to ensure the United States maintains the forefront of this important technology, not only because it drives technology innovation in the IT computing space but it also drives economic competition and creates jobs.”

Zacharia further cited that the goals for Frontier are aligned and consistent with the White House AI initiative as well as the National Council on American Workers, which is creating new jobs using AI and scientific computing in manufacturing and other spaces.

As for that $600-million-plus price tag, it is “by far the most expensive single machine that [the DOE has] ever procured,” said Zacharia. It’s also Cray’s largest contract ever.

The total amount includes the system build contract for “over $500 million,” as well as the development contract for “over $100 million” that will, according to Ungaro, be used to develop some of the core technologies for the machine, as well as a new programming environment that will enhance GPU programmability via extensions for Radeon Open Compute Platform (ROCm).

“The Cray Programming Environment (Cray PE)…will see a number of enhancements for increased functionality and scale,” said Cray. “This will start with Cray working with AMD to enhance these tools for optimized GPU scaling with extensions for Radeon Open Compute Platform (ROCm). These software enhancements will leverage low-level integrations of AMD ROCmRDMA technology with Cray Slingshot to enable direct communication between the Slingshot NIC to read and write data directly to GPU memory for higher application performance.”

To support the converged use of analytics, AI, and HPC at extreme scale, “Cray PE will be integrated with a full machine learning software stack with support for the most popular tools and frameworks.”

Shasta cabinet detail

Frontier marks Cray’s third major contract award for the Shasta architecture and Slingshot interconnect. Previous awards were for the National Energy Research Scientific Computing Center’s NERSC-9 pre-exascale Perlmutter system (with partners AMD and Nvidia) and the Argonne National Laboratory’s Aurora exascale system (with Intel as the prime).

Frontier is the first CORAL-2 award, announced nearly 13 months after the RFP was released. As laid out in the program’s RFP, CORAL-2 seeks to fund up to three exascale-class systems: Frontier at Oak Ridge, El Capitan at Livermore and a potential third system at Argonne if the lab chooses to make an award under the RFP and if funding is available. Like the original CORAL program, which kicked off in 2012, CORAL-2 has a mandate to field architecturally diverse machines in a way that manages risk during a period of rapid technological evolution. The stipulation indicates that “the systems residing at or planned to reside at ORNL and ANL must be diverse from one another”; however, the program allows Oak Ridge and Livermore labs to employ the same architecture if they choose to do so, as in the case of Summit and Sierra, which employ very similar IBM-Nvidia architectures.

The CORAL-2 effort is part of the U.S. Exascale Computing Initiative. The ECI has two components: one is the hardware delivery and the other is application readiness. The latter is the domain of the Exascale Computing Project (see HPCwire’s recent coverage to read about the latest progress), which is investing $1.7 billion to ensure there’s an exascale-ready software ecosystem to get the most from exascale hardware when it arrives.

“ECP Software Technology is excited to be a part of preparing the software stack for Frontier,” said Sandia’s Mike Heroux, director of software technology for the Exascale Computing Project. “We are already on our way, using Summit and Sierra as launching pads. Working with [Oak Ridge Leadership Computing Facility], Cray, and AMD, we look forward to providing the programming environments and tools, and math, data and visualization libraries that will unlock the potential of Frontier for producing the countless scientific achievements we expect from such a powerful system. We are privileged to be part of the effort.”

ORNL’s Center for Accelerated Application Readiness is accepting proposals from scientists to prepare their codes to run on Frontier. Check with the Frontier website for additional information.



From: FJB 7/21/2019 12:22:38 PM
   of 421
 
Google expected to achieve quantum supremacy in 2019: Here’s what that means
Tristan Greene
thenextweb.com

Google‘s reportedly on the verge of demonstrating a quantum computer capable of feats no ordinary classical computer could perform. The term for this is quantum supremacy, and experts believe the Mountain View company could be mere months from achieving it. This may be the biggest scientific breakthrough for humanity since we figured out how to harness the power of fire. Here’s what you need to know before it happens.

Functional quantum computers currently exist – IBM, D-Wave, Google, Microsoft, Rigetti, and dozens of other companies and universities are working tirelessly to develop them – but none of them actually do anything that we can't already do with a regular, old-fashioned computer yet. They're proofs of concept. The big news right now has to do with a new "rule" called Neven's Law. It is named after one of Google's quantum gurus, Hartmut Neven, who stated that quantum computing technology is currently snowballing at a double-exponential rate. We'll get to that later. First let's talk about what quantum supremacy would actually mean for you and me.

For a basic primer on quantum computers, click here.

Why you should care


Experts predict the advent of quantum supremacy – useful quantum computers – will herald revolutionary advances in nearly every scientific field. We’re talking breakthroughs in chemistry, astrophysics, medicine, security, communications and more. It may sound like a lot of hype, but these are the grounded predictions. Others think quantum computers will help scientists unlock some of the greater mysteries of the cosmos such as how the universe came to be and whether life exists outside of our own planet.

But quantum computing is an edge technology: there’s no blueprint for wrangling subatomic particles into performing computations. Some folks believe quantum computers will never stack up to modern supercomputers. While this is a minority view, there is a valid point to be gleaned from it: quantum computers will never replace classical ones. And they’re not meant to.

You can’t replace your iPhone or PC with a quantum computer any more than you can replace your tennis shoes with a nuclear aircraft carrier. The two things are designed to do different things, despite the fact they’re both related to transportation in some way.

Classical computers allow you to play games, check your emails, surf the web, and run programs. Quantum computers will, for the most part, perform simulations too complex for binary systems that run on computer bits. In other words, individual consumers will have almost no use for a quantum computer of their own, but NASA and MIT, for example, absolutely will.

What’s Google actually doing?

While quantum supremacy would be a giant breakthrough, let's not get ahead of ourselves: The world probably isn't going to catapult into some sort of far-future scientific utopia just because Google shows off a quantum system that can do things impossible for a binary computer. The reason experts in the field are excited right now is Neven's Law – something that's not really a law at all.

Neven's Law is currently more of an affectionate term for a rule coined by Google's Hartmut Neven. At the company's Spring Quantum Symposium this May, Neven made the claim that quantum systems are increasing in performance at a doubly-exponential rate. This means that, rather than doubling in performance with successive iterations as was the case with classical computers and Moore's Law, quantum technology is increasing in performance at a much more dramatic rate. It took 50 years to go from punch-card systems to iPhones: if Neven's Law holds, we'll see quantum systems make comparable gains in a fraction of that time.
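To see why double-exponential growth feels so abrupt, here is a toy Python comparison of ordinary doubling against a doubly-exponential curve; the generation count and starting values are arbitrary, purely illustrative:

# Single- vs double-exponential growth; units and starting points are
# arbitrary, chosen only to show the shape of the two curves.
for g in range(8):
    exponential = 2 ** g              # Moore's-Law-style doubling
    double_exponential = 2 ** (2 ** g)
    print(f"gen {g}: {exponential:>4}  vs  {double_exponential}")
# By generation 5 the double-exponential column is already 2**32 (~4.3e9):
# "nothing is happening... and then whoops, suddenly you're in a different world."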


Most of this improvement can be attributed to amazing new feats in error-correction – filtering out noise in quantum systems is among the biggest challenges faced by physicists. Some of the improvement has to do with the simple fact that a rising tide lifts all ships. Google‘s invested as much time, money, and personnel as any other organization involved in quantum technology (if not more). If quantum supremacy is possible, Google‘s as likely a candidate to achieve it as any other company.

Well, except IBM, but only because it takes a somewhat different view on the subject. Arguably, IBM is at the forefront of quantum technology. And there's no reason to believe it won't reach Google's definition of quantum supremacy soon as well, but its leadership has been reticent to talk about goalposts like quantum supremacy.

TNW reached out to IBM to get its take. Here's what Dr. Jay Gambetta, IBM Fellow and Global Lead of Quantum Computing, Theory, Software, IBM Q, told us:

Supremacy isn’t something you shoot for. As has been proven, it’s a moving target, and something we’ll recognize once we’ve moved to bigger things – namely demonstrating a significant performance advantage over what classical computers can do, alone. This means developing a quantum computation that’s either hundreds or thousands of times faster than a classical computation, or needs a smaller fraction of the memory required by a classical computer, or makes something possible that simply isn’t possible now with a classical computer.

We must also measure that progress beyond simple qubit counts or just coherence times. Which is why we developed quantum volume, a full-system performance metric that accounts for gate and measurement errors as well as device cross talk and connectivity, and circuit software compiler efficiency. It's an agnostic metric that others, including Rigetti, have benchmarked their systems against. You can read more about the Quantum Volume of our systems, and how it's calculated, here.

What’s next

Time will tell whether IBM or Google's approach makes more sense, but according to Neven's Law we're mere months away from seeing a full-fledged demonstration of quantum supremacy from one team or another. Quanta Magazine reports that Google's had to crib computational power from systems outside of its quantum labs just to keep up with the hand-over-fist improvements in performance. Neven told Quanta's Kevin Hartnett:

Somewhere in February I had to make calls to say, ‘Hey, we need more quota.’ We were running jobs comprised of a million processors.

He went on to explain that, with double-exponential growth, the proof isn’t always front-and-center at first:

It looks like nothing is happening, nothing is happening, and then whoops, suddenly you’re in a different world. That’s what we’re experiencing here.

We've been talking about quantum computing for years, but this is the first time quantum supremacy's been dangled in front of our faces as a near-term eventuality. Of course, with any bold claim, it's prudent to maintain a modicum of cynicism. But Neven's Law tells us our quantum dreams could come true before the end of the year.

All we want for Christmas is an entirely new computing paradigm capable of making classical computers look like punch-card systems. Oh, and a drone with a flamethrower attachment — but that’s unrelated.



From: FJB 9/14/2019 6:26:28 PM
   of 421
 
Abstract
This Letter proposes a realistic implementation of the curved relativistic mirror concept to reach unprecedented light intensities in experiments. The scheme is based on relativistic plasma mirrors that are optically curved by laser radiation pressure. Its validity is supported by cutting-edge three-dimensional particle-in-cell simulations and a theoretical model, which show that intensities above 10^25 W cm^-2 could be reached with a 3 petawatt (PW) laser. Its very high robustness to laser and plasma imperfections is shown to surpass all previous schemes and should enable its implementation on existing PW laser facilities.
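For a sense of scale, a rough order-of-magnitude Python estimate of what direct focusing alone would deliver; the ~1 µm focal-spot radius is our assumption, not a parameter from the paper, and the point is only that an extra focusing stage such as a curved plasma mirror is needed to close the gap:

import math

# 3 PW focused to a ~1 micron radius spot (spot size is our assumption).
power_w = 3e15
spot_radius_cm = 1e-4
intensity = power_w / (math.pi * spot_radius_cm ** 2)

print(f"direct-focus intensity ~ {intensity:.1e} W/cm^2")        # ~1e23 W/cm^2
print(f"factor still needed for 1e25 W/cm^2: ~{1e25 / intensity:.0f}x")
# The curved relativistic plasma mirror supplies that extra concentration
# of the reflected light in the reported simulations.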


  • Revised 21 March 2019
  • Received 28 November 2018
  • DOI: https://doi.org/10.1103/PhysRevLett.123.105001

    © 2019 American Physical Society




From: DinoNavarre 9/18/2019 10:44:19 PM
1 Recommendation   of 421

I like Israeli tech... Anybody familiar with this outfit / technology / competitors?

Opinions?

Audio Pixels Limited

Investor Video Presentation

Chart

Digital Speaker Development Update

ASX Quote



From: DinoNavarre 9/19/2019 12:46:13 PM
   of 421

Are there any more listed outfits like this? Thanks for any help!

HUT.V



To: DinoNavarre who wrote (376) 9/19/2019 5:39:11 PM
From: FJB
   of 421

There are many, many companies like that in China. You should ask about miners here ...
Subject 59919



To: FJB who wrote (377) 9/19/2019 6:54:07 PM
From: DinoNavarre
   of 421

Will do... Thanks.
