From: FJB | 6/15/2020 6:09:28 AM
Aging Problems At 5nm And Below
semiengineering.com
The mechanisms that cause aging in semiconductors have been known for a long time, but the concept did not concern most people because the expected lifetime of parts was far longer than their intended deployment in the field. In a short period of time, all of that has changed.
As device geometries have become smaller, the issue has become more significant. At 5nm, it becomes an essential part of the development flow with tools and flows evolving rapidly as new problems are discovered, understood and modeled.
“We have seen it move from being a boutique technology, used by specific design groups, into something that’s much more of a regular part of the sign-off process,” says Art Schaldenbrand, senior product manager at Cadence. “As we go down into these more advanced nodes, the number of issues you have to deal with increases. At half micron you might only have to worry about hot carrier injection (HCI) if you’re doing something like a power chip. As you go down below 180nm you start seeing things like negative-bias temperature instability (NBTI). Further down you get into other phenomena like self heating, which becomes a significant reliability problem.”
The ways of dealing with it in the past are no longer viable. “Until recently, designers dealt with the aging problem very conservatively by overdesigning, leaving plenty of margin on the table,” says Ahmed Ramadan, senior product engineering manager for Mentor, a Siemens Business. “However, pushing designs to the limit is not only needed to achieve competitive advantage, it also is needed to fulfill new application requirements, given the diminishing benefits of transistor scaling. All of this calls for accurate aging analysis.”
While new phenomena are being discovered, the old ones continue to get worse. “The drivers of aging, such as temperature and electrical stress, have not really changed,” says André Lange, group manager for quality and reliability at Fraunhofer IIS’ Engineering of Adaptive Systems Division. “However, densely packed active devices with minimum safety margins are required to realize advanced functionality requirements. This makes them more susceptible to reliability issues caused by self-heating and increasing field strengths. Considering advanced packaging techniques with 2.5D and 3D integration, the drivers for reliability issues, especially temperature, will gain importance.”
Contributing factors
The biggest factor is heat. “Higher speeds tend to produce higher temperatures, and temperature is the biggest killer,” says Rita Horner, senior product marketing manager for 3D-IC at Synopsys. “Temperature exacerbates electromigration. The expected life can change exponentially from a tiny delta in temperature.”
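As a rough illustration of that exponential sensitivity, the sketch below applies the standard Arrhenius acceleration model to device lifetime. The activation energy and baseline lifetime are illustrative assumptions, not values from the article or any foundry.

```python
# Illustrative sketch of Arrhenius temperature acceleration of device lifetime.
# The activation energy and baseline lifetime are assumed values for demonstration
# only, not data from any foundry or from the article above.
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def acceleration_factor(t_use_c, t_stress_c, ea_ev=0.7):
    """Arrhenius acceleration factor between a use temperature and a hotter one."""
    t_use_k = t_use_c + 273.15
    t_stress_k = t_stress_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_use_k - 1.0 / t_stress_k))

# With an assumed 0.7 eV activation energy, running just 10 C hotter knocks
# roughly 40% off the expected life -- the "tiny delta, exponential change"
# effect described in the quote above.
life_at_105c_hours = 100_000.0          # assumed baseline lifetime at 105 C
af = acceleration_factor(105.0, 115.0)  # effect of running 10 C hotter
print(f"Acceleration factor for +10 C: {af:.2f}")
print(f"Estimated life at 115 C: {life_at_105c_hours / af:,.0f} hours")
```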
This became a much bigger concern with finFETs. “In a planar CMOS process, heat can escape through the bulk of the device into the substrate fairly easily,” says Cadence’s Schaldenbrand. “But when you stand the transistor on its side and wrap it in a blanket, which is effectively what the gate oxide and gate acts like, the channel experiences greater temperature rise, so the stress that a device is experiencing increases significantly.”
More and more electronics are being deployed in hostile environments. “Semiconductor chips that operate in extreme conditions, such as automotive (150°C) or high elevation (data servers in Mexico City), have the highest risk of reliability- and aging-related constraints,” says Milind Weling, senior vice president of programs and operations at Intermolecular. “2.5D and 3D designs could see additional mechanical stress on the underlying silicon chips, and this could induce additional mechanical-stress aging.”
Devices’ attributes get progressively worse. “Over time, the threshold voltage of a device degrades, which means it takes more time to turn the device on,” says Haran Thanikasalam, senior applications engineer for AMS at Synopsys. “One reason for this is negative-bias temperature instability. But as devices scale down, voltage scaling has been slower than geometry scaling. Today, we are reaching the limits of physics. Devices are operating somewhere around 0.6 to 0.7 volts at 3nm, compared to 1.2V at 40nm or 28nm. Because of this, the electric fields have increased. A large electric field across a very tiny device area can cause severe breakdown.”
This is new. “The way we capture this phenomenon is something called time-dependent dielectric breakdown (TDDB),” says Schaldenbrand. “You’re looking at how that field density causes devices to break down, and making sure the devices are not experiencing too much field density.”
The other primary cause of aging is electromigration (EM). “If you perform reliability simulation, like EM or IR drop simulation, not only do the devices degrade but you also have electromigration happening on the interconnects,” adds Thanikasalam. “You have to consider not only the devices, but also the interconnects between the devices.”
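Interconnect electromigration lifetime is conventionally estimated with Black's equation, which captures the current-density and temperature dependence behind this kind of sign-off (the exponent and activation energy are fitted per process; the typical values noted below are not from the article):

```latex
\mathrm{MTTF} \;=\; A \, J^{-n} \exp\!\left(\frac{E_a}{k_B T}\right)
```

Here J is the current density in the wire, n is an empirical exponent (often quoted near 2), E_a is the activation energy of the dominant diffusion mechanism, T is the local metal temperature, and A is a process-dependent constant. The exponential temperature term is why self-heating at advanced nodes makes EM analysis so much harder.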
Analog and digital
When it comes to aging, digital is a subset of analog. “In digital, you’re most worried about drive, because that changes the rise and fall delays,” says Schaldenbrand. “That covers a variety of sins. But analog is a lot more subtle, and gain is something you worry about. Just knowing that Vt changed by this much isn’t going to tell you how much your gain will degrade. That’s only one part of the equation.”
Fig 1: Failure of analog components over time. Source: Synopsys
Aging can be masked in digital. “Depending on the application, a system may just degrade, or it may fail, from the same amount of aging,” says Mentor’s Ramadan. “For example, microprocessor degradation may lead to lower performance, necessitating a slowdown, but not necessarily failures. In mission-critical AI applications, such as ADAS, sensor degradation may lead directly to AI failures and hence system failure.”
That simpler notion of degradation for digital often can be hidden. “A lot of this is captured at the cell characterization level,” adds Schaldenbrand. “So the system designer doesn’t worry about it much. If he runs the right libraries, the problem is covered for him.”
Duty cycle
To get an accurate picture of aging, you have to consider activity in the design, though often not in the expected manner. “Negative-bias temperature stability (NBTS) is affecting some devices,” says Synopsys’ Horner. “But the devices do not have to be actively running. Aging can be happening while the device is shut off.”
In the past, the analysis was done without simulation. “You can only get a certain amount of reliability data from doing static, vector-independent analysis,” says Synopsys’ Thanikasalam. “This analysis does not care about the stimuli you give to your system. It takes a broader look and identifies where the problems are happening without simulating the design. But that is proving to be a very inaccurate way of doing things, especially at smaller nodes, because everything is activity-dependent.”
That can be troublesome for IP blocks. “The problem is that if somebody is doing their own chip, their own software in their own device, they have all the information they need to know, down to the transistor level, what that duty cycle is,” says Kurt Shuler, vice president of marketing at Arteris IP. “But if you are creating a chip that other people will create software for, or if you’re providing a whole SDK and they’re modifying it, then you don’t really know. Those chip vendors have to provide to their customers some means to do that analysis.”
For some parts of the design, duty cycles can be estimated. “You never want to find a block level problem at the system level,” says Schaldenbrand. “People can do the analysis at the block level, and it’s fairly inexpensive to do there. For an analog block, such as an ADC or a SerDes or a PLL, you have a good idea of what its operation is going to be within the system. You know what kind of stresses it will experience. That is not true for a large digital design, where you might have several operating modes. That will change digital activity a lot.”
This is the fundamental reason why it has turned into a user issue. “It puts the onus on the user to make sure that you pick stimulus that will activate the parts of the design that you think are going to be more vulnerable to aging and electromigration, and you have to do that yourself,” says Thanikasalam. “This has created a big warning sign among the end users because the foundries won’t be able to provide you with stimulus. They have no clue what your design does.”
Monitoring and testing
The industry’s approaches are changing at multiple levels. “To properly assess aging in a chip, manufacturers have relied on a process called burn-in testing, where the wafer is cooked to artificially age it, after which it can be tested for reliability,” says Syed Alam, global semiconductor lead for Accenture. “Heat is the primary factor for aging in chips, with usage a close second, especially for flash, as there are only so many re-writes available on a drive.”
And this is still a technique that many rely on. “AEC-Q100, an important standard for automotive electronics, contains multiple tests that do not reveal true reliability information,” says Fraunhofer’s Lange. “For example, in high-temperature operating life (HTOL) testing, 3×77 devices have to be stressed for 1,000 hours, with functional tests before and after stress. Even when all devices pass, you cannot tell whether they will fail after 1,001 hours or whether they will last 10X longer. This information can only be obtained by extended testing or simulations.”
An emerging alternative is to build aging sensors into the chip. “There are sensors, which usually contain a timing loop, and they will warn you when it takes longer for the electrons to go around the loop,” says Arteris IP’s Shuler. “There is also a concept called canary cells, which are meant to die prematurely compared to a standard transistor. This can tell you that aging is impacting the chip. What you are trying to do is get predictive information that the chip is going to die. In some cases, they are taking the information from those sensors, getting it off chip, throwing it into a big database, and running AI algorithms to try to do predictive work.”
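A minimal sketch of the timing-loop idea described above, assuming a stressed ring oscillator whose frequency is compared against a lightly stressed reference; the guard band and frequencies are made-up numbers, not any vendor's sensor design:

```python
# Illustrative model of an on-chip aging monitor built from two ring oscillators:
# a "stressed" oscillator that ages with the functional logic and a reference
# oscillator that is mostly gated off. All numbers here are assumptions.
from dataclasses import dataclass

@dataclass
class AgingMonitor:
    ref_freq_mhz: float          # frequency of the lightly stressed reference oscillator
    guard_band_pct: float = 5.0  # slowdown that triggers a maintenance warning

    def check(self, stressed_freq_mhz: float) -> str:
        slowdown_pct = 100.0 * (self.ref_freq_mhz - stressed_freq_mhz) / self.ref_freq_mhz
        if slowdown_pct >= self.guard_band_pct:
            return f"WARN: {slowdown_pct:.1f}% slowdown, schedule replacement"
        return f"OK: {slowdown_pct:.1f}% slowdown"

monitor = AgingMonitor(ref_freq_mhz=500.0)
for reading in (499.0, 490.0, 470.0):  # telemetry samples over the product's life
    print(monitor.check(reading))
```

In a real deployment, readings like these would be streamed off-chip into the kind of database-plus-AI pipeline Shuler mentions.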
Additional 3D issues
Many of the same problems exist in 2D, 2.5D and 3D designs, except that thermal issues may be amplified in some architectures. There also may be a whole set of new issues that are not yet fully understood. “When you’re stacking devices on top of each other, you have to back-grind them to thin them,” says Horner. “The stresses on the thinner die could be a concern, and that needs to be understood and studied and addressed in terms of the analysis. In addition, various types of silicon age differently. You’re talking about a heterogeneous environment where you are potentially stacking DRAM, which tends to be more of a specific technology — or CPUs and GPUs, which may utilize different process nodes. You may have different types of TSVs or bumps that have been used in this particular silicon. How do they interact with each other?”
Those interfaces are a concern. “There is stress on the die, and that changes the device characteristics,” says Schaldenbrand. “But if different dies heat up to different temperatures, then the places where they interface are going to have a lot of mechanical stress. That’s a big problem, and system interconnect is going to be a big challenge going forward.”
Models and analysis
It all starts with the foundries. “The TSMCs and the Samsungs of the world have to start providing that information,” says Shuler. “As you get to 5nm and below, even 7nm, there is a lot of variability in these processes, and that makes everything worse.”
“The foundries worried about this because they realized the devices being subjected to higher electric fields were degrading much faster than before,” says Thanikasalam. “They started using the MOS reliability analysis (MOSRA) solution, which applies to the device-aging part of it. Recently, we see that shift toward the end customers, who are starting to use the aging models. Some customers will only do a simple run using degraded models, so that the simulation accounts for the degradation of the threshold voltage.”
High-volume chips will need much more extensive analysis. “For high-volume production, multi-PVT simulations are becoming a useless way of verifying this,” adds Thanikasalam. “Everybody has to run Monte Carlo at this level. Monte Carlo simulation with the variation models is the key at 5nm and below.”
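A toy version of that Monte Carlo flow is sketched below: sample local threshold-voltage variation, add an assumed end-of-life aging shift, and count samples that blow a timing budget. The alpha-power delay model and every number here are stand-ins; a production flow would use foundry statistical models and aged SPICE netlists.

```python
# Toy Monte Carlo combining process variation with an assumed aging shift.
# All values (Vt, sigma, aging shift, delay budget) are illustrative only.
import random

random.seed(0)

VDD = 0.7           # supply voltage (V), assumed
VT_NOM = 0.30       # nominal threshold voltage (V), assumed
VT_SIGMA = 0.02     # local variation sigma (V), assumed
AGING_SHIFT = 0.03  # assumed end-of-life NBTI/HCI threshold shift (V)
ALPHA = 1.3         # alpha-power-law exponent, assumed
DELAY_LIMIT = 1.35  # allowed delay relative to the nominal fresh delay

def gate_delay(vt):
    """Normalized alpha-power-law delay: proportional to VDD / (VDD - Vt)^alpha."""
    return VDD / (VDD - vt) ** ALPHA

nominal = gate_delay(VT_NOM)
trials = 100_000
fails = 0
for _ in range(trials):
    vt = random.gauss(VT_NOM, VT_SIGMA) + AGING_SHIFT  # fresh variation plus aging
    if gate_delay(vt) / nominal > DELAY_LIMIT:
        fails += 1

print(f"Estimated end-of-life timing-failure fraction: {fails / trials:.4%}")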
More models are needed. “There are more models being created and optimized,” says Horner. “In terms of 3D stacking, we have knowledge of the concern about electromigration, IR, thermal, and power. Those are the key things that are understood and modeled. For the mechanical aspects — even the materials that we put between the layers and their effect in terms of heat, and also the stability structures — while there are models out there, they are not as enhanced because we haven’t seen enough of these yet.”
Schaldenbrand agrees. “We are constantly working on the models and updating them, adding new phenomena as people become aware of them. There have been a lot of changes required to get ready for the advanced nodes. For the nominal device we can describe aging very well, but the interaction between process variation and its effect on reliability is still a research topic. That is a very challenging subject.”
With finFETs, the entire methodology changed. “The rules have become so complicated that you need a tool that can actually interpret the rules, apply the rules, and tell us where there could be problems two or three years down the line,” says Thanikasalam. “FinFETs can be multi-threshold devices, so when you have the entire gamut of threshold voltages being used in a single IP, we have so many problems, because every single device will go in a different direction.”
Conclusion
Still, progress is being made. “Recently, we have seen many foundries, IDMs, fabless and IP companies rushing to find a solution,” says Ramadan. “They cover a wide range of applications and technology processes. Whereas a standard aging model can be helpful as a starting point for new players, further customizations are expected depending on the target application and the technology process. The Compact Model Coalition (CMC), under the Silicon Integration Initiative (Si2), currently is working on developing a standard aging model to help the industry. In 2018, the CMC released the first standard Open Model Interface (OMI) that enables aging simulation for different circuit simulators using the unified standard OMI interface.”
That’s an important piece, but there is still a long road ahead. “Standardization activities within the CMC have started to solve some of these issues,” says Lange. “But there is quite a lot of work ahead in terms of model complexity, characterization effort, application scenario, and tool support.”
Related Stories
Circuit Aging Becoming A Critical Consideration: As reliability demands soar in automotive and other safety-related markets, tools vendors are focusing on an area often ignored in the past.
How Chips Age: Are current methodologies sufficient for ensuring that chips will function as expected throughout their expected lifetimes?
Different Ways To Improve Chip Reliability: Push toward zero defects requires more and different kinds of test in new places.
Taming NBTI To Improve Device Reliability: Negative-bias temperature instability can cause an array of problems at advanced nodes and reduced voltages.
From: FJB | 12/25/2020 10:08:54 PM
Fermilab and partners achieve sustained, high-fidelity quantum teleportation
December 15, 2020
news.fnal.gov
A viable quantum internet — a network in which information stored in qubits is shared over long distances through entanglement — would transform the fields of data storage, precision sensing and computing, ushering in a new era of communication.
This month, scientists at Fermilab, a U.S. Department of Energy Office of Science national laboratory, and their partners took a significant step in the direction of realizing a quantum internet.
In a paper published in PRX Quantum, the team presents for the first time a demonstration of a sustained, long-distance (44 kilometers of fiber) teleportation of qubits of photons (quanta of light) with fidelity greater than 90%. The qubits were teleported over a fiber-optic network using state-of-the-art single-photon detectors and off-the-shelf equipment.
“We’re thrilled by these results,” said Fermilab scientist Panagiotis Spentzouris, head of the Fermilab quantum science program and one of the paper’s co-authors. “This is a key achievement on the way to building a technology that will redefine how we conduct global communication.”
In a demonstration of high-fidelity quantum teleportation at the Fermilab Quantum Network, fiber-optic cables connect off-the-shelf devices (shown above), as well as state-of-the-art R&D devices. Photo: Fermilab
Quantum teleportation is a “disembodied” transfer of quantum states from one location to another. The quantum teleportation of a qubit is achieved using quantum entanglement, in which two or more particles are inextricably linked to each other. If an entangled pair of particles is shared between two separate locations, no matter the distance between them, the encoded information is teleported.
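For reference, the textbook identity behind single-qubit teleportation is shown below. The Fermilab demonstration uses time-bin photonic qubits, so this is the generic protocol rather than the specific apparatus described in the paper.

```latex
|\psi\rangle_A \otimes |\Phi^{+}\rangle_{BC}
  = \tfrac{1}{2}\Big[
      |\Phi^{+}\rangle_{AB}\, I|\psi\rangle_C
    + |\Phi^{-}\rangle_{AB}\, Z|\psi\rangle_C
    + |\Psi^{+}\rangle_{AB}\, X|\psi\rangle_C
    + |\Psi^{-}\rangle_{AB}\, XZ|\psi\rangle_C
  \Big]
```

Measuring qubits A and B in the Bell basis and sending the two-bit outcome over a classical channel lets the receiver apply the matching Pauli correction (I, Z, X or ZX) to recover the state on qubit C. The reported fidelity quantifies how close the recovered state is to the original; no information travels faster than light, because the classical bits are still required.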
The joint team — researchers at Fermilab, AT&T, Caltech, Harvard University, NASA Jet Propulsion Laboratory and University of Calgary — successfully teleported qubits on two systems: the Caltech Quantum Network, or CQNET, and the Fermilab Quantum Network, or FQNET. The systems were designed, built, commissioned and deployed by Caltech’s public-private research program on Intelligent Quantum Networks and Technologies, or IN-Q-NET.
“We are very proud to have achieved this milestone on sustainable, high-performing and scalable quantum teleportation systems,” said Maria Spiropulu, Shang-Yi Ch’en professor of physics at Caltech and director of the IN-Q-NET research program. “The results will be further improved with system upgrades we are expecting to complete by Q2 2021.”
CQNET and FQNET, which feature near-autonomous data processing, are compatible both with existing telecommunication infrastructure and with emerging quantum processing and storage devices. Researchers are using them to improve the fidelity and rate of entanglement distribution, with an emphasis on complex quantum communication protocols and fundamental science.
The achievement comes just a few months after the U.S. Department of Energy unveiled its blueprint for a national quantum internet at a press conference in Chicago.
“With this demonstration we’re beginning to lay the foundation for the construction of a Chicago-area metropolitan quantum network,” Spentzouris said. The Chicagoland network, called the Illinois Express Quantum Network, is being designed by Fermilab in collaboration with Argonne National Laboratory, Caltech, Northwestern University and industry partners.
This research was supported by DOE’s Office of Science through the Quantum Information Science-Enabled Discovery (QuantISED) program.
“The feat is a testament to success of collaboration across disciplines and institutions, which drives so much of what we accomplish in science,” said Fermilab Deputy Director of Research Joe Lykken. “I commend the IN-Q-NET team and our partners in academia and industry on this first-of-its-kind achievement in quantum teleportation.”
Fermilab is America’s premier national laboratory for particle physics and accelerator research. A U.S. Department of Energy Office of Science laboratory, Fermilab is located near Chicago, Illinois, and operated under contract by the Fermi Research Alliance LLC, a joint partnership between the University of Chicago and the Universities Research Association, Inc. Visit Fermilab’s website at www.fnal.gov and follow us on Twitter at @Fermilab.
The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.
From: FJB | 12/26/2020 7:30:12 AM
Photocatalyst That Can Split Water into Hydrogen and Oxygen at a Quantum Efficiency Close to 100%
By FuelCellsWorks
fuelcellsworks.com
A research team led by Shinshu University’s Tsuyoshi Takata, Takashi Hisatomi and Kazunari Domen succeeded in developing a photocatalyst that can split water into hydrogen and oxygen at a quantum efficiency close to 100%.
The team consisted of their colleagues from Yamaguchi University, The University of Tokyo and National Institute of Advanced Industrial Science and Technology (AIST).
The team produced an ideal photocatalyst structure composed of semiconductor particles and cocatalysts. H2 and O2 evolution cocatalysts were selectively photodeposited on different facets of crystalline SrTiO3(Al-doped) particles due to anisotropic charge transport. This photocatalyst structure effectively prevented charge recombination losses, reaching the upper limit of quantum efficiency.
Figure 1 – Schematic structure (a) and scanning electron microscope image (b) of Al-doped SrTiO3 site-selectively coloaded with a hydrogen evolution cocatalyst (Rh/Cr2O3) and an oxygen evolution cocatalyst (CoOOH).
Water splitting driven by solar energy is a technology for producing renewable solar hydrogen on a large scale. To put such technology to practical use, the production cost of solar hydrogen must be significantly reduced [1]. This requires a reaction system that can split water efficiently and can be scaled up easily. A system consisting of particulate semiconductor photocatalysts can be expanded over a large area with relatively simple processes. Developing photocatalysts that drive the sunlight-driven water splitting reaction with high efficiency would therefore be a great stride toward large-scale solar hydrogen production.
To upgrade the solar energy conversion efficiency of photocatalytic water splitting, it is necessary to improve two factors: widening the wavelength range of light used by the photocatalyst for the reaction and increasing the quantum yield at each wavelength. The former is determined by the bandgap of the photocatalyst material used, and the latter is determined by the quality of the photocatalyst material and the functionality of the cocatalyst used to promote the reaction. However, photocatalytic water splitting is an endergonic reaction involving multi-electron transfer occurring in a non-equilibrium state.
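Those two factors map onto the standard figures of merit for this field: the apparent quantum yield at a given wavelength and the solar-to-hydrogen efficiency. The definitions below are the conventional ones, not equations quoted from the paper:

```latex
\mathrm{AQY}(\lambda) = \frac{2\,n(\mathrm{H_2})}{n_{\mathrm{photons}}(\lambda)} \times 100\%,
\qquad
\mathrm{STH} = \frac{r(\mathrm{H_2}) \times \Delta G^{\circ}_{\mathrm{H_2O}}}{P_{\mathrm{sun}} \times S} \times 100\%
```

where n(H2) is the number of hydrogen molecules evolved, n_photons the number of incident photons at wavelength λ, r(H2) the hydrogen evolution rate, ΔG° ≈ 237 kJ/mol the Gibbs energy of water splitting, P_sun the solar irradiance (about 1000 W/m² for AM1.5G) and S the irradiated area. A near-unity quantum yield at the absorbed wavelengths does not by itself guarantee a high solar-to-hydrogen efficiency; that also depends on how much of the solar spectrum the photocatalyst's bandgap allows it to absorb.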
This study refined the design and operating principles for advancing water splitting methods with a high quantum efficiency. The knowledge obtained in this study will propel the field of photocatalytic water splitting further and enable scalable solar hydrogen production.
The project was made possible through the support of NEDO (New Energy and Industrial Technology Development Organization) under the “Artificial photosynthesis project”.
Title: Photocatalytic water splitting with a quantum efficiency of almost unity
Authors: Tsuyoshi Takata, Junzhe Jiang, Yoshihisa Sakata, Mamiko Nakabayashi, Naoya Shibata, Vikas Nandal, Kazuhiko Seki, Takashi Hisatomi, Kazunari Domen
Journal: Nature, 581, 411-414 (2020)
DOI: 10.1038/s41586-020-2278-9
To: FJB who wrote (385) | 2/17/2021 8:14:12 PM | From: sense
Hmmm. The new future... just like the old future?
At least, after decades of hearing about how germanium was going to replace silicon in the next generation, and then, the next, ad infinitum....
There seems to be a broad consensus now that germanium transistors are definitively better... in fuzz pedals.
Otherwise <crickets chirping>.
The incremental improvements delivered to us by the academic-corporate research-industrial standards coordinating complex... do seem to ensure you can earn a degree in a related engineering field and have it not be made obsolete for an entire career... /s |
To: FJB who wrote (387) | 2/17/2021 9:01:12 PM | From: sense
That one was pretty exciting....
Right up to the point where, first, it mentions being made of Strotiummm and Rhodiummm... and then the bit about achieving "almost" unity...
How "almost" is it ?
Others might well have an ability to tweak some aspects to get "almost" close enough to over the hump to offer an economic reason to get it out of the lab ?
Thanks for providing the brain floss. |
From: FJB | 2/24/2021 8:09:46 PM
Imec demonstrates 20-nm pitch line/space resist imaging with high-NA EUV interference lithography
Science X staff
techxplore.com/news/2021-02-imec-nm-pitch-linespace-resist.html
Schematic representations (not to scale) of the Lloyd’s mirror setup for high-NA EUV interference coupon experiments. Credit: IMEC
Imec reports for the first time the use of a 13.5-nm, high-harmonic-generation source for the printing of 20-nm pitch line/spaces using interference lithographic imaging of an Inpria metal-oxide resist under high-numerical-aperture (high-NA) conditions. The demonstrated high-NA capability of the EUV interference lithography using this EUV source presents an important milestone of the AttoLab, a research facility initiated by imec and KMLabs to accelerate the development of the high-NA patterning ecosystem on 300 mm wafers. The interference tool will be used to explore the fundamental dynamics of photoresist imaging and provide patterned 300 mm wafers for process development before the first 0.55 high-NA EXE5000 prototype from ASML becomes available.
The high-NA exposure at 13.5 nm was emulated with a coherent high-flux laser source from KMLabs in a Lloyd’s-mirror-based interference setup for coupon experiments on imec’s spectroscopy beamline. This apparatus supplies critical learning for the next step, expansion to 300 mm wafer interference exposures. In this arrangement, light reflected from a mirror interferes with light directly emitted by the 13.5 nm laser source, generating a finely detailed interference pattern suited for resist imaging. The pitch of the imaged resist pattern can be tuned by changing the angle between the interfering light beams. With this setup, 20 nm line/spaces could for the first time at imec be successfully patterned in an Inpria metal-oxide resist (exposure dose range of ~54-64 mJ/cm2, interference angle 20 degrees) using a single exposure of resist coated on coupon samples.
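The reported numbers are consistent with the usual two-beam interference relation, in which the printed pitch is set by the wavelength and the beam angle (reading the quoted 20 degrees as the angle each beam makes with the wafer normal is an assumption that matches the reported values):

```latex
p = \frac{\lambda}{2\sin\theta}
\quad\Rightarrow\quad
p \approx \frac{13.5\,\mathrm{nm}}{2\sin 20^{\circ}} \approx 19.7\,\mathrm{nm} \approx 20\,\mathrm{nm\ pitch}
```

Increasing the angle shrinks the pitch, which is how the same interference approach can be expected to span the 32 nm down to 8 nm pitch range mentioned later in the article.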
"The high-flux laser source of KMLabs was used at a record small wavelength of 13.5 nm, emitting a series of attosecond (10-18s) pulses that reaches the photoresist with a pulse duration that is a few femtoseconds (10-15s) in width. This imposed challenging requirements on the temporal coherence of the interfering waves," explains John Petersen, Principal Scientist at imec and SPIE Fellow. "The demonstrated capability of this setup for emulating high-NA EUV lithography exposures is an important AttoLab milestone. It demonstrates that we can synchronize femtosecond wide pulses, that we have excellent vibration control, and excellent beam pointing stability. The 13.5 nm femtosecond enveloped attosecond laser pulses allow us to study EUV photon absorption and ultrafast radiative processes that are subsequently induced in the photoresist material. For these studies, we will couple the beamline with spectroscopy techniques, such as time-resolved infrared and photoelectron spectroscopy, that we earlier installed within the laboratory facility. The fundamental learnings from this spectroscopy beamline will contribute to developing the lithographic materials required for the next-generation (i.e., 0.55 NA) EUV lithography scanners, before the first 0.55 EXE5000 proto-type becomes available."
Interference chamber for full-wafer experiments. Credit: IMEC
Next up, the learnings from this first proof of concept will now be transferred to a second, 300mm-wafer-compatible EUV interference lithography beamline that is currently under installation. This beamline is designed for screening various resist materials under high-NA conditions with a few seconds per single exposure, and for supporting the development of optimized pattern, etch and metrology technologies viable for high-NA EUV lithography. "The lab's capabilities are instrumental for fundamental investigations to accelerate material development toward high NA EUV," said Andrew Grenville, CEO of Inpria. "We are looking forward to deeper collaboration with the AttoLab."
(Left) Cross-section SEM image of a 20nm L/S pattern imaged in an Inpria metal-oxide resist, exposed in a Lloyd’s mirror interference setup at a dose of 64 mJ/cm2 and interference angle 20°. (Right) Fourier transform analysis, where a spatial frequency of 0.05 nm⁻¹ corresponds to the 20 nm pitch. Credit: IMEC
"Our interference tools are designed to go from 32 nm pitch to an unprecedented 8 nm pitch on 300 mm wafers, as well as smaller coupons," says John Petersen. "They will offer complementary insights into what is already gained from 0.33NA EUV lithography scanners—which are currently being pushed to their ultimate single-exposure resolution limits. In addition to patterning, many other materials research areas will benefit from this state-of-the-art AttoLab research facility. For example, the ultrafast analytic capability will accelerate materials development of the next-generation logic, memory, and quantum devices, and of the next-generation metrology and inspection techniques."
More information: Introduction to imec's AttoLab for ultrafast kinetics of EUV exposure processes and ultra-small pitch lithography, Paper 11610-46
Citation: Imec demonstrates 20-nm pitch line/space resist imaging with high-NA EUV interference lithography (2021, February 23) retrieved 24 February 2021 from techxplore.com
From: FJB | 3/3/2021 9:42:23 AM
Graphene 'Nano-Origami' Could Take Us Past the End of Moore's Law
By Edd Gent
singularityhub.com
Wonder material graphene is often touted as a potential way around the death of Moore’s Law, but harnessing its promising properties has proven tricky. Now, researchers have shown they can build graphene chips 100 times smaller than normal ones using a process they’ve dubbed “nano-origami.”
For decades our ability to miniaturize electronic components improved exponentially, and with it the performance of our chips. But in recent years we’ve started approaching the physical limits of the silicon technology we’ve become so reliant on, and progress is slowing.
The ability to build ever-faster chips has underpinned the rapid technological advances we’ve made in the last half-century, so understandably people are keen to keep that trend going. As a result, a plethora of new technologies are vying to take us past the end of Moore’s Law, but so far none have taken an obvious lead.
One of the most promising candidates is graphene, a form of carbon that comes in one-atom-thick sheets, which are both incredibly strong and have a range of remarkable electronic properties. Despite its potential, efforts to create electronics out of graphene and similar 2D materials have been progressing slowly.
One of the reasons is that the processes used to create these incredibly thin layers inevitably introduce defects that can change the properties of the material. Typically, these imperfections are seen as problematic, as any components made this way may not behave as expected.
But in a paper published in the journal ACS Nano, researchers from the University of Sussex in the UK decided to investigate exactly how these defects impact the properties of graphene and another 2D material called molybdenum disulfide, and how they could be exploited to design ultra-small microchips.
Building on their findings, the team has now shown that they can direct these defects to create minuscule electronic components. By wrinkling a sheet of graphene, they were able to get it to behave like a transistor without adding any additional materials.
“We’re mechanically creating kinks in a layer of graphene. It’s a bit like nano-origami,” Alan Dalton, who led the research, said in a press release.
“Using these nanomaterials will make our computer chips smaller and faster. It is absolutely critical that this happens as computer manufacturers are now at the limit of what they can do with traditional semiconducting technology.”
The work falls into an emerging line of research known as “straintronics,” which is uncovering the surprising ways in which mechanical strains in nanomaterials can dramatically change their electronic, magnetic, and even optical characteristics.
Now that the researchers have elucidated how different kinds of defects like wrinkles, domes, and holes impact the properties of these 2D materials, they’re working on ways to precisely pattern them to create more complex chips.
According to New Scientist, they have already mastered creating rows of wrinkles using pattern molds and generating domes by firing lasers at water molecules to make them expand, and they hope to have a functional prototype chip within five years.
They say that the approach allows them to build processors around 100 times smaller than conventional microchips, which could be thousands of times faster than today’s devices and would require far less energy and resources to make.
There’s still a long way to go to flesh out the potential of the approach, but it represents a promising new front in the race to keep the technological juggernaut we’ve created steaming ahead at full power.