   Technology Stocks : New Technology


From: FUBHO, 1/20/2017 3:33:50 PM

Since its discovery in 2004, scientists have believed that graphene may have the innate ability to superconduct. Now Cambridge researchers have found a way to activate that previously dormant potential.

Researchers have found a way to trigger the innate, but previously hidden, ability of graphene to act as a superconductor – meaning that it can be made to carry an electrical current with zero resistance.

The finding, reported in Nature Communications, further enhances the potential of graphene, which is already widely seen as a material that could revolutionise industries such as healthcare and electronics. Graphene is a two-dimensional sheet of carbon atoms and combines several remarkable properties; for example, it is very strong, but also light and flexible, and highly conductive.

Since its discovery in 2004, scientists have speculated that graphene may also have the capacity to be a superconductor. Until now, superconductivity in graphene has only been achieved by doping it with, or by placing it on, a superconducting material - a process which can compromise some of its other properties.

But in the new study, researchers at the University of Cambridge managed to activate the dormant potential for graphene to superconduct in its own right. This was achieved by coupling it with a material called praseodymium cerium copper oxide (PCCO).

Superconductors are already used in numerous applications. Because they generate large magnetic fields they are an essential component in MRI scanners and levitating trains. They could also be used to make energy-efficient power lines and devices capable of storing energy for millions of years.

Superconducting graphene opens up yet more possibilities. The researchers suggest, for example, that graphene could now be used to create new types of superconducting quantum devices for high-speed computing. Intriguingly, it might also be used to prove the existence of a mysterious form of superconductivity known as “p-wave” superconductivity, which academics have been struggling to verify for more than 20 years.

The research was led by Dr Angelo Di Bernardo and Dr Jason Robinson, Fellows at St John’s College, University of Cambridge, alongside collaborators Professor Andrea Ferrari, from the Cambridge Graphene Centre; Professor Oded Millo, from the Hebrew University of Jerusalem, and Professor Jacob Linder, at the Norwegian University of Science and Technology in Trondheim.

“It has long been postulated that, under the right conditions, graphene should undergo a superconducting transition, but can’t,” Robinson said. “The idea of this experiment was, if we couple graphene to a superconductor, can we switch that intrinsic superconductivity on? The question then becomes how do you know that the superconductivity you are seeing is coming from within the graphene itself, and not the underlying superconductor?”

Similar approaches have been taken in previous studies using metallic-based superconductors, but with limited success. “Placing graphene on a metal can dramatically alter the properties so it is technically no longer behaving as we would expect,” Di Bernardo said. “What you see is not graphene’s intrinsic superconductivity, but simply that of the underlying superconductor being passed on.”

PCCO is an oxide from a wider class of superconducting materials called “cuprates”. It also has well-understood electronic properties, and using a technique called scanning tunnelling microscopy, the researchers were able to distinguish the superconductivity in PCCO from the superconductivity observed in graphene.

Superconductivity is characterised by the way the electrons interact: within a superconductor electrons form pairs, and the spin alignment between the electrons of a pair may be different depending on the type - or “symmetry” - of superconductivity involved. In PCCO, for example, the pairs’ spin state is misaligned (antiparallel), in what is known as a “d-wave state”.

By contrast, when graphene was coupled to superconducting PCCO in the Cambridge-led experiment, the results suggested that the electron pairs within graphene were in a p-wave state. “What we saw in the graphene was, in other words, a very different type of superconductivity than in PCCO,” Robinson said. “This was a really important step because it meant that we knew the superconductivity was not coming from outside it and that the PCCO was therefore only required to unleash the intrinsic superconductivity of graphene.”

It remains unclear what type of superconductivity the team activated, but their results strongly indicate that it is the elusive “p-wave” form. If so, the study could transform the ongoing debate about whether this mysterious type of superconductivity exists, and – if so – what exactly it is.

In 1994, researchers in Japan fabricated a triplet superconductor that may have a p-wave symmetry using a material called strontium ruthenate (SRO). The p-wave symmetry of SRO has never been fully verified, partly hindered by the fact that SRO is a bulky crystal, which makes it challenging to fabricate into the type of devices necessary to test theoretical predictions.

“If p-wave superconductivity is indeed being created in graphene, graphene could be used as a scaffold for the creation and exploration of a whole new spectrum of superconducting devices for fundamental and applied research areas,” Robinson said. “Such experiments would necessarily lead to new science through a better understanding of p-wave superconductivity, and how it behaves in different devices and settings.”

The study also has further implications. For example, it suggests that graphene could be used to make a transistor-like device in a superconducting circuit, and that its superconductivity could be incorporated into molecular electronics. “In principle, given the variety of chemical molecules that can bind to graphene’s surface, this research can result in the development of molecular electronics devices with novel functionalities based on superconducting graphene,” Di Bernardo added.

The study, “p-wave triggered superconductivity in single layer graphene on an electron-doped oxide superconductor”, is published in Nature Communications (DOI: 10.1038/ncomms14024).



From: Glenn Petersen, 3/26/2017 9:58:48 AM
 
What Blockchain Means for the Sharing Economy


Primavera De Filippi
Harvard Business Review
March 15, 2017

Executive Summary

Blockchain technology is facilitating the emergence of a new kind of radically decentralized organization. These organizations — which have no director or CEO, or any sort of hierarchical structure — are administered, collectively, by individuals interacting on a blockchain. As such, it is important not to confuse them with the traditional model of “crowdsourcing,” where people contribute to a platform but do not benefit proportionately from the success of that platform. Blockchain technologies can support a much more cooperative form of crowdsourcing — sometimes referred to as “platform cooperativism”— where users qualify both as contributors and shareholders of the platforms to which they contribute. The value produced within these platforms can be more equally redistributed among those who have contributed to the value creation. With this new opportunity for increased “cooperativism,” we could be moving toward a true sharing economy.


___________________


Look at the modus operandi of today’s internet giants — such as Google, Facebook, Twitter, Uber, or Airbnb — and you’ll notice they have one thing in common: They rely on the contributions of users as a means to generate value within their own platforms. Over the past 20 years the economy has progressively moved away from the traditional model of centralized organizations, where large operators, often with a dominant position, were responsible for providing a service to a group of passive consumers. Today we are moving toward a new model of increasingly decentralized organizations, where large operators are responsible for aggregating the resources of multiple people to provide a service to a much more active group of consumers. This shift marks the advent of a new generation of “dematerialized” organizations that do not require physical offices, assets, or even employees.

The problem with this model is that, in most cases, the value produced by the crowd is not equally redistributed among all those who have contributed to the value production; all of the profits are captured by the large intermediaries who operate the platforms.

Recently, a new technology has emerged that could change this imbalance. Blockchain facilitates the exchange of value in a secure and decentralized manner, without the need for an intermediary.
________________________

How Blockchain Works

Here are five basic principles underlying the technology.

1. Distributed Database

Each party on a blockchain has access to the entire database and its complete history. No single party controls the data or the information. Every party can verify the records of its transaction partners directly, without an intermediary.

2. Peer-to-Peer Transmission

Communication occurs directly between peers instead of through a central node. Each node stores and forwards information to all other nodes.

3. Transparency with Pseudonymity

Every transaction and its associated value are visible to anyone with access to the system. Each node, or user, on a blockchain has a unique 30-plus-character alphanumeric address that identifies it. Users can choose to remain anonymous or provide proof of their identity to others. Transactions occur between blockchain addresses.

4. Irreversibility of Records

Once a transaction is entered in the database and the accounts are updated, the records cannot be altered, because they’re linked to every transaction record that came before them (hence the term “chain”). Various computational algorithms and approaches are deployed to ensure that the recording on the database is permanent, chronologically ordered, and available to all others on the network.

5. Computational Logic

The digital nature of the ledger means that blockchain transactions can be tied to computational logic and in essence programmed. So users can set up algorithms and rules that automatically trigger transactions between nodes.
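
The “chain” in principle 4 and the programmability in principle 5 can be sketched in a few lines of code. This is a toy illustration only, not any production blockchain’s actual data structures; the field names and the auto-transfer rule are invented for the example.

```python
import hashlib, json, time

def make_block(transactions, prev_hash):
    """Toy block: its hash covers the previous block's hash, so altering any
    earlier record would change every hash after it (principle 4)."""
    block = {"time": time.time(), "tx": transactions, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def auto_transfer(balance, threshold, destination):
    """Toy rule for principle 5: code, not an operator, decides when a
    transaction fires -- here, whenever a balance exceeds a threshold."""
    if balance > threshold:
        return {"to": destination, "amount": balance - threshold}
    return None

genesis = make_block(["genesis"], prev_hash="0" * 64)
block_1 = make_block([auto_transfer(120, 100, "addr_xyz")], genesis["hash"])
print(block_1["prev"] == genesis["hash"])  # True: each record is linked to the one before it

```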
________________

But the most revolutionary aspect of blockchain technology is that it can run software in a secure and decentralized manner. With a blockchain, software applications no longer need to be deployed on a centralized server: They can be run on a peer-to-peer network that is not controlled by any single party. These blockchain-based applications can be used to coordinate the activities of a large number of individuals, who can organize themselves without the help of a third party. Blockchain technology is ultimately a means for individuals to coordinate common activities, to interact directly with one another, and to govern themselves in a more secure and decentralized manner.

There are already a fair number of applications that have been deployed on a blockchain. Akasha, Steem.io, or Synereo, for instance, are distributed social networks that operate like Facebook, but without a central platform. Instead of relying on a centralized organization to manage the network and stipulate which content should be displayed to whom (often through proprietary algorithms that are not disclosed to the public), these platforms are run in a decentralized manner, aggregating the work of disparate groups of peers, which coordinate themselves, only and exclusively, through a set of code-based rules enshrined in a blockchain. People must pay microfees to post messages onto the network, which will be paid to those who contribute to maintaining and operating the network. Contributors may earn back the fee (plus additional compensation) as their messages spread across the network and are positively evaluated by their peers.

Similarly, OpenBazaar is a decentralized marketplace, much like eBay or Amazon, but operates independently of any intermediary operator. The platform relies on blockchain technology to ensure that buyers and sellers can interact directly with one another, without passing through any centralized middleman. Anyone is free to register a product on the platform, which will become visible to all users connected to the network. Once a buyer agrees to the price for that product, an escrow account is created on the bitcoin blockchain that requires two out of three people (i.e., the buyer, the seller, and a potential third-party arbitrator) to agree for the funds to be released (a so-called multisignature account). Once the buyer has sent the payment to the account, the seller ships the product; after receiving the product, the buyer releases the funds from the escrow account. Only if there is an issue between the two does the system require the intervention of a third party (e.g., a randomly selected arbitrator) to decide whether to release the payment to the seller or whether to return the money to the buyer.
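
The two-out-of-three escrow described above boils down to a simple rule. Here is a minimal sketch of that rule; it assumes nothing about OpenBazaar’s or bitcoin’s actual multisignature scripts, and the party names and function are hypothetical.

```python
PARTIES = {"buyer", "seller", "arbitrator"}   # the three keys on the escrow account
REQUIRED_APPROVALS = 2                        # the 2-of-3 multisignature rule

def can_release(approvals):
    """Funds move only if at least two of the three parties sign off."""
    return len(set(approvals) & PARTIES) >= REQUIRED_APPROVALS

print(can_release({"buyer", "seller"}))        # True: a normal, undisputed sale
print(can_release({"seller"}))                 # False: the seller alone cannot take the funds
print(can_release({"seller", "arbitrator"}))   # True: a dispute resolved in the seller's favour
```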

There are also decentralized carpooling platforms, such as Lazooz or ArcadeCity, which operate much like Uber, but without a centralized operator. These platforms are governed only by the code deployed on a blockchain-based infrastructure, which is designed to govern peer-to-peer interactions between drivers and users. These platforms rely on a blockchain to reward drivers contributing to the platform with specially designed tokens that represent a share in the platform. The more a driver contributes to the network, the more they will be able to benefit from the success of that platform, and the greater their influence in the governance of that organization.

Blockchain technology thus facilitates the emergence of new forms of organizations, which are not only dematerialized but also decentralized. These organizations — which have no director or CEO, or any sort of hierarchical structure — are administered, collectively, by all individuals interacting on a blockchain. As such, it is important not to confuse them with the traditional model of “crowd-sourcing,” where people contribute to a platform but do not benefit from the success of that platform. Blockchain technologies can support a much more cooperative form of crowd-sourcing — sometimes referred to as “platform cooperativism”— where users qualify both as contributors and shareholders of the platforms to which they contribute. And since there is no intermediary operator, the value produced within these platforms can be more equally redistributed among those who have contributed to the value creation.

With this new opportunity for increased “cooperativism,” we’re moving toward a true sharing or collaborative economy — one that is not controlled by a few large intermediary operators, but that is governed by and for the people.

There’s nothing new about that, you might say — haven’t we heard these promises before? Wasn’t the mainstream deployment of the internet supposed to level the playing field for individuals and small businesses competing against corporate giants? And yet, as time went by, most of the promises and dreams of the early internet days faded away, as big giants formed and took control over our digital landscape.

Today we have a new opportunity to fulfill these promises. Blockchain technology makes it possible to replace the model of top-down hierarchical organizations with a system of distributed, bottom-up cooperation. This shift could change the way wealth is distributed in the first place, enabling people to cooperate toward the creation of a common good, while ensuring that everyone will be duly compensated for their efforts and contributions.

And yet nothing should be taken for granted. Just as the internet has evolved from a highly decentralized infrastructure into an increasingly centralized system controlled by only a few large online operators, there is always the risk that big giants will eventually form in the blockchain space. We’ve lost our first window of opportunity with the internet. If we, as a society, really value the concept of a true sharing economy, where the individuals doing the work are fairly rewarded for their efforts, it behooves us all to engage and experiment with this emergent technology, to explore the new opportunities it provides and deploy large, successful, community-driven applications that enable us to resist the formation of blockchain giants.

Primavera De Filippi is a permanent researcher at the National Center of Scientific Research (CNRS) in Paris. She is faculty associate at the Berkman Center for Internet & Society at Harvard Law School, where she is investigating the concept of “governance-by-design” as it relates to online distributed architectures.

hbr.org



From: FUBHO, 3/28/2017 8:16:12 AM
 
EUV as Pizza
Perfecting the Recipe

by Bryon Moyer March 27, 2017

What’s the most important thing for the perfect pizza? This isn’t a fair question, of course, because there’s no definition of “perfect” when it comes to pizza. OK, maybe there is, but each person has their own. But stay with me for a sec here: for a certain style of pizza, you need an oven that’s over 500 °F – higher than home ovens can go, for sure. And the right old-school wood-fired ovens can do that.

So if you’re in search of that perfect pizza, the first thing you might have to do is to splurge to get an oven that will finally give you the heat you need. You might play with the amount of wood you use, the best pizza positioning to ensure even heating, and the best oven placement for not burning the house down before you’re satisfied that you’ve nailed it. It could take a lot of work – probably more than you expected.

And then, at last, you declare the oven problem solved. Do you now have the perfect pizza? Well… not yet. Now you need to make sure the dough is perfect, and there’s the sauce, and then there’s how you assemble it – how thin you make the crust, how much sauce, which and how many toppings. You’ve still got your work cut out for you.

That feels like where we are with EUV. We now have our oven – the EUV source. Still needs some tuning, but, as of last year, it feels like the worst is behind us. IBM, GlobalFoundries, and Samsung presented at IEDM last December, introducing a 7-nm FinFET process platform that, for the first time, included EUV. So the technology is finally starting to migrate towards production, and not just at Intel. But there’s the matter of this laundry list of things that need to be tidied up before we can launch. We got a rundown of the issues at the recent SPIE Advanced Litho conference, so let’s review them.

The Source

While you may think that we’ve been over this hump for a year or so, the source still tends to grab ongoing attention. ASML is still the leading voice here, although there was a mention of Gigaphoton as a credible second source. Intel has 14 scanners; ASML says there are 18 units on back order – no small thing with a price that they say is on the order of hundreds of millions of dollars per unit. They’re looking at a production ramp in 2018.

The NXE3300B is the incumbent model at the moment, but the next version – worthy of a number change – will be the NXE3400. It’s expected to support 5-nm processes and DRAM below 15 nm with a throughput of 125 wafers/hour. The numerical aperture (NA) will remain at the current 0.33, giving 13-nm resolution. Critical dimension uniformity (CDU) will be 0.3 nm; the depth of field will be 100 nm; and they’re expecting 20% exposure latitude.
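
Those numbers line up with the usual Rayleigh scaling for lithographic resolution, R = k1 × λ / NA. A quick sanity check (the k1 value is inferred from the quoted figures, not something ASML states):

```python
wavelength_nm = 13.5      # EUV wavelength
na = 0.33                 # numerical aperture of the NXE3400 optics
resolution_nm = 13        # resolution quoted above

k1 = resolution_nm * na / wavelength_nm
print(round(k1, 2))       # ~0.32, a plausible k1 for single-exposure imaging
```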

Power is now over 200 W – 205 to be precise. But the target for high-volume production is now 250 W, so there’s still work ahead.

A high-level parameter to watch is availability, which has risen to over 80% – but needs to be over 90% for economic high-volume production. There are a number of items that can take the machine down – they’re undergoing extreme tune-ups.

Tune-Up Issues

Droplet generator: I don’t recall this being on the hit parade in the past, but apparently the lifetime of the droplet generator hasn’t been what was hoped, running at present at around 80% of expectations. You may recall that this whole system works by carefully timing drops of molten tin and then zapping them – not once, but twice – with a laser as the droplet falls. So this is the critical element that feeds the beast.

While more is needed, they’ve improved that lifetime by 3.5 times as compared to last year (Samsung claims a 5X gain), and improvements in the works are expected to further triple the lifetime. That aside, Samsung is also hoping for faster tin refill to maintain uptime.

The collector: You may recall this from past years – it’s the metal shroud that takes the EUV from the zapped droplet, which emanates in all directions, and focuses it into the beam that will make its way to the wafer. And it is also degrading too quickly. So ASML has a newer version coming that should address this maintenance issue.

Pellicles: We talked about these in more detail last year; they’re the mask “cover,” if you will, that keeps fall-on defects out of the focus region so that they won’t print. Fundamentally, they have a working solution now, although, again, it can be improved. First, there’s no change to the fact that they’re still needed. Intel said that they’re seeing fall-on defects at higher levels than ASML is claiming. Defects on the pellicles themselves remain, although the numbers have been reduced. This really needs to get to 0 to be acceptable.

The pellicle material itself is OK, but Intel could do with better transmissivity and the ability to handle higher power when that becomes available.

Mask inspection and defectivity: The quality of blank masks has improved to the point where they can map the defects and then shift the pattern slightly to keep those defects out of critical points. There is still, however, no actinic (i.e., illuminated with the same light frequency as is used for exposure) inspection available for patterned masks.

Edge-placement error: I must not have been paying attention, since this was a new term to me this year. And yet it’s a hot issue (meriting its own TLA: EPE) – not just for EUV, but also for 193i and, in particular, for multi-patterning. It’s described by KLA-Tencor as a convolution of overlay and CDU – anything that can make edges on multiple layers fail to line up. That includes etch steps as well.

Applied Materials has focused in particular on improvements to etch, but the ultimate solution requires yet more development so that self-aligning techniques with new materials and highly selective etching can use hard masks, rather than litho, to define edges, granting litho a bit of slop.

One approach being discussed to eliminate machine-to-machine variation is to dedicate machines to a particular lot. If lot A uses scanner X for a critical layer, then all subsequent exposures should use the same scanner – at least for other steps involving edge placement. Obviously this reduces manufacturing flexibility, so it’s likely to be used reluctantly.

Line-edge roughness (LER): This, along with the related – but different – line-width roughness, is a perennial issue. And I learned more about the diabolical triangle connecting EUV dose, resolution, and LER. Its origins lie in – surprise! – the source power we’ve been agonizing over for the last many years. Turns out that, even with the improvements in EUV power, ordinary deep-UV lithography delivers 14 times more photons to the resist on the wafer than EUV does.

The thing about photons is that they arrive and position themselves somewhat randomly. If you have enough of them, they average out and, ultimately, fill the expected areas of the resist with smooth edges. But with EUV, we can’t wait long enough for this averaging to be effective – we’d never make any money. So you end up with these ratty edges that scatter the poor electrons as they try to make their way through.
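
That averaging argument is just counting statistics: the relative fluctuation in the number of photons landing on a patch of resist scales as 1/sqrt(N). A back-of-envelope sketch (the absolute photon count here is invented; only the 14x ratio comes from the discussion above):

```python
import math

duv_photons = 14_000              # hypothetical photons per resist "pixel" for deep UV
euv_photons = duv_photons / 14    # EUV delivers roughly 14x fewer photons

def relative_shot_noise(n):
    """Poisson statistics: sigma / N = 1 / sqrt(N)."""
    return 1 / math.sqrt(n)

ratio = relative_shot_noise(euv_photons) / relative_shot_noise(duv_photons)
print(round(ratio, 1))            # ~3.7: EUV edges are several times noisier at the same dose
```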

High-NA: The standard NA is 0.33; ASML is working on a lens that will raise the NA to 0.5 or higher. Interestingly, this will be an “anamorphic” lens – the x-direction scale will be different from the y direction (so, for example, a circle would end up looking like an ellipse). The new lens has a smaller field, which means less exposed in one shot, which means more shots per wafer – which means slower. They’re compensating for this with faster wafer and mask stages.


Interestingly, the ASML paper describing this includes a roadmap – with no years labeled for availability of this solution. So this may be a ways out there yet…

Mix-n-match: Of course, not every layer on a chip is going to require EUV – which is good, since there’s not enough of it to go around (and what there is is expensive). That means, for instance, SAQP for metal lines and then EUV for the block mask. (I was confused as to what a “block” mask is; it’s effectively the same as a “cut” mask. With aluminum, you can create lines and then cut them after. But with copper and dual-damascene, you interrupt the trenches with a block that defines the end of the lines and then fill with metal.)

This means that wafers will be going back and forth between conventional and EUV machines – creating a need to match characteristics to reduce yet another source of variation.



So that’s a super-fast rundown of EUV goings-on. I downloaded the EUV-related papers, and there were – count them – 59 papers. Which is why I’m not even attempting detail. There’s lots more to explore in those papers.



More info:

SPIE proceedings (membership or attendance required – you may need a friend)



From: aknahow, 4/13/2017 9:45:25 AM
 
news.panasonic.com



From: FUBHO, 5/3/2017 4:24:29 PM
 
Inside Lithography And Masks

Experts at the table, part 3: EUV, DSA, nanoimprint, nanopatterning, and best guesses for how far lithography can be extended.



MAY 1ST, 2017 - BY: MARK LAPEDUS

semiengineering.com


Semiconductor Engineering sat down to discuss lithography and photomask technologies with Gregory McIntyre, director of the Advanced Patterning Department at IMEC; Harry Levinson, senior fellow and senior director of technology research at GlobalFoundries; David Fried, chief technology officer at Coventor; Naoya Hayashi, research fellow at Dai Nippon Printing (DNP); and Aki Fujimura, chief executive of D2S. What follows are excerpts of that conversation. To view part one, click here. Part two is here.

SE: For some time, Canon has been developing and shipping nanoimprint lithography systems. (Nanoimprint lithography resembles a hot embossing process. Tiny structures are patterned onto a template or mold using an e-beam tool, and then the patterns are pressed into a resist on a substrate, enabling tiny features.) Nanoimprint is still targeted for NAND, right?

Hayashi: For 2xnm, like 24nm and 26nm line and space, nanoimprint could reach production last year. Then, last year as Toshiba said, we confirmed that certain yields of 15nm 2D NAND were made using nanoimprint. But then, that represents the last products for 2D NAND.

SE: At 16nm or 15nm, NAND flash vendors are moving from traditional 2D or planar NAND to 3D NAND. Is nanoimprint being used for 3D NAND?

Hayashi: Some are confirming the yield of 3D NAND.

SE: DNP has been making the templates for nanoimprint lithography. What’s the status?

Hayashi: The nanoimprint template is 1:1. We are currently making the templates for 3D NAND. The most critical layers are the holes. As we discussed before, for any type of lithography like optical and EUV, the shot noise is a big issue for making the contact holes. With nanoimprint, we can get a very narrow gap for contact holes with good CD uniformity and pattern fidelity. That’s a good place to use nanoimprint.

SE: What are some of the remaining challenges with nanoimprint?

Hayashi: There are still some defectivity issues. We still need another two digits of improvement to extend the technology for another memory like DRAM. For logic, it’s still far away. Overlay is currently like the 3nm range. It’s enough for NAND. The DRAM people want to shrink that number.


Fig. 1: Nanoimprint schematic. Source: DNP

SE: Let’s move to directed self-assembly (DSA). (DSA is a technology that makes use of block copolymer materials. In the DSA process, the copolymers undergo phase separation. Then, when used in conjunction with a pre-pattern that directs the orientation of the materials, the copolymers self-assemble into a tiny pattern. DSA was a rising star in the next-generation lithography (NGL) landscape, but the technology has lost momentum and has been pushed out.) Where are we in DSA?

McIntyre: It’s not a secret that the momentum has slowed down a little for DSA compared to what it was a few years ago. But it hasn’t gone away. It’s not completely off the table. There are applications where we think it’s potentially still feasible. For example, Imec has been working on DSA quite a while. We focus on three activities in DSA. One is called Chips Flow. It’s a chemo-epitaxy flow for creating hexagonal arrays of holes, which is potentially interesting for the DRAM folks. We have just recently decreased the defectivity compared to where it was six months ago or so. If we can keep going on that path, it may be a viable option for the DRAM folks. There is another DSA application. It’s not going to beat out SAQP for forming dense lines and spaces at the 20nm-something pitch. But if we go below 20nm, and have to do something like SAOP, DSA could be a potential alternative there, as well. For this, there is a lot of focus on high-chi materials, leading to really dense pitches. Then, the third application could potentially go hand-in-hand with EUV, essentially as a healing technique to smooth the roughness that you get in EUV holes. The template-based approach is a nice way to do that. So, DSA has slowed down a little bit, but it’s not off the table.

Fried: My suspicion is that if we see DSA, it will not be in a pattern multiplication mode. It will be in a pattern healing mode. And the likelihood of seeing something in a pattern healing mode actually seems reasonable. There are defectivity concerns, of course. But not having to lump pattern multiplication on top of the defectivity concerns in pattern healing seems like a slightly relaxed set of criteria, and that could be a reality.

McIntyre: Pattern healing is getting there. It’s a potentially interesting technique that could be used in our toolbox.


Fig. 2: DSA flow. Source: University of Chicago Institute of Molecular Engineering

SE: Let’s talk about another futuristic patterning technology called selective deposition or ALD nanopatterning. (Using atomic layer deposition (ALD) tools, selective deposition involves a process of depositing materials and films in exact places.) Where is selective deposition now?

McIntyre: It is definitely very interesting. There is a potential for various uses such as growing dielectrics on dielectrics, or dielectrics on metals. What we are trying to understand are the fundamentals of which materials can you grow on what other materials. You often see a different behavior between blanket wafers. You get nice growth, but if you try to grow the material in actual patterns, the behavior can be completely different. So the next step will be to put it in a couple of applications and see where it could potentially help you.

Levinson: It is nice to know there are some innovative concepts out there. But it does take a long time for something like this to go from a laboratory into manufacturing. We have to wait and see. There are many techniques that fail at some point for some reason or another. We’ll have to see which ones pan out and which ones don’t.

McIntyre: Right now, this technology is sort of in the research sandbox type stage. It might see some applications in some of these self-aligned techniques like a fully self-aligned via. For a while, we’ve done self-aligned vias in one direction. But if you want to line it up the other direction as well, it requires some topography to take advantage of it. There are a couple of ways to do that topography. You can do it with either traditional metal etching or use something like selective deposition to grow little pieces of metals. Then, you use them to help self-align a via landing on a metal line or something like that.

SE: Let’s make some predictions. What will happen in the future, say in the next 5 years or more, from your vantage point or area of interest?

Fujimura: I have faith that this community of people and a $300 billion semiconductor industry are going to figure out a way to solve the problems. From a need point of view, we have a computational design platform. We know GPU acceleration. So we use supercomputers and build them ourselves. I can tell you we need computing power. IoT and PCs don’t need to be incredibly faster. But you need more computational power for AI, deep learning and all of these hot topics. Today, just doing simulation, we need computing power. We can do a lot better if we had more. So, I don’t see an end of that demand. We could figure a way to utilize 100 times more computing power if we had it.

Fried: When I was in graduate school, I won a fellowship and met with (co-founder and former Intel CEO) Andy Grove. It was a long time ago. It was the quarter-micron time or even earlier. People asked Andy Grove about why do we need a faster computer. He had a list of applications that we just didn’t have the compute power for at that time. He brought up one that always stuck in the back of my head—voice recognition. Grove said, ‘Voice recognition is terrible right now.’ Now, if you look at it, voice recognition is still awful. So you have AI and machine learning and the need for voice recognition. There are drivers that would want that extra compute power. There will be demand for this. Whether we make a cost-effective answer to that demand is going to determine whether there is a 5nm or 3nm node. We can build it. It’s going to be demand and the cost-efficiency of a solution to that demand.

Levinson: I have a great deal of confidence in the patterning community to continue our ability to scale geometrically for some time. What’s unclear are the devices and the transistors. It’s also unclear about the interconnect technology, whether that’s going to be able to continue to scale. But there are definitely ways to extend the technology other than scaling.

McIntyre: I do believe we will be able to continue to make things smaller. The device folks will figure it out. Maybe the front-end device stuff won’t scale as much as it has in the past. But there does seem to be room to grow or shrink in the backend with new metals and direct-metal etch techniques. There seems to be enough things that are plausible out there. So, we are probably going to keep scaling, maybe not at the same rate that we did a number of years ago. Physical scaling might continue to slow down. In addition, we will probably see high NA EUV. It will be used first for an EPE reduction scheme. If you can go to higher NA, you go to higher image contrast. You get less stochastics in your materials.

Fried: We’ve milked drift-diffusion field-effect devices for five generations longer than anyone said we could. We may break that at some point. But then, there are tunnel devices and then you can go to very low voltage devices. There are all these things that may eventually kick in when we really break with what we have now. There are a huge pile of these things. So if you can pattern it, we can make something.

Related Stories
Ready For Nanoimprint?

NGL option gains ground and adherents for single-digit process nodes, but more work is still needed.
What Happened To DSA?
Alternative patterning technology makes incremental gains, but the big money is still behind EUV.
Inside Advanced Patterning
What’s in store for chipmakers at 7nm, 5nm and beyond, and why atomic-level etch and deposition are getting new attention.
Uncertainty Grows For 5nm, 3nm
Nanosheets and nanowire FETs under development, but costs are skyrocketing. New packaging options could provide an alternative.



From: FUBHO, 5/22/2017 12:49:52 PM
 
IBM Unveils Its Most Powerful Quantum Processor Yet for Business and Science

by Sergio De Simone on May 19, 2017.

IBM 16 Qubit Processor, photo by IBM Research

IBM has announced a new feat in its race towards building ever more powerful quantum processors, with new 16 and 17 qubit processors that are its most powerful yet.

The two new processors from IBM aim to address the needs of the scientific community with a 16 qubit processor, shown above, that will supersede the previously available 5 qubit processor, as well as provide the foundation for a commercial solution based on a new 17 qubit processor. In particular, IBM researchers explain, it is the 17 qubit processor that brings significant material, device, and architecture improvements that make it IBM’s most powerful quantum processor to date, being roughly twice as powerful as what IBM offered before.

According to Arvind Krishna, senior vice president and director of IBM Research and Hybrid Cloud, this is only an incremental step that "will allow IBM to scale future processors to include 50 or more qubits, and demonstrate computational capabilities beyond today’s classical computing systems."

While it is certainly true that the newly announced processors sport many more qubits than previous processors, it is also true that defining the power of a quantum system is not an easy task and, as the IBM researchers themselves explained, there is much more than the number of qubits to the equation that defines it. IBM is proposing Quantum Volume as a metric to characterize the computational power of quantum systems. This metric takes into account the number of qubits as well as the circuit depth, which determine respectively whether a quantum algorithm can be run or not, and the fidelity to the correct answer that can be expected. This in turn depends on how the qubits are connected and on the error that each basic operation can introduce.
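
IBM’s proposal can be caricatured in a few lines: the useful circuit depth is limited by the effective error rate, and the metric rewards the smaller of circuit width and usable depth. The sketch below is a toy rendering under that simplification, with an invented error rate; it is not IBM’s exact definition.

```python
def quantum_volume(num_qubits, error_per_gate):
    """Toy heuristic: depth before errors dominate is roughly 1 / (n * error),
    and the volume is the square of the smaller of width and usable depth."""
    best = 0
    for n in range(2, num_qubits + 1):
        usable_depth = 1 / (n * error_per_gate)
        best = max(best, min(n, usable_depth) ** 2)
    return round(best)

# With a 1% effective error rate, piling on qubits stops helping beyond ~10:
print(quantum_volume(16, 0.01))   # ~100
print(quantum_volume(50, 0.01))   # still ~100 -- more qubits alone is not more power
```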

The field of quantum computing has seen growing interest in recent years, and IBM is not the only player. In particular, D-Wave Systems is taking a slightly different approach from IBM, selling quantum computers to the likes of NASA and Google. Chinese researchers have already achieved the milestone of a 10 qubit processor and have announced plans to scale up their quantum processor to 20 qubits by the end of the year. Additionally, Google, Microsoft, and others have announced plans to enter the quantum computing field in the coming years.

As InfoQ reported, IBM provides a Python-based quantum development SDK called QISKit that can be used to run experiments on the IBM Q processors. IBM’s new 16 qubit processor is available for beta access, while the 17 qubit one is still considered a prototype.



From: FUBHO, 5/23/2017 8:08:42 AM
 
In fact, there is no one technology that can fit all needs. For example, GlobalFoundries is readying a 22nm FD-SOI technology for low-power applications. “FD-SOI makes sense for certain people,” said Gary Patton, chief technology officer at GlobalFoundries. “FinFETs make sense for certain people.”

For those who migrate beyond 16nm/14nm, it will require deep pockets. In total, it will cost $271 million to design a 7nm chip, according to Gartner. In comparison, it costs around $80 million to design a 16nm/14nm chip and $30 million for a 28nm planar device, the research firm said.

semiengineering.com



From: FUBHO, 7/1/2017 8:48:04 AM
 
Watch Out, Intel. New Types of Chips Are Gaining Ground

Revolution in chip design may upend old guard, including Intel and Nvidia.

By Tiernan Ray - July 1, 2017 2:05 a.m. ET

The chip revolution starts now. Today’s general-purpose computer chips are losing ground to domain-specific chips—customized parts dedicated to more specific tasks. These chips are tailored to the needs of mobile devices, servers running machine-learning tasks in artificial intelligence, and the vast constellation of connected devices known as the Internet of Things.

The implications for Intel (ticker: INTC) and Nvidia (NVDA) and other established chip vendors are stark. Companies that were never involved in semiconductors, such as Alphabet’s (GOOGL) Google, can become their own chip houses. A whole new wave of chip startups can be funded with less money, bringing fresh competition.

Leading the charge is David Patterson, a computer scientist with the University of California, Berkeley. Starting in the 1970s, Patterson proposed a simplified vocabulary for programmers to control chips that would be more efficient than the verbose set of controls Intel offered. Industry embraced Patterson’s “reduced-instruction set computer,” or RISC, as it came to be known.

The personal computing era was dominated by Intel’s microprocessors, but that changed with Apple’s (AAPL) iPhone, which went on sale 10 years ago last week. The chips that run the iPhone, and other RISC-based chips like it, use technology from ARM Holdings. ARM, owned by Japan’s SoftBank Group (9984.Japan), was more aggressive in embracing Patterson’s RISC innovations than was Intel. ARM-based parts sell in the billions every year, versus Intel’s market for PC and server chips in the hundreds of millions.

Patterson sees an equal if not greater challenge coming to Intel, Nvidia, and even ARM, prompted by the crumbling of Moore’s Law. Formulated by Intel co-founder Gordon Moore in 1965, Moore’s Law says that the number of transistors on a chip doubles every 18 to 24 months, powering ever-faster, ever-cheaper computers. But Patterson says plainly that Moore’s Law is dead, finished, kaput. “If I look at the latest generation of microprocessors, this year, performance only went up by 3%,” he told Barron’s. At that rate, it will take two decades for chips to double in performance.
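
The “two decades” figure follows directly from compounding 3% a year:

```python
import math

annual_gain = 1.03                    # ~3% performance gain per year
years_to_double = math.log(2) / math.log(annual_gain)
print(round(years_to_double, 1))      # ~23.4 years -- roughly two decades per doubling
```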

The next big reduction in size of transistors, to 7 billionths of a meter, or 7-nanometer, “won’t make general-purpose microprocessors that much faster,” says Patterson. Moreover, costs are skyrocketing to eke out meager gains. Data from Gartner say the upfront cost to develop a chip at 7-nanometer is $271 million, up from $30 million a couple of generations ago.

The solution, as Henry David Thoreau once wrote, is to “simplify, simplify.” Last week, researchers at Google presented a paper co-authored by Patterson at an academic conference, describing a novel chip called the Tensor Processing Unit, or TPU. Developed by Google, the TPU vastly outperformed comparable chips from Intel and Nvidia for tasks like machine learning.

While Intel’s microprocessor is broadly useful, running everything from scientific computing to spreadsheets, the TPU focuses on a specific problem such as speech recognition so it has power where it counts. It has 3.5 times as much memory as a comparable Intel part in a chip half the size. “We threw out a lot of stuff that was not needed,” says Patterson, who serves as distinguished engineer at Google in its Google Brain unit that focuses on machine learning. “Instead of the Honda for everyone, we are making these Formula One race cars for some things.”

Moreover, the TPU went from sketch to finished chip in just 15 months, he says, whereas the latest Intel processors take years to develop. “We are at a paradigm shift in computing architecture,” he says, and some longtime observers agree. “This is a big revolution in terms of the technology approach,” says Linley Gwennap, editor with chip newsletter Microprocessor Report, referring to domain-specific chips. “Intel is working for two years to squeeze out 10% improvements in performance, and this can get you 10 times the performance,” while being less expensive than Intel’s most complex parts, he says.

To enable the revolution, Patterson and others have created what is now the fifth version of RISC, with commands that are open-source—meaning they can be modified by anyone, just like the freely available Linux operating system. As with Linux, designs tailored to a problem can be made by anyone who grabs the code. And rapid development and improvement are promoted versus the monolithic, years-long process of Intel’s generic chips. “RISC-V shows things can be done by smaller teams much more cheaply,” says Patterson.

ONE OF THOSE STARTUPS is San Francisco–based SiFive, founded by Berkeley alums who built RISC-V, and for whom Patterson is a technical advisor. Using RISC-V, SiFive aims to be the “Amazon of chip development,” says Jack Kang, head of business development, likening it to Amazon’s Web Services cloud computing operation. SiFive uses the open collaboration of RISC-V to automate the design of chips. A company can use SiFive’s automation service to obtain a part at 10% to 20% of the cost it would normally take.

For now, Patterson’s vision faces plenty of skeptics. Some doubt the economic benefits of RISC-V; others argue the narrower focus of domain-specific chips makes them a niche. Having propelled one major revolution, Patterson is undaunted. The death of Moore’s Law means domain-specific chips are not a philosophical stance but a necessity. “We have no other way to build a more energy-efficient processor,” he says.

The market will decide. “It’s not like you’re debating how many angels can dance on the head of a pin,” says Patterson. “We will know in the next five years because the markets are going to tell us who wins.”

TIERNAN RAY can be reached at: tiernan.ray@barrons.com



From: FUBHO, 7/3/2017 2:09:43 PM
 
Carbon Nanotubes Found to Be a Safe Bet For Reconnecting Neurons

Best hope of repairing injured spines.


ANDREW STAPLETON
1 JUL 2017


Scientists have integrated carbon nanotubes in neurons to control growth and restore lost electrical connections between nerve cells.

They have shown that the carbon nanotubes can be used safely and hope they can restore neural function to people with spinal injuries. The integration of carbon nanotubes brought along some unexpected benefits too.

Carbon nanotubes have some remarkable properties: excellent thermal conductivity, mechanical strength, and electrical conductivity. They have been used to make the toughest fibre ever made, computer chips that run twice as fast as silicon chips and they have also been used to create the world's blackest material – Vantablack.

Because they are long, thin and conductive, carbon nanotubes seemed like the ideal candidate for neuronal prostheses, restoring function to damaged neural pathways, and systems that interface with the human body.

"The perfect material to build neural interfaces does not exist, yet the carbon nanotubes we are working on have already proved to have great potentialities," said Laura Ballerini, one of the researchers from the International School for Advanced Studies in Italy.

"After all, nanomaterials currently represent our best hope for developing innovative strategies in the treatment of spinal cord injuries."

So why aren't we already using them?

There have been concerns in the past about the safety of carbon nanotubes. Their fibrous nature puts them in the same class as asbestos and they have been shown to penetrate the cell membrane – a delicate layer made of lipid molecules.

In this study, the researchers chemically modified the surface of carbon nanotubes so that they could be turned into a carbon nanotube ink for easy processing. The ink was dropped onto a flat glass surface and heated to a temperature of 350 degrees Celsius to create a thin mat of pure carbon nanotubes.

The neurons were harvested from the hippocampus of laboratory rats and deposited directly on top of the nanotube mats. After an incubation period at body temperature, the cells were tested for conductivity and compatibility with the carbon nanotube surface.

Ballerini and her team are confident that, this time, they have shown carbon nanotubes can be used safely.

"First of all, we have proved that nanotubes do not interfere with the composition of lipids, of cholesterol in particular, which make up the cellular membrane in neurons," said Ballerini.

Just when the researchers thought it couldn't get any better, their study also found that nerve cells growing on a flat bed of carbon nanotubes reached maturity much quicker than normal.

"[Carbon] nanotubes facilitate the full growth of neurons and the formation of new synapses. Having established the fact that this interaction is stable and efficient is an aspect of fundamental importance," said Ballerini.

These are still early days and there are still a couple of important issues that need to be addressed. Understanding exactly how the integration of carbon nanotubes impacts the creation and structure of neuronal pathways will need to be fleshed out.

"If, for example, the mere contact [with carbon nanotubes] provoked a vertiginous rise in the number of synapses, these materials would be essentially unusable," said Maurizio Prato, another member of the research team.

Despite this concern, the researchers are hopeful that carbon nanotubes can be used safely as neuronal prostheses and are confidently pursuing the next stage of research – animal testing.

"We are proving that carbon nanotubes perform excellently in terms of duration, adaptability and mechanical compatibility with the tissue. Now we know that their interaction with the biological material, too, is efficient."

"Based on this evidence, we are already studying the in vivo application, and preliminary results appear to be quite promising also in terms of recovery of the lost neurological functions."

The study has been reported in Nanomedicine: Nanotechnology, Biology and Medicine.



From: FUBHO, 7/10/2017 1:39:26 PM
 
DARPA Wants Brain Implants That Record From 1 Million Neurons

Image: Paradromics

DARPA is known for issuing big challenges. Still, the mission statement for its new Neural Engineering Systems Design program is a doozy: Make neural implants that can record high-fidelity signals from 1 million neurons.

Today’s best brain implants, like the experimental system that a paralyzed man used to control a robotic arm, record from just a few hundred neurons. Recording from 1 million neurons would provide a much richer signal that could be used to better control external devices such as wheelchairs, robots, and computer cursors.

What’s more, the DARPA program calls for the tech to be bidirectional; the implants must be able to not only record signals, but also to transmit computer-generated signals to the neurons. That feature would allow for neural prosthetics that provide blind people with visual information or deaf people with auditory info.

Today the agency announced the six research groups that have been awarded grants under the NESD program. In a press release, DARPA says that even the 1-million-neuron goal is just a starting point. “A million neurons represents a miniscule percentage of the 86 billion neurons in the human brain. Its deeper complexities are going to remain a mystery for some time to come,” says Phillip Alvelda, who launched the program in January. “But if we’re successful in delivering rich sensory signals directly to the brain, NESD will lay a broad foundation for new neurological therapies.”

Image: Paradromics

One of the teams taking on the challenge is the Silicon Valley startup Paradromics. Company CEO Matt Angle says his company is developing a device called the Neural Input-Output Bus (NIOB) that will use bundles of microwire electrodes to interface with neurons. With four bundles containing a total of 200,000 microwires, he says, the NIOB could record from or stimulate 1 million neurons.

“Microwire electrodes have been used since the 1950s, but traditionally they’re un-scaleable,” Angle tells IEEE Spectrum in an interview. With existing systems “you need to wire up one microwire to one amplifier—so if you want to use 100,000 microwires, that’s a lot of soldering work for a grad student,” he says.

Paradromics gets around this problem by polishing the end of a microwire bundle to make it very flat, and then bonding the whole bundle to a chip containing an array of CMOS amplifiers. “We make sure the probability of a single wire coming down and touching the pad on the CMOS is very, very high,” says Angle, “but if you have a few spots that don’t get wires, that doesn’t matter much.”
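
Angle’s point about tolerating a few unconnected wires is easy to quantify with a toy yield model (the per-wire contact probability is an assumption for illustration, not a Paradromics figure):

```python
contact_probability = 0.99      # assumed chance that any one polished wire meets a pad
wires_per_bundle = 50_000       # bundle size per implanted chip, mentioned below

expected_live = contact_probability * wires_per_bundle
print(int(expected_live))       # ~49,500 live channels; a few hundred dead wires barely matter
```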

Image: Paradromics

As always, DARPA emphasizes the practical application of technology. By the end of the four-year NESD program, the teams are expected to have working prototypes that can be used in therapies for sensory restoration.

Paradromics’ goal is a speech prosthetic. The NIOB device’s microwires will record signals from the superior temporal gyrus, a brain area involved in audio processing that decodes speech at the level of sound units called phonemes (other areas of the brain deal with higher-level semantics).

The company drew inspiration from neuroscientist Robert Knight at the University of California, Berkeley, who has shown that when people read aloud or read silently to themselves, the neural signal in the superior temporal gyrus can be used to reconstruct the words. This finding suggests that a user could just imagine speaking a phrase, and a neural implant could record the signal and send the information to a speech synthesizer.

While Paradromics has chosen this speech prosthetic as its DARPA-funded goal, its hardware could be used for any number of neural applications. The differences would come from changing the location of the implant and from the software that decodes the signal.

The challenges ahead of Paradromics are significant. Angle imagines a series of implanted chips, each bonded to 50,000 microwires, that send their data to one central transmitter that sits on the surface of the skull, beneath the skin of the scalp. To deal efficiently with all that data, the implanted system will have to do some processing: “You need to make some decisions inside the body about what you want to send out,” Angle says, “because you can’t have it digitizing and transmitting 50 GB per second.” The central transmitter must then wirelessly send data to a receiver patch worn on the scalp, and must also wirelessly receive power from it.
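
The need for on-implant processing is clear from a back-of-envelope bandwidth estimate. The sampling rate and bit depth below are assumptions typical of neural recording, not Paradromics specifications; the exact raw rate depends on them, but any reasonable choice lands within an order of magnitude of the figure Angle mentions, and far beyond what can be streamed wirelessly.

```python
channels = 200_000          # four bundles of 50,000 microwires each
sample_rate_hz = 30_000     # assumed per-channel sampling rate for spike recording
bits_per_sample = 16        # assumed ADC resolution

raw_gb_per_s = channels * sample_rate_hz * bits_per_sample / 8 / 1e9
print(round(raw_gb_per_s, 1))   # ~12 GB/s raw with these assumptions
```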

The other five teams that won NESD grants are research groups investigating vision, speech, and the sense of touch. The group from Brown University, led by neural engineer Arto Nurmikko, is working on a speech prosthetic using tens of thousands of independent “neurograins,” each about the size of a grain of table salt. Those grains will interface with individual neurons, and send their data to one electronics patch that will either be worn on the scalp or implanted under the skin.

Image: Brown University

In an email, Nurmikko writes that his team is working on such challenges as how to implant the neurograins, how to ensure that they’re hermetically sealed and safe, and how to handle the vast amount of data that they’ll generate. And the biggest challenge of all may be networking 10,000 or 100,000 neurograins together to make one coherent telecommunications system that provides meaningful data.

“Even with a hundred thousand such grains, we would still not reach every neuron—and that’s not the point,” Nurmikko writes. “You want to listen to a sufficiently large number of neurons to understand how, say, the auditory cortex computes ‘the Star Spangled Banner’ for us to have a clear perception of both the music and the words.”
