Technology Stocks: New Technology
From: FUBHO | 8/19/2016 6:50:01 AM

Organisms might be quantum machines



18 July 2016


If there’s any subject that perfectly encapsulates the idea that science is hard to understand, it’s quantum physics. Scientists tell us that the miniature denizens of the quantum realm behave in seemingly impossible ways: they can exist in two places at once, or disappear and reappear somewhere else instantly.

The one saving grace is that these truly bizarre quantum behaviours don’t seem to have much of an impact on the macroscopic world as we know it, where “classical” physics rules the roost.

Or, at least, that’s what scientists thought until a few years ago.

Quantum processes might be at work behind some very familiar processes

Now that reassuring wisdom is starting to fall apart. Quantum processes may occur not quite so far from our ordinary world as we once thought. Quite the opposite: they might be at work behind some very familiar processes, from the photosynthesis that powers plants – and ultimately feeds us all – to the familiar sight of birds on their seasonal migrations. Quantum physics might even play a role in our sense of smell.

In fact, quantum effects could be something that nature has recruited into its battery of tools to make life work better, and to make our bodies into smoother machines. It’s even possible that we can do more with help from the strange quantum world than we could without it.

At one level, photosynthesis looks very simple. Plants, green algae and some bacteria take in sunlight and carbon dioxide, and turn them into energy. What niggles in the back of biologists’ minds, though, is that photosynthetic organisms make the process look just a little bit too easy.

It’s one part of photosynthesis in particular that puzzles scientists. A photon – a particle of light – after a journey of billions of kilometres hurtling through space, collides with an electron in a leaf outside your window. The electron, given a serious kick by this energy boost, starts to bounce around, a little like a pinball. It makes its way through a tiny part of the leaf’s cell, and passes on its extra energy to a molecule that can act as an energy currency to fuel the plant.

Photosynthetic organisms make the process look just a little bit too easy

The trouble is, this tiny pinball machine works suspiciously well. Classical physics suggests the excited electron should take a certain amount of time to career around inside the photosynthetic machinery in the cell before emerging on the other side. In reality, the electron makes the journey far more quickly.

What’s more, the excited electron barely loses any energy at all in the process. Classical physics would predict some wastage of energy in the noisy business of being batted around the molecular pinball machine. The process is too fast, too smooth and too efficient. It just seems too good to be true.

Then, in 2007, photosynthesis researchers began to see the light. Scientists spotted signs of quantum effects in the molecular centres for photosynthesis. Tell-tale signs in the way the electrons were behaving opened the door to the idea that quantum effects could even be playing an important biological role.

This could be part of the answer to how the excited electrons pass through the photosynthetic pinball machine so quickly and efficiently. One quantum effect is the ability to exist in many places at the same time – a property known as quantum superposition. Using this property, the electron could potentially explore many routes around the biological pinball machine at once. In this way it could almost instantly select the shortest, most efficient route, involving the least amount of bouncing about.

Quantum physics had the potential to explain why photosynthesis was suspiciously efficient – a shocking revelation for biologists.

“I think this was when people started to think that something really exciting was going on,” says Susana Huelga, a quantum physicist at Ulm University in Germany.

Quantum physics had the potential to explain why photosynthesis was suspiciously efficient – a shocking revelation for biologists

Quantum phenomena such as superposition had previously been observed mostly under highly controlled conditions. Typical experiments to observe quantum phenomena involve cooling down materials to bitingly cold temperatures in order to dampen down other atomic activity that might drown out quantum behaviour. Even at those temperatures, materials must be isolated in a vacuum – and the quantum behaviours are so subtle that scientists need exquisitely sensitive instruments to see what’s going on.

The wet, warm, bustling environment of living cells is the last place you might expect to see quantum events. “[But] even here, quantum features are still alive,” Huelga says.

Of course, just because these quantum features make an unexpected appearance in living cells, it doesn’t necessarily mean that they’re playing a useful role. There are theories as to how quantum superposition may be speeding up the process of photosynthesis, but a hard link between this behaviour and a biological function is still missing, Huelga says.

“The next step will be having some quantitative results saying that the efficiency of this biological machine is due to quantum phenomena.”

Quantum effects in biology aren’t just a quirk of plants and other organisms that do the peculiar job of turning sunlight into fuel. They may also provide an answer to a scientific puzzle that’s been around since the 19th Century: how migratory birds know which way to fly.

Quantum effects in biology might explain how migratory birds know which way to fly

In a journey thousands of kilometres long, a migratory bird such as the European robin will often fly to southern Europe or North Africa to escape particularly cold winters. This journey over an unfamiliar landscape would be dangerous, if not impossible, without a compass. Start the journey in the wrong direction and a robin setting off from Poland might end up in Siberia rather than Morocco.

A biological compass isn’t an easy thing to picture. If there was some form of tiny magnetic iron needle-like structure spinning deep inside a robin’s brain or eyes, biologists would almost certainly know about it by now. But no such luck: a biological structure that could do the job has never been found.

Another theory, first proposed in the late 1970s, suggested an alternative way birds might know which way to fly: perhaps they carry a chemical compass that relies on quantum phenomena to tell which way is north.

Peter Hore, a chemist at the University of Oxford in the UK, says that such a chemical compass would work with the help of molecules with excitable lone electrons, known as radicals, and a quantum property known as spin.

Electrons in molecules usually come in pairs, spinning in opposite directions and effectively cancelling out each other’s spin. A “lone” electron spinning on its own, though, isn’t cancelled out. This means it is free to interact with its environment – including magnetic fields.

A “lone” electron spinning on its own is free to interact with its environment – including magnetic fields

As it turns out, Hore says, robins can become temporarily disorientated when exposed to radio waves – a type of electromagnetic wave – of a particular range of frequencies. If a radio wave’s frequency matches the rate at which an electron spins, it can cause the electron to resonate. This is the same kind of resonance you might experience when you sing in the shower – certain notes sound a lot louder and fuller than others. Hitting the right radio wave frequency will make the electron vibrate more vigorously in the same way.
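A rough sense of the frequencies involved can be worked out from the physics alone (a back-of-the-envelope sketch; the field strength is an assumed, typical value, not a figure from Hore’s work). A lone electron spin resonates at f = g·μB·B/h, and in the Earth’s field of roughly 50 microtesla that lands in the low-megahertz radio band:

```python
# Back-of-the-envelope: resonance frequency of a free electron spin
# in a magnetic field, f = g * mu_B * B / h.
# The Earth-field value (~50 microtesla) is an assumed, typical figure.

G_FACTOR = 2.0023          # electron g-factor (dimensionless)
MU_B = 9.274e-24           # Bohr magneton, J/T
H = 6.626e-34              # Planck constant, J*s

def spin_resonance_hz(field_tesla: float) -> float:
    """Frequency of a radio wave that resonates with a lone electron spin."""
    return G_FACTOR * MU_B * field_tesla / H

earth_field = 50e-6        # ~50 microtesla, a typical mid-latitude value
f = spin_resonance_hz(earth_field)
print(f"{f / 1e6:.2f} MHz")   # lands in the low-megahertz radio band
```

The result, around 1.4 MHz, is squarely in the radio-frequency range, which is consistent with the observation that megahertz radio waves can disorientate the birds.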

But what does this have to do with the idea that birds use a chemical compass? The theory is that ordinarily, radicals at the back of the bird’s eye respond to the Earth’s magnetic field. The magnetic field will cause the electron to leave its spot in the chemical compass and start a chain of reactions to produce a particular chemical. As long as the bird keeps pointing in the same direction, more of the chemical will build up.

So the amount of this chemical present is a source of information, generating signals in the bird’s nerve cells. Alongside many other environmental cues, this information helps tell the bird whether it is pointing towards Siberia or Morocco.

The radio wave observation is an important one because we would expect anything that interferes with electron spin to be able, at least in principle, to disrupt the chemical compass. It can be as useful to study why something sometimes doesn’t work as it is to study why it generally does work.

Even so, the quantum compass remains an idea. It hasn’t yet been found in nature. Hore has been focusing on finding out how the quantum compass can work in principle, using molecules that theoretically ought to be able to do the job.

The quantum compass remains an idea – it hasn’t yet been found in nature

“We’ve done experiments on model compounds to establish the principle that one can make a chemical compass,” Hore says. These have helped to pin down some molecules that do seem to be fit for the purpose of detecting magnetic fields, he says. “What we don’t know is whether they behave in exactly the same way inside a cell in the bird’s body.”

The magnetic compass is just part of a complex and poorly understood system of navigation in birds, Hore says. The quantum theory for how such a compass works may be the best out there so far, but there’s still a lot of ground to cover to link up the behavioural patterns of birds with the theoretical chemistry.

There is one field that seems tantalisingly close to demonstrating the reality of quantum biology, though: the science of smell.

Exactly how our noses are capable of distinguishing and identifying a myriad of differently shaped molecules is a big challenge for conventional theories of olfaction. When a smelly molecule wafts into one of our nostrils, no one is yet entirely sure what happens next. Somehow the molecule interacts with a sensor – a molecular receptor – embedded in the delicate inner skin of our nose.

Exactly how are our noses capable of distinguishing and identifying a myriad of differently shaped molecules?

A well-trained human nose can distinguish between thousands of different smells. But how this information is carried in the shape of the smelly molecule is a puzzle. Many molecules that are almost identical in shape, but for swapping around an atom or two, have very different smells. Vanillin smells of vanilla, but eugenol, which is very similar in shape, smells of cloves. Some molecules that are a mirror image of each other – just like your right and left hand – also have different smells. But equally, some very differently shaped molecules can smell almost exactly the same.

Luca Turin, a chemist at the BSRC Alexander Fleming institute in Greece, has been working to crack the way that the properties of a molecule encode its scent. “There is something very, very peculiar at the core of olfaction, which is that our ability to somehow analyse molecules and atoms is inconsistent with what we think we know about molecular recognition,” Turin says.

He argues that the molecule’s shape alone isn’t enough to determine its smell. He says that it’s the quantum properties of the chemical bonds in the molecule that provide the crucial information.

According to Turin’s quantum theory of olfaction, when a smelly molecule enters the nose and binds to a receptor, it allows a process called quantum tunnelling to happen in the receptor.

When a smelly molecule enters the nose and binds to a receptor, it allows quantum tunnelling to happen

In quantum tunnelling, an electron can pass through a material to jump from point A to point B in a way that seems to bypass the intervening space. As with the bird’s quantum compass, the crucial factor is resonance. A particular bond in the smelly molecule, Turin says, can resonate with the right energy to help an electron on one side of the receptor molecule leap to the other side. The electron can only make this leap through the so-called quantum tunnel if the bond is vibrating with just the right energy.
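The sharpness of that “just the right energy” condition comes from tunnelling’s exponential sensitivity, which the standard textbook estimate makes vivid (an illustrative calculation only – the barrier height and widths below are arbitrary values, not parameters from Turin’s theory):

```python
import math

# Textbook estimate of an electron tunnelling through a rectangular
# energy barrier: T ~ exp(-2 * kappa * d), where
# kappa = sqrt(2 * m * (V - E)) / hbar.
# Barrier height (0.5 eV) and widths are illustrative values only.

M_E = 9.109e-31      # electron mass, kg
HBAR = 1.0546e-34    # reduced Planck constant, J*s
EV = 1.602e-19       # one electronvolt in joules

def tunnelling_probability(barrier_ev: float, width_m: float) -> float:
    """Approximate transmission through a barrier higher than the electron's energy."""
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# A small change in the effective barrier changes the probability by
# orders of magnitude - which is why a resonant vibrational "kick"
# can act as a sharp on/off switch.
for width_nm in (0.5, 1.0, 1.5):
    p = tunnelling_probability(0.5, width_nm * 1e-9)
    print(f"{width_nm} nm barrier: T ~ {p:.1e}")
```

Each extra half-nanometre of barrier suppresses the transmission by more than an order of magnitude, which is the kind of sensitivity a molecular-scale switch needs.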

When the electron leaps to the other site on the receptor, it could trigger a chain reaction that ends up sending signals to the brain that the receptor has come into contact with that particular molecule. This, Turin says, is an essential part of what gives a molecule its smell, and the process is fundamentally quantum.

“Olfaction requires a mechanism that somehow involves the actual chemical composition of the molecule,” he says. “It was that factor that found a very natural explanation in quantum tunnelling.”

The strongest evidence for the theory is Turin’s discovery that two molecules with extremely different shapes can smell the same if they contain bonds with similar energies.

Turin predicted that boranes – relatively rare compounds that are hard to come by – smelled very like sulphur, or rotten eggs. He’d never smelt a borane before, so the prediction was quite a gamble.

He was right. Turin says that, for him, that was the clincher. “Borane chemistry is vastly different – in fact there’s zero relation – to sulphur chemistry. So the only thing those two have in common is a vibrational frequency. They are the only two things out there in nature that smell of sulphur.”

While that prediction was a great success for the theory, it’s not ultimate proof. Ideally Turin wants to catch these receptors in the act of exploiting quantum phenomena. He says they are getting “pretty close” to nailing those experiments. “I don’t want to jinx it, but we’re working on it,” he says. “We think we have a way to do it, so we’re definitely going to have a go in the next few months. I think that nothing short of that will really move things forward.”

Turin wants to catch these receptors in the act of exploiting quantum phenomena

Whether or not nature has evolved to make use of quantum phenomena to help organisms make fuel from light, tell north from south, or distinguish vanilla from clove, the strange properties of the atomic world can still tell us a lot about the finer workings of living cells.

“There is a second way of seeing how quantum mechanics interacts with biology, and that is by sensing and probing,” Huelga says. “Quantum probes would be able to shed light on many interesting things in the dynamics of biological systems.”


And whether or not nature got there first, it’s no excuse for us not to mix biology with quantum phenomena to develop new technologies, she says. Making use of quantum effects in biologically inspired photovoltaic cells, for instance, could give solar panels a huge boost in efficiency. “At this very moment there is quite a lot of activity in organic photovoltaics, to see whether with natural or artificial structures one can have an enhanced efficiency that exploits quantum effects.”

So even if alternative, as yet entirely unknown mechanisms emerge for these stubborn biological puzzles, biologists and quantum physicists certainly won’t have seen the last of each other. “This will definitely be a story with a happy end,” she says.



From: aknahow | 9/17/2016 5:20:33 PM
Electronically Dimmable Windows

For decades people have used sunshades or blinds to keep out the heat of the sun, or to protect themselves from the sun’s rays. Typical sunshades have always been mechanical, and their reliability is not the best. Cloth has also been used to block sunlight, but cloth needs maintenance and periodic replacement. Both the mechanical solutions and cloth-based sunshades are also vulnerable to abuse.

Now technology has taken a quantum leap forward: we can build into glass or polycarbonate windows a thin film which can be controlled electronically, so that the end result works like a classic sunshade – but we can actually do much more with it.

If you want to have a quick overview and see the possibilities of this technology, watch this YouTube video.

If we use a train as an example, the heat penetrating the carriages can be massive when the weather is hot and sunny. Passenger comfort suffers, the climate-control equipment can be overloaded and, most of all, if the sunshades are down the passengers cannot enjoy the scenery outside. What if the solution were as simple as the train guard pressing a button to dim all the windows in the train, so that the heat of the sun no longer penetrates but the passengers can still see out and enjoy the scenery? Another possibility is that every window is controlled separately by its passenger.
The above is no science fiction! With this technology our partner Vision Systems has been able to produce windows that do exactly that. The picture here shows how the so-called Nuance technology works.

With the Nuance technology, 94% of solar energy is blocked and UV filtration is up to 99%. The Nuance film is laminated between panels of glass, polycarbonate or composite glass. The film contains microscopic particles, and regulating the voltage to the film adjusts the particles’ orientation, instantly and precisely controlling the passage of light, glare and heat. The surface can be flat or even curved. The switching speed (light to dark or vice versa) is 1–3 seconds, independent of window size.

Power consumption of one window is extremely low (1.1 W/m²), which means that even a small solar cell could be installed so that the window needs no outside power at all.
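A quick bit of arithmetic shows how modest that figure is in practice (the window dimensions and count below are assumed for illustration, not specifications from Vision Systems):

```python
# Rough arithmetic on the quoted figure of 1.1 W per square metre.
# Window size and count are assumed illustrative values for one carriage.

POWER_PER_M2 = 1.1   # W/m^2, quoted for the Nuance film

def carriage_power(windows: int, width_m: float, height_m: float) -> float:
    """Total power draw for all dimmable windows in one carriage."""
    return windows * width_m * height_m * POWER_PER_M2

# e.g. 20 windows of 1.4 m x 0.9 m per carriage
print(f"{carriage_power(20, 1.4, 0.9):.1f} W")
```

Under those assumptions a whole carriage of windows draws under 30 W – less than a single incandescent reading lamp, and well within reach of a small solar panel.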

Another big advantage is that the cabin crew can centrally control all windows if needed, while individual passengers can still control their own. There are no mechanical parts involved, which means that maintenance of these windows is basically zero.

Currently dimmable windows are popular in the business jet industry, and more and more planes are fitted with them. Commercial airliners are also now being fitted with this technology, removing the need for mechanical shades that have to be open for taxiing, take-off and landing. The yacht industry is also starting to show great interest in dimmable windows, and many boats and yachts have already been fitted with EDW.


In the train industry we see huge benefits both for passengers and operators - more comfort and less maintenance. Contact us, and we will tell you much more about the benefits and possibilities of the Nuance solutions.
Click here to read more and/or contact us.



Attachment: pp_edw.pdf


From: FUBHO | 10/5/2016 5:07:02 PM
How molecules became machines


The Nobel Prize in Chemistry 2016 is awarded to
Jean-Pierre Sauvage,
Sir J. Fraser Stoddart
and
Bernard L. Feringa

for their development of molecular machines that are a thousand times thinner than a hair strand. This is the story of how they succeeded in linking molecules together to design everything from a tiny lift to motors and minuscule muscles.
nobelprize.org



From: aknahow | 10/22/2016 10:29:22 AM
vision-systems.fr

Interesting video



From: FUBHO | 1/16/2017 12:58:55 PM


AIRBUS 'flying car' ready by end of year...


Lasers turn Earth atmosphere into massive surveillance system...



From: FUBHO | 1/17/2017 9:14:17 AM
Some vertebrate species have the ability to reproduce asexually even though they normally reproduce sexually. These include certain sharks, turkeys, Komodo dragons, snakes and rays.

newscientist.com



From: FUBHO | 1/20/2017 3:33:50 PM

Since its discovery in 2004, scientists have believed that graphene may have the innate ability to superconduct. Now Cambridge researchers have found a way to activate that previously dormant potential.

It has long been postulated that graphene should undergo a superconducting transition, but can’t. The idea of this experiment was, if we couple graphene to a superconductor, can we switch that intrinsic superconductivity on?

Jason Robinson


Researchers have found a way to trigger the innate, but previously hidden, ability of graphene to act as a superconductor – meaning that it can be made to carry an electrical current with zero resistance.

The finding, reported in Nature Communications, further enhances the potential of graphene, which is already widely seen as a material that could revolutionise industries such as healthcare and electronics. Graphene is a two-dimensional sheet of carbon atoms and combines several remarkable properties; for example, it is very strong, but also light and flexible, and highly conductive.

Since its discovery in 2004, scientists have speculated that graphene may also have the capacity to be a superconductor. Until now, superconductivity in graphene has only been achieved by doping it with, or by placing it on, a superconducting material - a process which can compromise some of its other properties.

But in the new study, researchers at the University of Cambridge managed to activate the dormant potential for graphene to superconduct in its own right. This was achieved by coupling it with a material called praseodymium cerium copper oxide (PCCO).

Superconductors are already used in numerous applications. Because they generate large magnetic fields they are an essential component in MRI scanners and levitating trains. They could also be used to make energy-efficient power lines and devices capable of storing energy for millions of years.

Superconducting graphene opens up yet more possibilities. The researchers suggest, for example, that graphene could now be used to create new types of superconducting quantum devices for high-speed computing. Intriguingly, it might also be used to prove the existence of a mysterious form of superconductivity known as “p-wave” superconductivity, which academics have been struggling to verify for more than 20 years.

The research was led by Dr Angelo Di Bernardo and Dr Jason Robinson, Fellows at St John’s College, University of Cambridge, alongside collaborators Professor Andrea Ferrari, from the Cambridge Graphene Centre; Professor Oded Millo, from the Hebrew University of Jerusalem, and Professor Jacob Linder, at the Norwegian University of Science and Technology in Trondheim.

“It has long been postulated that, under the right conditions, graphene should undergo a superconducting transition, but can’t,” Robinson said. “The idea of this experiment was, if we couple graphene to a superconductor, can we switch that intrinsic superconductivity on? The question then becomes how do you know that the superconductivity you are seeing is coming from within the graphene itself, and not the underlying superconductor?”

Similar approaches have been taken in previous studies using metallic-based superconductors, but with limited success. “Placing graphene on a metal can dramatically alter the properties so it is technically no longer behaving as we would expect,” Di Bernardo said. “What you see is not graphene’s intrinsic superconductivity, but simply that of the underlying superconductor being passed on.”

PCCO is an oxide from a wider class of superconducting materials called “cuprates”. It also has well-understood electronic properties, and using a technique called scanning tunnelling microscopy, the researchers were able to distinguish the superconductivity in PCCO from the superconductivity observed in graphene.

Superconductivity is characterised by the way the electrons interact: within a superconductor electrons form pairs, and the spin alignment between the electrons of a pair may be different depending on the type - or “symmetry” - of superconductivity involved. In PCCO, for example, the pairs’ spin state is misaligned (antiparallel), in what is known as a “d-wave state”.

By contrast, when graphene was coupled to superconducting PCCO in the Cambridge-led experiment, the results suggested that the electron pairs within graphene were in a p-wave state. “What we saw in the graphene was, in other words, a very different type of superconductivity than in PCCO,” Robinson said. “This was a really important step because it meant that we knew the superconductivity was not coming from outside it and that the PCCO was therefore only required to unleash the intrinsic superconductivity of graphene.”

It remains unclear what type of superconductivity the team activated, but their results strongly indicate that it is the elusive “p-wave” form. If so, the study could transform the ongoing debate about whether this mysterious type of superconductivity exists, and – if so – what exactly it is.

In 1994, researchers in Japan fabricated a triplet superconductor that may have a p-wave symmetry using a material called strontium ruthenate (SRO). The p-wave symmetry of SRO has never been fully verified, partly hindered by the fact that SRO is a bulky crystal, which makes it challenging to fabricate into the type of devices necessary to test theoretical predictions.

“If p-wave superconductivity is indeed being created in graphene, graphene could be used as a scaffold for the creation and exploration of a whole new spectrum of superconducting devices for fundamental and applied research areas,” Robinson said. “Such experiments would necessarily lead to new science through a better understanding of p-wave superconductivity, and how it behaves in different devices and settings.”

The study also has further implications. For example, it suggests that graphene could be used to make a transistor-like device in a superconducting circuit, and that its superconductivity could be incorporated into molecular electronics. “In principle, given the variety of chemical molecules that can bind to graphene’s surface, this research can result in the development of molecular electronics devices with novel functionalities based on superconducting graphene,” Di Bernardo added.

The study, “p-wave triggered superconductivity in single layer graphene on an electron-doped oxide superconductor”, is published in Nature Communications (DOI: 10.1038/ncomms14024).



From: Glenn Petersen | 3/26/2017 9:58:48 AM
What Blockchain Means for the Sharing Economy


Primavera De Filippi
Harvard Business Review
March 15, 2017

Executive Summary

Blockchain technology is facilitating the emergence of a new kind of radically decentralized organization. These organizations — which have no director or CEO, or any sort of hierarchical structure — are administered, collectively, by individuals interacting on a blockchain. As such, it is important not to confuse them with the traditional model of “crowdsourcing,” where people contribute to a platform but do not benefit proportionately from the success of that platform. Blockchain technologies can support a much more cooperative form of crowdsourcing — sometimes referred to as “platform cooperativism”— where users qualify both as contributors and shareholders of the platforms to which they contribute. The value produced within these platforms can be more equally redistributed among those who have contributed to the value creation. With this new opportunity for increased “cooperativism,” we could be moving toward a true sharing economy.




Look at the modus operandi of today’s internet giants — such as Google, Facebook, Twitter, Uber, or Airbnb — and you’ll notice they have one thing in common: They rely on the contributions of users as a means to generate value within their own platforms. Over the past 20 years the economy has progressively moved away from the traditional model of centralized organizations, where large operators, often with a dominant position, were responsible for providing a service to a group of passive consumers. Today we are moving toward a new model of increasingly decentralized organizations, where large operators are responsible for aggregating the resources of multiple people to provide a service to a much more active group of consumers. This shift marks the advent of a new generation of “dematerialized” organizations that do not require physical offices, assets, or even employees.

The problem with this model is that, in most cases, the value produced by the crowd is not equally redistributed among all those who have contributed to the value production; all of the profits are captured by the large intermediaries who operate the platforms.

Recently, a new technology has emerged that could change this imbalance. Blockchain facilitates the exchange of value in a secure and decentralized manner, without the need for an intermediary.
________________________

How Blockchain Works

Here are five basic principles underlying the technology.

1. Distributed Database

Each party on a blockchain has access to the entire database and its complete history. No single party controls the data or the information. Every party can verify the records of its transaction partners directly, without an intermediary.

2. Peer-to-Peer Transmission

Communication occurs directly between peers instead of through a central node. Each node stores and forwards information to all other nodes.

3. Transparency with Pseudonymity

Every transaction and its associated value are visible to anyone with access to the system. Each node, or user, on a blockchain has a unique 30-plus-character alphanumeric address that identifies it. Users can choose to remain anonymous or provide proof of their identity to others. Transactions occur between blockchain addresses.

4. Irreversibility of Records

Once a transaction is entered in the database and the accounts are updated, the records cannot be altered, because they’re linked to every transaction record that came before them (hence the term “chain”). Various computational algorithms and approaches are deployed to ensure that the recording on the database is permanent, chronologically ordered, and available to all others on the network.

5. Computational Logic

The digital nature of the ledger means that blockchain transactions can be tied to computational logic and in essence programmed. So users can set up algorithms and rules that automatically trigger transactions between nodes.
________________
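The first and fourth of these principles — a shared ledger whose records become effectively unalterable because each record is chained to the hash of the one before it — can be sketched in a few lines of Python (a toy illustration only, with none of a real blockchain’s peer-to-peer consensus, signatures, or proof-of-work):

```python
import hashlib
import json

# Toy hash-chained ledger illustrating principles 1 and 4: every block
# commits to the hash of the previous block, so altering any historical
# record invalidates everything that follows it.

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, data: str) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})

def chain_is_valid(chain: list) -> bool:
    """Re-derive every link; any edit to an earlier block breaks the chain."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger = []
append_block(ledger, "alice pays bob 5")
append_block(ledger, "bob pays carol 2")
append_block(ledger, "carol pays dan 1")
assert chain_is_valid(ledger)

ledger[1]["data"] = "bob pays carol 200"   # attempt to rewrite history
assert not chain_is_valid(ledger)          # tampering is immediately visible
```

Because every participant holds the full chain and can re-derive the hashes independently, no single party needs to be trusted to keep the records honest.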

But the most revolutionary aspect of blockchain technology is that it can run software in a secure and decentralized manner. With a blockchain, software applications no longer need to be deployed on a centralized server: They can be run on a peer-to-peer network that is not controlled by any single party. These blockchain-based applications can be used to coordinate the activities of a large number of individuals, who can organize themselves without the help of a third party. Blockchain technology is ultimately a means for individuals to coordinate common activities, to interact directly with one another, and to govern themselves in a more secure and decentralized manner.

There are already a fair number of applications that have been deployed on a blockchain. Akasha, Steem.io, or Synereo, for instance, are distributed social networks that operate like Facebook, but without a central platform. Instead of relying on a centralized organization to manage the network and stipulate which content should be displayed to whom (often through proprietary algorithms that are not disclosed to the public), these platforms are run in a decentralized manner, aggregating the work of disparate groups of peers, which coordinate themselves, only and exclusively, through a set of code-based rules enshrined in a blockchain. People must pay microfees to post messages onto the network, which will be paid to those who contribute to maintaining and operating the network. Contributors may earn back the fee (plus additional compensation) as their messages spread across the network and are positively evaluated by their peers.

Similarly, OpenBazaar is a decentralized marketplace, much like eBay or Amazon, but one that operates independently of any intermediary operator. The platform relies on blockchain technology to let buyers and sellers interact directly with one another, without passing through any centralized middleman. Anyone is free to list a product on the platform, and the listing becomes visible to all users connected to the network. Once a buyer agrees to the price for a product, an escrow account is created on the bitcoin blockchain that requires two out of three parties (the buyer, the seller, and a potential third-party arbitrator) to agree before the funds are released (a so-called multisignature account). Once the buyer has sent the payment to the account, the seller ships the product; after receiving the product, the buyer releases the funds from the escrow account. Only if there is a dispute between the two does the system require the intervention of a third party (e.g., a randomly selected arbitrator) to decide whether to release the payment to the seller or return the money to the buyer.
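The 2-of-3 escrow flow can be sketched as a tiny state machine. The class and method names below are hypothetical, for illustration only, and are not OpenBazaar's actual API; a real multisignature account enforces the threshold cryptographically on the blockchain rather than in application code.

```python
# Hypothetical 2-of-3 escrow sketch: funds are released only when
# at least two of the three parties (buyer, seller, arbitrator)
# approve. This mimics the multisignature logic described above;
# it is not OpenBazaar's real implementation.
class Escrow:
    PARTIES = {"buyer", "seller", "arbitrator"}

    def __init__(self, amount: float):
        self.amount = amount
        self.approvals: set = set()

    def approve(self, party: str) -> None:
        if party not in self.PARTIES:
            raise ValueError(f"unknown party: {party}")
        self.approvals.add(party)

    def released(self) -> bool:
        return len(self.approvals) >= 2  # 2-of-3 threshold

escrow = Escrow(amount=40.0)
escrow.approve("buyer")      # buyer confirms receipt of the product
print(escrow.released())     # False: only one signature so far
escrow.approve("seller")
print(escrow.released())     # True: 2-of-3 threshold met, funds move
```

In the dispute case, the arbitrator supplies the second approval instead of one of the trading parties, which is exactly why the threshold is two of three rather than both of two.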

There are also decentralized carpooling platforms, such as Lazooz or ArcadeCity, which operate much like Uber, but without a centralized operator. These platforms are governed only by the code deployed on a blockchain-based infrastructure, which is designed to govern peer-to-peer interactions between drivers and users. These platforms rely on a blockchain to reward drivers contributing to the platform with specially designed tokens that represent a share in the platform. The more a driver contributes to the network, the more they will be able to benefit from the success of that platform, and the greater their influence in the governance of that organization.

Blockchain technology thus facilitates the emergence of new forms of organizations, which are not only dematerialized but also decentralized. These organizations — which have no director or CEO, or any sort of hierarchical structure — are administered, collectively, by all individuals interacting on a blockchain. As such, it is important not to confuse them with the traditional model of “crowd-sourcing,” where people contribute to a platform but do not benefit from the success of that platform. Blockchain technologies can support a much more cooperative form of crowd-sourcing — sometimes referred to as “platform cooperativism”— where users qualify both as contributors and shareholders of the platforms to which they contribute. And since there is no intermediary operator, the value produced within these platforms can be more equally redistributed among those who have contributed to the value creation.

With this new opportunity for increased “cooperativism,” we’re moving toward a true sharing or collaborative economy — one that is not controlled by a few large intermediary operators, but that is governed by and for the people.

There’s nothing new about that, you might say — haven’t we heard these promises before? Wasn’t the mainstream deployment of the internet supposed to level the playing field for individuals and small businesses competing against corporate giants? And yet, as time went by, most of the promises and dreams of the early internet days faded away, as big giants formed and took control over our digital landscape.

Today we have a new opportunity to fulfill these promises. Blockchain technology makes it possible to replace the model of top-down hierarchical organizations with a system of distributed, bottom-up cooperation. This shift could change the way wealth is distributed in the first place, enabling people to cooperate toward the creation of a common good, while ensuring that everyone will be duly compensated for their efforts and contributions.

And yet nothing should be taken for granted. Just as the internet has evolved from a highly decentralized infrastructure into an increasingly centralized system controlled by a few large online operators, there is always the risk that new giants will eventually form in the blockchain space. We lost our first window of opportunity with the internet. If we, as a society, really value the concept of a true sharing economy, where the individuals doing the work are fairly rewarded for their efforts, it behooves us all to engage and experiment with this emergent technology, to explore the new opportunities it provides, and to deploy large, successful, community-driven applications that enable us to resist the formation of blockchain giants.

Primavera De Filippi is a permanent researcher at the National Center of Scientific Research (CNRS) in Paris. She is faculty associate at the Berkman Center for Internet & Society at Harvard Law School, where she is investigating the concept of “governance-by-design” as it relates to online distributed architectures.

hbr.org



From: FUBHO 3/28/2017 8:16:12 AM
   of 330
 
EUV as Pizza
Perfecting the Recipe

by Bryon Moyer March 27, 2017

What’s the most important thing for the perfect pizza? This isn’t a fair question, of course, because there’s no definition of “perfect” when it comes to pizza. OK, maybe there is, but each person has their own. But stay with me for a sec here: for a certain style of pizza, you need an oven that runs well over 500 °F – hotter than most home ovens can manage. And the right old-school wood-fired ovens can do that.

So if you’re in search of that perfect pizza, the first thing you might have to do is to splurge to get an oven that will finally give you the heat you need. You might play with the amount of wood you use, the best pizza positioning to ensure even heating, and the best oven placement for not burning the house down before you’re satisfied that you’ve nailed it. It could take a lot of work – probably more than you expected.

And then, at last, you declare the oven problem solved. Do you now have the perfect pizza? Well… not yet. Now you need to make sure the dough is perfect, and there’s the sauce, and then there’s how you assemble it – how thin you make the crust, how much sauce, which and how many toppings. You’ve still got your work cut out for you.

That feels like where we are with EUV. We now have our oven – the EUV source. Still needs some tuning, but, as of last year, it feels like the worst is behind us. IBM, GlobalFoundries, and Samsung presented at IEDM last December, introducing a 7-nm FinFET process platform that, for the first time, included EUV. So the technology is finally starting to migrate towards production, and not just at Intel. But there’s the matter of this laundry list of things that need to be tidied up before we can launch. We got a rundown of the issues at the recent SPIE Advanced Litho conference, so let’s review them.

The Source

While you may think that we’ve been over this hump for a year or so, the source continues to grab attention. ASML is still the leading voice here, although Gigaphoton got a mention as a credible second source. Intel has 14 scanners; ASML says there are 18 units on back order – no small thing at a price that they say is on the order of hundreds of millions of dollars per unit. They’re looking at a production ramp in 2018.

The NXE3300B is the incumbent model at the moment, but the next version – worthy of a number change – will be the NXE3400. It’s expected to support 5-nm processes and DRAM below 15 nm at a throughput of 125 wafers per hour. The numerical aperture (NA) will remain at the current 0.33, giving 13-nm resolution. Critical dimension uniformity (CDU) will be 0.3 nm; the depth of field will be 100 nm; and they’re expecting 20% exposure latitude.
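As a sanity check, the quoted 13-nm figure is consistent with the Rayleigh resolution criterion, R = k1 * wavelength / NA, using the standard EUV wavelength of 13.5 nm and a process factor k1 of roughly 0.32. The k1 value is my assumption, chosen to reproduce the quoted number; it is not a figure from ASML.

```python
# Rayleigh criterion: R = k1 * wavelength / NA.
# wavelength = 13.5 nm is the standard EUV wavelength; k1 = 0.32 is
# an assumed process factor chosen to match the quoted 13-nm figure.
def resolution_nm(wavelength_nm: float, na: float, k1: float = 0.32) -> float:
    return k1 * wavelength_nm / na

print(round(resolution_nm(13.5, na=0.33), 1))  # 13.1 nm at today's NA of 0.33
print(round(resolution_nm(13.5, na=0.50), 1))  # 8.6 nm at a future NA of 0.5
```

The same formula shows why the high-NA work discussed later matters: resolution scales inversely with NA, so pushing NA from 0.33 toward 0.5 buys a proportionally smaller printable feature.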

Power is now over 200 W – 205 to be precise. But the target for high-volume production is now 250 W, so there’s still work ahead.

A high-level parameter to watch is availability, which has risen to over 80% – but it needs to be over 90% for economically viable high-volume production. A number of items can still take the machine down, and they’re undergoing extreme tune-ups.

Tune-Up Issues

Droplet generator: I don’t recall this being on the hit parade in the past, but apparently the lifetime of the droplet generator hasn’t been what was hoped; it’s currently running at around 80% of expectations. You may recall that this whole system works by carefully timing drops of molten tin and zapping each one – not once, but twice – with a laser as the droplet falls. So this is the critical element that feeds the beast.

While more is needed, they’ve improved that lifetime by 3.5 times compared to last year (Samsung claims a 5X gain), and improvements in the works are expected to triple it again. Beyond that, Samsung is hoping for faster tin refills to maintain uptime.

The collector: You may recall this from past years – it’s the metal shroud that takes the EUV from the zapped droplet, which radiates in all directions, and focuses it into the beam that will make its way to the wafer. It, too, is degrading too quickly, so ASML has a newer version coming that should address this maintenance issue.

Pellicles: We talked about these in more detail last year; they’re the mask “cover,” if you will, that keeps fall-on defects out of the focus region so that they won’t print. Fundamentally, they have a working solution now, although, again, it can be improved. First, there’s no change to the fact that they’re still needed. Intel said that they’re seeing fall-on defects at higher levels than ASML is claiming. Defects on the pellicles themselves remain, although the numbers have been reduced. This really needs to get to 0 to be acceptable.

The pellicle material itself is OK, but Intel could do with better transmissivity and the ability to handle higher power when that becomes available.

Mask inspection and defectivity: The quality of blank masks has improved to the point where they can map the defects and then shift the pattern slightly to keep those defects out of critical points. There is still, however, no actinic (i.e., illuminated with the same light frequency as is used for exposure) inspection available for patterned masks.

Edge-placement error: I must not have been paying attention, since this was a new term to me this year. And yet it’s a hot issue (meriting its own TLA: EPE) – not just for EUV, but also for 193i and, in particular, for multi-patterning. It’s described by KLA-Tencor as a convolution of overlay and CDU – anything that can make edges on multiple layers fail to line up. That includes etch steps as well.

Applied Materials has focused in particular on improvements to etch, but the ultimate solution requires yet more development so that self-aligning techniques with new materials and highly selective etching can use hard masks, rather than litho, to define edges, granting litho a bit of slop.

One approach being discussed to eliminate machine-to-machine variation is to dedicate machines to a particular lot. If lot A uses scanner X for a critical layer, then all subsequent exposures should use the same scanner – at least for other steps involving edge placement. Obviously this reduces manufacturing flexibility, so it’s likely to be used reluctantly.

Line-edge roughness (LER): This, along with the related – but different – line-width roughness, is a perennial issue. And I learned more about the diabolical triangle connecting EUV dose, resolution, and LER. Its origins lie in – surprise! – the source power we’ve been agonizing over for the past several years. It turns out that, even with the improvements in EUV power, ordinary deep-UV lithography delivers 14 times more photons to the resist on the wafer than EUV does.

The thing about photons is that they arrive and position themselves somewhat randomly. If you have enough of them, they average out and, ultimately, fill the expected areas of the resist with smooth edges. But with EUV, we can’t wait long enough for this averaging to be effective – we’d never make any money. So you end up with these ratty edges that scatter the poor electrons as they try to make their way through.
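Since photon shot noise scales as one over the square root of the photon count, the 14X photon deficit quoted above translates into roughly sqrt(14), or about 3.7 times, more relative dose noise at a matched dose. The per-pixel photon counts below are illustrative numbers I've made up for the sketch; only the 14X ratio comes from the text.

```python
import math

# Photon shot noise scales as 1/sqrt(N): fewer photons per exposure
# means noisier edges. With the 14x photon deficit quoted above, EUV
# suffers roughly sqrt(14) times more relative dose noise than
# deep-UV at the same dose. Photon counts here are illustrative.
def relative_noise(n_photons: float) -> float:
    return 1.0 / math.sqrt(n_photons)

duv_photons = 14_000             # hypothetical photons per resist pixel, DUV
euv_photons = duv_photons / 14   # same dose, 14x fewer EUV photons

ratio = relative_noise(euv_photons) / relative_noise(duv_photons)
print(round(ratio, 2))  # 3.74, i.e. ~3.7x noisier edges, all else equal
```

That square-root penalty is why the triangle is diabolical: you can buy smoother edges only with more dose (slower throughput) or more source power, which is exactly the parameter everyone has been fighting over.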

High-NA: The standard NA is 0.33; ASML is working on a lens that will raise the NA to 0.5 or higher. Interestingly, this will be an “anamorphic” lens – the x-direction scale will be different from the y direction (so, for example, a circle would end up looking like an ellipse). The new lens has a smaller field, which means less exposed in one shot, which means more shots per wafer – which means slower. They’re compensating for this with faster wafer and mask stages.


Interestingly, the ASML paper describing this includes a roadmap – with no years labeled for availability of this solution. So this may be a ways out there yet…

Mix-n-match: Of course, not every layer on a chip is going to require EUV – which is good, since there’s not enough of it to go around (and what there is is expensive). That means, for instance, SAQP for metal lines and then EUV for the block mask. (I was confused as to what a “block” mask is; it’s effectively the same as a “cut” mask. With aluminum, you can create lines and then cut them after. But with copper and dual-damascene, you interrupt the trenches with a block that defines the end of the lines and then fill with metal.)

This means that wafers will be going back and forth between conventional and EUV machines – creating a need to match characteristics to reduce yet another source of variation.



So that’s a super-fast rundown of EUV goings-on. I downloaded the EUV-related papers, and there were – count them – 59 papers. Which is why I’m not even attempting detail. There’s lots more to explore in those papers.



More info:

SPIE proceedings (membership or attendance required – you may need a friend)



From: aknahow 4/13/2017 9:45:25 AM
   of 330
 
news.panasonic.com

