Technology Stocks: New Technology
From: FUBHO, 7/3/2017 2:09:43 PM
Carbon Nanotubes Found to Be a Safe Bet For Reconnecting Neurons

Best hope of repairing injured spines.


ANDREW STAPLETON
1 JUL 2017


Scientists have integrated carbon nanotubes in neurons to control growth and restore lost electrical connections between nerve cells.

They have shown that the carbon nanotubes can be used safely and hope they can restore neural function to people with spinal injuries. The integration of carbon nanotubes brought along some unexpected benefits too.

Carbon nanotubes have some remarkable properties: excellent thermal conductivity, mechanical strength, and electrical conductivity. They have been used to make the toughest fibre ever made, computer chips that run twice as fast as silicon chips, and the world's blackest material – Vantablack.

Because they are long, thin and conductive, carbon nanotubes seemed like the ideal candidate for neuronal prostheses, restoring function to damaged neural pathways, and systems that interface with the human body.

"The perfect material to build neural interfaces does not exist, yet the carbon nanotubes we are working on have already proved to have great potentialities," said Laura Ballerini, one of the researchers from the International School for Advanced Studies in Italy.

"After all, nanomaterials currently represent our best hope for developing innovative strategies in the treatment of spinal cord injuries."

So why aren't we already using them?

There have been concerns in the past about the safety of carbon nanotubes. Their fibrous nature puts them in the same class as asbestos and they have been shown to penetrate the cell membrane – a delicate layer made of lipid molecules.

In this study, the researchers chemically modified the surface of carbon nanotubes so that they could be turned into a carbon nanotube ink for easy processing. The ink was dropped onto a flat glass surface and heated to a temperature of 350 degrees Celsius to create a thin mat of pure carbon nanotubes.

The neurons were harvested from the hippocampus of laboratory rats and deposited directly on top of the nanotube mats. After an incubation period at body temperature, the cells were tested for conductivity and compatibility with the carbon nanotube surface.

Ballerini and her team are confident that, this time, they have shown carbon nanotubes can be used safely.

"First of all, we have proved that nanotubes do not interfere with the composition of lipids, of cholesterol in particular, which make up the cellular membrane in neurons," said Ballerini.

Just when the researchers thought it couldn't get any better, their study also found that nerve cells growing on a flat bed of carbon nanotubes reached maturity much quicker than normal.

"[Carbon] nanotubes facilitate the full growth of neurons and the formation of new synapses. Having established the fact that this interaction is stable and efficient is an aspect of fundamental importance," said Ballerini.

These are still early days, and a couple of important issues remain to be addressed. Exactly how the integration of carbon nanotubes affects the creation and structure of neuronal pathways still needs to be worked out.

"If, for example, the mere contact [with carbon nanotubes] provoked a vertiginous rise in the number of synapses, these materials would be essentially unusable," said Maurizio Prato, another member of the research team.

Despite this concern, the researchers are hopeful that carbon nanotubes can be used safely as neuronal prostheses and are confidently pursuing the next stage of research – animal testing.

"We are proving that carbon nanotubes perform excellently in terms of duration, adaptability and mechanical compatibility with the tissue. Now we know that their interaction with the biological material, too, is efficient."

"Based on this evidence, we are already studying the in vivo application, and preliminary results appear to be quite promising also in terms of recovery of the lost neurological functions."

The study has been reported in Nanomedicine: Nanotechnology, Biology and Medicine.



From: FUBHO, 7/10/2017 1:39:26 PM
DARPA Wants Brain Implants That Record From 1 Million Neurons

Image: Paradromics

DARPA is known for issuing big challenges. Still, the mission statement for its new Neural Engineering Systems Design program is a doozy: Make neural implants that can record high-fidelity signals from 1 million neurons.

Today’s best brain implants, like the experimental system that a paralyzed man used to control a robotic arm, record from just a few hundred neurons. Recording from 1 million neurons would provide a much richer signal that could be used to better control external devices such as wheelchairs, robots, and computer cursors.

What’s more, the DARPA program calls for the tech to be bidirectional; the implants must be able to not only record signals, but also to transmit computer-generated signals to the neurons. That feature would allow for neural prosthetics that provide blind people with visual information or deaf people with auditory info.

Today the agency announced the six research groups that have been awarded grants under the NESD program. In a press release, DARPA says that even the 1-million-neuron goal is just a starting point. “A million neurons represents a miniscule percentage of the 86 billion neurons in the human brain. Its deeper complexities are going to remain a mystery for some time to come,” says Phillip Alvelda, who launched the program in January. “But if we’re successful in delivering rich sensory signals directly to the brain, NESD will lay a broad foundation for new neurological therapies.”

Image: Paradromics

One of the teams taking on the challenge is the Silicon Valley startup Paradromics. Company CEO Matt Angle says his company is developing a device called the Neural Input-Output Bus (NIOB) that will use bundles of microwire electrodes to interface with neurons. With four bundles containing a total of 200,000 microwires, he says, the NIOB could record from or stimulate 1 million neurons.

“Microwire electrodes have been used since the 1950s, but traditionally they’re un-scaleable,” Angle tells IEEE Spectrum in an interview. With existing systems “you need to wire up one microwire to one amplifier—so if you want to use 100,000 microwires, that’s a lot of soldering work for a grad student,” he says.

Paradromics gets around this problem by polishing the end of a microwire bundle to make it very flat, and then bonding the whole bundle to a chip containing an array of CMOS amplifiers. “We make sure the probability of a single wire coming down and touching the pad on the CMOS is very, very high,” says Angle, “but if you have a few spots that don’t get wires, that doesn’t matter much.”

Image: Paradromics

As always, DARPA emphasizes the practical application of technology. By the end of the four-year NESD program, the teams are expected to have working prototypes that can be used in therapies for sensory restoration.

Paradromics’ goal is a speech prosthetic. The NIOB device’s microwires will record signals from the superior temporal gyrus, a brain area involved in audio processing that decodes speech at the level of sound units called phonemes (other areas of the brain deal with higher-level semantics).

The company drew inspiration from neuroscientist Robert Knight at the University of California, Berkeley, who has shown that when people read aloud or read silently to themselves, the neural signal in the superior temporal gyrus can be used to reconstruct the words. This finding suggests that a user could just imagine speaking a phrase, and a neural implant could record the signal and send the information to a speech synthesizer.

While Paradromics has chosen this speech prosthetic as its DARPA-funded goal, its hardware could be used for any number of neural applications. The differences would come from changing the location of the implant and from the software that decodes the signal.

The challenges ahead of Paradromics are significant. Angle imagines a series of implanted chips, each bonded to 50,000 microwires, that send their data to one central transmitter that sits on the surface of the skull, beneath the skin of the scalp. To deal efficiently with all that data, the implanted system will have to do some processing: “You need to make some decisions inside the body about what you want to send out,” Angle says, “because you can’t have it digitizing and transmitting 50 GB per second.” The central transmitter must then wirelessly send data to a receiver patch worn on the scalp, and must also wirelessly receive power from it.
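For a sense of scale, here is a back-of-envelope estimate of the raw data rates involved. The sampling rate and sample size below (30 kHz, 16 bits per channel) are common values for extracellular recording but are assumptions for illustration, not figures from the article or from Paradromics.

# Rough data-rate estimate for a dense neural recording implant (Python).
# Assumed parameters, not from the article: 30 kHz sampling, 16-bit samples.

def raw_rate_bytes_per_s(channels, sample_hz=30_000, bits_per_sample=16):
    """Raw digitized data rate before any on-implant processing."""
    return channels * sample_hz * bits_per_sample / 8

for channels in (50_000, 200_000, 1_000_000):
    gb_per_s = raw_rate_bytes_per_s(channels) / 1e9
    print(f"{channels:>9,} channels -> ~{gb_per_s:.0f} GB/s raw")

# Under these assumptions: ~3 GB/s per 50,000-wire chip and ~60 GB/s at a
# million channels -- the same order of magnitude as the figure Angle cites,
# which is why spike detection and compression must happen inside the body.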

The other five teams that won NESD grants are research groups investigating vision, speech, and the sense of touch. The group from Brown University, led by neural engineer Arto Nurmikko, is working on a speech prosthetic using tens of thousands of independent “neurograins,” each about the size of a grain of table salt. Those grains will interface with individual neurons, and send their data to one electronics patch that will either be worn on the scalp or implanted under the skin.

Image: Brown University

In an email, Nurmikko writes that his team is working on such challenges as how to implant the neurograins, how to ensure that they’re hermetically sealed and safe, and how to handle the vast amount of data that they’ll generate. And the biggest challenge of all may be networking 10,000 or 100,000 neurograins together to make one coherent telecommunications system that provides meaningful data.

“Even with a hundred thousand such grains, we would still not reach every neuron—and that’s not the point,” Nurmikko writes. “You want to listen to a sufficiently large number of neurons to understand how, say, the auditory cortex computes ‘the Star Spangled Banner’ for us to have a clear perception of both the music and the words.”



From: FUBHO, 7/27/2017 7:09:41 AM



Google enters race for nuclear fusion technology




The tech giant and a leading US fusion company develop a new computer algorithm that significantly speeds up progress towards clean, limitless energy

Central confinement chamber of C-2U, a plasma confinement experiment comprising 10,000 engineering control tags and 1,000 physics diagnostics channels at Tri Alpha Energy’s research facility in California, US. The algorithm will cut the time it takes to work out the best possible options to form plasma from a month to just a few hours. Photograph: Courtesy of Tri Alpha Energy Inc.

Tuesday 25 July 2017 11.38 EDT

Last modified on Tuesday 25 July 2017 12.30 EDT

Google and a leading nuclear fusion company have developed a new computer algorithm which has significantly speeded up experiments on plasmas, the ultra-hot balls of gas at the heart of the energy technology.


Tri Alpha Energy, which is backed by Microsoft co-founder Paul Allen, has raised over $500m (£383m) in investment. It has worked with Google Research to create what they call the Optometrist algorithm. This enables high-powered computation to be combined with human judgement to find new and better solutions to complex problems.

Nuclear fusion, in which atoms are combined at extreme temperatures to release huge amounts of energy, is exceptionally complex. The physics of nuclear fusion involves non-linear phenomena, where small changes can produce large outcomes, making the engineering needed to suspend the plasma very challenging.

“The whole thing is beyond what we know how to do even with Google-scale computer resources,” said Ted Baltz, at the Google Accelerated Science Team. So the scientists combined computer learning approaches with human input by presenting researchers with choices. The researchers choose the option they instinctively feel is more promising, akin to choosing the clearer text during an eye test.

“We boiled the problem down to ‘let’s find plasma behaviours that an expert human plasma physicist thinks are interesting, and let’s not break the machine when we’re doing it’,” said Baltz. “This was a classic case of humans and computers doing a better job together than either could have separately.”
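The human-in-the-loop pattern Baltz describes can be sketched roughly as follows. This is illustrative Python only, not Tri Alpha Energy's or Google's actual Optometrist code; run_experiment and ask_human are placeholders for a real plasma shot and the physicist's A-or-B choice.

import random

def propose_near(settings, step=0.05):
    """Perturb each control knob slightly, staying inside safe [0, 1] bounds."""
    return {knob: min(1.0, max(0.0, value + random.uniform(-step, step)))
            for knob, value in settings.items()}

def optometrist_loop(initial_settings, run_experiment, ask_human, iterations=50):
    """Alternate machine-proposed settings with a human 'which looks better?' choice."""
    best = initial_settings
    best_result = run_experiment(best)
    for _ in range(iterations):
        candidate = propose_near(best)
        result = run_experiment(candidate)
        # The expert compares the two shots, like choosing the clearer lens in
        # an eye test, and answers "A" (keep the current best) or "B" (switch).
        if ask_human(best_result, result) == "B":
            best, best_result = candidate, result
    return best, best_result

The point of the design is that the computer never optimises a hard-coded metric on its own: the expert's judgement of what counts as interesting plasma behaviour steers the search, while the proposal step keeps each new setting close to one already known to be safe.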

Working with Google enabled experiments on Tri Alpha Energy’s C-2U machine to progress much faster, with operations that took a month speeded up to just a few hours. The algorithm revealed unexpected ways of operating the plasma, with the research published on Tuesday in the journal Scientific Reports. The team achieved a 50% reduction in energy losses from the system and a resulting increase in total plasma energy, which must reach a critical threshold for fusion to occur.

“Results like this might take years to solve without the power of advanced computation,” said Michl Binderbauer, president and chief technology officer at Tri Alpha Energy. He said the company was aiming to produce electricity within a decade and Tri Alpha Energy recently added former US energy secretary Ernest Moniz to its board of directors.

The C-2U machine ran an experiment every eight minutes. This involved blasting plasma with a beam of hydrogen atoms to keep it spinning in a magnetic field for up to 10 milliseconds. The aim was to see if it behaved as theory predicts and whether it is a promising route to a fusion reactor that generates more energy than it consumes.

The Optometrist algorithm enabled the researchers to discover a configuration in which the hydrogen beam completely balanced the cooling losses, meaning the total energy in the plasma actually went up after formation. “It was only for about two milliseconds, but still, it was a first!” said Baltz.

The C-2U machine has now been replaced with a more powerful and sophisticated machine called Norman, after the company’s late co-founder Norman Rostoker. It achieved first plasma earlier in July and, if experiments on Norman are successful, Tri Alpha Energy will next build a demonstration power generator.

Nuclear fusion has long held the hope of clean, safe and limitless energy and interest has increased as the challenge of climate change and the need to cut carbon emissions has become clear. But despite 60 years and billions of dollars of research, it has yet to be achieved and commercial scale nuclear fusion is still likely to be decades away.


But numerous other groups are chasing the nuclear fusion dream, with the largest by far the publicly funded Iter project in southern France. The €18bn (£16bn) project is a partnership of the US, the European Union, China, India, South Korea, Russia and Japan, and is building a seven-storey facility.

Iter uses a conventional tokamak, or doughnut-shaped, reactor and aims to create its first plasma in 2025, scaling up to its maximum power output by 2035. If successful, Iter could be the foundation of the first fusion power plants.

Other groups are experimenting with different fusion reactor designs that might be better and, in particular, smaller. A €1bn reactor that opened in Germany in 2016 uses a stellarator, in which the plasma ring is shaped like a Möbius strip, giving it the potential to operate continuously rather than in pulses as in a tokamak.

There are also a number of private companies, staffed by experienced fusion researchers, including General Fusion, which uses a vortex of molten lead and lithium to contain the plasma and is backed by Amazon’s Jeff Bezos.

Lockheed Martin’s famous Skunk Works team said in 2014 it would produce a truck-sized fusion plant within a decade but attracted criticism for providing few details. The UK’s Tokamak Energy is aiming to harness particle accelerator technology and high-temperature superconductors, and other firms include Helion Energy and First Light Fusion.

David Kingham, chief of Tokamak Energy, said the Tri Alpha Energy result was exciting progress: “While publicly funded laboratories excel at fundamental research, the private sector can innovate and adopt new technologies much more rapidly.” In April, Tokamak Energy achieved first plasma in a new reactor, its third in five years, and aims to reach the 100m degrees centigrade needed for fusion in 2018.



From: FUBHO, 7/27/2017 7:17:00 AM
EXCLUSIVE: First human embryos edited in U.S., using CRISPR

A video shows the injection of gene-editing chemicals into a human egg near the moment of fertilization. The technique is designed to correct a genetic disorder from the father.


The first known attempt at creating genetically modified human embryos in the United States has been carried out by a team of researchers in Portland, Oregon, Technology Review has learned.

The effort, led by Shoukhrat Mitalipov of Oregon Health and Science University, involved changing the DNA of a large number of one-cell embryos with the gene-editing technique CRISPR, according to people familiar with the scientific results.

Until now, American scientists have watched with a combination of awe, envy, and some alarm as scientists elsewhere were first to explore the controversial practice. To date, three previous reports of editing human embryos were all published by scientists in China.

Now Mitalipov is believed to have broken new ground both in the number of embryos experimented upon and by demonstrating that it is possible to safely and efficiently correct defective genes that cause inherited diseases.

Although none of the embryos were allowed to develop for more than a few days—and there was never any intention of implanting them into a womb—the experiments are a milestone on what may prove to be an inevitable journey toward the birth of the first genetically modified humans.

In altering the DNA code of human embryos, the objective of scientists is to show that they can eradicate or correct genes that cause inherited disease, like the blood condition beta-thalassemia. The process is termed “germline engineering” because any genetically modified child would then pass the changes on to subsequent generations via their own germ cells—the egg and sperm.

Some critics say germline experiments could open the floodgates to a brave new world of “designer babies” engineered with genetic enhancements—a prospect bitterly opposed by a range of religious organizations, civil society groups, and biotech companies.

The U.S. intelligence community last year called CRISPR a potential "weapon of mass destruction.”

Shoukhrat Mitalipov is the first U.S.-based scientist known to have edited the DNA of human embryos. OHSU/KRISTYNA WENTZ-GRAFF

Reached by Skype, Mitalipov declined to comment on the results, which he said are pending publication. But other scientists confirmed the editing of embryos using CRISPR. “So far as I know this will be the first study reported in the U.S.,” says Jun Wu, a collaborator at the Salk Institute, in La Jolla, California, who played a role in the project.

Better technique

The earlier Chinese publications, although limited in scope, found CRISPR caused editing errors and that the desired DNA changes were taken up not by all the cells of an embryo, only some. That effect, called mosaicism, lent weight to arguments that germline editing would be an unsafe way to create a person.

But Mitalipov and his colleagues are said to have convincingly shown that it is possible to avoid both mosaicism and “off-target” effects, as the CRISPR errors are known.

A person familiar with the research says “many tens” of human IVF embryos were created for the experiment using the donated sperm of men carrying inherited disease mutations. Embryos at this stage are tiny clumps of cells invisible to the naked eye. Technology Review could not determine which disease genes had been chosen for editing.

“It is proof of principle that it can work. They significantly reduced mosaicism. I don’t think it’s the start of clinical trials yet, but it does take it further than anyone has before,” said a scientist familiar with the project.

Mitalipov’s group appears to have overcome earlier difficulties by “getting in early” and injecting CRISPR into the eggs at the same time they were fertilized with sperm.

That concept is similar to one tested in mice by Tony Perry of Bath University. Perry successfully edited the mouse gene for coat color, changing the fur of the offspring from the expected brown to white.

Somewhat prophetically, Perry’s paper on the research, published at the end of 2014, said, “This or analogous approaches may one day enable human genome targeting or editing during very early development.”

Genetic enhancement

Born in Kazakhstan when it was part of the former Soviet Union, Mitalipov has for years pushed scientific boundaries. In 2007, he unveiled the world’s first cloned monkeys. Then, in 2013, he created human embryos through cloning, as a way of creating patient-specific stem cells.

His team’s move into embryo editing coincides with a report by the U.S. National Academy of Sciences in February that was widely seen as providing a green light for lab research on germline modification.

The report also offered qualified support for the use of CRISPR for making gene-edited babies, but only if it were deployed for the elimination of serious diseases.

The advisory committee drew a red line at genetic enhancements—like higher intelligence. “Genome editing to enhance traits or abilities beyond ordinary health raises concerns about whether the benefits can outweigh the risks, and about fairness if available only to some people,” said Alta Charo, co-chair of the NAS’s study committee and professor of law and bioethics at the University of Wisconsin–Madison.

In the U.S., any effort to turn an edited IVF embryo into a baby has been blocked by Congress, which added language to the Department of Health and Human Services funding bill forbidding it from approving clinical trials of the concept.

Despite such barriers, the creation of a gene-edited person could be attempted at any moment, including by IVF clinics operating facilities in countries where there are no such legal restrictions.

Steve Connor is a freelance journalist based in the U.K.



From: Glenn Petersen, 8/4/2017 9:50:54 AM
Tech Guru Bill Joy Unveils a Battery to Challenge Lithium-Ion

By Brian Eckhouse
Bloomberg
August 3, 2017

-- Rechargeable alkaline battery could be cheaper, Joy says

-- Lithium-ion battery pack prices down 73% from 2010 to 2016

Elon Musk isn’t the only visionary betting that the world will soon be reliant on batteries. Bill Joy, the Silicon Valley guru and Sun Microsystems Inc. co-founder, also envisions such dependence. He just thinks alkaline is a smarter way to go than lithium-ion.



Bill Joy. Photographer: Hyoung Chang/The Denver Post via Getty Images

On Thursday, at the Rocky Mountain Institute’s Energy Innovation Summit in Basalt, Colorado, Joy and Ionic Materials unveiled a solid-state alkaline battery that he says is safer and cheaper than the industry leader, lithium-ion. The appeal of alkaline: it could cost a tiny fraction of existing battery technologies and could be safer in delicate settings, such as aboard airplanes.

“What people didn’t really realize is that alkaline batteries could be made rechargeable,” Joy said in a phone interview Thursday. “I think people had given up.”

The Ionic Materials investor envisions three ultimate applications for the polymer technology: consumer electronics, automotive and the power grid. But Joy acknowledged that the technology isn’t quite ready for prime-time. It has yet to be commercialized, and factories are needed to manufacture it. It could be ready for wider use within five years, he said.

On top of that, it would face an entrenched incumbent.

Lithium-Ion

Lithium-ion battery pack prices fell 73 percent from 2010 to 2016, said Logan Goldie-Scot, a San Francisco-based analyst at Bloomberg New Energy Finance, in an email Thursday. “Technology improvements, manufacturing scale, competition between the major battery manufacturers continue to drive costs down. This will make it hard for alternative technologies to compete.”

Ionic expects to talk to potential partners about licenses. Global lithium-ion battery demand from electric vehicles is projected to grow from 21 gigawatt-hours in 2016 to 1,300 gigawatt-hours in 2030, according to Bloomberg New Energy Finance.

“Even if we grew 400 percent every year for a decade, we couldn’t meet the need” alone, Joy said. “We’re starting from a zero base. We don’t have a factory. We have a revolutionary material.”


bloomberg.com



From: Glenn Petersen, 8/29/2017 10:06:11 AM
"Edge computing"":

Message 31241376



From: Glenn Petersen, 9/20/2017 5:10:34 PM (1 Recommendation)
British supermarket offers 'finger vein' payment in worldwide first

Katie Morley, Consumer Affairs Editor
The Telegraph
20 September 2017 • 1:04am



Fingerprint technology could be coming to a supermarket near you. Credit: Fabrizio Bensc/REUTERS

A UK supermarket has become the first in the world to let shoppers pay for groceries using just the veins in their fingertips.

Customers at the Costcutter store, at Brunel University in London, can now pay using their unique vein pattern to identify themselves.

The firm behind the technology, Sthaler, has said it is in "serious talks" with other major UK supermarkets to adopt hi-tech finger vein scanners at pay points across thousands of stores.

It works by using infrared to scan people's finger veins and then links this unique biometric map to their bank cards. Customers’ bank details are then stored with payment provider Worldpay, in the same way you can store your card details when shopping online. Shoppers can then turn up to the supermarket with nothing on them but their own hands and use it to make payments in just three seconds.

It comes as previous studies have found fingerprint recognition, used widely on mobile phones, is vulnerable to being hacked and can be copied even from finger smears left on phone screens.

But Sthaler, the firm behind the technology, claims vein technology is the most secure biometric identification method as it cannot be copied or stolen.

Sthaler said dozens of students were already using the system and it expected 3,000 students out of 13,000 to have signed up by November.

Fingerprint payments are already used widely at cash points in Poland, Turkey and Japan.

Vein scanners are also used as a way of accessing high-security UK police buildings and authorising internal trading at at least one major British investment bank.

The firm is also in discussions with nightclubs and gyms about using the technology to verify membership, and even with Premier League football clubs to check that people have the right access to VIP hospitality areas.

The technology uses an infrared light to create a detailed map of the vein pattern in your finger. It requires the person to be alive, meaning that in the unlikely event a criminal hacks off someone’s finger, it would not work. Sthaler said it takes just one minute to sign up to the system initially and, after that, it takes just seconds to place your finger in a scanner each time you reach the supermarket checkout.
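The overall flow can be sketched in simplified form as follows. This is an illustration of the general idea only, not Sthaler's or Worldpay's actual system; because two scans of the same finger never produce identical data, biometric templates are compared against a similarity threshold rather than checked for exact equality, and the template format and 0.9 threshold here are assumptions.

def hamming_similarity(template_a: bytes, template_b: bytes) -> float:
    """Fraction of matching bits between two equal-length binary vein templates."""
    assert len(template_a) == len(template_b)
    differing = sum(bin(a ^ b).count("1") for a, b in zip(template_a, template_b))
    return 1.0 - differing / (8 * len(template_a))

def authorise_payment(enrolled_template: bytes, live_scan_template: bytes,
                      threshold: float = 0.9) -> bool:
    """Authorise only if the live scan closely matches the template stored at enrolment."""
    return hamming_similarity(enrolled_template, live_scan_template) >= threshold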

Simon Binns, commercial director of Sthaler, told the Daily Telegraph: “This makes payments so much easier for customers.

"They don’t need to carry cash or cards. They don’t need to remember a pin number. You just bring yourself. This is the safest form of biometrics. There are no known incidences where this security has been breached.

"When you put your finger in the scanner it checks you are alive, it checks for a pulse, it checks for haemoglobin. ‘Your vein pattern is secure because it is kept on a database in an encrypted form, as binary numbers. No card details are stored with the retailer or ourselves, it is held with Worldpay, in the same way it is when you buy online."

Nick Telford-Reed, director of technology innovation at Worldpay UK, said: "In our view, finger vein technology has a number of advantages over fingerprint. This deployment of Fingopay in Costcutter branches demonstrates how consumers increasingly want to see their payment methods secure and simple."

telegraph.co.uk



From: FUBHO, 9/30/2017 12:58:12 PM
SPECIAL REPORT: BLOCKCHAIN WORLD


When Bitcoin was unleashed on the world, it filled a specific need. But it wasn’t long before people realized the technology behind Bitcoin—the blockchain—could do much more than record monetary transactions. That realization has lately blossomed into a dazzling and often bewildering array of startup companies, initiatives, corporate alliances, and research projects. Billions of dollars will hinge on what they come up with. So you should understand how blockchains work—and what could happen if they don’t.
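The core mechanism is easy to sketch. In the toy Python example below (a minimal illustration, not any production blockchain), each block commits to the hash of the previous block, so tampering with an earlier transaction invalidates every block after it; real systems add consensus rules such as proof of work, digital signatures, and peer-to-peer replication on top of this.

import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    """Add a block that records the hash of the block before it."""
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev_hash, "transactions": transactions})

chain = []
append_block(chain, ["alice pays bob 5"])
append_block(chain, ["bob pays carol 2"])

# Tampering with block 0 breaks the link stored in block 1.
chain[0]["transactions"][0] = "alice pays bob 500"
print(chain[1]["prev_hash"] == block_hash(chain[0]))  # prints False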

spectrum.ieee.org

Blockchains: How They Work and Why They’ll Change the World. The technology behind Bitcoin could touch every transaction you ever make. By Morgen E. Peck

How Smart Contracts Work. Blockchain technology could run a flight-insurance business without any employees. By Morgen E. Peck

How Blockchains Work. Illustrated from transaction to reward. By Morgen E. Peck

The Ridiculous Amount of Energy It Takes to Run Bitcoin. Running Bitcoin uses a small city’s worth of electricity; Intel and others want to make a more sustainable blockchain. By Peter Fairley

Do You Need a Blockchain? This chart will tell you if a blockchain can solve your problem. By Morgen E. Peck

Wall Street Firms to Move Trillions to Blockchains in 2018. The finance industry is eagerly adopting the blockchain, a technology that early fans hoped would obliterate the finance industry. By Amy Nordrum

Blockchain Lingo. The terms you need to know to understand the blockchain revolution. By Morgen E. Peck

Why the Biggest Bitcoin Mines Are in China. The heart of Bitcoin is now in Inner Mongolia, where dirty coal fuels sophisticated semiconductor engineering. By Morgen E. Peck

Illinois vs. Dubai: Two Experiments Bring Blockchains to Government. Dubai wants one blockchain platform to rule them all, while Illinois will try anything. By Amy Nordrum

Blockchains Will Allow Rooftop Solar Energy Trading for Fun and Profit. Neighbors in New York City, Denmark, and elsewhere will be able to sell one another their solar power. By Morgen E. Peck & David Wagman

Video: The Bitcoin Blockchain Explained. What is a blockchain and why is it the future of the Web? By Morgen E. Peck & IEEE Spectrum Staff



From: FUBHO, 10/11/2017 3:34:48 PM
Inside Microsoft’s Quest to Make Quantum Computing Scalable

datacenterknowledge.com



The company’s researchers are building a system that’s unlike any other quantum computer being developed today.
There’s no shortage of big tech companies building quantum computers, but Microsoft claims its approach to manufacturing qubits will make its quantum computing systems more powerful than others’. The company’s researchers are pursuing “topological” qubits, which store data in the path of moving exotic Majorana particles. This is different from storing it in the state of electrons, which is fragile.

That’s according to Krysta Svore, research manager in Microsoft’s Quantum Architectures and Computation group. The Majorana particle paths -- with a fractionalized electron appearing in many places along them -- weave together like a braid, which makes for a much more robust and efficient system, she said in an interview with Data Center Knowledge. These qubits are called “topological qubits,” and the systems are called “topological quantum computers.”

With other approaches, it may take 10,000 physical qubits to create a logical qubit that’s stable enough for useful computation, because the state of the qubits storing the answer to your problem “decoheres” very easily, she said. It’s harder to disrupt an electron that’s been split up along a topological qubit, because the information is stored in more places.

In quantum mechanics, particles are represented by wavelengths. Coherence is achieved when waves that interfere with each other have the same frequency and constant phase relation. In other words, they don’t have to be in phase with each other, but the difference between the phases has to remain constant. If it does not, the particle states are said to decohere.

“We’re working on a universally programmable circuit model, so any other circuit-based quantum machine will be able to run the same class of algorithms, but we have a big differentiator,” Svore said. “Because the fidelity of the qubit promises to be several orders of magnitude better, I can run an algorithm that’s several orders of magnitude bigger. If I can run many more operations without decohering, I could run a class of algorithm that in theory would run on other quantum machines but that physically won’t give a good result. Let’s say we’re three orders of magnitude better; then I can run three orders of magnitude more operations in my quantum circuit.”

Theoretically, that could mean a clear advantage of a quantum computer over a classical one. “We can have a much larger circuit which could theoretically be the difference between something that shows quantum advantage or not. And for larger algorithms, where error corrections are required, we need several orders of magnitude less overhead to run that algorithm,” she explained.
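A toy calculation makes that scaling concrete. Assuming independent, uncorrelated errors (a deliberate simplification), a circuit of d operations succeeds with probability roughly (1 - p)^d when each operation fails with probability p, so the depth you can run before the success probability drops below a target scales like 1/p; three orders of magnitude better fidelity buys roughly a thousand times more operations.

import math

def max_depth(per_op_error, target_success=0.5):
    """Operations you can run before success probability falls below the target."""
    return int(math.log(target_success) / math.log(1.0 - per_op_error))

for p in (1e-2, 1e-3, 1e-5):
    print(f"per-op error {p:g}: ~{max_depth(p):,} operations at >=50% success")

# Per-op error 1e-2 allows ~68 operations; 1e-5 allows ~69,000 -- three orders
# of magnitude better fidelity, roughly three orders of magnitude more depth.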

A Hardware and Software System that Scales



Microsoft has chosen to focus on topological qubits because the researchers believe it will scale, and the company is also building a complete hardware stack to support the scaling. “We’re building a cryogenic computer to control the topological quantum chip; then we're building a software system where you can compile millions of operations and beyond.”

The algorithms running on the system could be doing things like quantum chemistry – looking for more efficient fertilizer or a room temperature semiconductor – or improving machine learning. Microsoft Research has already shown that deep learning trains faster with a quantum computer. With the same deep learning models in use today, Svore says, the research shows “quadratic speedups” even before you start adding quantum terms to the data model, which seems to improve performance even further.

Redesigning a Programming Language


To get developers used to the rather different style of quantum programming, Microsoft will offer a new set of tools in preview later this year (the toolset doesn’t have a name yet). It is a superset built on what the company learned from the academics, researchers, students, and developers who used Liquid, an embedded domain-specific language in F# that Microsoft created some years ago.

The language itself has familiar concepts like functions, if statements, variables, and branches, but it also has quantum-specific elements and a growing set of libraries developers can call to help them build quantum apps.

“We’ve almost completely redesigned the language; we will offer all the things Liquid had, but also much more, and it’s not an embedded language. It’s really a domain-specific language designed upfront for scalable quantum computing, and what we’ve tried to do is raise the level of abstraction in this high-level language with the ability to call vast numbers of libraries and subroutines.”

Some of those are low-level subroutines like an adder, a multiplier, and trigonometry functions, but there are also higher-level functions that are commonly used in quantum computing. “Tools like phase estimation, amplitude amplification, amplitude estimation -- these are very common frameworks for your quantum algorithms. They’re the core framework for setting up your algorithm to measure and get the answer out at the end [of the computation], and they’re available in a very pluggable way.”

A key part of making the language accessible is the way it’s integrated into Visual Studio, Microsoft’s IDE. “I think this is a huge step forward,” Svore told us. “It makes it so much easier to read the code because you get the syntax coloring and the debugging; you can set a breakpoint, you can visualise the quantum state.”

Being able to step through your code to understand how it works is critical to learning a new language or a new style of programming, and quantum computing is a very different style of computing.

“As we’ve learned about quantum algorithms and applications, we’ve put what we’ve learned into libraries to make it easier for a future generation of quantum developers,” Svore said. “Our hope is that as a developer you’re not having to think at the lower level of circuits and probabilities. The ability to use these higher-level constructs is key.”

Hybrid Applications

The new language will also make it easier to develop hybrid applications that use both quantum and classical computing, which Svore predicts will be a common pattern. “With the quantum computer, many of the quantum apps and algorithms are hybrid. You're doing pre and post-processing or in some algorithms you’ll even be doing a very tight loop with a classical supercomputer.”

How Many Qubits Can You Handle?

Microsoft, she says, is making progress with its topological qubits, but, as it’s impossible to put any kind of date on when a working system might emerge from all this work, the company will come out with a quantum simulator to actually run the programs you write, along with the other development tools.

Depending on how powerful your system is, you’ll be able to simulate between 30 and 33 qubits on your own hardware. For 40 qubits and more, you can do the simulation on Azure.

“At 30 qubits, it takes roughly 16GB of classical memory to store that quantum state, and each operation takes a few seconds,” Svore explains. But as you simulate more qubits, you need a lot more resources. Adding ten qubits multiplies the memory by two to the power of 10, so 40 qubits needs roughly 16TB, and that doubles again going from 40 to 41 qubits. Pretty soon, you’re hitting petabytes of memory. “At 230 qubits, the amount of memory you need is 10^80 bytes, which is more bytes than there are particles in the physical universe, and one operation takes the lifetime of the universe,” Svore said. “But in a quantum computer, that one operation takes 100 nanoseconds.”
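The arithmetic behind those figures is straightforward: a full state vector over n qubits holds 2^n complex amplitudes, and at 16 bytes per amplitude (standard double-precision complex numbers, an assumption consistent with the 16GB figure quoted above) the memory requirement doubles with every added qubit. A quick sketch:

def state_vector_bytes(n_qubits, bytes_per_amplitude=16):
    """Memory needed to store a full n-qubit state vector."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (30, 40, 41, 50):
    print(f"{n} qubits -> {state_vector_bytes(n) / 2**30:,.0f} GiB")

# 30 qubits -> 16 GiB, 40 -> ~16 TiB, 41 -> ~32 TiB, 50 -> ~16 PiB; every
# extra qubit doubles the requirement, which is why larger simulations move
# to Azure.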



To: FUBHO who wrote (322), 10/11/2017 3:38:39 PM
From: FUBHO
 
Commercial Quantum Computing Pushes On
News & Analysis
10/11/2017
Looking to speed the arrival of commercial quantum computers, Intel has prototyped a 17-qubit superconducting chip, which research partner QuTech will test on a suite of quantum algorithms.
