

From: FJB, 8/19/2018 9:30:43 PM
Stacking concrete blocks is a surprisingly efficient way to store energy

By Akshat Rathi in Switzerland, August 18, 2018

Thanks to the modern electric grid, you have access to electricity whenever you want. But the grid only works when electricity is generated in the same amounts as it is consumed. That said, it’s impossible to get the balance right all the time. So operators make grids more flexible by adding ways to store excess electricity for when production drops or consumption rises.

About 96% of the world’s energy-storage capacity comes in the form of one technology: pumped hydro. Whenever generation exceeds demand, the excess electricity is used to pump water up a dam. When demand exceeds generation, that water is allowed to fall—thanks to gravity—and the potential energy turns turbines to produce electricity.
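The arithmetic behind pumped hydro is just gravitational potential energy. A minimal sketch; the reservoir volume and 100-meter head below are illustrative assumptions, not figures from this article:

```python
# Gravitational potential energy stored by pumped hydro: E = rho * V * g * h.
RHO_WATER = 1000.0   # kg/m^3
G = 9.81             # m/s^2

def pumped_hydro_energy_mwh(volume_m3: float, head_m: float) -> float:
    """Energy (MWh) stored by raising `volume_m3` of water by `head_m`."""
    joules = RHO_WATER * volume_m3 * G * head_m
    return joules / 3.6e9  # 1 MWh = 3.6e9 J

# A modest 100,000 m^3 upper reservoir with a 100 m head:
print(round(pumped_hydro_energy_mwh(100_000, 100), 2))  # 27.25 MWh, before losses
```

This is the ideal stored energy; real plants recover somewhat less on the way back down.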

But pumped-hydro storage requires particular geographies, with access to water and to reservoirs at different altitudes. That’s why about three-quarters of all pumped-hydro storage has been built in only 10 countries. The trouble is that the world needs to add a lot more energy storage if we are to continue adding the intermittent solar and wind power necessary to cut our dependence on fossil fuels.

A startup called Energy Vault thinks it has a viable alternative to pumped-hydro: Instead of using water and dams, the startup uses concrete blocks and cranes. It has been operating in stealth mode until today (Aug. 18), when its existence will be announced at Kent Presents, an ideas festival in Connecticut.

On a hot July morning, I traveled to Biasca, Switzerland, about two hours north of Milan, Italy, where Energy Vault has built a demonstration plant, about a tenth the size of a full-scale operation. The whole thing—from idea to a functional unit—took about nine months and less than $2 million to accomplish. If this sort of low-tech, low-cost innovation could help solve even just a few parts of the huge energy-storage problem, maybe the energy transition the world needs won’t be so hard after all.

Quartz is running a series called The Race to Zero Emissions that explores the challenges and opportunities of energy-storage technologies. Sign up here to be the first to know when stories are published.

Concrete plan

The science underlying Energy Vault’s technology is simple. When you lift something against gravity, you store energy in it. When you later let it fall, you can retrieve that energy. Because concrete is a lot denser than water, lifting a block of concrete requires—and can, therefore, store—a lot more energy than an equal-sized tank of water.

Bill Gross, a long-time US entrepreneur, and Andrea Pedretti, a serial Swiss inventor, developed the Energy Vault system that applies this science. Here’s how it works: A 120-meter (nearly 400-foot) tall, six-armed crane stands in the middle. In the discharged state, concrete cylinders weighing 35 metric tons each are neatly stacked around the crane far below the crane arms. When there is excess solar or wind power, a computer algorithm directs one or more crane arms to locate a concrete block, with the help of a camera attached to the crane arm’s trolley.


Simulation of a large-scale Energy Vault plant.

Once the crane arm locates and hooks onto a concrete block, a motor starts, powered by the excess electricity on the grid, and lifts the block off the ground. Wind could cause the block to move like a pendulum, but the crane’s trolley is programmed to counter the movement. As a result, it can smoothly lift the block, and then place it on top of another stack of blocks—higher up off the ground.

The system is “fully charged” when the crane has created a tower of concrete blocks around it. The total energy that can be stored in the tower is 20 megawatt-hours (MWh), enough to power 2,000 Swiss homes for a whole day.
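The 20 MWh figure can be sanity-checked with the same gravitational arithmetic. A minimal sketch, assuming an average lift height of 100 meters (an assumption for illustration; the article gives only the 120-meter crane height):

```python
# Energy stored by lifting one 35 t block: E = m * g * h.
G = 9.81           # m/s^2
BLOCK_KG = 35_000  # 35 metric tons

def block_energy_kwh(lift_m: float) -> float:
    """Ideal energy (kWh) stored by lifting one block by `lift_m` meters."""
    return BLOCK_KG * G * lift_m / 3.6e6  # 1 kWh = 3.6e6 J

per_block = block_energy_kwh(100)        # ~9.5 kWh per block
blocks_needed = 20_000 / per_block       # block-lifts for a 20 MWh tower
print(round(per_block, 1), round(blocks_needed))  # 9.5 2097
```

So on the order of two thousand block-lifts would account for a fully charged 20 MWh tower under this assumed average height.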

When the grid is running low, the motors spring back into action—except now, instead of consuming electricity, the motor is driven in reverse by the gravitational energy, and thus generates electricity.

Big up

The innovation in Energy Vault’s plant is not the hardware. Cranes and motors have been around for decades, and companies like ABB and Siemens have optimized them for maximum efficiency. The round-trip efficiency of the system, which is the amount of energy recovered for every unit of energy used to lift the blocks, is about 85%—comparable to lithium-ion batteries, which offer up to 90%.

Pedretti’s main work as the chief technology officer has been figuring out how to design software to automate contextually relevant operations, like hooking and unhooking concrete blocks, and to counteract pendulum-like movements during the lifting and lowering of those blocks.

Energy Vault keeps costs low because it uses off-the-shelf commercial hardware. Surprisingly, concrete blocks could prove to be the most expensive part of the energy tower. Concrete is much cheaper than, say, a lithium-ion battery, but Energy Vault would need a lot of concrete to build hundreds of 35-metric-ton blocks.

So Pedretti found another solution. He’s developed a machine that can mix substances that cities often pay to get rid of, such as gravel or building waste, along with cement to create low-cost concrete blocks. The cost saving comes from using only a sixth of the cement that would otherwise be needed if the concrete were destined for building construction.


Rob Piconi (left) and Andrea Pedretti.

The storage challenge

The demonstration plant I saw in Biasca is much smaller than the planned commercial version. It has a 20-meter-tall, single-armed crane that lifts blocks weighing 500 kg each. But it does almost all the things its full-scale cousin, which the company is actively looking to sell right now, would do.

Robert Piconi has spent this summer visiting countries in Africa and Asia. The CEO of Energy Vault is excited to find customers for its plants in those parts of the world. The startup also has a sales team in the US and it now has orders to build its first commercial units in early 2019. The company won’t share details of those orders, but the unique characteristics of its energy-storage solution mean we can make a fairly educated guess at what the projects will look like.

Energy-storage experts broadly sort storage technologies into three groups, distinguished by the amount of energy to be stored and the cost of storing it.

First, expensive technologies, such as lithium-ion batteries, can be used to store a few hours’ worth of energy—in the range of tens or hundreds of MWh. These could be charged during the day, using solar panels for example, and then discharged when the sun isn’t around. But lithium-ion batteries for the electric grid currently cost between $280 and $350 per kWh.

Cheaper technologies, such as flow batteries (which use high-energy liquid chemicals to hold energy), can be used to store weeks’ worth of energy—in the range of hundreds or thousands of MWh. This second category of energy storage could be used, for instance, when there’s a lull in wind supply for a week or two.

The third category doesn’t exist yet. In theory, yet-to-be-invented, extra-cheap technologies could store months’ worth of energy—in the range of tens or hundreds of thousands of MWh—which would be used to deal with interseasonal demand. For example, Mumbai hits peak consumption in the summer when air conditioners are on full blast, whereas London peaks in winter because of household heating. Ideally, energy captured in one season could be stored for months during low-use seasons, and then deployed in the high-use seasons.

David vs Goliath

Piconi estimates that by the time Energy Vault builds its 10th or so 35-MWh plant, it can bring costs down to about $150 per kWh. That means it can’t fill the needs of the third category of energy-storage use; to do that, costs would have to be closer to $10 per kWh. In theory, at the current capacity and price point, it could compete in the second category—if it could find a customer that wanted Energy Vault to build dozens of plants for a single grid. Realistically, Energy Vault’s best bet is to compete in the first category.
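Those per-kWh figures translate into rough plant-level capital costs. A quick back-of-the-envelope comparison using only the numbers quoted in this article:

```python
# Rough capital cost of one 35 MWh plant at the quoted per-kWh prices.
PLANT_KWH = 35_000  # 35 MWh expressed in kWh

energy_vault = PLANT_KWH * 150   # Piconi's projected $150/kWh
li_ion_low   = PLANT_KWH * 280   # current grid lithium-ion, low end
li_ion_high  = PLANT_KWH * 350   # current grid lithium-ion, high end

print(energy_vault, li_ion_low, li_ion_high)  # 5250000 9800000 12250000
```

That is, about $5.25 million per Energy Vault plant at the projected price, versus roughly $9.8–12.25 million for the same capacity in grid lithium-ion at today’s quoted prices.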

That said, some experts told Quartz that the cost of lithium-ion batteries, the current dominant battery technology, could fall to about $100 per kWh, which would make them cheaper even than Energy Vault when it comes to storing days or weeks worth of energy. And because batteries are compact, they can be transported vast distances. Most of the lithium-ion batteries in smartphones used all over the world, for example, are made in East Asia. Energy Vault’s concrete blocks will have to be built on-site, and each 35 MWh system would need a circular piece of land about 100 meters (about 330 feet) in diameter. Batteries need a fraction of that space to store the same amount of energy.

Batteries do have some limitations. The maximum life of lithium-ion batteries, for example, is 20 or so years. They also lose their capacity to store energy over time. And there aren’t yet reliable ways to recycle lithium-ion batteries.

Energy Vault’s plant can operate for 30 years with little maintenance and almost no fade in capacity. Its concrete blocks also use waste materials. So Piconi is confident that there’s still a niche that Energy Vault can fill: Places that have abundant access to land and building material, combined with the desire to have storage technologies that last for decades without fading in capacity.

Meanwhile, whether or not Energy Vault succeeds, it does make a strong case for the argument that, while everyone else is out looking for high-tech, futuristic battery innovation, there may be real value in thinking about how to apply low-tech solutions to 21st-century problems. Energy Vault built a functional test plant in just nine months, spending relative pennies. It’s a signal of sorts that some of the answers to our energy-storage problems may still be sitting hidden in plain sight.

This article was updated with information about Energy Vault’s first commercial-unit orders.



From: FJB, 8/26/2018 9:51:02 AM

VLSI 2018: Samsung's 2nd Gen 7nm, EUV Goes HVM


For as long as anyone can remember, EUV has been “just a few years away.” This changed back in 2016 when Samsung put their foot down, announcing that their 8nm node will be the last DUV-based process technology. All nodes moving forward will use EUV. As Yan Borodovsky said at the 2018 SPIE conference, EUV is no longer a question of if or when but how well. At the 2018 Symposia on VLSI Technology and Circuits, Samsung gave us a first glimpse of what their 7nm EUV process looks like. Samsung’s second-generation 7nm process technology was presented by WonCheol Jeong, Principal Research Engineer at Samsung.

2nd Generation 7nm?

What Samsung presented at the symposia was what they consider “2nd generation 7nm”. Samsung’s naming is confusing and almost intentionally obfuscated. I have asked Jeong about this and he said that by 2nd generation they are referring to Samsung’s “7LPP”, whereas their 1st generation refers to “7LPE”, which will likely never see the light of day. Unfortunately, WikiChip has been through this situation before with Samsung’s presentation of their “2nd generation 10nm” last year, which ended up being 8nm “8LPP”; therefore it’s entirely possible that this 2nd gen 7nm node really refers to their “6nm” or “5nm” nodes. To avoid possible confusion, we will not be using “7LPP” and will instead stick to the name Samsung used in their presentation (“2nd Gen 7nm”).

Design Features

Samsung’s second-generation 7nm process builds on many of their earlier technologies developed over the years.

5th generation FinFET
2nd generation hybrid N/P
5th generation S/D engineering
3rd generation gate stack

What’s interesting is that both their 2nd generation 7nm and their 8nm 8LPP share many of those rules, including the fin, S/D, and gate engineering. The overlap is shown in the table below, which includes their 14, 10, 8, and 7 nanometer nodes.

Samsung Technology Comparison

Technology    14LPP     10LPP     1st Gen 7nm   8LPP      2nd Gen 7nm
Fin           2nd Gen   3rd Gen   4th Gen       5th Gen   5th Gen
Gate          1st Gen   2nd Gen   3rd Gen       3rd Gen   3rd Gen
S/D Eng       2nd Gen   3rd Gen   4th Gen       5th Gen   5th Gen
SDB           1st Gen   2nd Gen   2nd Gen       3rd Gen   3rd Gen
Gate Stack    1st Gen   2nd Gen   3rd Gen       3rd Gen   3rd Gen
From a technology point of view, 8LPP shares many of the device manufacturing details with 2nd Gen 7nm, more so than the first-generation 7nm.

Key Dimensions

Samsung’s 7nm node key dimensions are:

Samsung Technology Comparison

Feature   7nm     vs 10 nm   vs 14 nm
Fin       27 nm   0.64x      0.56x
Gate      54 nm   0.79x      0.69x
M1, Mx    36 nm   0.75x      0.56x
All the pitches reported above are the tightest numbers reported to date for a leading edge foundry.

EUV

For their 10nm, Samsung has been using Litho-Etch-Litho-Etch-Litho-Etch (LELELE or LE3). For their 7nm, Samsung has eliminated most of the complex patterning by using single-exposure EUV for the three critical layers – fin, contact, and Mx. Samsung reports a mask reduction of >25% when compared to using ArF immersion lithography for comparable features, which translates to cost and time reduction.

EUV mask reduction compared to ArF MPT (VLSI 2018, Samsung)

Cell

For their 7nm, Samsung’s high-density cell has a height of 9 fins, or 243 nm, which works out to 6.75 tracks. This is a cell height reduction of 0.58x over their 10nm or 0.64x over their 8nm.

Samsung’s 14nm, 10nm, 8nm, and 7nm std cells (WikiChip)

The high-density cell is a 2-fin device configuration.

10, 8, and 7 nanometer device configuration (WikiChip)

For a NAND2 cell, 7nm takes up a total area of 0.0394 µm², down from 0.0723 µm² in 8nm or 0.086 µm² in 10nm. That’s a 0.54x and 0.46x scaling versus 8nm and 10nm respectively.
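Those scaling factors follow directly from the quoted cell areas; a quick check:

```python
# Sanity-check the NAND2 area-scaling figures quoted above.
area_7nm, area_8nm, area_10nm = 0.0394, 0.0723, 0.086  # cell areas in um^2

print(round(area_7nm / area_8nm, 2))   # 0.54 (vs 8nm)
print(round(area_7nm / area_10nm, 2))  # 0.46 (vs 10nm)
```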

NAND2 Cell Scaling (WikiChip)

HP Cell

In addition to the high-density cell, Samsung also offers a high-performance cell.

2nd Generation 7nm Std Cell

Cell   Device   Height            Tracks
HD     2-fin    9-fin x 27 nm     6.75
HP              10-fin x 27 nm    7.5


Pattern Fidelity

One of the many limitations with conventional multi-patterning techniques is pattern fidelity. What you see is often not what you get.

(VLSI 2018, Samsung)

For their 7nm, Samsung is reporting EUV 2D fidelity to be 70% better than ArF multi-patterning.

Samsung 7nm Fidelity Comparison (VLSI 2018, Samsung)


From: FJB, 9/5/2018 4:20:50 PM
Strong alloys
Sandia National Laboratories has devised a platinum-gold alloy said to be the most wear-resistant metal in the world.

The alloy is 100 times more durable than high-strength steel, putting it in the same class as diamond and sapphire for wear-resistant materials. “We showed there’s a fundamental change you can make to some alloys that will impart this tremendous increase in performance over a broad range of real, practical metals,” said Nic Argibay, a materials scientist at Sandia.


From: FJB, 10/22/2018 1:44:34 AM
Quantum Advantage Formally Proved for Short-Depth Quantum Circuits

Researchers from IBM T. J. Watson Research Center, the University of Waterloo, Canada, and the Technical University of Munich, Germany, have proved theoretically that quantum computers can solve certain problems faster than classical computers. The algorithm they devised fits the limitations of current quantum computing processors, and an experimental demonstration may come soon.

Strictly speaking, the three researchers – Sergey Bravyi, David Gosset, and Robert König – have shown that

parallel quantum algorithms running in a constant time period are strictly more powerful than their classical counterparts; they are provably better at solving certain linear algebra problems associated with binary quadratic forms.

The proof they provided is based on an algorithm to solve a quadratic “hidden linear function” problem that can be implemented in constant quantum depth. A hidden linear function is a linear function that is not entirely known but is “hidden” inside of another function you can calculate. For example, a linear function could be hidden inside of an oracle that can be queried. The challenge is to fully characterize the hidden linear function based on the results of applying the known function. If this sounds somewhat similar to the problem of inverting a public key to find its private counterpart, it is no surprise, since this is exactly what it is about. In the case of an oracle, the problem is solved by the classical Bernstein-Vazirani algorithm, which minimizes the number of queries to the oracle. According to the three researchers, the fact that the Bernstein-Vazirani algorithm is applied to an oracle limits its practical applicability, so they suggest “hiding” a linear function inside a two-dimensional grid graph. After proving that this is indeed possible, they built a constant-depth quantum algorithm to find the hidden function.
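The classical oracle setting is easy to sketch. Below, a secret bit-string s defines the hidden linear function f(x) = s·x mod 2; a classical solver needs one oracle query per bit, whereas the quantum Bernstein-Vazirani circuit recovers s with a single query. The 6-bit secret is an arbitrary illustration:

```python
# Classical recovery of a hidden linear function f(x) = s.x mod 2.
# Quantum Bernstein-Vazirani needs one oracle query; classically we need
# one query per bit of the secret (querying with each unit vector).

def make_oracle(secret):
    # f(x) = sum_i s_i * x_i mod 2
    return lambda x: sum(si * xi for si, xi in zip(secret, x)) % 2

def recover(oracle, n):
    # f(e_i) = s_i, so n unit-vector queries reveal the whole secret.
    return [oracle([1 if j == i else 0 for j in range(n)]) for i in range(n)]

s = [1, 0, 1, 1, 0, 1]
f = make_oracle(s)
print(recover(f, len(s)) == s)  # True
```

The researchers’ contribution replaces the black-box oracle with a concrete grid-graph construction, which is what makes the separation unconditional.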

The other half of the proof provided by the researchers is showing that, contrary to a quantum circuit, any classical circuit needs to increase its depth as the number of inputs grows. For example, while the quantum algorithm can solve the problem using a quantum circuit of depth at most 10 no matter how many inputs there are, a classical circuit might need, say, depth 10 for a 16-input problem, depth 14 for a 32-input problem, depth 20 for a 64-input problem, and so on. This second part of the proof is philosophically deeply interesting, since it dwells on the idea of quantum nonlocality, which in turn is related to quantum entanglement, one of the most peculiar properties of quantum processors along with superposition. So, quantum advantage would seem to derive from the most intrinsic properties of quantum physics.

At the theoretical level, the value of this achievement is not to be underestimated either. As IBM Q Vice President Bob Sutor wrote:

The proof is the first demonstration of unconditional separation between quantum and classical algorithms, albeit in the special case of constant-depth computations.

Previously, the idea that quantum computers were more powerful than classical ones was based on factorization problems. Shor showed quantum computers can factor an integer in polynomial time, i.e. more efficiently than any known classical algorithm. Although this was an interesting result, it did not rule out the possibility that a more efficient classical factorization algorithm could be found. So unless one conjectured that no efficient solution to the factorization problem could exist (which would amount to demonstrating that P ≠ NP), one could not really say that quantum advantage was proved.

As mentioned, Bravyi, Gosset, and König’s algorithm, relying on a constant number of operations (the depth of a quantum circuit), seems to fit just right with the limitations of current quantum computing processors. Those are basically related to qubits’ error rate and coherence time, which limit the maximal duration of a sequence of operations and their overall number. Therefore, using short-depth circuits is key for any feasible application of current quantum circuits. Thanks to this property of the proposed algorithm, IBM researchers are already at work to demonstrate quantum advantage using an IBM quantum computer, Sutor remarks.

If you are interested in the full details of the proof, do not miss the talk David Gosset gave at the Perimeter Institute for Theoretical Physics along with the presentation slides.



From: FJB, 11/6/2018 10:55:22 AM
The kilogram is one of the most important and widely used units of measure in the world — unless you live in the US. For everyone else, having an accurate reading on what a kilogram is can be vitally important in fields like manufacturing, engineering, and transportation. Of course, a kilogram is 1,000 grams or 2.2 pounds if you want to get imperial. That doesn’t help you define a kilogram, though. The kilogram is currently controlled by a metal slug in a French vault, but its days of importance are numbered. Scientists are preparing to redefine the kilogram in terms of fundamental constants.

It’s actually harder than you’d expect to know when a measurement matches the intended standard, even when it’s one of the well-defined Système International (SI) units. For example, the meter was originally defined in 1793 as one ten-millionth the distance from the equator to the north pole. That value was wrong, but the meter has since been redefined in more exact terms like krypton-86 wavelength emissions and most recently the speed of light in a vacuum. The second was previously defined as a tiny fraction of how long it takes the Earth to orbit the sun. Now, it’s pegged to the amount of time it takes a cesium-133 atom to oscillate 9,192,631,770 times. Again, this is immutable and extremely precise.
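Both redefinitions pin a unit to a fixed natural constant, which makes the associated time scales trivially computable:

```python
# The constants below are exact by definition in the SI system.
C = 299_792_458          # m/s: the meter is defined via the speed of light
CS_HZ = 9_192_631_770    # Hz: the second is defined via cesium-133

ns_per_meter = 1e9 / C           # time for light to cross one meter
ns_per_cs_period = 1e9 / CS_HZ   # duration of one cesium-133 oscillation

print(round(ns_per_meter, 3))      # 3.336 ns
print(round(ns_per_cs_period, 4))  # 0.1088 ns
```

Any lab with a sufficiently good clock and laser can realize these units; no physical artifact is needed.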

That brings us to the kilogram, which is a measurement of mass. Weight is different and changes based on gravity, but a kilogram is always a kilogram because it comes from measurements of density and volume. The definition of the kilogram is tied to the International Prototype of the Kilogram (IPK, see above), a small cylinder of platinum and iridium kept at the International Bureau of Weights and Measures in France. Scientists have created dozens of copies of the IPK so individual nations can standardize their measurements, but that’s a dangerous way to go about it. If anything happened to the IPK, we wouldn’t have a standard kilogram anymore.

Later this month, scientists at the international General Conference on Weights and Measures are expected to vote on a new definition for the kilogram, one that leaves the IPK behind and ties the measurement to the unchanging laws of the universe. Researchers from the National Institute of Standards and Technology in the US and the National Physical Laboratory in England are working on the problem of connecting mass with electromagnetic forces.

The Kibble Balance at the UK’s National Physical Laboratory.

The heart of this effort is the Kibble balance, a stupendously complex device that quantifies the electric current needed to produce an electromagnetic force equal to the gravitational force acting on a mass. So, it does not measure mass directly but instead measures the electromagnetic force on a current-carrying coil. This allows scientists to tie the mass of a kilogram to the Planck constant, which is much less likely to change than a metal slug in a French vault.
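The trick in a Kibble balance is that running the same coil in two modes (weighing and moving) cancels the hard-to-measure coil geometry factor B·L. A minimal sketch of that cancellation; the B·L value, coil velocity, and local gravity below are illustrative assumptions:

```python
# Kibble-balance principle: in weighing mode, m*g = B*L*I; in moving mode,
# the coil traveling at velocity v induces V = B*L*v. Dividing the two
# relations eliminates B*L entirely: m = V*I / (g*v).

G = 9.80665   # m/s^2, assumed local gravity
BL = 700.0    # T*m, assumed coil geometry factor

m = 1.0                 # kg, the test mass
I = m * G / BL          # current that balances the weight (weighing mode)
v = 0.002               # m/s, assumed coil velocity (moving mode)
V = BL * v              # induced voltage

print(round(V * I / (G * v), 6))  # 1.0 -- the mass, with B*L cancelled out
```

Since voltage and current can be measured via quantum-electrical effects tied to the Planck constant, this links mass to h without any reference artifact.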

So, the kilogram isn’t changing in any way that matters in your daily life, but that’s kind of the point. The kilogram is important, so it can’t change. Redefining the kilogram to get away from the IPK ensures it remains the same forever.


From: FJB, 11/13/2018 6:00:54 AM
Growing the future

High-tech farmers are using LED lights in ways that seem to border on science fiction

By Adrian Higgins in Cincinnati Nov. 6, 2018

Mike Zelkind stands at one end of what was once a shipping container and opens the door to the future.

Thousands of young collard greens are growing vigorously under a glow of pink-purple lamps in a scene that seems to have come from a sci-fi movie, or at least a NASA experiment. But Zelkind is at the helm of an earthbound enterprise. He is chief executive of 80 Acres Farms, with a plant factory in an uptown Cincinnati neighborhood where warehouses sit cheek by jowl with detached houses.

Since plants emerged on Earth, they have relied on the light of the sun to feed and grow through the process of photosynthesis.

But Zelkind is part of a radical shift in agriculture — decades in the making — in which plants can be grown commercially without a single sunbeam. A number of technological advances have made this possible, but none more so than innovations in LED lighting.

“What is sunlight from a plant’s perspective?” Zelkind asks. “It’s a bunch of photons.”

Diode lights, which work by passing a current between semiconductors, have come a long way since they showed up in calculator displays in the 1970s. Compared with other forms of electrical illumination, light-emitting diodes use less energy, give off little heat and can be manipulated to optimize plant growth.


From: FJB, 11/18/2018 6:51:24 AM

Startup Offers To Sequence Your Genome Free Of Charge, Then Let You Profit From It

"Everything is private information, stored on your computer or a computer you designate," says George Church, genetics professor at Harvard Medical School, about the approach of Nebula Genomics.

A startup genetics company says it's now offering to sequence your entire genome at no cost to you. In fact, you would own the data and may even be able to make money off it.

Nebula Genomics, created by the prominent Harvard geneticist George Church and his lab colleagues, seeks to upend the usual way genomic information is owned.

Today, companies like 23andMe make some of their money by scanning your genetic patterns and then selling that information to drug companies for use in research. (You choose whether to opt in.)

Church says his new enterprise leaves ownership and control of the data in an individual's hands. And the genomic analysis Nebula will perform is much more detailed than what 23andMe and similar companies offer.

Nebula will do a full genome sequence, rather than a snapshot of key gene variants. That wider range of genetic information makes the data more appealing to biologists and to biotech and pharmaceutical companies.

Generating a full sequence costs about $1,000, but the price continues to tumble, Church says. Nebula's business model anticipates that companies and research organizations would be willing to pay for the cost of sequencing, if in exchange they also get some key medical information about the person involved. If that proves to be the case, people would get their genetic information at no cost.

The company hopes most people will pony up $99 to get the process going. "Ninety-nine bucks will get you a little bit of genetic information, but to get the full thing, [companies or researchers] will have to be interested in either your traits or your genome or both," Church says.

In fact, people might even make money on the deal, especially if they have an unusual trait that a company wants to study. Those payments could be "probably anywhere from $10 to $10,000, if you're some exceptional research resource," Church says.

And it's not just people with diseases who could be of value to pharmaceutical companies. "Even people who seem to be healthy, they might be super healthy and not even know it," he says.

Church's approach is part of a trend that's pushing back against the multibillion-dollar industry to buy and sell medical information. Right now, companies reap those profits and control the data.

"Patients should have the right to decide for themselves whether they want to share their medical data, and, if so, with whom," Adam Tanner, at Harvard's Institute for Quantitative Social Science, says in an email. "Efforts to empower people to fine-tune the fate of their medical information are a step in the right direction." Tanner, author of a book on the subject of the trade in medical data, isn't involved in Nebula.

The current system is "very paternalistic," Church says. He aims to give people complete control over who gets access to their data, and let individuals decide whether or not to sell the information, and to whom.

"In this case, everything is private information, stored on your computer or a computer you designate," Church says. It can be encrypted so nobody can read it, even you, if that's what you want.

Drug companies interested in studying, say, diabetes patients would ask Nebula to identify people in their system who have the disease. Nebula would then identify those individuals by launching an encrypted search of participants.
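One way such a search could avoid exposing raw records is to match on salted hashes of trait tokens rather than on the data itself. This is only an illustrative sketch of the general idea, not Nebula’s actual protocol (which the company describes in terms of encrypted search):

```python
# Participants publish only a salted hash of each trait token; a query
# re-hashes the trait with each participant's salt and compares digests.
# The raw trait never appears in the shared store.
import hashlib
import os

def commit(trait: str, salt: bytes) -> bytes:
    return hashlib.sha256(salt + trait.encode()).digest()

# Each participant stores (salt, commitment) instead of the raw trait.
participants = []
for trait in ["diabetes", "healthy", "diabetes"]:
    salt = os.urandom(16)
    participants.append((salt, commit(trait, salt)))

query = "diabetes"
matches = [i for i, (salt, c) in enumerate(participants)
           if commit(query, salt) == c]
print(matches)  # [0, 2]
```

Even in this toy form, the store operator learns which participants matched a query only when the query is run, not what every participant's underlying data says.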

People who have indicated they're interested in selling their genetic data to a company would then be given the option of granting access to the information, along with medical data that person has designated.

Other companies are also springing up to help people control — and potentially profit from — their medical data. EncrypGen lets people offer up their genetic data, though customers have to provide their own DNA sequence. Another company is trying to establish a system in which people can sell their medical data to pharmaceutical companies.

Church, a genetics pioneer, has developed other business models before. The Personal Genome Project allows people to donate their genome and health data to help speed medical research. Veritas Genetics charges $1,000 to produce a complete genome, and provides the resulting medical information to people along with genetic counseling.

Church has been trying to accelerate research into health and disease through these efforts. Jason Bobe, a geneticist at the Icahn School of Medicine at Mount Sinai, says one bottleneck has been getting people to participate in research. (Bobe has collaborated with Church but isn't involved in Nebula Genomics.)

With more control over personal data, and the possibility of a personal financial return, "I'm hopeful that this model will actually attract people, where historically people have been very disinterested in participation in research."



From: FJB, 12/14/2018 2:51:33 AM
RISC-V Will Stop Hackers Dead From Getting Into Your Computer

The greatest hardware hacks of all time were simply the result of finding software keys in memory. The AACS encryption debacle — the 09 F9 key that allowed us to decrypt HD DVDs — was the result of encryption keys just sitting in main memory, where they could be read by any other program. DeCSS, the hack that gave us all access to DVDs, was again the result of encryption keys sitting out in the open.
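The underlying weakness is easy to demonstrate: anything that can read a process’s address space can simply scan it for key material. The sketch below searches a simulated memory dump for the 09 F9 key by a known prefix:

```python
# Plaintext keys in RAM are findable by anyone who can read the memory.
# Here a fake "memory dump" is scanned for a 16-byte key via a known prefix.
import os

key = bytes.fromhex("09f911029d74e35bd84156c5635688c0")  # the famous 09 F9 key
dump = os.urandom(4096) + key + os.urandom(4096)         # simulated memory image

offset = dump.find(bytes.fromhex("09f911029d74e35b"))    # scan for known prefix
recovered = dump[offset:offset + 16]
print(recovered == key)  # True
```

Real attacks were no more sophisticated in principle: dump memory, search for key-shaped data, done. That is exactly what hardware enclaves are designed to prevent.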

Because encryption doesn’t work if your keys are just sitting out in the open, system designers have come up with ingenious solutions to prevent evil hackers from accessing these keys. One of the best solutions is the hardware enclave, a tiny bit of silicon that protects keys and other bits of information. Apple has an entire line of chips, Intel has hardware extensions, and all of these are black box solutions. They do work, but we have no idea if there are any vulnerabilities. If you can’t study it, it’s just an article of faith that these hardware enclaves will keep working.

Now, there might be another option. RISC-V researchers are busy creating an Open Source hardware enclave. This is an Open Source project to build secure hardware enclaves to store cryptographic keys and other secret information, and they’re doing it in a way that can be accessed and studied. Trust but verify, yes, and that’s why this is the most innovative hardware development in the last decade.

What is an enclave?

Although they may seem like a new technology, processor enclaves have been around for ages. The first to reach the public consciousness would be the Secure Enclave Processor (SEP) found in the iPhone 5S. This generation of iPhone introduced several important technological advancements, including Touch ID, the innovative and revolutionary M7 motion coprocessor, and the SEP security coprocessor itself. The iPhone 5S was a technological milestone, and the then-new SEP stored fingerprint data and cryptographic keys beyond the reach of the main SoC found in the iPhone.

The iPhone 5S SEP was designed to perform secure services for the rest of the SoC, primarily relating to the Touch ID functionality. Apple’s revolutionary use of a secure enclave processor was extended with the 2016 release of the Touch Bar MacBook Pro and the Apple T1 chip. The T1 chip was again used for Touch ID functionality, and demonstrates that Apple is the king of vertical integration.

But Apple isn’t the only company working on secure enclaves for their computing products. Intel has developed the SGX extension, which allows for hardware-assisted security enclaves. These enclaves give developers the ability to hide cryptographic keys and the components for digital rights management inside a hardware-protected bit of silicon. AMD, too, has hardware enclaves with Secure Encrypted Virtualization (SEV). ARM has Trusted Execution Environments. While the Intel, AMD, and ARM enclaves are bits of silicon on other bits of silicon — distinct from Apple’s approach of putting a hardware enclave on an entirely new chip — the idea remains the same. You want to put secure stuff in secure environments, and enclaves allow you to do that.

Unfortunately, these hardware enclaves are black boxes, and while they do present a small attack surface, there are problems. AMD’s SEV is already known to have serious security weaknesses, and it is believed SEV does not offer protection from malicious hypervisors, only from accidental hypervisor vulnerabilities. Intel’s Management Engine, while not explicitly a hardware enclave, has been shown to be vulnerable to attack. Because these enclaves are black boxes, we are left relying on security through obscurity, which does not work at all.

The Open Source Solution

At last week’s RISC-V Summit, researchers at UC Berkeley released their plans for the Keystone Enclave, an Open Source secure enclave based on RISC-V (PDF). Keystone is a project to build a Trusted Execution Environment (TEE) with secure hardware enclaves based on the RISC-V architecture, the same architecture that’s going into completely Open Source microcontrollers and (soon) Systems on a Chip.

The goals of the Keystone project are to build a chain of trust, starting from a silicon Root of Trust stored in tamper-proof hardware. This leads to a zeroth-stage bootloader and a tamper-proof platform key store. Defining a hardware Root of Trust (RoT) is exceptionally difficult: you can always decapsulate silicon, you can always perform some sort of analysis on a chip to extract keys, and if your supply chain isn’t managed well, you have no idea whether the chip you’re basing your RoT on is actually the chip in your computer. However, by using RISC-V and its Open Source HDL, this RoT can at least be studied, unlike the black-box solutions from Intel, AMD, and ARM.
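The chain-of-trust idea above can be sketched in a few lines. This is a generic measured-boot sketch, not Keystone's actual design or API; all names are illustrative. Each stage hashes the next before handing off, extending a running measurement anchored in the Root of Trust, so a single tampered byte anywhere in the chain changes the final value.

```python
# Generic measured-boot sketch: extend a running measurement with the hash
# of each boot stage, TPM-style: new = H(old || H(stage)).
import hashlib

ROOT_OF_TRUST = b"\x00" * 32  # fixed anchor, conceptually burned into hardware

def extend(measurement: bytes, stage_image: bytes) -> bytes:
    stage_hash = hashlib.sha256(stage_image).digest()
    return hashlib.sha256(measurement + stage_hash).digest()

def measured_boot(stages) -> bytes:
    m = ROOT_OF_TRUST
    for image in stages:
        m = extend(m, image)  # measure each stage before "jumping" to it
    return m

# Identical boot chains yield identical measurements...
good = measured_boot([b"zeroth-stage bootloader", b"security monitor", b"kernel"])
# ...while a tampered stage changes the final measurement, so a verifier
# holding the expected value can detect the modification.
evil = measured_boot([b"zeroth-stage bootloader", b"security m0nitor", b"kernel"])
assert good != evil
```

The security of the scheme rests entirely on the anchor being tamper-proof, which is exactly why an auditable, Open Source RoT matters.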

The current plans for Keystone include memory isolation, an open framework for building on top of this security enclave, and a minimal but Open Source solution for a security enclave.

Right now, the Keystone Enclave is testable on various platforms, including QEMU, FireSim, and real hardware with the SiFive Unleashed. There’s still much work to do, from formal verification to building out the software stack and libraries to adding hardware extensions.

This is a game changer for security. Silicon vendors and designers have been shoehorning hardware enclaves into processors for nearly a decade now, and Apple has gone so far as to create its own enclave chips. All of these solutions are black boxes, and there is no third-party verification that these designs are inherently correct. The RISC-V project is different, and the Keystone Enclave is the best chance we have for creating a truly Open hardware enclave that can be studied and verified independently.


From: FJB 12/15/2018 7:24:12 PM
   of 410

Serverless Computing: One Step Forward, Two Steps Back

Joseph M. Hellerstein, Jose Faleiro, Joseph E. Gonzalez, Johann Schleier-Smith, Vikram Sreekanti, Alexey Tumanov and Chenggang Wu UC Berkeley {hellerstein,jmfaleiro,jegonzal,jssmith,vikrams,atumanov,cgwu}

ABSTRACT

Serverless computing offers the potential to program the cloud in an autoscaling, pay-as-you-go manner. In this paper we address critical gaps in first-generation serverless computing, which place its autoscaling potential at odds with dominant trends in modern computing: notably data-centric and distributed computing, but also open source and custom hardware. Put together, these gaps make current serverless offerings a bad fit for cloud innovation and particularly bad for data systems innovation. In addition to pinpointing some of the main shortfalls of current serverless architectures, we raise a set of challenges we believe must be met to unlock the radical potential that the cloud—with its exabytes of storage and millions of cores—should offer to innovative developers.


From: retrodynamic 2/3/2019 5:27:39 PM
   of 410
Featured Project Development - State of the Art Novel InFlow Tech; ·1-Gearturbine RotaryTurbo:

*Wordpress Blog State of the Art Novel InFlow Gearturbine Imploturbocompressor

-YouTube; * Atypical New • GEARTURBINE / Retrodynamic = DextroRPM VS LevoInFlow + Ying Yang Thrust Way Type - No Waste Losses

*1-GEARTURBINE BEHANCE Images Gallery Portfolio

·1; Rotary-Turbo-InFlow Tech / - GEARTURBINE PROJECT has a basic system similar to the Aeolipile, Heron's steam turbine device from Alexandria, 10-70 AD. * With Retrodynamic = DextroRPM VS LevoInFlow + Ying Yang Way Power Type - No Waste Losses. *8X/Y Thermodynamic CYCLE Way Steps. 4 Turbos, higher efficiency percentage. No blade erosion by sand and very low heat target signature. Pat: 197187 IMPI MX Dec 1991. Atypical Motor Engine Type.

*Note: Sorry, I didn't see the earlier post.
