

From: FJB 7/27/2017 7:17:00 AM
   of 420
EXCLUSIVE: First human embryos edited in U.S., using CRISPR

A video shows the injection of gene-editing chemicals into a human egg near the moment of fertilization. The technique is designed to correct a genetic disorder from the father.

The first known attempt at creating genetically modified human embryos in the United States has been carried out by a team of researchers in Portland, Oregon, Technology Review has learned.

The effort, led by Shoukhrat Mitalipov of Oregon Health and Science University, involved changing the DNA of a large number of one-cell embryos with the gene-editing technique CRISPR, according to people familiar with the scientific results.

Until now, American scientists have watched with a combination of awe, envy, and some alarm as scientists elsewhere were first to explore the controversial practice. To date, three previous reports of editing human embryos were all published by scientists in China.

Now Mitalipov is believed to have broken new ground both in the number of embryos experimented upon and by demonstrating that it is possible to safely and efficiently correct defective genes that cause inherited diseases.

Although none of the embryos were allowed to develop for more than a few days—and there was never any intention of implanting them into a womb—the experiments are a milestone on what may prove to be an inevitable journey toward the birth of the first genetically modified humans.

In altering the DNA code of human embryos, the objective of scientists is to show that they can eradicate or correct genes that cause inherited disease, like the blood condition beta-thalassemia. The process is termed “germline engineering” because any genetically modified child would then pass the changes on to subsequent generations via their own germ cells—the egg and sperm.

Some critics say germline experiments could open the floodgates to a brave new world of “designer babies” engineered with genetic enhancements—a prospect bitterly opposed by a range of religious organizations, civil society groups, and biotech companies.

The U.S. intelligence community last year called CRISPR a potential "weapon of mass destruction.”

Shoukhrat Mitalipov is the first U.S.-based scientist known to have edited the DNA of human embryos. OHSU/KRISTYNA WENTZ-GRAFF

Reached by Skype, Mitalipov declined to comment on the results, which he said are pending publication. But other scientists confirmed the editing of embryos using CRISPR. “So far as I know this will be the first study reported in the U.S.,” says Jun Wu, a collaborator at the Salk Institute, in La Jolla, California, who played a role in the project.

Better technique

The earlier Chinese publications, although limited in scope, found CRISPR caused editing errors and that the desired DNA changes were taken up not by all the cells of an embryo, only some. That effect, called mosaicism, lent weight to arguments that germline editing would be an unsafe way to create a person.

But Mitalipov and his colleagues are said to have convincingly shown that it is possible to avoid both mosaicism and “off-target” effects, as the CRISPR errors are known.

A person familiar with the research says “many tens” of human IVF embryos were created for the experiment using the donated sperm of men carrying inherited disease mutations. Embryos at this stage are tiny clumps of cells invisible to the naked eye. Technology Review could not determine which disease genes had been chosen for editing.

“It is proof of principle that it can work. They significantly reduced mosaicism. I don’t think it’s the start of clinical trials yet, but it does take it further than anyone has before,” said a scientist familiar with the project.

Mitalipov’s group appears to have overcome earlier difficulties by “getting in early” and injecting CRISPR into the eggs at the same time they were fertilized with sperm.

That concept is similar to one tested in mice by Tony Perry of Bath University. Perry successfully edited the mouse gene for coat color, changing the fur of the offspring from the expected brown to white.

Somewhat prophetically, Perry’s paper on the research, published at the end of 2014, said, “This or analogous approaches may one day enable human genome targeting or editing during very early development.”

Genetic enhancement

Born in Kazakhstan when it was part of the former Soviet Union, Mitalipov has for years pushed scientific boundaries. In 2007, he unveiled the world’s first cloned monkeys. Then, in 2013, he created human embryos through cloning, as a way of creating patient-specific stem cells.

His team’s move into embryo editing coincides with a report by the U.S. National Academy of Sciences in February that was widely seen as providing a green light for lab research on germline modification.

The report also offered qualified support for the use of CRISPR for making gene-edited babies, but only if it were deployed for the elimination of serious diseases.

The advisory committee drew a red line at genetic enhancements—like higher intelligence. “Genome editing to enhance traits or abilities beyond ordinary health raises concerns about whether the benefits can outweigh the risks, and about fairness if available only to some people,” said Alta Charo, co-chair of the NAS’s study committee and professor of law and bioethics at the University of Wisconsin–Madison.

In the U.S., any effort to turn an edited IVF embryo into a baby has been blocked by Congress, which added language to the Department of Health and Human Services funding bill forbidding it from approving clinical trials of the concept.

Despite such barriers, the creation of a gene-edited person could be attempted at any moment, including by IVF clinics operating facilities in countries where there are no such legal restrictions.

Steve Connor is a freelance journalist based in the U.K.


From: Glenn Petersen 8/4/2017 9:50:54 AM
Tech Guru Bill Joy Unveils a Battery to Challenge Lithium-Ion

By Brian Eckhouse
August 3, 2017

-- Rechargeable alkaline battery could be cheaper, Joy says

-- Lithium-ion battery pack prices down 73% from 2010 to 2016

Elon Musk isn’t the only visionary betting that the world will soon be reliant on batteries. Bill Joy, the Silicon Valley guru and Sun Microsystems Inc. co-founder, also envisions such dependence. He just thinks alkaline is a smarter way to go than lithium-ion.

Bill Joy
Photographer: Hyoung Chang/The Denver Post via Getty Images

On Thursday, Joy and Ionic Materials unveiled a solid-state alkaline battery at the Rocky Mountain Institute’s Energy Innovation Summit in Basalt, Colorado, that he says is safer and cheaper than the industry leader, lithium-ion. The appeal of alkaline: it could cost a tiny fraction of existing battery technologies and could be safer in delicate settings, such as aboard airplanes.

“What people didn’t really realize is that alkaline batteries could be made rechargeable,” Joy said in a phone interview Thursday. “I think people had given up.”

The Ionic Materials investor envisions three ultimate applications for the polymer technology: consumer electronics, automotive and the power grid. But Joy acknowledged that the technology isn’t quite ready for prime-time. It has yet to be commercialized, and factories are needed to manufacture it. It could be ready for wider use within five years, he said.

On top of that, it would face an entrenched incumbent.


Lithium-ion battery pack prices fell 73 percent from 2010 to 2016, said Logan Goldie-Scot, a San Francisco-based analyst at Bloomberg New Energy Finance, in an email Thursday. “Technology improvements, manufacturing scale, competition between the major battery manufacturers continue to drive costs down. This will make it hard for alternative technologies to compete.”

Ionic expects to talk to potential partners about licenses. Global lithium-ion battery demand from electric vehicles is projected to grow from 21 gigawatt-hours in 2016 to 1,300 gigawatt-hours in 2030, according to Bloomberg New Energy Finance.
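As a rough sanity check on that projection, the implied compound annual growth rate follows directly from the two endpoints quoted above (21 gigawatt-hours in 2016, 1,300 in 2030); this back-of-the-envelope sketch uses only those figures:

```python
# Implied compound annual growth rate (CAGR) of the BNEF projection
# cited above: 21 GWh in 2016 growing to 1,300 GWh in 2030.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1 / years) - 1

growth = cagr(21, 1300, 2030 - 2016)
print(f"Implied annual growth: {growth:.1%}")  # roughly 34% per year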

“Even if we grew 400 percent every year for a decade, we couldn’t meet the need” alone, Joy said. “We’re starting from a zero base. We don’t have a factory. We have a revolutionary material.”


From: Glenn Petersen 8/29/2017 10:06:11 AM
"Edge computing":

Message 31241376


From: Glenn Petersen 9/20/2017 5:10:34 PM
British supermarket offers 'finger vein' payment in worldwide first

Katie Morley, Consumer Affairs Editor
The Telegraph
20 September 2017 • 1:04am

Fingerprint technology could be coming to a supermarket near you Credit: Fabrizio Bensc/REUTERS

A UK supermarket has become the first in the world to let shoppers pay for groceries using just the veins in their fingertips.

Customers at the Costcutter store, at Brunel University in London, can now pay using their unique vein pattern to identify themselves.

The firm behind the technology, Sthaler, has said it is in "serious talks" with other major UK supermarkets to adopt hi-tech finger vein scanners at pay points across thousands of stores.

It works by using infrared to scan people's finger veins and then links this unique biometric map to their bank cards. Customers’ bank details are then stored with payment provider Worldpay, in the same way you can store your card details when shopping online. Shoppers can then turn up to the supermarket with nothing on them but their own hands and use it to make payments in just three seconds.

It comes as previous studies have found fingerprint recognition, used widely on mobile phones, is vulnerable to being hacked and can be copied even from finger smears left on phone screens.

But Sthaler, the firm behind the technology, claims vein technology is the most secure biometric identification method as it cannot be copied or stolen.

Sthaler said dozens of students were already using the system and that it expected 3,000 students out of 13,000 to have signed up by November.

Fingerprint payments are already used widely at cash points in Poland, Turkey and Japan.

Vein scanners are also used as a way of accessing high-security UK police buildings and authorising internal trading at at least one major British investment bank.

The firm is also in discussions with nightclubs and gyms about using the technology to verify membership, and even with Premier League football clubs to check that people have the right access to VIP hospitality areas.

The technology uses an infrared light to create a detailed map of the vein pattern in your finger. It requires the person to be alive, meaning in the unlikely event a criminal hacks off someone’s finger, it would not work. Sthaler said it takes just one minute to sign up to the system initially and, after that, it takes just seconds to place your finger in a scanner each time you reach the supermarket checkout.

Simon Binns, commercial director of Sthaler, told the Daily Telegraph: "This makes payments so much easier for customers.

"They don’t need to carry cash or cards. They don’t need to remember a pin number. You just bring yourself. This is the safest form of biometrics. There are no known incidences where this security has been breached.

"When you put your finger in the scanner it checks you are alive, it checks for a pulse, it checks for haemoglobin. Your vein pattern is secure because it is kept on a database in an encrypted form, as binary numbers. No card details are stored with the retailer or ourselves; it is held with Worldpay, in the same way it is when you buy online."

Nick Telford-Reed, director of technology innovation at Worldpay UK, said: "In our view, finger vein technology has a number of advantages over fingerprint. This deployment of Fingopay in Costcutter branches demonstrates how consumers increasingly want to see their payment methods secure and simple."


From: FJB 9/30/2017 12:58:12 PM

When Bitcoin was unleashed on the world, it filled a specific need. But it wasn’t long before people realized the technology behind Bitcoin—the blockchain—could do much more than record monetary transactions. That realization has lately blossomed into a dazzling and often bewildering array of startup companies, initiatives, corporate alliances, and research projects. Billions of dollars will hinge on what they come up with. So you should understand how blockchains work—and what could happen if they don’t.

Blockchains: How They Work and Why They’ll Change the World
The technology behind Bitcoin could touch every transaction you ever make. By Morgen E. Peck

How Smart Contracts Work
Blockchain technology could run a flight-insurance business without any employees. By Morgen E. Peck

How Blockchains Work
Illustrated from transaction to reward. By Morgen E. Peck

The Ridiculous Amount of Energy It Takes to Run Bitcoin
Running Bitcoin uses a small city’s worth of electricity. Intel and others want to make a more sustainable blockchain. By Peter Fairley

Do You Need a Blockchain?
This chart will tell you if a blockchain can solve your problem. By Morgen E. Peck

Wall Street Firms to Move Trillions to Blockchains in 2018
The finance industry is eagerly adopting the blockchain, a technology that early fans hoped would obliterate the finance industry. By Amy Nordrum

Blockchain Lingo
The terms you need to know to understand the blockchain revolution. By Morgen E. Peck

Why the Biggest Bitcoin Mines Are in China
The heart of Bitcoin is now in Inner Mongolia, where dirty coal fuels sophisticated semiconductor engineering. By Morgen E. Peck

Illinois vs. Dubai: Two Experiments Bring Blockchains to Government
Dubai wants one blockchain platform to rule them all, while Illinois will try anything. By Amy Nordrum

Blockchains Will Allow Rooftop Solar Energy Trading for Fun and Profit
Neighbors in New York City, Denmark, and elsewhere will be able to sell one another their solar power. By Morgen E. Peck & David Wagman

Video: The Bitcoin Blockchain Explained
What is a blockchain and why is it the future of the Web? By Morgen E. Peck & IEEE Spectrum Staff


From: FJB 10/11/2017 3:34:48 PM
Inside Microsoft’s Quest to Make Quantum Computing Scalable

The company’s researchers are building a system that’s unlike any other quantum computer being developed today.
There’s no shortage of big tech companies building quantum computers, but Microsoft claims its approach to manufacturing qubits will make its quantum computing systems more powerful than others’. The company’s researchers are pursuing “topological” qubits, which store data in the path of moving exotic Majorana particles. This is different from storing it in the state of electrons, which is fragile.

That’s according to Krysta Svore, research manager in Microsoft’s Quantum Architectures and Computation group. The Majorana particle paths -- with a fractionalized electron appearing in many places along them -- weave together like a braid, which makes for a much more robust and efficient system, she said in an interview with Data Center Knowledge. These qubits are called “topological qubits,” and the systems are called “topological quantum computers.”

With other approaches, it may take 10,000 physical qubits to create a logical qubit that’s stable enough for useful computation, because the state of the qubits storing the answer to your problem “decoheres” very easily, she said. It’s harder to disrupt an electron that’s been split up along a topological qubit, because the information is stored in more places.

In quantum mechanics, particles are represented by wavelengths. Coherence is achieved when waves that interfere with each other have the same frequency and constant phase relation. In other words, they don’t have to be in phase with each other, but the difference between the phases has to remain constant. If it does not, the particle states are said to decohere.

“We’re working on a universally programmable circuit model, so any other circuit-based quantum machine will be able to run the same class of algorithms, but we have a big differentiator,” Svore said. “Because the fidelity of the qubit promises to be several orders of magnitude better, I can run an algorithm that’s several orders of magnitude bigger. If I can run many more operations without decohering, I could run a class of algorithm that in theory would run on other quantum machines but that physically won’t give a good result. Let’s say we’re three orders of magnitude better; then I can run three orders of magnitude more operations in my quantum circuit.”

Theoretically, that could mean a clear advantage of a quantum computer over a classical one. “We can have a much larger circuit which could theoretically be the difference between something that shows quantum advantage or not. And for larger algorithms, where error corrections are required, we need several orders of magnitude less overhead to run that algorithm,” she explained.
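Svore's scaling claim can be illustrated with a toy error model (a sketch of the general principle, not Microsoft's own math): if each gate succeeds with probability 1 - p, a d-gate circuit succeeds with roughly (1 - p)^d, so the depth reachable at a fixed target fidelity grows as about 1/p, and three orders of magnitude lower gate error buys about three orders of magnitude more operations.

```python
import math

# Toy model: overall circuit fidelity ~ (1 - p)**d for d gates with
# per-gate error p, so the maximum useful depth scales roughly as 1/p.

def max_depth(gate_error: float, target_fidelity: float = 0.5) -> int:
    """Largest gate count keeping overall fidelity above the target."""
    return int(math.log(target_fidelity) / math.log(1.0 - gate_error))

print(max_depth(1e-3))  # depth on the order of 700 gates
print(max_depth(1e-6))  # about 1,000x deeper at 1,000x lower error
```

The 0.5 fidelity target here is arbitrary; any fixed threshold gives the same 1/p proportionality.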

A Hardware and Software System that Scales

Microsoft has chosen to focus on topological qubits because the researchers believe it will scale, and the company is also building a complete hardware stack to support the scaling. “We’re building a cryogenic computer to control the topological quantum chip; then we're building a software system where you can compile millions of operations and beyond.”

The algorithms running on the system could be doing things like quantum chemistry – looking for more efficient fertilizer or a room temperature semiconductor – or improving machine learning. Microsoft Research has already shown that deep learning trains faster with a quantum computer. With the same deep learning models in use today, Svore says, the research shows “quadratic speedups” even before you start adding quantum terms to the data model, which seems to improve performance even further.

Redesigning a Programming Language

To get developers used to the rather different style of quantum programming, Microsoft will offer a new set of tools in preview later this year (which doesn’t have a name yet) that’s a superset built on what it learned from the academics, researchers, students, and developers who used Liquid, an embedded domain specific language in F# that Microsoft created some years ago.

The language itself has familiar concepts like functions, if statements, variables, and branches, but it also has quantum-specific elements and a growing set of libraries developers can call to help them build quantum apps.

“We’ve almost completely redesigned the language; we will offer all the things Liquid had, but also much more, and it’s not an embedded language. It’s really a domain-specific language designed upfront for scalable quantum computing, and what we’ve tried to do is raise the level of abstraction in this high-level language with the ability to call vast numbers of libraries and subroutines.”

Some of those are low-level subroutines like an adder, a multiplier, and trigonometry functions, but there are also higher-level functions that are commonly used in quantum computing. “Tools like phase estimation, amplitude amplification, amplitude estimation -- these are very common frameworks for your quantum algorithms. They’re the core framework for setting up your algorithm to measure and get the answer out at the end [of the computation], and they’re available in a very pluggable way.”

A key part of making the language accessible is the way it’s integrated into Visual Studio, Microsoft’s IDE. “I think this is a huge step forward,” Svore told us. “It makes it so much easier to read the code because you get the syntax coloring and the debugging; you can set a breakpoint, you can visualise the quantum state.”

Being able to step through your code to understand how it works is critical to learning a new language or a new style of programming, and quantum computing is a very different style of computing.

“As we’ve learned about quantum algorithms and applications, we’ve put what we’ve learned into libraries to make it easier for a future generation of quantum developers,” Svore said. “Our hope is that as a developer you’re not having to think at the lower level of circuits and probabilities. The ability to use these higher-level constructs is key.”

Hybrid Applications

The new language will also make it easier to develop hybrid applications that use both quantum and classical computing, which Svore predicts will be a common pattern. “With the quantum computer, many of the quantum apps and algorithms are hybrid. You're doing pre and post-processing or in some algorithms you’ll even be doing a very tight loop with a classical supercomputer.”

How Many Qubits Can You Handle?

Microsoft, she says, is making progress with its topological qubits, but, as it’s impossible to put any kind of date on when a working system might emerge from all this work, the company will come out with a quantum simulator to actually run the programs you write, along with the other development tools.

Depending on how powerful your system is, you’ll be able to simulate between 30 and 33 qubits on your own hardware. For 40 qubits and more, you can do the simulation on Azure.

“At 30 qubits, it takes roughly 16GB of classical memory to store that quantum state, and each operation takes a few seconds,” Svore explains. But as you simulate more qubits, you need a lot more resources. Adding ten qubits multiplies the memory by two to the power of ten: going from 30 to 40 qubits takes you to roughly 16TB, and it doubles again from 40 to 41 qubits. Pretty soon, you’re hitting petabytes of memory. “At 230 qubits, the amount of memory you need is 10^80 bytes, which is more bytes than there are particles in the physical universe, and one operation takes the lifetime of the universe,” Svore said. “But in a quantum computer, that one operation takes 100 nanoseconds.”
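The arithmetic behind those figures is easy to reproduce: a full state vector for n qubits holds 2^n complex amplitudes, and at an assumed 16 bytes per amplitude (a double-precision complex number) the totals line up with the ones Svore gives:

```python
# Memory to store a full n-qubit state vector, assuming 16 bytes
# per complex amplitude (double-precision real and imaginary parts).

def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Bytes needed to hold all 2**n amplitudes of an n-qubit state."""
    return (2 ** n_qubits) * bytes_per_amplitude

print(statevector_bytes(30) // 2**30, "GiB")  # 16 GiB at 30 qubits
print(statevector_bytes(40) // 2**40, "TiB")  # 16 TiB at 40 qubits
print(statevector_bytes(41) // 2**40, "TiB")  # 32 TiB at 41 qubits
```

Each added qubit doubles the requirement, which is why full simulation tops out in the low-40s of qubits even on cloud-scale hardware.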


To: FJB who wrote (322) 10/11/2017 3:38:39 PM
From: FJB
Commercial Quantum Computing Pushes On
News & Analysis
10/11/2017 Post a comment
Looking to speed the arrival of commercial quantum computers, Intel has prototyped a 17-qubit superconducting chip, which research partner QuTech will test on a suite of quantum algorithms.


From: Glenn Petersen 10/23/2017 9:20:36 PM
A Washing Machine That Tells the Future

How smart appliances are revolutionizing commerce.

By Adam Davidson
The New Yorker
October 23, 2017 Issue

Illustration by Golden Cosmos

The Smoot-Hawley Tariff Act of 1930 is perhaps the single most frequently mocked act of Congress in history. It sparked a trade war in the early days of the Great Depression, and has become shorthand for self-destructive protectionism. So it’s surprising that, while the law’s tariffs have been largely eliminated, some of its absurd provisions still hold. The other week, the American appliance-maker Whirlpool successfully employed a 1974 amendment to the act to persuade the United States government to impose as yet unidentified protections against its top Korean competitors, LG and Samsung. Whirlpool’s official argument was that these firms have been bolstering their market share by offering fancy appliances at low prices. In other words, Whirlpool is getting beat and wants the government to help it win.

This decision is more than a throwback. It shows that Whirlpool and its supporters in government have failed to understand the shift occurring in the business world as a result of the so-called Internet of Things—appliances that send and receive data. It’s easy to miss the magnitude of the change, since many of these Things seem like mere gimmicks. Have you ever wanted to change the water temperature in the middle of a wash cycle when you’re not at home, or get second-by-second reports on precisely how much energy your dryer is consuming? Probably not, but now you can. And it’s not just washing machines. There are at least two “smart” toasters and any number of Wi-Fi-connected coffeemakers, refrigerators, ovens, dishwashers, and garbage cans, not to mention light bulbs, sex toys, toilets, pet feeders, and a children’s thermos.

But this is just the early, land-rush phase of the Internet of Things, comparable to the first Internet land rush, in the late nineties. That era gave us notorious failures—cue obligatory mention of—but it also gave us Amazon, the company that, more than any other, suggests how things will play out. For most of its existence, Amazon has made little or no profit. In the early days, it was often ridiculed for this, but the company’s managers and investors quickly realized that its most valuable asset was not individual sales but data—its knowledge about its loyal, habit-driven customer base. Amazon doesn’t evaluate customers by the last purchase they made; instead, customers have a lifetime value, a prediction of how much money each one will spend in the years to come. Amazon can calculate this with increasing accuracy. Already, it likely knows which books you read, which movies you watch, what data you store, and what food you eat. And since the introduction of Alexa, the voice-operated device, Amazon has been learning when some customers wake up, go to work, listen to the news, play with their kids, and go to sleep.

This is the radical implication of the Internet of Things—a fundamental shift in the relationship between customers and companies. In the old days, you might buy a washing machine or a refrigerator once a decade or so. Appliance-makers are built to profit from that one, rare purchase, focussing their marketing, customer research, and internal financial analysis on brief, sporadic, high-stakes interactions. The fact that you bought a particular company’s stove five years ago has no value today. But, when an appliance is sending a constant stream of data back to its maker, that company has continuous relationships with the owners of its products, and can find all sorts of ways to make money from those relationships. If a company knows, years after you bought its stove, exactly how often you cook, what you cook, when you shop, and what you watch (on a stove-top screen) while you cook, it can continuously monetize your relationship: selling you recipe subscriptions, maybe, or getting a cut of your food orders. Appliances now order their own supplies when they are about to run out. My printer orders its own ink; I assume my next fridge will order milk when I’m running low.

Whirlpool makes smart appliances, just like Samsung and LG. The president of Whirlpool North America, Joseph Liotine, e-mailed me to say that the firm has “led the way in developing cutting-edge innovations and solutions.” He pointed out that its appliances connect to various services, such as Amazon Dash, which can automatically order laundry detergent, and Yummly, a recipe app that Whirlpool owns. But having the right products isn’t the same as having the right strategy. Unlike its Korean competitors, Whirlpool hasn’t embraced the Amazon lesson: that the way to win in a data-driven business is to push prices as low as possible in order to build your customer base, enhance data flow, and cash in over the long term. Douglas Irwin, an economist at Dartmouth and the author of “Peddling Protectionism,” a book about Smoot-Hawley, told me, “Whirlpool is putting their resources into stopping competition. Maybe they should put their resources into serving their consumers better. This may just delay the reckoning.”

Irwin points out that Whirlpool’s trade complaint was first filed under the Obama Administration, which had imposed tariffs on LG and Samsung in two related cases. But most of the tariffs were small and easy for the companies to get around. President Trump, of course, views free trade more skeptically, and may well impose huge tariffs on all laundry-machine imports. Irwin suspects that this will produce a flood of trade-protection complaints from other American firms. That would be bad for anyone who wants to buy a laundry machine, but, in the long run, it will be even worse for American business.

This article appears in the print edition of the October 23, 2017, issue, with the headline “Cleaning Up.”

Adam Davidson is a staff writer at The New Yorker.


From: roger wilco 11/2/2017 12:16:32 PM
Nasdaq company entering into the cryptocurrency business

Marathon Patent Group $MARA to Acquire Global Bit Ventures Inc., a Digital Asset Technology Company

Marathon Patent Group, Inc. (MARA), an IP licensing and management company, today announced that it has entered into a definitive purchase agreement to acquire 100% ownership of Global Bit Ventures Inc. (“GBV”), a digital asset technology company that mines cryptocurrencies.


From: Glenn Petersen 11/13/2017 3:08:45 PM
Can Carbon-Dioxide Removal Save the World?

CO2 could soon reach levels that, it’s widely agreed, will lead to catastrophe.

By Elizabeth Kolbert
The New Yorker
November 20, 2017 Issue

Carbon-dioxide removal could be a trillion-dollar enterprise, because it not only slows the rise in CO2 but reverses it.
Photo-Illustration by Thomas Albdorf for The New Yorker

Carbon Engineering, a company owned in part by Bill Gates, has its headquarters on a spit of land that juts into Howe Sound, an hour north of Vancouver. Until recently, the land was a toxic-waste site, and the company’s equipment occupies a long, barnlike building that, for many years, was used to process contaminated water. The offices, inherited from the business that poisoned the site, provide a spectacular view of Mt. Garibaldi, which rises to a snow-covered point, and of the Chief, a granite monolith that’s British Columbia’s answer to El Capitan. To protect the spit against rising sea levels, the local government is planning to cover it with a layer of fill six feet deep. When that’s done, it’s hoping to sell the site for luxury condos.

Adrian Corless, Carbon Engineering’s chief executive, who is fifty-one, is a compact man with dark hair, a square jaw, and a concerned expression. “Do you wear contacts?” he asked, as we were suiting up to enter the barnlike building. If so, I’d have to take extra precautions, because some of the chemicals used in the building could cause the lenses to liquefy and fuse to my eyes.

Inside, pipes snaked along the walls and overhead. The thrum of machinery made it hard to hear. In one corner, what looked like oversized beach bags were filled with what looked like white sand. This, Corless explained over the noise, was limestone—pellets of pure calcium carbonate.

Corless and his team are engaged in a project that falls somewhere between toxic-waste cleanup and alchemy. They’ve devised a process that allows them, in effect, to suck carbon dioxide out of the air. Every day at the plant, roughly a ton of CO2 that had previously floated over Mt. Garibaldi or the Chief is converted into calcium carbonate. The pellets are subsequently heated, and the gas is forced off, to be stored in canisters. The calcium can then be recovered, and the process run through all over again.
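The loop Corless describes (capture CO2 into calcium carbonate pellets, heat them to drive the gas off, recover the calcium, repeat) can be sketched as a closed calcium cycle. This is the generic textbook version of calcium looping, not Carbon Engineering's published process chemistry:

```latex
\begin{align*}
\mathrm{CO_2} + \mathrm{Ca(OH)_2} &\rightarrow \mathrm{CaCO_3} + \mathrm{H_2O}
  && \text{capture: gas becomes carbonate pellets}\\
\mathrm{CaCO_3} &\xrightarrow{\ \text{heat}\ } \mathrm{CaO} + \mathrm{CO_2}
  && \text{calcination: concentrated CO}_2\text{ driven off for storage}\\
\mathrm{CaO} + \mathrm{H_2O} &\rightarrow \mathrm{Ca(OH)_2}
  && \text{slaking: calcium recovered, cycle repeats}
\end{align*}
```

The calcium is conserved across the loop; the only net flows are CO2 in from the air and concentrated CO2 out to the canisters, plus the heat spent on calcination.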

“If we’re successful at building a business around carbon removal, these are trillion-dollar markets,” Corless told me.

This past April, the concentration of carbon dioxide in the atmosphere reached a record four hundred and ten parts per million. The amount of CO2 in the air now is probably greater than it’s been at any time since the mid-Pliocene, three and a half million years ago, when there was a lot less ice at the poles and sea levels were sixty feet higher. This year’s record will be surpassed next year, and next year’s the year after that. Even if every country fulfills the pledges made in the Paris climate accord—and the United States has said that it doesn’t intend to—carbon dioxide could soon reach levels that, it’s widely agreed, will lead to catastrophe, assuming it hasn’t already done so.

Carbon-dioxide removal is, potentially, a trillion-dollar enterprise because it offers a way not just to slow the rise in CO2 but to reverse it. The process is sometimes referred to as “negative emissions”: instead of adding carbon to the air, it subtracts it. Carbon-removal plants could be built anywhere, or everywhere. Construct enough of them and, in theory at least, CO2 emissions could continue unabated and still we could avert calamity. Depending on how you look at things, the technology represents either the ultimate insurance policy or the ultimate moral hazard.

Carbon Engineering is one of a half-dozen companies vying to prove that carbon removal is feasible. Others include Global Thermostat, which is based in New York, and Climeworks, based near Zurich. Most of these owe their origins to the ideas of a physicist named Klaus Lackner, who now works at Arizona State University, in Tempe, so on my way home from British Columbia I took a detour to visit him. It was July, and on the day I arrived the temperature in the city reached a hundred and twelve degrees. When I got to my hotel, one of the first things I noticed was a dead starling lying, feet up, in the parking lot. I wondered if it had died from heat exhaustion.

Lackner, who is sixty-five, grew up in Germany. He is tall and lanky, with a fringe of gray hair and a prominent forehead. I met him in his office at an institute he runs, the Center for Negative Carbon Emissions. The office was bare, except for a few New Yorker cartoons on the theme of nerd-dom, which, Lackner told me, his wife had cut out for him. In one, a couple of scientists stand in front of an enormous whiteboard covered in equations. “The math is right,” one of them says. “It’s just in poor taste.”

In the late nineteen-seventies, Lackner moved from Germany to California to study with George Zweig, one of the discoverers of quarks. A few years later, he got a job at Los Alamos National Laboratory. There, he worked on fusion. “Some of the work was classified,” he said, “some of it not.”

Fusion is the process that powers the stars and, closer to home, thermonuclear bombs. When Lackner was at Los Alamos, it was being touted as a solution to the world’s energy problem; if fusion could be harnessed, it could generate vast amounts of carbon-free power using isotopes of hydrogen. Lackner became convinced that a fusion reactor was, at a minimum, decades away. (Decades later, it’s generally agreed that a workable reactor is still decades away.) Meanwhile, the globe’s growing population would demand more and more energy, and this demand would be met, for the most part, with fossil fuels.

“I realized, probably earlier than most, that the claims of the demise of fossil fuels were greatly exaggerated,” Lackner told me. (In fact, fossil fuels currently provide about eighty per cent of the world’s energy. Proportionally, this figure hasn’t changed much since the mid-eighties, but, because global energy use has nearly doubled, the amount of coal, oil, and natural gas being burned today is almost two times greater.)

One evening in the early nineties, Lackner was having a beer with a friend, Christopher Wendt, also a physicist. The two got to wondering why, as Lackner put it to me, “nobody’s doing these really crazy, big things anymore.” This led to more questions and more conversations (and possibly more beers).

Eventually, the two produced an equation-dense paper in which they argued that self-replicating machines could solve the world’s energy problem and, more or less at the same time, clean up the mess humans have made by burning fossil fuels. The machines would be powered by solar panels, and as they multiplied they’d produce more solar panels, which they’d assemble using elements, like silicon and aluminum, extracted from ordinary dirt. The expanding collection of panels would produce ever more power, at a rate that would increase exponentially. An array covering three hundred and eighty-six thousand square miles—an area larger than Nigeria but, as Lackner and Wendt noted, “smaller than many deserts”—could supply all the world’s electricity many times over.

This same array could be put to use scrubbing carbon dioxide from the atmosphere. According to Lackner and Wendt, the power generated by a Nigeria-size solar farm would be enough to remove all the CO2 emitted by humans up to that point within five years. Ideally, the CO2 would be converted to rock, similar to the white sand produced by Carbon Engineering; enough would be created to cover Venezuela in a layer a foot and a half deep. (Where this rock would go the two did not specify.)

Lackner let the idea of the self-replicating machine slide, but he became more and more intrigued by carbon-dioxide removal, particularly by what’s become known as “direct air capture.”

“Sometimes by thinking through this extreme end point you learn a lot,” he said. He began giving talks and writing papers on the subject. Some scientists decided he was nuts, others that he was a visionary. “Klaus is, in fact, a genius,” Julio Friedmann, a former Principal Deputy Assistant Secretary of Energy and an expert on carbon management, told me.

In 2000, Lackner received a job offer from Columbia University. Once in New York, he pitched a plan for developing a carbon-sucking technology to Gary Comer, a founder of Lands’ End. Comer brought to the meeting his investment adviser, who quipped that Lackner wasn’t looking for venture capital so much as “adventure capital.” Nevertheless, Comer offered to put up five million dollars. The new company was called Global Research Technologies, or G.R.T. It got as far as building a small prototype, but just as it was looking for new investors the financial crisis hit.

“Our timing was exquisite,” Lackner told me. Unable to raise more funds, the company ceased operations. As the planet continued to warm, and carbon-dioxide levels continued to climb, Lackner came to believe that, unwittingly, humanity had already committed itself to negative emissions.

“I think that we’re in a very uncomfortable situation,” he said. “I would argue that if technologies to pull CO2 out of the environment fail then we’re in deep trouble.”

Lackner founded the Center for Negative Carbon Emissions at A.S.U. in 2014. Most of the equipment he dreams up is put together in a workshop a few blocks from his office. The day I was there, it was so hot outside that even the five-minute walk to the workshop required staging. Lackner delivered a short lecture on the dangers of dehydration and handed me a bottle of water.

In the workshop, an engineer was tinkering with what looked like the guts of a foldout couch. Where, in the living-room version, there would have been a mattress, in this one was an elaborate array of plastic ribbons. Embedded in each ribbon was a powder made from thousands upon thousands of tiny amber-colored beads. The beads, Lackner explained, could be purchased by the truckload; they were composed of a resin normally used in water treatment to remove chemicals like nitrates. More or less by accident, Lackner had discovered that the beads could be repurposed. Dry, they’d absorb carbon dioxide. Wet, they’d release it. The idea was to expose the ribbons to Arizona’s thirsty air, and then fold the device into a sealed container filled with water. The CO2 that had been captured by the powder in the dry phase would be released in the wet phase; it could then be piped out of the container, and the whole process re-started, the couch folding and unfolding over and over again.

Lackner has calculated that an apparatus the size of a semi trailer could remove a ton of carbon dioxide per day, or three hundred and sixty-five tons a year. The world’s cars, planes, refineries, and power plants now produce about thirty-six billion tons of CO2 annually, so, he told me, “if you built a hundred million trailer-size units you could actually keep up with current emissions.” He acknowledged that the figure sounded daunting. But, he noted, the iPhone has been around for only a decade or so, and there are now seven hundred million in use. “We are still very early in this game,” he said.
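Lackner’s scaling estimate can be checked with simple arithmetic. The figures below (one ton of CO2 per trailer per day, thirty-six billion tons of global emissions per year) come from the article itself; the script is only an illustrative back-of-the-envelope check, not Lackner’s own calculation.

```python
# Back-of-the-envelope check of the trailer-scaling estimate quoted above.
TONS_PER_TRAILER_PER_DAY = 1                 # one ton of CO2 removed per day
tons_per_trailer_per_year = TONS_PER_TRAILER_PER_DAY * 365

GLOBAL_EMISSIONS_TONS_PER_YEAR = 36e9        # ~36 billion tons of CO2 annually

units_needed = GLOBAL_EMISSIONS_TONS_PER_YEAR / tons_per_trailer_per_year
print(f"{units_needed / 1e6:.0f} million trailer-size units")
# -> 99 million trailer-size units, i.e. roughly the hundred million Lackner cites
```

The same figures imply the article’s other rate: thirty-six billion tons a year works out to just under a billion tons every ten days.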

The way Lackner sees things, the key to avoiding “deep trouble” is thinking differently. “We need to change the paradigm,” he told me. Carbon dioxide should be regarded the same way we view other waste products, like sewage or garbage. We don’t expect people to stop producing waste. (“Rewarding people for going to the bathroom less would be nonsensical,” Lackner has observed.) At the same time, we don’t let them shit on the sidewalk or toss their empty yogurt containers into the street.

“If I were to tell you that the garbage I’m dumping in front of your house is twenty per cent less this year than it was last year, you would still think I’m doing something intolerable,” Lackner said.

One of the reasons we’ve made so little progress on climate change, he contends, is that the issue has acquired an ethical charge, which has polarized people. To the extent that emissions are seen as bad, emitters become guilty. “Such a moral stance makes virtually everyone a sinner, and makes hypocrites out of many who are concerned about climate change but still partake in the benefits of modernity,” he has written. Changing the paradigm, Lackner believes, will change the conversation. If CO2 is treated as just another form of waste, which has to be disposed of, then people can stop arguing about whether it’s a problem and finally start doing something.

Carbon dioxide was “discovered,” by a Scottish physician named Joseph Black, in 1754. A decade later, another Scotsman, James Watt, invented a more efficient steam engine, ushering in what is now called the age of industrialization but which future generations may dub the age of emissions. It is likely that by the end of the nineteenth century human activity had raised the average temperature of the earth by a tenth of a degree Celsius (or nearly two-tenths of a degree Fahrenheit).

As the world warmed, it started to change, first gradually and then suddenly. By now, the globe is at least one degree Celsius (1.8 degrees Fahrenheit) warmer than it was in Black’s day, and the consequences are becoming ever more apparent. Heat waves are hotter, rainstorms more intense, and droughts drier. The wildfire season is growing longer, and fires, like the ones that recently ravaged Northern California, more numerous. Sea levels are rising, and the rate of rise is accelerating. Higher sea levels exacerbated the damage from Hurricanes Harvey, Irma, and Maria, and higher water temperatures probably also made the storms more ferocious. “Harvey is what climate change looks like,” Eric Holthaus, a meteorologist turned columnist, recently wrote.

Meanwhile, still more warming is locked in. There’s so much inertia in the climate system, which is as vast as the earth itself, that the globe has yet to fully adjust to the hundreds of billions of tons of carbon dioxide that have been added to the atmosphere in the past few decades. It’s been calculated that to equilibrate to current CO2 levels the planet still needs to warm by half a degree. And every ten days another billion tons of carbon dioxide are released. Last month, the World Meteorological Organization announced that the concentration of carbon dioxide in the atmosphere jumped by a record amount in 2016.

No one can say exactly how warm the world can get before disaster—the inundation of low-lying cities, say, or the collapse of crucial ecosystems, like coral reefs—becomes inevitable. Officially, the threshold is two degrees Celsius (3.6 degrees Fahrenheit) above preindustrial levels. Virtually every nation signed on to this figure at a round of climate negotiations held in Cancún in 2010.

Meeting in Paris in 2015, world leaders decided that the two-degree threshold was too high; the stated aim of the climate accord is to hold “the increase in the global average temperature to well below 2°C” and to try to limit it to 1.5°C. Since the planet has already warmed by one degree and, for all practical purposes, is committed to another half a degree, it would seem impossible to meet the latter goal and nearly impossible to meet the former. And it is nearly impossible, unless the world switches course and instead of just adding CO2 to the atmosphere also starts to remove it.

The extent to which the world is counting on negative emissions is documented by the latest report of the Intergovernmental Panel on Climate Change, which was published the year before Paris. To peer into the future, the I.P.C.C. relies on computer models that represent the world’s energy and climate systems as a tangle of equations, and which can be programmed to play out different “scenarios.” Most of the scenarios involve temperature increases of two, three, or even four degrees Celsius—up to just over seven degrees Fahrenheit—by the end of this century. (In a recent paper in the Proceedings of the National Academy of Sciences, two climate scientists—Yangyang Xu, of Texas A. & M., and Veerabhadran Ramanathan, of the Scripps Institution of Oceanography—proposed that warming greater than three degrees Celsius be designated as “catastrophic” and warming greater than five degrees as “unknown??” The “unknown??” designation, they wrote, comes “with the understanding that changes of this magnitude, not experienced in the last 20+ million years, pose existential threats to a majority of the population.”)

When the I.P.C.C. went looking for ways to hold the temperature increase under two degrees Celsius, it found the math punishing. Global emissions would have to fall rapidly and dramatically—pretty much down to zero by the middle of this century. (This would entail, among other things, replacing most of the world’s power plants, revamping its agricultural systems, and eliminating gasoline-powered vehicles, all within the next few decades.) Alternatively, humanity could, in effect, go into hock. It could allow CO2 levels temporarily to exceed the two-degree threshold—a situation that’s become known as “overshoot”—and then, via negative emissions, pull the excess CO2 out of the air.

The I.P.C.C. considered more than a thousand possible scenarios. Of these, only a hundred and sixteen limit warming to below two degrees, and of these a hundred and eight involve negative emissions. In many below-two-degree scenarios, the quantity of negative emissions called for reaches the same order of magnitude as the “positive” emissions being produced today.

“The volumes are outright crazy,” Oliver Geden, the head of the E.U. research division of the German Institute for International and Security Affairs, told me. Lackner said, “I think what the I.P.C.C. really is saying is ‘We tried lots and lots of scenarios, and, of the scenarios which stayed safe, virtually every one needed some magic touch of negative emissions. If we didn’t do that, we ran into a brick wall.’ ”

Pursued on the scale envisioned by the I.P.C.C., carbon-dioxide removal would yield at first tens of billions and soon hundreds of billions of tons of CO2, all of which would have to be dealt with. This represents its own supersized challenge. CO2 can be combined with calcium to produce limestone, as it is in the process at Carbon Engineering (and in Lackner’s self-replicating-machine scheme). But the necessary form of calcium isn’t readily available, and producing it generally yields CO2, a self-defeating prospect. An alternative is to shove the carbon back where it came from, deep underground.

“If you are storing CO2 and your only purpose is storage, then you’re looking for a package of certain types of rock,” Sallie Greenberg, the associate director for energy, research, and development at the Illinois State Geological Survey, told me. It was a bright summer day, and we were driving through the cornfields of Illinois’s midsection. A mile below us was a rock formation known as the Eau Claire Shale, and below that a formation known as the Mt. Simon Sandstone. Together with a team of drillers, engineers, and geoscientists, Greenberg has spent the past decade injecting carbon dioxide into this rock “package” and studying the outcome. When I’d proposed over the phone that she show me the project, in Decatur, she’d agreed, though not without hesitation.

“It isn’t sexy,” she’d warned me. “It’s a wellhead.”

“I have to know how it’s done.”

Our first stop was a building shaped like a ski chalet. This was the National Sequestration Education Center, a joint venture of the Illinois geological survey, the U.S. Department of Energy, and Richland Community College. Inside were classrooms, occupied that morning by kids making lanyards, and displays aimed at illuminating the very dark world of carbon storage. One display was a sort of oversized barber pole, nine feet tall and decorated in bands of tan and brown, representing the various rock layers beneath us. A long arrow on the side of the pole indicated how many had been drilled through for Greenberg’s carbon-storage project; it pointed down, through the New Albany Shale, the Maquoketa Shale, and so on, all the way to the floor.

The center’s director, David Larrick, was on hand to serve as a guide. In addition to schoolkids, he said, the center hosted lots of community groups, like Kiwanis clubs. “This is very effective as a visual,” he told me, gesturing toward the pole. Sometimes farmers were concerned about the impact that the project could have on their water supply. The pole showed that the CO2 was being injected more than a mile below their wells.

“We have had overwhelmingly positive support,” he said. While Greenberg and Larrick chatted, I wandered off to play an educational video game. A cartoon figure in a hard hat appeared on the screen to offer factoids such as “The most efficient method of transport of CO2 is by pipeline.”

“Transport CO2 to earn points!” the cartoon man exhorted.

After touring the center’s garden, which featured grasses, like big bluestem, that would have been found in the area before it was plowed into cornfields, Greenberg and I drove on. Soon we passed through the gates of an enormous Archer Daniels Midland plant, which rose up out of the fields like a small city.

Greenberg explained that the project we were visiting was one of seven funded by the Department of Energy to learn whether carbon injected underground would stay there. In the earliest stage of the project, initiated under President George W. Bush, Greenberg and her colleagues sifted through geological records to find an appropriate test site. What they were seeking was similar to what oil drillers look for—porous stone capped by a layer of impermeable rock—only they were looking not to extract fossil fuels but, in a manner of speaking, to stuff them back in. The next step was locating a ready source of carbon dioxide. This is where A.D.M. came in; the plant converts corn into ethanol, and one of the by-products of this process is almost pure CO2. In a later stage of the project, during the Obama Administration, a million tons of carbon dioxide from the plant were pumped underground. Rigorous monitoring has shown that, so far, the CO2 has stayed put.

We stopped to pick up hard hats and went to see some of the monitoring equipment, which was being serviced by two engineers, Nick Malkewicz and Jim Kirksey. It was now lunchtime, so we made another detour, to a local barbecue place. Finally, Greenberg and I and the two men got to the injection site. It was, indeed, not sexy—just a bunch of pipes and valves sticking out of the dirt. I asked about the future of carbon storage.

“I think the technology’s there and it’s absolutely viable,” Malkewicz said. “It’s just a question of whether people want to do it or not. It’s kind of an obvious thing.”

“We know we can meet the objective of storing CO2,” Greenberg added. “Like Nick said, it’s just a matter of whether or not as a society we’re going to do it.”

When work began on the Decatur project, in 2003, few people besides Klaus Lackner were thinking about sucking CO2 from the air. Instead, the goal was to demonstrate the feasibility of an only slightly less revolutionary technology—carbon capture and storage (or, as it is sometimes referred to, carbon capture and sequestration).

With C.C.S., the CO2 produced at a power station or a steel mill or a cement plant is drawn off before it has a chance to disperse into the atmosphere. (This is called “post-combustion capture.”) The gas, under very high pressure, is then injected into the appropriate package of rock, where it is supposed to remain permanently. The process has become popularly—and euphemistically—known as “clean coal,” because, if all goes according to plan, a plant equipped with C.C.S. produces only a fraction of the emissions of a conventional coal-fired plant.

Over the years, both Republicans and Democrats have touted clean coal as a way to save mining jobs and protect the environment. The coal industry has also, nominally at least, embraced the technology; one industry-sponsored group calls itself the American Coalition for Clean Coal Electricity. Donald Trump, too, has talked up clean coal, even if he doesn’t seem to quite understand what the term means. “We’re going to have clean coal, really clean coal,” he said in March.

Currently, only one power plant in the U.S., the Petra Nova plant, near Houston, uses post-combustion carbon capture on a large scale. Plans for other plants to showcase the technology have been scrapped, including, most recently, the Kemper County plant, in Mississippi. This past June, the plant’s owner, Southern Company, announced that it was changing tacks. Instead of burning coal and capturing the carbon, the plant would burn natural gas and release the CO2.

Experts I spoke to said that the main reason C.C.S. hasn’t caught on is that there’s no inducement to use it. Capturing the CO2 from a smokestack consumes a lot of power—up to twenty-five per cent of the total produced at a typical coal-burning plant. And this, of course, translates into costs. What company is going to assume such costs when it can dump CO2 into the air for free?

“If you’re running a steel mill or a power plant and you’re putting the CO2 into the atmosphere, people might say, ‘Why aren’t you using carbon capture and storage?’ ” Howard Herzog, an engineer at M.I.T. who for many years ran a research program on C.C.S., told me. “And you say, ‘What’s my financial incentive? No one’s saying I can’t put it in the atmosphere.’ In fact, we’ve gone backwards in terms of sending signals that you’re going to have to restrict it.”

But, although C.C.S. has stalled in practice, it has become ever more essential on paper. Practically all below-two-degree warming scenarios assume that it will be widely deployed. And even this isn’t enough. To avoid catastrophe, most models rely on a yet-to-be-realized variation of C.C.S., known as BECCS.

BECCS, which stands for “bio-energy with carbon capture and storage,” takes advantage of the original form of carbon engineering: photosynthesis. Trees and grasses and shrubs, as they grow, soak up CO2 from the air. (Replanting forests is a low-tech form of carbon removal.) Later, when the plants rot or are combusted, the carbon they have absorbed is released back into the atmosphere. If a power station were to burn wood, say, or cornstalks, and use C.C.S. to sequester the resulting CO2, this cycle would be broken. Carbon would be sucked from the air by the green plants and then forced underground. BECCS represents a way to generate negative emissions and, at the same time, electricity. The arrangement, at least as far as the models are concerned, could hardly be more convenient.

“BECCS is unique in that it removes carbon and produces energy,” Glen Peters, a senior researcher at the Center for International Climate Research, in Oslo, told me. “So the more you consume the more you remove.” He went on, “In a sense, it’s a dream technology. It’s solving one problem while solving the other problem. What more could you want?”

The Center for Carbon Removal doesn’t really have an office; it operates out of a co-working space in downtown Oakland. On the day I visited, not long after my trip to Decatur, someone had recently stopped at Trader Joe’s, and much of the center’s limited real estate was taken up by tubs of treats.

“Open anything you want,” the center’s executive director, Noah Deich, urged me, with a wave of his hand.

Deich, who is thirty-one, has a broad face, a brown beard, and a knowing sort of earnestness. After graduating from the University of Virginia, in 2009, he went to work for a consulting firm in Washington, D.C., that was advising power companies about how to prepare for a time when they’d no longer be able to release carbon into the atmosphere cost-free. It was the start of the Obama Administration, and that time seemed imminent. The House of Representatives had recently approved legislation to limit emissions. But the bill later died in the Senate, and, as Deich put it, “It’s no fun to model the impacts of climate policies nobody believes are going to happen.” He switched consulting firms, then headed to business school, at the University of California, Berkeley.

“I came into school with this vision of working for a clean-tech startup,” he told me. “But I also had this idea floating around in the back of my head that we’re moving too slowly to actually stop emissions in time. So what do we do with all the carbon that’s in the air?” He started talking to scientists and policy experts at Berkeley. What he learned shocked him.

“People told me, ‘The models show this major need for negative emissions,’ ” he recalled. “ ‘But we don’t really know how to do that, nor is anyone really thinking about it.’ I was someone who’d been in the business and policy world, and I was, like, wait a minute—what?”

Business school taught Deich to think in terms of case studies. One that seemed to him relevant was solar power. Photovoltaic cells have been around since the nineteen-fifties, but for decades they were prohibitively expensive. Then the price started to drop, which increased demand, which led to further price drops, to the point where today, in many parts of the world, the cost of solar power is competitive with the cost of power from new coal plants.

“And the reason that it’s now competitive is that governments decided to do lots and lots of research,” Deich said. “And some countries, like Germany, decided to pay a lot for solar, to create a first market. And China paid a lot to manufacture the stuff, and states in the U.S. said, ‘You must consume renewable energy,’ and then consumers said, ‘Hey, how can I buy renewable energy?’ ”
As far as he could see, none of this—neither the research nor the creation of first markets nor the spurring of consumer demand—was being done for carbon removal, so he decided to try to change that. Together with a Berkeley undergraduate, Giana Amador, he founded the center in 2015, with a hundred-and-fifty-thousand-dollar grant from the university. It now has an annual budget of about a million dollars, raised from private donors and foundations, and a staff of seven. Deich described it as a “think-and-do tank.”

“We’re trying to figure out: how do we actually get this on the agenda?” he said.

A compelling reason for putting carbon removal on “the agenda” is that we are already counting on it. Negative emissions are built into the I.P.C.C. scenarios and the climate agreements that rest on them.

But everyone I spoke with, including the most fervent advocates for carbon removal, stressed the huge challenges of the work, some of them technological, others political and economic. Done on a scale significant enough to make a difference, direct air capture of the sort pursued by Carbon Engineering, in British Columbia, would require an enormous infrastructure, as well as huge supplies of power. (Because CO2 is more dilute in the air than it is in the exhaust of a power plant, direct air capture demands even more energy than C.C.S.) The power would have to be generated emissions-free, or the whole enterprise wouldn’t make much sense.

“You might say it’s against my self-interest to say it, but I think that, in the near term, talking about carbon removal is silly,” David Keith, the founder of Carbon Engineering, who teaches energy and public policy at Harvard, told me. “Because it almost certainly is cheaper to cut emissions now than to do large-scale carbon removal.”

BECCS doesn’t make big energy demands; instead, it requires vast tracts of arable land. Much of this land would, presumably, have to be diverted from food production, and at a time when the global population—and therefore global food demand—is projected to be growing. (It’s estimated that to do BECCS on the scale envisioned by some below-two-degrees scenarios would require an area larger than India.) Two researchers in Britain, Naomi Vaughan and Clair Gough, who recently conducted a workshop on BECCS, concluded that “assumptions regarding the extent of bioenergy deployment that is possible” are generally “unrealistic.”

For these reasons, many experts argue that even talking (or writing articles) about negative emissions is dangerous. Such talk fosters the impression that it’s possible to put off action and still avoid a crisis, when it is far more likely that continued inaction will just produce a larger crisis. In “The Trouble with Negative Emissions,” an essay that ran last year in Science, Kevin Anderson, of the Tyndall Centre for Climate Change Research, in England, and Glen Peters, of the climate-research center in Oslo, described negative-emissions technologies as a “high-stakes gamble” and relying on them as a “moral hazard par excellence.”

We should, they wrote, “proceed on the premise that they will not work at scale.”

Others counter that the moment for fretting about the hazards of negative emissions—moral or otherwise—has passed.

“The punch line is, it doesn’t matter,” Julio Friedmann, the former Principal Deputy Assistant Energy Secretary, told me. “We actually need to do direct air capture, so we need to create technologies that do that. Whether it’s smart or not, whether it’s optimized or not, whether it’s the lowest-cost pathway or not, we know we need to do it.”

“If you tell me that we don’t know whether our stuff will work, I will admit that is true,” Klaus Lackner said. “But I also would argue that nobody else has a good option.”

One of the peculiarities of climate discussions is that the strongest argument for any given strategy is usually based on the hopelessness of the alternatives: this approach must work, because clearly the others aren’t going to. This sort of reasoning rests on a fragile premise—what might be called solution bias. There has to be an answer out there somewhere, since the contrary is too horrible to contemplate.

Early last month, the Trump Administration announced its intention to repeal the Clean Power Plan, a set of rules aimed at cutting power plants’ emissions. The plan, which had been approved by the Obama Administration, was eminently achievable. Still, according to the current Administration, the cuts were too onerous. The repeal of the plan is likely to result in hundreds of millions of tons of additional emissions.

A few weeks later, the United Nations Environment Programme released its annual Emissions Gap Report. The report labelled the difference between the emissions reductions needed to avoid dangerous climate change and those which countries have pledged to achieve as “alarmingly high.” For the first time, this year’s report contains a chapter on negative emissions. “In order to achieve the goals of the Paris Agreement,” it notes, “carbon dioxide removal is likely a necessary step.”

As a technology of last resort, carbon removal is, almost by its nature, paradoxical. It has become vital without necessarily being viable. It may be impossible to manage and it may also be impossible to manage without.

This article appears in the print edition of the November 20, 2017, issue, with the headline “Going Negative.”

Elizabeth Kolbert has been a staff writer at The New Yorker since 1999. She won the 2015 Pulitzer Prize for general nonfiction for “The Sixth Extinction: An Unnatural History.”
