Technology Stocks : Drones, Autonomous Vehicles and Flying Cars

From: kidl 9/21/2017 4:53:01 AM
1 Recommendation   of 1840
Commercial drone use to expand tenfold by 2021

  • The FAA expects the U.S. commercial drone fleet to grow from 42K at the end of 2016 to about 442K aircraft by 2021 (but there could also be as many as 1.6M commercial drones in use by then).

  • The key difference in its growth estimates is in “how quickly the regulatory environment will evolve, enabling more widespread routine uses of (drones) for commercial purposes.”

    Since August, the FAA has approved more than 300 waivers exempting firms from certain drone-use restrictions, including Union Pacific (NYSE: UNP), BNSF Railway (BRK.A, BRK.B), Intel (NASDAQ: INTC), Walt Disney (NYSE: DIS) and Time Warner (NYSE: TWX).
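As a quick sanity check on the FAA's baseline projection, growth from 42K to 442K aircraft over five years implies roughly 60 percent compound annual growth. A minimal back-of-the-envelope calculation:

```python
# Implied compound annual growth rate (CAGR) for the FAA's baseline
# projection: 42,000 commercial drones at end of 2016 to 442,000 by 2021.
start, end, years = 42_000, 442_000, 5

cagr = (end / start) ** (1 / years) - 1
print(f"implied annual growth: {cagr:.1%}")  # about 60% per year
```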


    From: Glenn Petersen 9/22/2017 5:56:02 AM
       of 1840
    China's Baidu launches $1.5 billion autonomous driving fund

    Reuters Staff
    September 21, 2017 / 1:51 AM

    FILE PHOTO: A woman is silhouetted against the Baidu logo at a new product launch from Baidu, in Shanghai, China, November 26, 2015. REUTERS/Aly Song/File Photo

    BEIJING (Reuters) - Chinese search engine Baidu Inc (BIDU.O) announced a 10 billion yuan ($1.52 billion) autonomous driving fund on Thursday as part of a wider plan to speed up its technical development and compete with U.S. rivals.

    The “Apollo Fund” will invest in 100 autonomous driving projects over the next three years, Baidu said in a statement.

    The fund’s launch coincides with the release of Apollo 1.5, the second generation of the company’s open-source autonomous vehicle software.

    After years of internal development, Baidu in April decided to open its autonomous driving technology to third parties, a move it hopes will accelerate development and help it compete with U.S. firms Tesla Inc (TSLA.O) and Google project Waymo.

    In the latest update to its platform, Baidu says partners can access new obstacle perception technology and high-definition maps, among other features.

    FILE PHOTO: An employee uses his mobile phone as he walks past the company logo of Baidu at its headquarters in Beijing, August 5, 2010. REUTERS/Barry Huang/File Photo

    It comes amid a wider reshuffle of Baidu’s corporate strategy as it looks for new profit streams outside its core search business, which lost a large chunk of ad revenue in 2016 following strict new government regulations on medical advertising.

    Baidu’s Apollo project - named after the NASA moon landing - aims to create technology for completely autonomous cars, which it says will be ready for city roads in China by 2020.

    It now has 70 partners across several fields in the auto industry, up from 50 in July, it says. Existing partners include microprocessor firm Nvidia Corp (NVDA.O) and mapping service TomTom NV.

    Despite the rapid growth of its partner ecosystem, Baidu has faced challenges negotiating local Chinese regulations, which have previously stopped the company from testing on highways.

    In July, Beijing police said they were investigating whether the company had broken city traffic rules by testing a driverless car on public roads as part of a demonstration for a press event.

    Reporting by Cate Cadell; Editing by Stephen Coates


    From: Glenn Petersen 9/25/2017 5:47:13 AM
       of 1840
    Finally, a Driverless Car with Some Common Sense

    A startup called iSee thinks a new approach to AI will make self-driving cars better at dealing with unexpected situations.

    by Will Knight
    MIT Technology Review
    September 20, 2017

    Boston’s notoriously unfriendly drivers and chaotic roads may be the perfect testing ground for a fundamentally different kind of self-driving car.

    An MIT spin-off called iSee is developing and testing the autonomous driving system using a novel approach to artificial intelligence. Instead of relying on simple rules or machine-learning algorithms to train cars to drive, the startup is taking inspiration from cognitive science to give machines a kind of common sense and the ability to quickly deal with new situations. It is developing algorithms that try to match the way humans understand and learn about the physical world, including interacting with other people. The approach could lead to self-driving vehicles that are much better equipped to deal with unfamiliar scenes and complex interactions on the road.

    “The human mind is super-sensitive to physics and social cues,” says Yibiao Zhao, cofounder of iSee. “Current AI is relatively limited in those domains, and we think that is actually the missing piece in driving.”

    Zhao’s company doesn’t look like a world beater just yet. A small team of engineers works out of a modest lab space at the Engine, a new investment company created by MIT to fund innovative local tech companies. Located just a short walk from the MIT campus, the Engine overlooks a street on which drivers jostle for parking spots and edge aggressively into traffic.


    The desks inside iSee’s space are covered with sensors and pieces of hardware the team has put together to take control of its first prototype, a Lexus hybrid SUV that originally belonged to one of the company’s cofounders. Several engineers sit behind large computer monitors staring intently at lines of code.

    iSee might seem laughably small compared to the driverless-car efforts at companies like Waymo, Uber, or Ford, but the technology it’s developing could have a big impact on many areas where AI is applied today. By enabling machines to learn from less data and to build some form of common sense, its technology could make industrial robots smarter, especially when facing new situations. Spectacular progress has already been made in AI recently thanks to deep learning, a technique that employs vast, data-hungry neural networks (see “10 Breakthrough Technologies 2013: Deep Learning”).

    When fed large amounts of data, very large or deep neural networks can recognize subtle patterns. Give a deep neural network lots of pictures of dogs, for instance, and it will figure out how to spot a dog in just about any image. But there are limits to what deep learning can do, and some radical new ideas may well be needed to bring about the next leap forward. For example, a dog-spotting deep-learning system doesn’t understand that dogs typically have four legs, fur, and a wet nose. And it cannot recognize other types of animals, or a drawing of a dog, without further training.

    Driving involves considerably more than just pattern recognition. Human drivers rely constantly on a commonsense understanding of the world. They know that buses take longer to stop, for example, and can suddenly produce lots of pedestrians. It would be impossible to program a self-driving car with every possible scenario it might encounter. But people are able to use their commonsense understanding of the world, built up through lifelong experience, to act sensibly in all sorts of new situations.

    “Deep learning is great, and you can learn a lot from previous experience, but you can’t have a data set that includes the whole world,” Zhao says. “Current AI, which is mostly data-driven, has difficulties understanding common sense; that’s the key thing that’s missing.” Zhao illustrates the point by opening his laptop to show several real-world road situations on YouTube, including complex traffic-merging situations and some hairy-looking accidents.

    A lack of commonsense knowledge has certainly caused some problems for autonomous driving systems. An accident involving a Tesla driving in semi-autonomous mode in Florida last year, for instance, occurred when the car’s sensors were temporarily confused as a truck crossed the highway (see “ Fatal Tesla Crash Is a Reminder Autonomous Cars Will Sometimes Screw Up”). A human driver would have likely quickly and safely figured out what was going on.


    Zhao and Debbie Yu, one of his cofounders, show a clip of an accident involving a Tesla in China, in which the car drove straight into a street-cleaning truck. “The system is trained on Israel or Europe, and they don’t have this kind of truck,” Zhao says. “It’s only based on detection; it doesn’t really understand what’s going on,” he says.

    iSee is built on efforts to understand how humans make sense of the world, and to design machines that mimic this. Zhao and other founders of iSee come from the lab of Josh Tenenbaum, a professor in the department of brain and cognitive science at MIT who now serves as an advisor to the company.

    Tenenbaum specializes in exploring how human intelligence works, and using that insight to engineer novel types of AI systems. This includes work on the intuitive sense of physics exhibited even by young children, for instance. Children’s ability to understand how the physical world behaves enables them to predict how unfamiliar situations may unfold. And, Tenenbaum explains, this understanding of the physical world is intimately connected with an intuitive understanding of psychology and the ability to infer what a person is trying to achieve, such as reaching for a cup, by watching his or her actions.

    The ability to transfer learning between situations is also a hallmark of human intelligence, and even the smartest machine-learning systems are still very limited by comparison. Tenenbaum’s lab combines conventional machine learning with novel “probabilistic programming” approaches. This makes it possible for machines to learn to infer things about the physics of the world as well as the intentions of others despite uncertainty.
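The intent-inference idea behind this probabilistic approach can be illustrated with a toy Bayesian update. This is a minimal sketch, not iSee's actual system; all probabilities below are invented for illustration.

```python
# Toy Bayesian inference over another driver's intent, in the spirit
# of the probabilistic approach described above. The numbers are
# invented; real models reason over far richer physics and psychology.
prior = {"merge": 0.3, "wait": 0.7}        # belief before observing anything
likelihood = {"merge": 0.8, "wait": 0.1}   # P(car edges forward | intent)

# Bayes' rule: posterior is proportional to prior times likelihood.
unnormalized = {i: prior[i] * likelihood[i] for i in prior}
total = sum(unnormalized.values())
posterior = {i: p / total for i, p in unnormalized.items()}

print(posterior)  # belief shifts strongly toward "merge"
```

Observing the car edge forward flips a 70/30 belief in favor of "wait" into a strong belief that the driver wants to merge.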


    Trying to reverse-engineer the ways in which even a young baby is smarter than the cleverest existing AI system could eventually lead to many smarter AI systems, Tenenbaum says. In 2015, together with researchers from New York University and Carnegie Mellon University, Tenenbaum used some of these ideas to develop a landmark computer program capable of learning to recognize handwriting from just a few examples (see “ This AI Algorithm Learns Simple Tasks As Fast As We Do”).

    A related approach might eventually give a self-driving car something approaching a rudimentary form of common sense in unfamiliar scenarios. Such a car may be able to determine that a driver who’s edging out into the road probably wants to merge into traffic.

    When it comes to autonomous driving, in fact, Tenenbaum says the ability to infer what another driver is trying to achieve could be especially important. Another of iSee’s cofounders, Chris Baker, developed computational models of human psychology while at MIT. “Taking engineering-style models of how humans understand other humans, and being able to put those into autonomous driving, could really provide a missing piece of the puzzle,” Tenenbaum says.

    Tenenbaum says he was not initially interested in applying ideas from cognitive psychology to autonomous driving, but the founders of iSee convinced him that the impact would be significant, and that they were up to the engineering challenges.

    “This is a very different approach, and I completely applaud it,” says Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, a research institute created by Microsoft cofounder Paul Allen to explore new ideas in AI, including ones inspired by cognitive psychology.

    Etzioni says the field of AI needs to explore ideas beyond deep learning. He says the main issue for iSee will be demonstrating that the techniques employed can perform well in critical situations. “Probabilistic programming is pretty new,” he notes, “so there are questions about the performance and robustness.”

    Those involved with iSee would seem to agree. Besides aiming to shake up the car industry and perhaps reshape transportation in the process, Tenenbaum says, iSee has a chance to explore how a new AI approach works in a particularly unforgiving practical situation.

    “In some sense, self-driving cars are going to be the first autonomous robots that interact with people in the real world,” he says. “The real challenge is, how do you take these models and make them work robustly?”


    From: Glenn Petersen 9/25/2017 7:05:20 AM
       of 1840
    Former Apple Engineers Working on New Eyes for Driverless Cars

    New York Times
    SEPT. 20, 2017

    Soroush Salehian and Mina Rezk, of Aeva, a Silicon Valley start-up making new guidance systems for driverless vehicles. Credit Jason Henry for The New York Times

    PALO ALTO, Calif. — Soroush Salehian raised both arms and spun in circles as if celebrating a touchdown.

    Across the room, perched on a tripod, a small black device monitored this little dance and streamed it to a nearby laptop. Mr. Salehian appeared as a collection of tiny colored dots, some red, some blue, some green. Each dot showed the precise distance to a particular point on his body, while the colors showed the speed of his movements. As his right arm spun forward, it turned blue. His left arm, spinning away, turned red.

    “See how the arms are different?” said his business partner, Mina Rezk, pointing at the laptop. “It’s measuring different velocities.”

    Messrs. Salehian and Rezk are the founders of a new Silicon Valley start-up called Aeva, and their small black device is designed for self-driving cars. The veterans of Apple’s secretive Special Projects Group aim to give these autonomous vehicles a more complete, detailed and reliable view of the world around them — something that is essential to their evolution.

    A sign warns that a laser is in use at Aeva. Credit Jason Henry for The New York Times

    Today’s driverless cars under development at companies like General Motors, Toyota, Uber and the Google spinoff Waymo track their surroundings using a wide variety of sensors, including cameras, radar, GPS antennas and lidar (short for “light detection and ranging”) devices that measure distances using pulses of light.

    But there are gaps in the way these sensors operate, and combining their disparate streams of data is difficult. Aeva’s prototype — a breed of lidar that measures distances more accurately and also captures speed — aims to fill several of these sizable holes.

    “I don’t even think of this as a new kind of lidar,” said Tarin Ziyaee, co-founder and chief technology officer at the self-driving taxi start-up Voyage, who has seen the Aeva prototype. “It’s a whole different animal.”

    Founded in January and funded by the Silicon Valley venture capital firm Lux Capital, among others, Aeva joins a widespread effort to build more effective sensors for autonomous vehicles, a trend that extends from start-ups like Luminar, Echodyne and Metawave to established hardware makers like the German multinational Robert Bosch.

    The company’s name, Aeva, is a play on “Eve,” the name of the robot in the Pixar movie “WALL-E.”

    Mr. Rezk in Palo Alto. He and his business partner, Mr. Salehian, are veterans of the secretive Special Projects Group at Apple, which they left late in 2016. Credit Jason Henry for The New York Times

    The market for autonomous vehicles will grow to $42 billion by 2025, according to research by the Boston Consulting Group. But for that to happen, the vehicles will need new and more powerful sensors. Today’s autonomous cars are ill prepared for high-speed driving, bad weather and other common situations.

    The recent improvements in self-driving cars coincided with the improvements offered by new lidar sensors from a Silicon Valley company called Velodyne. These sensors gave cars a way of measuring distances to nearby vehicles, pedestrians and other objects. They also provided Google and other companies with a way of mapping urban roadways in three dimensions, so that cars will know exactly where they are at any given moment — something GPS cannot always provide.

    But these lidar sensors have additional shortcomings. They can gather information only about objects that are relatively close to them, which limits how fast the cars can travel. Their measurements aren’t always detailed enough to distinguish one object from another. And when multiple driverless cars are close together, their signals can become garbled.

    Other devices can pick up some of the slack. Cameras are a better way of identifying pedestrians and street signs, for example, and radar works over longer distances. That’s why today’s self-driving cars track their surroundings through so many different sensors. But despite this wide array of hardware — which can cost hundreds of thousands of dollars per vehicle — even the best autonomous vehicles still have trouble in many situations that humans can navigate with ease.

    With their new sensor, Messrs. Salehian and Rezk are working to change that. Mr. Rezk is an engineer who designed optical hardware for Nikon, and presumably, he was among those who handled optical sensors for Apple’s driverless car project, though he and Mr. Salehian declined to say which “special project” they worked on at the company. They left Apple late last year.

    New devices are hidden under a black sheet in a research and development room at Aeva in Palo Alto, Calif. Credit Jason Henry for The New York Times

    Where current lidar sensors send out individual pulses, Aeva’s device sends out a continuous wave of light. By reading the way this far more complex signal bounces off surrounding objects, Mr. Rezk said, the device can capture a far more detailed image while also tracking velocity. You can think of it as a cross between lidar, which is so good at measuring depth, and radar, which is so good at measuring speed.
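The reason a continuous wave captures both depth and speed can be sketched with the standard frequency-modulated continuous-wave (FMCW) math: the beat between outgoing and returning light mixes a range term with a Doppler term, and an up/down chirp separates the two. The parameters below are illustrative assumptions, not Aeva's actual design.

```python
# Sketch of FMCW range/velocity recovery. All parameters are assumed
# for illustration (a ~1550 nm laser carrier and an arbitrary chirp slope).
c = 3.0e8        # speed of light, m/s
fc = 1.93e14     # optical carrier frequency, Hz (~1550 nm)
S = 1.0e15       # chirp slope, Hz/s (assumed)

R_true, v_true = 50.0, 10.0   # target: 50 m away, closing at 10 m/s

f_range = 2 * S * R_true / c      # beat component due to round-trip delay
f_doppler = 2 * v_true * fc / c   # beat component due to target motion

# A triangular (up/down) chirp yields two beat frequencies that
# let the receiver separate distance from speed:
f_up = f_range - f_doppler        # measured during the up-chirp
f_down = f_range + f_doppler      # measured during the down-chirp

R_est = c * (f_up + f_down) / (4 * S)    # recovers distance
v_est = c * (f_down - f_up) / (4 * fc)   # recovers velocity
```

Pulsed lidar only gets the delay term; the continuous wave is what makes the Doppler (speed) term measurable at the same time.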

    Mr. Rezk also said the device’s continuous wave would provide greater range and resolution than existing lidar devices, deal better with weather and highly reflective objects like bridge railings, and avoid interference with other optical sensors.

    Cars will continue to use multiple kinds of sensors, in part because redundancy helps ensure that these cars are safe. But Aeva aims to give these cars a better view of the world from a smaller and less expensive set of sensors.

    Researchers at the University of California, Berkeley, have built similar hardware, and companies like Velodyne and the start-ups Oryx Vision and Quanergy say they are exploring similar ideas. Like these efforts, the Aeva prototype is still under development, and the company plans to sell devices next year. But it shows how autonomous car sensors need to evolve — and that they are indeed evolving.

    Ultimately, new sensors will allow cars to make better decisions. “With autonomous cars, 90 percent of the time, you are trying to infer what is happening,” Mr. Ziyaee said. “But what if you can just measure it?”

    Follow Cade Metz on Twitter: @CadeMetz

    A version of this article appears in print on September 21, 2017, on Page B2 of the New York edition with the headline: Seeking Keener Vision for Driverless Cars.


    From: Savant 9/26/2017 4:14:14 PM
       of 1840
    Drones in archaeology...
    60's drone photography found Alexander the Great's lost city


    To: Savant who wrote (1439) 9/26/2017 7:18:31 PM
    From: Savant
    1 Recommendation   of 1840
    Turns out a more thorough article sez>>

    Spy satellite footage in '60s, and drone use since then and now

    a much better article


    From: Glenn Petersen 9/28/2017 11:07:39 PM
    2 Recommendations   of 1840
    A Field Farmed Only by Drones

    By Nicola Twilley
    The New Yorker
    September 28, 2017

    The experience of the Hands Free Hectare team suggests that drone agriculture offers some substantial benefits.
    Photograph by Debbie Heeks Courtesy Harper Adams University

    Across the United Kingdom, the last of the spring barley has been brought in from the fields, the culmination of an agricultural calendar whose rhythm has remained unchanged for millennia. But when the nineteenth-century poet John Clare wrote, in his month-by-month description of the rural year, that in September “harvest’s busy hum declines,” it seems unlikely that he was imagining the particular buzz—akin to an amplified mosquito—of a drone.

    “The drone barley snatch was actually the thing that made it for me,” Jonathan Gill, a robotics engineer at Harper Adams University, told me recently. Gill is one of three self-described “lads” behind a small, underfunded initiative called Hands Free Hectare. Earlier this month, he and his associates became the first people in the world to grow, tend, and harvest a crop without direct human intervention. The “snatch” occurred on a blustery Tuesday, when Gill piloted his heavy-duty octocopter out over the middle of a field, and, as the barley whipped from side to side in the propellers’ downdraft, used a clamshell dangling from the drone to take a grain sample, which would determine whether the crop was ready for harvesting. (It was.) “Essentially, it’s the grab-the-teddy-with-the-claw game on steroids,” Gill’s colleague, the agricultural engineer Kit Franklin, said. “But it had never been done before. And we did it.”

    The idea for the project came about over a glass of barley’s best self: beer. Gill and Franklin were down the pub, lamenting the fact that, although big equipment manufacturers such as John Deere likely have all the technology they need to farm completely autonomously, none of them seem to actually be doing it. Gill knew that drones could be programmed, using open-source code, to move over a field on autopilot, changing altitude as needed. What if you could take the same software, he and Franklin wondered, and make it control off-the-shelf agricultural machinery? Together Gill, Franklin, and Martin Abell, a recent Harper Adams graduate, rustled up just over a quarter million dollars in grant money. Then they got hold of some basic equipment—a small Japanese tractor designed for use in rice paddies, a similarly undersized twenty-five-year-old combine harvester, a sprayer boom, and a seed drill—and connected the drone software to a series of motors, which, with a little tinkering, made it capable of turning the tractor’s steering wheel, switching the spray nozzles on and off, raising and lowering the drill, and choreographing the complex mechanized ballet of the combine.

    “There were lots of people who thought the project wasn’t going to work,” Gill said. “Lots.” Hands Free Hectare’s budget was so small that the team had no test field or backup machinery; indeed, they didn’t secure the tractor until last December, just a few months before the barley was due to be sown. This left little time for the necessary trial and error; often, Gill and his colleagues would be midway through configuring their setup to perform one task, only to have to take it apart again to accomplish another. When they finally managed to get the crop in the ground, their rows looked wobbly. “It turns out that the autopilots in these drone systems weren’t designed to travel in a very straight line,” Gill said. “There’s no need for it—it’s just designed to get from point A to point B as efficiently as possible.” When the software hit a rock, it would navigate around the obstruction, following the path of least resistance rather than plowing through. Gill adjusted the code for straighter steering, regardless of the terrain, but not in time for the initial planting, which meant that subsequent tractor runs crushed hundreds of precious barley seedlings.
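The straight-line fix Gill describes can be sketched as a cross-track-error correction: steer in proportion to the perpendicular distance from the planned A-to-B line, rather than simply heading for the next waypoint. The geometry helper and gain below are invented for illustration, not the team's actual code.

```python
import math

def cross_track_error(pos, a, b):
    """Signed perpendicular distance from pos to the line through a and b."""
    ax, ay = b[0] - a[0], b[1] - a[1]
    px, py = pos[0] - a[0], pos[1] - a[1]
    # 2-D cross product divided by segment length gives the distance;
    # the sign says which side of the line the vehicle is on.
    return (ax * py - ay * px) / math.hypot(ax, ay)

def steering_correction(pos, a, b, gain=0.5):
    """Steer back toward the planned line in proportion to the error."""
    return -gain * cross_track_error(pos, a, b)

# A tractor 2 m to the left of a straight A-to-B run gets nudged right:
print(steering_correction((0.0, 2.0), (0.0, 0.0), (10.0, 0.0)))
```

A plain go-to-waypoint autopilot has no penalty for drifting off the line between waypoints, which is exactly why the first rows came out wobbly.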

    In order to live up to the name Hands Free Hectare, the team had decided that no one would set foot in the field until the harvest was brought in. This posed a problem for Kieran Walsh, its crop-health adviser, who was accustomed to collecting soil and plant samples manually and scrutinizing them for signs of infestation and illness. Walsh was aware that robotic weed detectors were already commercially available, but, to the best of his knowledge, there was nothing that could provide all the information he needed. “My initial thought was, Gosh, this is exciting,” he told me. “And then I thought, Right, actually, this is going to be quite tricky.” In the end, Gill flew drones over the field on a weekly basis, gathering spectral data that Walsh could use to measure the barley’s photosynthetic activity and assess soil moisture. With Abell and Franklin, he also built a robotic sample collector. “Ninety-five per cent of the information I wanted, they got for me,” Walsh said. “But there was that five per cent where I had to make an educated guess.”
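One standard way to turn drone spectral data into a photosynthetic-activity measure is the normalized difference vegetation index (NDVI); the article does not say which index Walsh used, so treat this as an illustrative assumption, with made-up reflectance values.

```python
# NDVI: healthy vegetation reflects near-infrared (NIR) strongly and
# absorbs red light for photosynthesis, so the index rises with vigor.
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel (range -1 to 1)."""
    return (nir - red) / (nir + red)

healthy = ndvi(nir=0.50, red=0.08)    # vigorous barley (values assumed)
stressed = ndvi(nir=0.30, red=0.20)   # stressed or sparse canopy (assumed)
```

Mapping an index like this pixel by pixel across weekly drone flights is what let the team watch crop health without setting foot in the field.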

    Hands Free Hectare’s final yield was a couple of metric tons lower than the average from conventionally farmed land—and the costs in both time and money were, unsurprisingly for a pilot project, stratospherically higher. Nevertheless, the team’s experience suggests that drone agriculture offers some substantial benefits. “For starters,” Abell said, “the opportunity for doing the right thing at the right time is much higher with automated machines.” Many of a farmer’s duties are weather-dependent; an autonomous tractor could, for instance, tap into live forecast data and choose to go out and apply fungicide when conditions are ideal, even if it’s four o’clock in the morning.

    More important, once the machinery no longer requires a person to sit on top of it, a farmer could deploy a fleet of small tractors to do the same work that he currently does riding one of today’s state-of-the-art, two-story-tall tractors. “That has massive, massive implications,” Walsh said. For one thing, Abell explained, it would make the application of fertilizers and herbicides far more precise. “All the boom sprayers in this country these days are at least twenty-four metres wide,” he said. “So you’ve essentially got a twenty-four-metre paintbrush to apply chemicals to a field that you’ve surveyed at two- or three-centimetre resolution.” Smaller tractors can spray crops over a smaller area, coming closer to matching the pixel-by-pixel picture of crop health provided by drone imagery. Abell, who comes from a farming family, added that transitioning away from big, heavy machinery would be better for the land, too. “All that weight is knocking the life out of the soil,” he said. “Our yields have plateaued, and that’s a big part of why.”

    The Hands Free Hectare team envisions a future in which farmers are fleet managers, programming their vehicles from a central mission control and using the time saved to focus on areas that need extra attention. “The actual driving of a tractor—I didn’t miss that at all,” Abell said. “And, by not spending all your time going in a straight line on auto-steer, it gives you more time to learn about your crop and hopefully manage it better.” So far, he said, the response among farmers has been generally enthusiastic; most of the reticence appears to come from the younger, early-career crowd. “The older guys that have been sat on a tractor for the last thirty years, seeing that it’s not the best use of their time—they appreciate it more,” Abell said.

    Self-driving tractors face many of the same safety issues as self-driving cars, in terms of cybersecurity and liability for accidents, so a good deal more work remains to be done before they will enter widespread use. Gill predicted that the first adopters will be in Japan, where the average farmer is seventy years old. Abell expects that commercial farmers in the U.K. will be automating at least some aspects of their operations within the next five years. The team’s focus, however, is on the even shorter term: first, a much needed vacation; then a new crop (winter wheat) in the ground by the end of October; and, finally, a special beer brewed from their hands-free harvest. “I’m hoping for a festive pint,” Gill said. “We’ll probably sell the rest to fund the project.”


    From: Savant 9/29/2017 5:53:10 PM
    1 Recommendation   of 1840
    RT...Outlaw Drones no match for Athena..

    also, Spider-Man Drone

    *Hmm, no doubt they'll employ it against people, too...


    To: Savant who wrote (1442) 9/29/2017 9:01:02 PM
    From: FUBHO
       of 1840
    Bold Eagles: Angry Birds Ripping $80,000 Drones Out of Sky...
    SYDNEY— Daniel Parfitt thought he’d found the perfect drone for a two-day mapping job in a remote patch of the Australian Outback. The roughly $80,000 machine had a wingspan of 7 feet and resembled a stealth bomber.

    There was just one problem. His machine raised the hackles of one prominent local resident: a wedge-tailed eagle.

    Wedge-tailed eagle

    Swooping down from above, the eagle used its talons to punch a hole in the carbon fiber and Kevlar fuselage of Mr. Parfitt’s drone, which lost control and plummeted to the ground.

    “I had 15 minutes to go on my last flight on my last day, and one of these wedge-tailed eagles just dive-bombed the drone and punched it out of the sky,” said Mr. Parfitt, who believed the drone was too big for a bird to damage. “It ended up being a pile of splinters.” ...


    From: Glenn Petersen 10/1/2017 4:28:48 PM
    1 Recommendation   of 1840
    The security threat we’ve been ignoring: Terrorist drones

    By David Von Drehle Columnist
    Washington Post
    September 29 at 7:48 PM

    A man watches a drone. (Petros Karadjias/AP)

    Terrorist drones.

    Two years ago, you would have had a tough time getting a meeting with a junior staffer in Washington to discuss the subject. A year ago, people had begun furrowing brows. Now, this is Topic A for an entire “community of experts that has emerged inside the federal government,” as National Counterterrorism Center chief Nicholas Rasmussen told a panel of senators Wednesday. “It’s a real problem,” he said.

    How real? Islamic State fighters in Iraq and Syria, using off-the-shelf aircraft modified to drop grenades, have repeatedly menaced U.S. Special Operations forces. If they can do it in Raqqa, surely someone will try to do it here.

    FBI Director Christopher A. Wray, testifying to the same panel, said the threat is palpable and immediate: “The expectation is it’s coming here imminently.” Drones are “relatively easy to acquire, relatively easy to operate, and quite difficult to disrupt and monitor.”

    Most drones on sale in the United States are small, short-range birds aimed at the hobbyist market and unsuited to carrying cargo. But for the price of a flat-screen TV, a would-be terrorist can go online and purchase a commercial model heavy enough to deliver a small package. You don’t need to be a Hollywood screenwriter to imagine what might come next: A nearly silent, low-altitude little helicopter bearing a small bomb or supply of toxic material hums over metal detectors and barriers and bodyguards to strike a public gathering or senior official or hallowed landmark. In 2015, a hobbyist’s drone landed on the White House lawn.

    Awakening to the threat, the Trump administration drafted legislation this year to enhance police powers in tracking civilian drones and their payloads — including by codifying the authority to destroy drones in flight that appear menacing. Among the fast-growing number of hobbyists and entrepreneurs who fly drones for fun or profit, many have expressed alarm at the proposed legislation. But perhaps they should take their complaints to al-Qaeda.

    The hard fact is that today’s danger will soon be followed by a much larger threat. While Rasmussen and Wray were on Capitol Hill testifying about the problem of radio-controlled and GPS-directed drones of relatively small size, many of the top scientists in the field of autonomous flight were gathering in Vancouver, B.C., to share details of their progress on building entirely self-guided aircraft.

    One such scientist shared his concerns with me about where this is headed. He says with confidence that off-the-shelf drone technology will move rapidly from grenade-size payloads to cargoes of 10 or 12 pounds. As belt-wearing suicide bombers have shown, a tremendous amount of destruction can be packed into a vessel about that size.

    More important, though, is the rapid progress being made toward truly autonomous navigation. Encouraged by Pentagon planners who envision helicopter rescues and resupply missions in combat zones that don’t put human crews at risk, the scientists at the Vancouver meeting are well on their way to building drones that fly independently of radio guidance or GPS. These aircraft can “see” and survey terrain to know exactly where they are and steer themselves accordingly to avoid traffic, trees and power lines on the way to their destinations.

    Or their targets.

    If you’re excited by the idea of one day stepping into a personal aircraft, tapping an endpoint into the control panel and whisking to work high above snarled traffic, the promise of truly autonomous flight is exciting. Deep-pocketed innovators such as Google founder Larry Page and Airbus are pursuing the idea of self-flying personal aircraft.

    But if you are a counterterrorism expert, this stuff worries you. Our best defenses against terrorist drones fit into three categories. The first is radar, but radar performs poorly against small, low-flying craft. Alternatively, we can jam radio signals, but the autonomous drones now in development won’t need a radio controller. Our third defense is to interrupt GPS. But a drone that can “read” terrain and react to obstacles will be able to fly without such guidance.

    “The vehicles we are developing [for the Pentagon] are specifically designed to foil all three prevention systems,” the worried scientist told me.

    One thing we know about technology: Today’s improbable experiment is tomorrow’s inexpensive gadget. You likely have more computing power in your car than NASA had on the space shuttle. If autonomous drones are able to execute complex simulated combat missions today — and they are — you can bet that technology will eventually be widely and cheaply available in smaller forms to ordinary buyers.

    And one thing we know about terrorists: They love gadgets. They’ve learned to detonate bombs using cellphones, toy cars, garage door openers and so on. Once they have access to nearly undetectable flying machines big enough to carry substantial payloads, we can be confident they will try to use them to wreak havoc. Which is a threat worth all the attention the community of experts can give to it.

    Perhaps Washington has been slow to see this new threat. But it’s not too late. Yet.


    Copyright © 1995-2018 Knight Sac Media. All rights reserved. Stock quotes are delayed at least 15 minutes - See Terms of Use.