Illustration by Alex Castro
Right now, a minivan with no one behind the steering wheel is driving through a suburb of Phoenix, Arizona. And while that may seem alarming, the company that built the “brain” powering the car’s autonomy wants to assure you that it’s totally safe. Waymo, the self-driving unit of Alphabet, is the only company in the world to have fully driverless vehicles on public roads today. That was made possible by a sophisticated set of neural networks powered by machine learning, about which very little is known — until now.
For the first time, Waymo is lifting the curtain on what is arguably the most important (and most difficult-to-understand) piece of its technology stack. The company, which is ahead in the self-driving car race by most metrics, confidently asserts that its cars have the most advanced brains on the road today. That’s thanks to a head start in AI investment, some strategic acquisitions by sister company Google, and a close working relationship with the tech giant’s in-house team of AI researchers.
Anyone can buy a bunch of cameras and LIDAR sensors, slap them on a car, and call it autonomous. But training a self-driving car to behave like a human driver, or, more importantly, to drive better than a human, is on the bleeding edge of artificial intelligence research. Waymo’s engineers are modeling not only how cars recognize objects in the road, for example, but how human behavior affects how cars should behave. And they’re using deep learning to interpret, predict, and respond to data accrued from its 6 million miles driven on public roads and 5 billion driven in simulation.
Anca Dragan, one of Waymo’s newest employees, is at the forefront of this project. She joined the company in January after running the InterACT Lab at the University of California, Berkeley, which focuses on human-robot interactions. (A photo on the Berkeley website features Dragan smiling broadly while a robot arm pours her a steaming cup of coffee.) Her role is to ensure our interactions with Waymo’s self-driving cars — as pedestrians, as passengers, as fellow drivers — are wholly positive. Or to put it another way: she’s our backstop against the inevitable robot revolution.
Dragan has to strike a balance. While we don’t want robot overlords, neither do we want milquetoast robot drivers. For instance, if you’re barreling down a busy highway at 65 mph and you want to merge into the left lane, you may just nudge your way in until the other drivers eventually make space for you. A self-driving car that’s been trained to follow the rules of the road may struggle to do that. A video recently appeared on Twitter showing one of Waymo’s minivans trying to merge onto a busy highway and pretty much failing at it.
“How can we make it adapt to the drivers that it’s sharing the road with?” Dragan says. “How do you tailor it to be more comfortable or drive more naturally? Those are the subtle improvements that if you want those to work, you really need a system that fricking works.”
Photo by Amelia Holowaty Krales / The Verge
For an innovation that’s supposed to save us from traffic fatalities, it’s been an extremely discouraging few months. In March, a 49-year-old woman was struck and killed by a self-driving Uber vehicle while crossing the street in Tempe, Arizona. A few weeks later, the owner of a Tesla Model X died in a gruesome crash while using Autopilot, the automaker’s semi-autonomous driver assist system. And just last week, a self-driving Waymo minivan was T-boned by a Honda sedan that had swerved into oncoming traffic.
Meanwhile, the public is growing increasingly skeptical. Regulators are starting to rethink the free pass they were considering giving to companies to build and test fully driverless vehicles. In the midst of all this uncertainty, Waymo invited me out to its headquarters in Mountain View, California, for a series of in-depth interviews with the company’s top minds in artificial intelligence.
Waymo is housed within X, Google’s high-risk research and development laboratory, which is located a few miles from the main Googleplex campus. (In 2015, when Google restructured itself into a conglomerate called Alphabet, X dropped the Google from its name.) A year later, Google’s self-driving car project “graduated” and became an independent company called Waymo. The self-driving team is still housed in the mother ship, though, alongside the employees working on delivery drones and internet balloons.
The building, a former shopping mall, is Bay Area bland. The only thing to distinguish it is the pair of self-driving Chrysler Pacifica minivans tooling around in the parking lot that occasionally pull over so employees can take selfies in front of them. In Googleland, the celebrities are the cars.
Waymo already has a huge lead over its competitors in the field of autonomous driving. It has driven the most miles — 6 million on public roads, and 5 billion in simulation — and has collected vast stores of valuable data in the process. It has partnerships with two major automakers, Fiat Chrysler and Jaguar Land Rover, with several more in the pipeline. Its test vehicles are on the road in Texas, California, Michigan, Arizona, Washington, and Georgia. And it plans to launch a fully driverless commercial taxi service in Arizona later this year.
Now, the company wants its advantages in the still-growing field of AI to be more widely known. Waymo CEO John Krafcik gave a presentation at Google’s annual I/O developer conference this week. And the message was clear: our cars can see farther, perceive better, and make snap decisions faster than anyone else.
“It’s a really hard problem if you are working on a fully self-driving vehicle... because of capability requirements and accuracy requirements,” Dmitri Dolgov, Waymo’s chief technology officer and vice president of engineering, tells me. “And experience really matters.”
Deep learning, which is a type of machine learning that uses lots of layers in a neural network to analyze data at different abstractions, is the perfect tool for improving the perception and behavior of self-driving cars, Dolgov says. “And we started pretty early on it ... just as the revolution was happening right here, next door.”
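For readers who haven’t seen one, here is a minimal sketch of what “lots of layers” means in practice. It’s written in PyTorch with invented sizes, purely to illustrate the idea of stacked abstraction; it bears no relation to Waymo’s actual models:

    import torch
    import torch.nn as nn

    # A "deep" network is a stack of layers, each turning its input into a
    # slightly more abstract representation. All sizes here are made up.
    deep_net = nn.Sequential(
        nn.Linear(64, 128), nn.ReLU(),   # layer 1: raw features -> simple patterns
        nn.Linear(128, 128), nn.ReLU(),  # layer 2: combinations of those patterns
        nn.Linear(128, 64), nn.ReLU(),   # layer 3: higher-level abstractions
        nn.Linear(64, 10),               # output: scores for 10 hypothetical classes
    )

    x = torch.randn(1, 64)               # one fake input vector
    print(deep_net(x).shape)             # torch.Size([1, 10])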
AI specialists from the Google Brain team regularly collaborate with Dolgov and his fellow engineers at Waymo on methods to improve the accuracy of its self-driving cars. Lately, they’ve been working together on some of the buzzier elements of AI research, like “automated machine learning,” in which neural nets are used to train other neural nets. Waymo may be its own company, but when it comes to projecting an aura of invulnerability, it helps to have your older and much tougher brother at your back.
The sudden interest by Waymo in burnishing its AI credentials is tied to its high-stakes effort to deploy vehicles that don’t require someone in the driver’s seat. To date, Waymo is the only company to take on this risk. The rest of the industry is rushing to catch up, buying up tiny startups in an effort to jump-start their own autonomy efforts. Moreover, key members of Google’s self-driving team have left to hang their own shingle, lured by big possibilities and lots of money, and leaving the tech giant to wrestle with news stories about “attrition” and “brain drain.”
Former members of Google’s self-driving team and outside experts concede that Waymo indeed appears to have a major head start in the field, but acknowledge that its competitors are likely to catch up eventually. After all, Waymo doesn’t have a monopoly on machines with brains.
“As strong as Google is,” says Dave Ferguson, a former lead engineer on Google’s self-driving team who has since left to start his own company called Nuro, “the field is stronger.”
This wasn’t always the case. Back in the early 2000s, the field was fairly weak.
Neural networks, a type of machine learning in which programmers build models that sift through vast stores of data and look for patterns, weren’t hot yet. A big shift was going from neural nets that were quite shallow (two or three layers) to deep nets (double-digit layers). While the concept dates back to the 1950s — at the birth of AI research — most computers weren’t powerful enough to process all the data needed. All that changed with ImageNet.
ImageNet started out as a poster from Princeton University researchers, displayed at a 2009 conference on computer vision and pattern recognition in Florida. (Posters are a typical way of sharing information at these types of machine learning conferences.) From there, it grew into an image dataset, then a competition to see who could create an algorithm that could identify the most images with the lowest error rate. The competition dataset was “trimmed” from around 10,000 categories to just a thousand image categories, or “classes,” including plants, buildings, and 90 of the 120 dog breeds. Around 2011, the error rate was about 25 percent, meaning one in four images was being identified incorrectly by the teams’ algorithms.
Help came from an unexpected place: powerful graphics processing units (GPUs) typically found in the video game world. “People started realizing that those devices could actually be used to do machine learning,” says Vincent Vanhoucke, a former voice researcher at Google who now serves as the company’s technical lead for AI. “And they were particularly well-suited to run neural networks.”
The biggest breakthrough came in 2012, when AI researcher Geoffrey Hinton and his two graduate students, Ilya Sutskever and Alex Krizhevsky, showed a new way to attack the problem: they entered a deep convolutional neural network in the ImageNet Challenge that could detect pictures of everyday objects. Their neural net embarrassed the competition — reducing the error rate on image recognition to 16 percent, from the 25 percent other methods produced.
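As an illustration only, here is the skeleton of a convolutional classifier in PyTorch, scaled down to toy dimensions (the 2012 network was vastly larger and trained on over a million images):

    import torch
    import torch.nn as nn

    # Stacked convolutions learn visual features; a final linear layer maps
    # them to class scores. Toy sizes throughout.
    cnn = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 8 * 8, 1000),    # scores for 1,000 ImageNet-style classes
    )

    images = torch.randn(4, 3, 32, 32)  # a fake batch of 32x32 RGB images
    print(cnn(images).shape)            # torch.Size([4, 1000])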
“I believe that was the first time that a deep learning, neural net-based approach beat the pants off a more standard approach,” says Ferguson, the former Google engineer. “And since then, we’ve never looked back.”
Krizhevsky takes a more circumspect view of his role in the 2012 ImageNet Challenge. “I guess we were at the right place at the right time,” he tells me. He attributes their success to his hobby of programming GPUs to run code for the team’s neural net, enabling them to run experiments that would normally take months in just a matter of days. And it was Sutskever who made the connection to apply the technique to the ImageNet competition, he says.
Hinton and his team’s success “triggered a snowball effect,” Vanhoucke says. “A lot of innovation came from that.” An immediate result was Google acquiring Hinton’s company DNNresearch, which included Sutskever and Krizhevsky, for an undisclosed sum. Hinton stayed in Toronto, and Sutskever and Krizhevsky moved to Mountain View. Krizhevsky joined Vanhoucke’s team at Google Brain. “And that’s when we started thinking about applying those things to Waymo,” Vanhoucke says.
Another Google researcher, Anelia Angelova, was the first to reach out to Krizhevsky about applying their work to Google’s car project. Neither officially worked on that team, but the opportunity was too good to ignore. They created an algorithm that could teach a computer to learn what a pedestrian looked like — by analyzing thousands of street photos — and identify the visual patterns that define a pedestrian. The method was so effective that Google began applying the technique to other parts of the project, including prediction and planning.
Problems emerged almost immediately. The new system was making too many errors, mislabeling cars, traffic signals, and pedestrians. It also wasn’t fast enough to run in real time. So Vanhoucke and his team combed through the images and discovered that most of the errors were mistakes made by human labelers. Google had brought the labelers in to provide a baseline, or “ground truth,” to measure the algorithm’s success rate — and they’d instead added mistakes. The problem with autonomous cars, it turned out, was still people.
After correcting for human error, Google still struggled to modify the system until it could recognize images instantly. Working closely with Google’s self-driving car team, the AI researchers decided to incorporate more traditional machine learning approaches, like decision trees and cascade classifiers, with the neural networks to achieve “the best of both worlds,” Vanhoucke recalls.
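The article doesn’t spell out how that hybrid worked, but the general pattern is common enough to sketch: a cheap, fast classifier screens candidate image regions so the slow neural net only runs on plausible ones. Everything below (the data, the stand-in “slow” model, the feature layout) is invented for illustration:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Train a cheap screening classifier on fast-to-compute features.
    rng = np.random.default_rng(0)
    cheap_features = rng.normal(size=(1000, 8))             # synthetic features
    worth_a_look = (cheap_features[:, 0] > 0).astype(int)   # fake labels
    screen = DecisionTreeClassifier(max_depth=3).fit(cheap_features, worth_a_look)

    def slow_neural_net(features):
        return "pedestrian"        # stand-in for an expensive deep model

    # At "runtime": only regions that pass the cheap stage reach the slow one.
    candidates = rng.normal(size=(10, 8))
    for feats in candidates:
        if screen.predict(feats.reshape(1, -1))[0] == 1:    # fast cascade stage
            print(slow_neural_net(feats))                   # expensive stage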
“It was a very, very exciting time for us to actually show that those techniques that have been used to find cat pictures and interesting things on the web,” he says. “Now, they were actually being used for improving safety in driverless cars.”
Krizhevsky left Google several years later, saying he “lost interest” in the work. “I got depressed for a while,” he admits. His departure baffled his colleagues at Google, and he has since taken on a bit of a mythical status. (Ferguson called him an “AI whisperer.”) Today, Krizhevsky wonders whether these early successes will be enough to give Google an insurmountable lead in the field of autonomy. Other car and tech companies have already caught on to the importance of machine learning, and Waymo’s data may be too specific to extrapolate to a global scale.
“I think Tesla has the unique advantage of being able to collect data from a very wide variety of environments because there are Tesla owners with self-driving hardware all over the world,” he tells me. “This is very important for machine learning algorithms to generalize. So I would guess that at least from the data side, if not the algorithmic side, Tesla might be ahead.”
AI and machine learning are essential to self-driving cars. But some of Waymo’s competitors — which include former members of Google’s self-driving team — wonder how much longer the company’s advantages will last.
Sterling Anderson is the ex-director of Autopilot at Tesla and co-founder of Aurora Innovation, which he started with the former head of Google’s self-driving car program, Chris Urmson. He says that a natural consequence of improvements in AI is that big head starts like Waymo’s are “less significant than they have been.” In other words, everyone working on self-driving cars in 2018 is already using deep learning and neural nets from the outset. The shine is off. And like an old piece of fruit, a lot of that data from the early days has grown mushy and inedible. A mile driven in 2010 is not a mile driven in 2018.
“Data gets left on the floor after a number of years,” Anderson says. “It becomes useful for learning and becomes useful for evolving the architecture and evolving the approach. But at some point, claiming that I’ve got X million miles or X billion miles, or whatever it is, becomes less significant.”
Waymo’s engineers agree. “For the sake of machine learning specifically, there’s such a thing as a point of diminishing returns,” says Sacha Arnoud, head of the company’s machine learning and perception division. “Driving 10X more will not necessarily give you much greater datasets, because what matters is the uniqueness of the examples you find.”
In other words, each additional mile that Waymo accrues needs to be interesting for it to be relevant to the process of training the company’s neural networks. When the cars encounter edge cases or other unique scenarios, like jaywalkers or parallel-parking cars, those are filtered through Waymo’s simulator to be transformed into thousands of iterations that can be used for further training.
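Waymo hasn’t published how that multiplication works, but the general recipe (log one rare scene, then perturb its parameters to mint thousands of variants) is easy to sketch. The scenario fields below are invented:

    import random

    # One logged edge case, described by a few invented parameters.
    base_scenario = {"jaywalker_speed": 1.4, "offset_m": 3.0, "hour": 14}

    def perturb(s, rng):
        """Return a randomized variation of a logged scenario."""
        return {
            "jaywalker_speed": s["jaywalker_speed"] * rng.uniform(0.5, 2.0),
            "offset_m": s["offset_m"] + rng.uniform(-2.0, 2.0),
            "hour": rng.randint(0, 23),
        }

    rng = random.Random(42)
    variants = [perturb(base_scenario, rng) for _ in range(10_000)]
    print(len(variants), variants[0])   # 10000 slightly different jaywalks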
Robots can also be tricked. Adversarial images, or pictures engineered to fool machine vision software, can be used to undermine or even crash self-driving cars. Stickers can be applied to a stop sign to confuse a machine vision system into thinking it’s a 45 mph sign.
A neural network trained by Google to identify everyday objects was recently tricked into thinking a 3D-printed turtle was actually a gun. Waymo’s engineers say they are building redundancy into their system to address these possibilities. Add this to the long list of concerns surrounding self-driving cars, which includes hacking, ransomware, and privacy breaches.
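For the curious, the classic recipe for such adversarial images, the “fast gradient sign method,” fits in a few lines: nudge every pixel slightly in whichever direction increases the model’s error. The toy model below is ours, not any production vision system:

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # tiny toy model
    loss_fn = nn.CrossEntropyLoss()

    image = torch.rand(1, 1, 28, 28, requires_grad=True)  # fake 28x28 image
    true_label = torch.tensor([3])

    loss = loss_fn(model(image), true_label)
    loss.backward()                      # gradient of the error w.r.t. each pixel

    epsilon = 0.05                       # an imperceptibly small per-pixel change
    adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)
    # 'adversarial' looks identical to a human but can flip the model's answer.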
Jaywalking with a median; jaywalking without a median; a construction worker in a manhole; parallel parking. Images: Waymo
Dolgov is sitting in one of X’s conference rooms, whiteboard marker in hand, MacBook Pro splayed before him, asking me to describe to him the difference between Garfield and Odie.
Before I can stammer out a reply, Dolgov keeps going: “If I give you a picture, and I ask you ‘is it a cat or a dog,’ you will know very quickly, right? But if I ask you to describe to me how you came to that conclusion, it would not be trivial. You think it has something to do with the size of the thing; the number of legs is the same, the number of tails is the same, usually the same number of ears. But it’s not obvious.”
This type of question is well suited for deep learning algorithms, Dolgov says. It’s one thing to come up with a bunch of basic rules and parameters, like red means stop and green means go, to teach a computer to distinguish between different types of traffic signals. But there is no such rulebook for pedestrians: teaching a computer to pick a pedestrian out of an ocean of sensor data by example is easier than describing the difference, or even encoding it.
Waymo uses an automated process and human labelers to train its neural nets. After they’ve been trained, these giant neural nets also need to be pruned and shrunk so they can be deployed in the real world in Waymo’s vehicles. This process, akin to compressing a digital image, is key to building the infrastructure to scale to a global system.
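Waymo hasn’t said which compression techniques it uses, but one textbook approach, magnitude pruning, conveys the flavor: zero out the smallest weights of a trained layer and keep only the influential ones:

    import torch
    import torch.nn as nn

    layer = nn.Linear(512, 512)              # stand-in for one trained layer

    with torch.no_grad():
        w = layer.weight
        threshold = w.abs().quantile(0.9)        # keep the largest 10% of weights
        w.mul_((w.abs() >= threshold).float())   # zero out the rest, in place

    sparsity = (layer.weight == 0).float().mean().item()
    print(f"{sparsity:.0%} of weights pruned")   # ~90%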
If you look at images captured by the cars’ cameras and place them alongside the same scene built from the vehicle’s laser sensor data, you start to see the enormity of the problem that Waymo is trying to address. If you’ve never seen a LIDAR rendering, the best way to describe it is as Google’s Street View reimagined as a psychedelic blacklight poster.
These images provide a bird’s-eye view of the self-driving car and what it “sees” around it. Pedestrians are depicted as yellow rectangles, other vehicles are purple boxes, and so forth. Waymo has categories for “dog cat” and “bird squirrel,” among others, that it uses for animals. (It turns out the differences between a dog and a cat aren’t entirely relevant to autonomous vehicles.) Beyond that, Waymo is training its algorithms to perceive atypical actors in the environment: a construction worker waist-deep in a manhole, someone in a horse costume, a man standing on the corner spinning an arrow-shaped sign.
To take a human driver out of the equation, the car needs to be adaptive to the weirder elements of a typical drive. “Rare events really, really matter,” Dolgov tells me, “especially if you are talking about removing a driver.”
Programming the car to respond to someone crossing the street in daylight is one thing, but getting it to perceive and react to a jaywalker is entirely different. What if the jaywalker stops at a median? Waymo’s self-driving cars will react cautiously, since pedestrians often walk up to a median and wait. What if there is no median? The car recognizes this as unusual behavior and slows down enough to allow the pedestrian to cross. Waymo has built models using machine learning to recognize and react to both normal and unusual behavior.
Neural nets need a surfeit of data to train. That means Waymo has amassed “hundreds of millions” of vehicle labels alone. To help put that in context, Waymo’s head of perception Arnoud estimated that a person labeling a car every second would take 20 years to reach 100 million. Operating every hour of every day of every week, and hitting 10 labels a second, it still takes Waymo’s machines four months to scroll through that entire dataset during its training process, Arnoud says.
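Those figures roughly check out with back-of-the-envelope arithmetic (the working-hours assumption below is ours, not Arnoud’s):

    # Label-count arithmetic for the figures quoted above.
    labels = 100_000_000

    print(labels / (3600 * 24 * 365))   # ~3.2 years at 1 label/sec, nonstop
    print(labels / (3600 * 8 * 250))    # ~13.9 years at 1/sec over 8h workdays,
                                        # which lands near "20 years" with breaks

    machine_days = labels / 10 / (3600 * 24)   # machines at 10 labels per second
    print(machine_days)                 # ~116 days, i.e. roughly four months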
It takes more than a good algorithm to break free of the geofenced test sites of the Phoenix suburbs. If Waymo wants its driverless cars to be smart enough to operate in any environment and under any conditions — defined as Level 5 autonomy — it needs a powerful enough infrastructure to scale its self-driving system. Arnoud calls this the “industrialization” or “productionization” of AI.
As part of Alphabet, Waymo uses Google’s data centers to train its neural nets. Specifically, it uses a high-powered cloud computing hardware system called “tensor processing units,” which underpins some of the company’s most ambitious and far-reaching technologies. Typically, this work is done using commercially available GPUs, often from Nvidia. But over the last few years, Google has opted to build some of this hardware itself and optimize for its own software. TPUs are “orders of magnitude” faster than CPUs, Arnoud says.
The future of AI at Waymo isn’t sentient vehicles. (Sorry, Knight Rider fans.) It’s in cutting-edge research like automated machine learning, in which the process of building machine learning models is automated. “Essentially, the idea that you have AI machine learning that’s creating other AI models that actually solve the problem you’re trying to solve,” Arnoud says.
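Stripped to its essence, the loop Arnoud describes looks like this: one program proposes model configurations, another scores them, and the best survive. The sketch below uses random search and a fake scoring function purely to show the shape of the idea; real systems use far more sophisticated search:

    import random

    rng = random.Random(0)

    def propose():
        """One program proposes a candidate architecture."""
        return {"layers": rng.randint(2, 20), "width": rng.choice([64, 128, 256])}

    def score(arch):
        """Stand-in for 'train this net and measure validation accuracy'."""
        return rng.random() - 0.01 * arch["layers"]   # fake quality signal

    candidates = [propose() for _ in range(50)]
    best = max(candidates, key=score)
    print(best)            # the surviving architecture under the fake metric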
Automated machine learning becomes extremely useful for driving in areas with unclear lane markings. The most challenging driving environments require self-driving cars to make guidance decisions without white lines, Botts’ Dots, or clear demarcations at the edge of the road. If Waymo can build machine learning models to train its neural nets to drive on streets with unclear markings, then its self-driving cars can put the Phoenix suburbs in their rear view and eventually hit the open road.
An AI learns to spot tree species, with help from a drone
MIT Technology Review May 10, 2018
A consumer-grade drone can take photos of trees from above that are good enough to train a deep learning algorithm to tell different species apart.
Details: The team behind the project flew their drone over a forest in Kyoto, Japan, then took some of the photos and divided them into seven categories: six types of trees and one called “others” for images that captured bare land or buildings.
Results: After some fiddling, the algorithm (which was on an earth-bound computer) achieved an overall 89 percent accuracy.
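The article doesn’t give the architecture, but classifiers like this are typically built by transfer learning: take a network pretrained on everyday photos and retrain only its final layer on the seven drone-photo categories. A hedged sketch (the Kyoto team’s exact setup may differ):

    import torch.nn as nn
    from torchvision import models

    # Start from an ImageNet-pretrained backbone (torchvision >= 0.13 API)
    # and freeze its feature layers.
    net = models.resnet18(weights="IMAGENET1K_V1")
    for p in net.parameters():
        p.requires_grad = False

    # Replace the final layer: six tree species plus an "others" class.
    net.fc = nn.Linear(net.fc.in_features, 7)

    # ...then train only net.fc on the labeled drone photos as usual.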
Why it matters: Forest surveys typically use expensive systems outfitted with lidar or specialized cameras. This commercially-available setup could be a cheap way to automate tree surveys, and the algorithm could be re-trained to aid in disaster response, check pipelines for leaks, or other monitoring efforts that need to quickly cover a large area.
"This commercially-available setup could be a cheap way to automate tree surveys, and the algorithm could be re-trained to aid in disaster response, check pipelines for leaks, or other monitoring efforts that need to quickly cover a large area."
Or to find Sasquatch or see what that bear's doing in the woods...lol.
A scorecard breaking down everyone from Alphabet’s Waymo to Zoox.
By David Welch and Elisabeth Behrmann Bloomberg May 7, 2018
Illustration: Andrea Chronopoulos
In the race to start the world’s first driving business without human drivers, everyone is chasing Alphabet Inc.’s Waymo.
The Google sibling appears set to beat its nearest rivals, General Motors Co. and a couple of other players, by at least a year in introducing driverless cars to the public. A deal reached in January to buy thousands of additional Chrysler Pacifica minivans, which get kitted out with sensors that can see hundreds of yards in any direction, puts Waymo’s lead into stark relief. No other company is offering for-hire rides yet, let alone preparing to carry passengers in more than one city this year.
GM plans to start a ride-hailing service with its Chevrolet Bolt—the one with no steering wheel or pedals, the ultimate goal in autonomous technology—late next year, assuming the U.S. government has protocols in place by then. Most of the others trying to solve the last remaining self-driving puzzles are more cautious, targeting 2020 or later.
The road to autonomy is long and exceedingly complicated. It can also be dangerous: Two high-profile efforts, from Uber Technologies Inc. and Tesla Inc., were involved in recent crashes that caused the death of a pedestrian (in the first known case of a person killed by a self-driving vehicle) and a driver using an assistance program touted as a precursor to autonomy. One of Waymo’s autonomous vans was involved in a collision just last week. But the perceived stakes are so enormous, with the promise of transport businesses needing little in labor costs, that many players are racing to master the technology and put it to work.
Without drivers, operating margins could be ... more than twice what carmakers generate right now
In the next three years, almost all of these contenders will be able to show off cars capable of navigating city streets at casual speeds along firmly fixed routes. Most of the companies now building autonomous vehicles can already handle basic driving at low speeds. This can give an impression of parity. Yet despite being in its infancy, autonomous driving already has leaders starting to emerge.
“Waymo has developed a phenomenal system and is ahead of the pack,” said Brian Collie, head of Boston Consulting Group’s U.S. automotive practice, who singled out the top two. “But that’s very different from being able to manufacture an autonomous vehicle. You have to look at GM. In Europe, Daimler is leading the pack.”
The finish line isn’t just reaching Level 4 on the five-step scale of autonomous driving. That’s the threshold at which a car can drive on pre-mapped routes and handle anything on its planned course without the intervention of a driver. Only Waymo has tested Level 4 vehicles on passengers who aren’t its employees—and those people volunteered to be test subjects. No one has yet demonstrated at Level 5, where the car is so independent that there’s no steering wheel.
The victors will also need to pioneer businesses around the technology. Delivery and taxi services capable of generating huge profits are the end game for all.
Goldman Sachs Group Inc. predicts that robo-taxis will help the ride-hailing and -sharing business grow from $5 billion in revenue today to $285 billion by 2030. There are grand hopes for this business. Without drivers, operating margins could be in the 20 percent range, more than twice what carmakers generate right now. If that kind of growth and profit come to pass—very big ifs—it would be almost three times what GM makes in a year. And that doesn’t begin to count the money to be made in delivery.
Why does it matter who gets there first? To make a driverless business work takes a big fleet to establish service in major markets, as well as a brand name that becomes as synonymous with getting a ride as Uber is today. Observers expect the field to narrow.
For now, investors are throwing money at possible winners
“There won’t be a ton of companies doing this,” Collie said. “There will be a select few. Being there first establishes consumer trust. Brand value matters.”
For now, investors are throwing money at possible winners. Tesla’s valuation soared in 2016 after an analyst from Morgan Stanley, also its lead underwriter, speculated that the company’s electric cars would spawn a self-driving fleet. GM shares are up 20 percent since a June 2017 announcement that a plant to build driverless vehicles was up and running. Zoox Inc. has already raised $360 million, a huge sum for a startup with no revenue.
Of course, the era when most people ditch their driver’s licenses and rely on self-driving taxis remains far off. The technology costs more than the cars, and with few players actually testing the cars for the public, widespread adoption is years away. Even Waymo is still in the pilot stage.
The most aggressive forecasts have the majority of people driving their own cars for at least the next decade. Chris Urmson, founder of Aurora Innovation Inc. and one of the pioneers of the field, counts as one of the optimists. “I can see these on the road in real numbers in five to 10 years,” he said. That means even today’s laggards have time to catch up.
After interviewing executives and technology experts and reviewing announced plans, Bloomberg has taken a snapshot of the race to develop the self-driving car. Our estimated time of autonomy is based on Level 4, the prerequisite for launching businesses with self-driving tech.
The Clear Leaders
Waymo has run self-driving cars over 5 million road miles in 25 cities and done billions of miles in computer simulation, which it uses to update its self-driving software on a weekly basis. The Google-launched company has a fleet of Chrysler Pacifica minivans that can navigate city streets in San Francisco and reach full speed on highways.
A pilot program of driverless vans will begin commercial service later this year, picking up paying passengers in Phoenix and branching out from there. Waymo Chief Executive John Krafcik recently announced a deal to add 20,000 Jaguar I-Pace SUVs to the fleet and signaled that an in-the-works alliance with Honda Motor Co. could focus on delivery and logistics.
The company also has by far the lowest rate of disengagement—times when an engineer needs to grab the wheel because the bot couldn’t handle it—among all companies testing cars in California, a hub of autonomous research that also requires detailed disclosures. It also reported fewer accidents while testing in California last year: Waymo had three collisions over more than 350,000 miles, while GM had 22 over 132,000 miles.
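Per mile, the gap in those California numbers is stark; a quick calculation from the figures above:

    # Reported collisions divided by reported test miles (figures above).
    waymo_rate = 3 / 350_000          # collisions per mile
    gm_rate = 22 / 132_000

    print(waymo_rate * 100_000)       # ~0.9 collisions per 100k miles
    print(gm_rate * 100_000)          # ~16.7 collisions per 100k miles
    print(gm_rate / waymo_rate)       # GM's rate is ~19x Waymo's on these numbers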
GM’s Chevy Bolt can navigate the busy streets of San Francisco at speeds up to 25 miles per hour. The Detroit automaker is so confident that it plans to run a ride-hailing pilot next year in a car with no steering wheel or pedals, something only Waymo has done in road testing.
After Waymo, a handful of major players have demonstrated similar driving capabilities. It’s hard to say anyone has an edge. One advantage for GM: There's a factory north of Detroit that can crank out self-driving Bolts. That will help GM get manufacturing right and lower costs without relying on partners. Right now, an autonomous version of the car costs around $200,000 to build, compared to a sticker price of $35,000 for an electric Bolt for human drivers.
Where GM lags Waymo is speed. GM doesn’t test faster than 25 miles per hour, deeming that the safest top speed. Kyle Vogt, founder and chief executive of GM’s Cruise Automation unit, said his program will soon be using new Lidar developed by Strobe, which the automaker acquired last year. Lidar sends out laser beams to map the road ahead and guide the car, and Strobe’s version is smaller, cheaper and can see farther ahead than GM’s existing equipment. That will enable faster driving.
The new equipment will also cut costs. Lidar alone on the current generation of autonomous Bolts costs about $30,000 a car, Vogt said in November. When GM starts using Strobe, Vogt said, the cost will drop to “hundreds of dollars.”
GM plans to spend $1 billion of its $8 billion annual capital expenditure budget to develop self-driving cars and mobility services. That money will allow GM the option of developing its own ride-hailing business. GM has not decided whether to run its ride-share pilot, slated for late 2019, on its own or to join forces with an established player. It’s worth noting that the automaker already has a stake in Lyft Inc.
There’s a big caveat with GM: It leads all companies that test in California when it comes to fender benders. Last year, Cruise had 22 of the 27 accidents in the state involving driverless cars, and it experienced five of the seven incidents reported this year. The accidents have mostly been minor and not the fault of GM’s car. In an interview, GM President Dan Ammann attributed the higher incident rate to the greater number of miles traveled in San Francisco’s busy streets.
Staying Close
Mercedes-Benz started selling an adaptive cruise-control system in the late 1990s on its flagship S-class sedan. The system could sense when the car was bearing down too quickly on someone’s rear bumper up ahead.
Today, Mercedes models with Intelligent Drive get closer to real self-driving because the system can help steer clear of pedestrians and avoid other accidents. It’s one reason why Navigant Research, which studies auto technology, ranked parent company Daimler third behind Waymo and GM.
Those systems help today’s drivers. For the cars of tomorrow, Daimler works closely with Robert Bosch GmbH and will be using a system from Silicon Valley intelligent-computing company Nvidia Corp. The test cars can drive at Level 4 autonomy or even Level 5, which means the car doesn’t need a steering wheel or pedals to operate.
The company has been testing V-Class vans around the roads of Boeblingen, near Stuttgart, where Mercedes-Benz has a research center. The automated vans run through purposefully challenging situations such as morning traffic. The technology is already at Level 5, Daimler’s head of development, Ola Kaellenius, said in an interview, although a recent report by Bloomberg New Energy Finance put the target date for the company after 2020.
Before those systems are on the road, Kaellenius said Mercedes will offer Level 3 autonomy as an option in the cars it sells by 2021. This means that the car can handle most driving while prompting the driver to take over in certain situations that the computer can’t handle.
Fully self-driving cars will be on the road at the same time, he said, but would be used for ride sharing services, because they would be too expensive for retail customers to buy. “The logical business case there is a mobility service, a robo-taxi type of thing,” Kaellenius said. “You amortize the cost through the saving on the driver.”
No one would have imagined a decade ago that a vestige of bankrupt GM parts unit Delphi would be a player in the self-driving revolution. But Aptiv Plc, the former Delphi Automotive that split out its powertrain business, has emerged as a player to be watched, said Grayson Brulte, co-founder of Brulte & Co., a consulting firm that specializes in autonomous strategy.
Aptiv has invested heavily in self-driving technology, buying software maker Ottomatika along with stakes in Lidar makers Innoviz, LeddarTech and Quanergy Systems. Its biggest deal was buying NuTonomy, which has been running tests of driverless cars in Boston and Singapore at city speeds. The company also ran a robo-taxi demo in Las Vegas during CES.
The company has been testing ride-hailing services in Singapore since 2016 and will have them operational in 2021, according to Navigant. Aptiv has been working with Audi AG and Bayerische Motoren Werke AG cars to develop its technology.
The same day in late November that GM showed off its self-driving Bolt in San Francisco, Zoox Inc. had its own car driving through the city’s winding streets and heavy traffic. Zoox has about 250 engineers working to develop its technology. Its self-driving Toyota Highlander SUVs run on the same busy streets that GM uses to test the Bolt. But Zoox’s car can also drive at highway speeds, said Bert Kaufman, head of corporate and regulatory affairs for Zoox.
The company plans to have its car ready in 2020, Kaufman said, and will work on putting passengers in it shortly after.
The challenge for Zoox is getting more funding to build its car. The company has raised more than $280 million but needs additional cash to finish the vehicle, Kaufman said. It can cost $1 billion for car companies to finish a new model. Established carmakers have their own vehicles, and Waymo has partnerships with manufacturers.
Renault-Nissan Alliance Chairman Carlos Ghosn brags that the company has sold more cars with adaptive safety than anyone. Nissan’s ProPilot system stops the car if a vehicle ahead stops quickly and it keeps the car in its lane.
That system was developed on the way to a full autonomous system, Ghosn said in an interview earlier this year. Right now, Nissan is testing a fully-autonomous car in Palo Alto, California. Renault recently showed off a long, sleek, copper-colored concept car called the Symbioz that can go 80 miles per hour in full self-drive mode.
The car still requires a driver to turn on autonomous mode, at which point the steering wheel retracts. With electric motors in front and back and measuring a lane-hogging six feet in width and 16 feet in length, Symbioz isn’t exactly the car that will go on sale.
In March, Nissan tested an electric Leaf in a ride-hailing pilot in Yokohama, and Renault will do the same later this year in suburban Paris and Rouen with the electric Renault Zoe.
While the alliance's technology is impressive, Ghosn sounds cautious. The French-Japanese conglomerate plans to test a self-driver on the road around 2020. That car will be on highways, requiring only occasional driver intervention. By 2022, Renault-Nissan will have fully autonomous cars on the road, according to the Alliance 2022 plan.
“We will all be coming to market with this by 2022,” Ghosn said. “You’ll see all of the carmakers with some level of autonomy.”
Audi, the luxury brand owned by Volkswagen AG, already has the most advanced autonomous car for sale in the A8. The car’s Traffic Jam Pilot uses Lidar to see the road and lets drivers go completely hands-free at speeds up to 37 miles per hour.
The company’s future work promises to be much more advanced. Audi, which is working with Nvidia, is targeting a fully autonomous car in 2020; the report from BNEF put the date to reach Level 4 at 2021. The company hasn’t said whether it will be tested in a service or by its own engineers.
Volkswagen also has an agreement with Aurora, the startup whose founders have serious cred in the world of self-driving software. Its technical leaders are Urmson, a founder of Google’s self-driving effort, Sterling Anderson, who ran Tesla’s Autopilot program, and Drew Bagnell, formerly a leader on Uber’s autonomy team. The company has kept mum as to how it will go to market.
Following the Pack
BMW has a fleet of about 40 cars that can drive at Level 4 autonomy. The cars are driving around Munich and in California.
The maker of Ultimate Driving Machines doesn’t see itself selling the ultimate riding machine soon. The company is testing completely self-driving cars that it has developed with partner Intel Corp., which acquired sensor maker Mobileye, and with German parts maker Continental AG. Fiat Chrysler Automobiles NV recently joined the partnership, which plans to have self-driving technology in production vehicles by 2021.
The self-driving BMWs aren’t ready for the highways, BMW Chief Finance Officer Nicolas Peter said at a press event in Detroit. “This technology requires, from our perspective, some more time to have really fully automated cars on the road,” Peter said. There are currently about 1,000 people on the company’s research and development team.
No one can count Toyota Motor Corp. out. The company started developing self-parking technology in 1999 and installed it in the Prius in Japan in 2003, enabling the car to park with no input from the driver.
Toyota kept mum about capabilities until CES in January, when the company showed off a boxy shuttle concept called e-Palette. The Japanese automaker can make the self-driving shuttle in three sizes and it will debut publicly at the Tokyo Olympics in 2020 as a ride-hailing shuttle, said Gill Pratt, who runs Toyota Research Institute.
Still, Toyota’s message was one of patience. When Toyota tests its self-driving car in 2020, it may not have a driver—or it may still have two people minding the front seats and the controls, Pratt said.
He thinks a lot of carmakers and tech companies are hyping the true state of self-driving vehicles. “We will get there,” Pratt said, “but I can’t tell you when.”
Ford Motor Co. has been considered a laggard, especially since former CEO Mark Fields was fired last year, in part for not having a cohesive vision for autonomy and future mobility. But it’s not fair to say Ford is flat-footed. The company gets its technology from Argo AI, the artificial intelligence company in Pittsburgh in which Ford paid $1 billion to take a significant stake last year. That investment brought in very good capabilities, said Sam Abuelsamid, an analyst with Navigant Research.
The Argo team has a strong lineage. The startup is the brainchild of Bryan Salesky, who was director of hardware development of what is now Waymo, and Peter Rander, who was engineering lead at the Uber Advanced Technologies Group. Salesky’s experience dates back to the beginning of self-driving cars: He was senior software engineer on the winning team in the 2007 autonomous vehicle challenge funded by the Defense Advanced Research Projects Agency (Darpa).
Ford is now testing its third-generation Fusion sedan with Argo’s technology. Even with Argo, however, Ford got a late start. When Ford bought the startup in February 2017, the company had few employees and Salesky spent a year staffing up.
The plan is to have self-driving cars with Level 4 capability in 2021, said Sherif Marakby, Ford’s vice president of autonomous vehicles and electrification. The car will be purpose-built for autonomy, with no steering wheel or pedals. While Ford is a couple of years behind GM and Waymo, the company is experimenting with Domino’s Pizza to deliver pies and with Postmates to deliver other cargo. Ford is also preparing a Michigan factory to make autonomous vehicles.
Volvo Cars AB has a goal of eliminating all injuries to passengers in its cars by 2020. That looks unlikely, but the company has 500 people developing its own self-driving technology. Right now, its Pilot Assist gives a driver 15 seconds with hands off the wheel, keeping the car in lane and managing the distance to a vehicle ahead.
The company is testing its technology with a few families in Gothenburg, Sweden. The tests will start with driver assistance technology and move up to more advanced systems over time.
The automaker, owned by China’s Zhejiang Geely Holding Group, is developing more autonomous technology but won’t be ready to go to market until 2021, according to a report from Navigant. Volvo is also working with Uber to develop autonomous systems for the XC90 SUVs.
If you’re coming from behind, you might as well find a partner to usher things along. Korea’s Hyundai Motor Co. will have an advanced safety system on the road this month that allows drivers to take their hands off the wheel for 15 seconds.
The company isn’t ready to test truly self-driving cars, said Jinwoo Lee, vice president of Hyundai’s Intelligent Safety Technology Center in Korea. To get there, Hyundai decided to work with Aurora, the technology startup that is working with VW, as well as with prolific partner Nvidia, maker of artificial intelligence computing systems.
Hyundai plans to test its autonomous system in a small city in 2021. “We take very conservative steps,” Lee said in an interview. “We want to really test it and validate it.” There are no current plans to test autonomous technology on public roads, and the company said it doesn’t think it will be ready for market until 2025.
Unusual Cases and Dark Horses
Most traditional carmakers rushed to start self-driving vehicle programs once Waymo and Uber began working on the technology.
Automakers feared that low-priced self-driving taxi services would replace car ownership and that they would just supply the hardware, just as Foxconn Technology Co. makes the phone for Apple Inc.—and Apple makes the real money selling content and services.
Enter Fiat Chrysler. The automaker supplies the minivans to Waymo and helps integrate the technology, yet has little development of its own. The company has started working with Intel and BMW but will not try to establish leadership alone.
Ride-hailing giant Uber Technologies Inc. placed two huge bets on autonomous vehicles, first hiring top robotics talent from Carnegie Mellon in 2015 and then acquiring the self-driving trucking startup Otto in 2016. But the program has been mired in controversy after a high-profile lawsuit and, then, a fatal collision.
Throughout 2015, Uber recruited top robotics talent from Carnegie Mellon as it built its Advanced Technologies Group in Pittsburgh, Pennsylvania. That group, led today by former Carnegie Robotics co-founder Eric Meyhofer, has spearheaded Uber’s self-driving car program. In an effort to catapult Uber to the front of the autonomous-vehicle arms race, Uber acquired Otto Trucking in August 2016, buying a team filled with former employees of Alphabet’s self-driving car unit.
Less than a year later, Alphabet retaliated, filing a trade secrets lawsuit against Uber. The lawsuit revealed that Anthony Levandowski, who co-founded Otto after working on Google's self-driving car and then headed Uber's driverless-car development effort, had downloaded copies of work emails and sensitive files at Google. Levandowski has since left Uber, along with Otto’s three other co-founders. The ride-hailing company settled the lawsuit this year for $245 million in Uber equity, but not before the suit distracted its leaders and placed a black mark on its autonomous program.
Then in March, bad turned to tragic when a self-driving Uber struck and killed a pedestrian in Tempe, Arizona. Uber quickly suspended all of its public autonomous-vehicle testing as it awaits the results of that investigation.
If Tesla Chief Executive Elon Musk can get the world’s most powerful rocket off the ground with his company SpaceX, maybe he can also get cars to drive themselves. Tesla’s Model S and X both have Autopilot, which can pass other cars and change lanes with no hands on the wheel. While it’s not a fully autonomous system, it has given Tesla a lot of data about how its cars perform when driver-assistance software is engaged. Tesla has been under fire lately, after another person died in an accident while using Autopilot.
Where things get murky is that Musk eschews the Lidar systems that most carmakers and tech companies are using. He says he wants to develop more advanced imaging to give his cars a much better pair of eyes.
Musk wants to use cameras and develop image-recognition capabilities so cars can read signs and truly see the road ahead. He has said Tesla is taking the more difficult path, but if he can come up with a better system, he will have mastered true autonomy without the bulky and expensive hardware that sits on top of rival self-driving cars.
“They’re going to have a whole bunch of expensive equipment, most of which makes the car expensive, ugly and unnecessary,” Musk told analysts in February. “And I think they will find themselves at a competitive disadvantage.”
Analysts from BNEF project that Tesla will be able to field Level 4 cars in 2020, although that timetable could be subject to change now that the company entered into a public spat with federal safety investigators over the fatal crash involving Autopilot.
Baidu, China’s largest search engine, has been developing self-driving software for five years. Its Apollo software system for autonomous vehicles is open source, and the company has invited all takers to work together to test cars and collect data. Baidu started testing the first version of the software in late 2017 on public roads and showed off version 2.0 at CES in Las Vegas in January.
The Chinese government in March gave Baidu permission to test cars on 33 public roads in the suburbs of Beijing, making it first on the roads in China. The company’s goal is to test the system in buses made by Chinese manufacturer King Long later this year and, by 2020, to have autonomous vehicles capable of Level 3, meaning the car controls itself at highway speeds and tells the driver to take over in complex situations. Baidu’s initial self-driving cars will be developed with China’s Chery Automobile Co.
Baidu also has a 2021 target to produce Level 4 autonomous cars in partnership with Chinese automaker BAIC Group.
Waymo forecast to capture 60% of driverless market
Group’s dominance by 2030 will force carmakers to adopt its technology, says report
Peter Campbell, Motor Industry Correspondent Financial Times May 10, 2018
Waymo will own 60 per cent of the global self-driving taxi market by 2030, a level of dominance that will force many of the world’s carmakers to adopt its technology or become obsolete, a report predicts.
Investment bank UBS estimated global revenues from self-driving technology by 2030 will be up to $2.8tn, with Alphabet’s Waymo unit the global leader.
Only a select handful of carmakers, such as Daimler and General Motors, will be able to operate their own systems and compete with Waymo’s technology, the group forecasts, following interviews with self-driving car developers, technology groups and academics as well as its own analysis.
Many carmakers are racing to develop self-driving systems in order to get into the market of driverless car-booking, a new segment that is expected to offset car ownership in the world’s major cities in the coming decades.
UBS expects 12 per cent of cars sold in 2030 will be for driverless taxi fleets, with a total of 26m robotaxis in operation. Private car sales will fall by 5 per cent as a result, it expects.
The report predicts that demand for self-driving taxis will take off around 2026, depending on public acceptance of the technology in the face of recent crashes, and regulatory approval, although it will develop at different speeds in different markets.
Sooner or later carmakers that are unable to compete will have to give in and take the Waymo system
The largest revenue pools will be in operating the car-booking networks and monetising time spent by passengers in the cars, it predicts, with building the cars and other services such as mapping or sensors taking a smaller portion of the driverless pie.
But the costs of building a self-driving system from scratch, as well as the challenges of deploying it in cities across the world, will prohibit all but a handful of carmakers from competing in the most lucrative part of the market, it said.
“Unlike most auto players, Google focused on [full self-driving technology] from the very beginning — more than five years before the auto industry started working on it,” the report said.
Waymo has notched up more than 5m miles of physical testing in California, as well as 5bn miles of virtual testing on its computers, putting it far ahead of rivals, it added.
While Waymo is not building its own cars, it is developing the brain of a driverless vehicle, then installing its system on existing cars. Jaguar has agreed to sell 20,000 electric cars to Waymo, while it currently uses Chrysler minivans in its fleet.
Analyst Patrick Hummel, who led the report, said: “Very few players other than Waymo will be able to succeed with having an autonomous vehicle brain in the marketplace.”
Mr Hummel said this will lead to “a shake-out in the industry over time and a reduction in the number” of carmakers. “Sooner or later carmakers that are unable to compete will have to give in and take the Waymo system.”
Those that do provide vehicles for driverless fleets may see their brands disappear from the cars and be relegated to “white label” providers, it added.
“We have dozens of carmakers around the world, it’s a fairly fragmented industry. The majority will be in the loser category.”
Since 2011, farmer Sean Stratman has grown kale, cauliflower, broccoli and squash in Carnation, Washington. Then, a few years ago, he added a new crop to his bounty: knowledge, using drones and the intelligent edge to get near-real-time information about issues like soil moisture and pests. It’s the kind of information that is not only helping him, but could benefit farmers around the world.
“The more data I get, the more I can correlate it to what I’m experiencing in the field, and the greater that understanding becomes,” says Stratman, whose grandfathers were also vegetable farmers. “I’m really optimistic and excited about how our knowledge will continue to grow. I have a feeling that it will become exponential at one point.”
Sean Stratman’s grandfather was also a farmer who “taught me how to value putting time and energy into a piece of land, and seeing what it could bring in return for you.” (Photo by Michael Victor)
The need is critical. In a little more than three decades, by the year 2050, the current world population of 7.6 billion is expected to reach 9.8 billion. Food production will have to increase dramatically to keep up with that growth, according to several studies. But there is a limited amount of additional arable land available for farming.
A new partnership between Microsoft and leading drone maker DJI builds on the work both companies are doing with data and agriculture that could make it easier and more affordable for farmers like Stratman to quickly get the information they need to make crucial decisions about soil moisture and temperature, pesticides and fertilizer. Hours and days spent walking or driving the fields to try to detect problems can be eliminated.
Microsoft’s FarmBeats program sends large amounts of data from ground-based sensors, tractors and cameras to a computer on the farm using TV white spaces, a type of internet connectivity similar to Wi-Fi but with a range of a few miles. TV white spaces are unused TV broadcast spectrum, which is plentiful in rural areas where most farms are located, and where standard internet connections are often spotty.
DJI’s Matrice 200 drone equipped with the Slantrange sensor for agriculture. (Photo courtesy of DJI)
New machine learning algorithms process and analyze the data, and run on the Azure Internet of Things (IoT) Edge, which delivers cloud intelligence locally, on the edge, if you will, of a larger computing network. In Stratman’s case, the “edge” is in the barn, a seemingly old-fashioned setting for a high-tech solution to help solve serious problems.
Using Azure IoT Edge, “You don’t have to send all the data to the cloud; it sits on the farm, and is able to ingest a lot of the data, apply the intelligence on top of it to generate actionable insights for the farmer,” says Ranveer Chandra, the Microsoft principal researcher who leads the FarmBeats program.
“It also makes sure that all of this data, along with the analysis, will sync with the cloud so that we have a complete, end-to-end IoT system that is able to gather a lot of data and make sense of it.”
Ranveer Chandra, Microsoft principal researcher and FarmBeats project lead, checks the soil at Dancing Crow Farm. (Photo by Michael Victor)
DJI’s PC Ground Station Pro software provides on-the-fly generation of orthomosaics, or stitched aerial imagery, which are used by the FarmBeats machine learning algorithms running on the Azure IoT Edge to create detailed heatmaps. Those heatmaps enable farmers to quickly identify crop stress and disease, pest infestation or other issues that may reduce yield. The maps are transmitted using TV white space technology to the Azure IoT edge device located on the farm.
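FarmBeats hasn’t published its exact algorithms, but one textbook crop-stress heatmap, NDVI, shows how little it takes to turn stitched multispectral imagery into actionable pixels. Everything below is the standard formula applied to synthetic data:

    import numpy as np

    # Stand-ins for the red and near-infrared bands of a stitched orthomosaic.
    red = np.random.rand(512, 512)
    nir = np.random.rand(512, 512)

    # Normalized difference vegetation index: healthy vegetation reflects far
    # more near-infrared than red light, so high NDVI usually means healthy crops.
    ndvi = (nir - red) / (nir + red + 1e-9)     # ranges roughly from -1 to 1

    stressed = ndvi < 0.3           # a common (illustrative) stress threshold
    print(f"{stressed.mean():.0%} of pixels flagged for a closer look")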
Jan Gasparic. (Photo courtesy of DJI)
Jan Gasparic, DJI’s head of enterprise partnerships, says the Microsoft partnership means both companies “can go a lot further together because we can leverage information that might be drone-based, but also in conjunction with ground-based or edge-based processes.”
The FarmBeats program uses DJI’s commercially available Phantom 4 Pro drone for the project at the farm Stratman manages, at other FarmBeats locations in the United States – New York, California and North Dakota – and in India. For larger farms, where there is a need for increased battery life and advanced payloads like multispectral sensors, the team is using DJI’s Matrice 200.
“This is the first time where we’ve really taken all these different components, using some of the software we’ve developed, some of the algorithms we’ve developed, our drones and the third-party sensors – and integrated all of that into a wider solution with a partner,” Gasparic says. “In terms of the complexity of what’s involved, in order to leverage all of these unique aspects, this is a first – and it’s really exciting.”
Stratman grew up in Florida, and when one of his grandfathers moved there, he commandeered his grandson’s time and help to tend a large garden.
“He inspired me in the desire for physical work,” Stratman says. “At the time, I hated it. Now that I look back on it, he actually taught me how to value putting time and energy into a piece of land, and seeing what it could bring in return for you. I think that’s when I first got a little bit of an itch for farming, and I didn’t even realize it at the time.”
With a degree in anthropology, Stratman worked as an archaeologist for a few years, focusing on indigenous agricultural techniques and how civilizations “could have lasted for thousands of years in one area utilizing similar plots of land, and the strategies that they were applying in order to make that work. I found that really fascinating early on,” he says.
It wasn’t surprising, then, that when an opportunity arose to manage a small farm and garden owned by a five-star restaurant in Florida, Stratman eagerly signed on.
Farmer Sean Stratman on a tractor at his farm in Carnation, Washington. (Photo by Michael Victor) ---------------------------------------------------------------
Drawn to the Pacific Northwest for its climate and growing conditions, Stratman moved to Washington in 2001, working in agriculture before establishing his own farm, Dancing Crow Farm, about 30 miles east of Seattle, in 2011. He also manages a program for the nonprofit Sno-Valley Tilth, helping others launch their own farm businesses.
Stratman is as grounded as they come – literally and personally. He isn’t drawn to shiny tech for tech’s sake, but he becomes animated when he talks about what Microsoft and DJI are bringing to agriculture.
He’s used the heatmaps to help with everything from his planting strategy – for example, whether the soil temperature is ripe for seed germination – to learning where beavers had built dams along a lengthy drainage ditch, flooding some of his fields.
“This next year, I’m looking at identifying the soil humidity levels that are ideal for various soil working paths rather than putting an implement on my tractor and going out and saying, ‘The conditions are less than ideal or not right for that particular tractor implement,’” he says.
Zerina Kapetanovic, FarmBeats hardware researcher, checks the FarmBeats ground sensors at Dancing Crow Farm. (Photo by Michael Victor) ---------------------------------------------------------------
“It will be really great to look at my FarmBeats program with Azure IoT Edge, and results from the drone and say, ‘Hey, look, my soil humidity is at 40 percent; it’s time to put on the tiller.’ It’s going to be beneficial, saving me time and trouble over doing something the old-fashioned way, the hard way.”
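Encoded as a rule, Stratman’s example is a simple threshold check. The numbers below are illustrative only; the right humidity window depends on the soil and the implement.

```python
# A toy encoding of the rule Stratman describes; the band around his
# 40 percent example is hypothetical and varies by soil and implement.
def ready_to_till(humidity_pct: float, lo: float = 35.0, hi: float = 45.0) -> bool:
    return lo <= humidity_pct <= hi

print(ready_to_till(40.0))  # True: time to put on the tiller
print(ready_to_till(62.0))  # False: too wet to work the soil
```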
Chandra says such precision agriculture techniques are what will enable farmers to use their resources – water, land, fertilizer – even more wisely in the future.
“It will help them become more profitable; they’ll be able to grow more crops because their yields will go up,” he says. “Whatever farming they do will be more sustainable, and it’s better for the environment. And it will help feed the world.”
Miguel Sotomayor | Getty Images ---------------------------------------- It's been an uncharacteristically busy week for Eric Jackson, a public information officer at the Lee County Mosquito Control District in southwestern Florida.
U.S. Transportation Secretary Elaine Chao on Wednesday afternoon named Lee County's mosquito control outfit one of only 10 state, local and tribal government entities to be selected for the Federal Aviation Administration's Unmanned Aircraft Systems Integration Pilot Program.
That means Lee County's mosquito control operations will be able to incorporate drone technology under more relaxed standards than they would otherwise be required to adhere to under current law.
It also means an unlikely spotlight has been shined on the county's pest control efforts.
"I'm hoping after today it'll quiet down. We're getting a lot of media requests, but it's also I'm getting inundated with drone operators that want jobs," Jackson says. "I'm thinking, 'Man, this really made it around the country.'"
The FAA in a statement this week indicated it had received 149 formal proposals from across the U.S. The pilot program is expected to run over the next two and a half years, and feedback from those selected will likely help the federal government decide whether to loosen restrictions on commercial and governmental drone use in the future.
Among the other finalists, the Choctaw Nation of Oklahoma has been granted broader drone approval to monitor crops and livestock herds. Alaskan pipeline inspection efforts are likely to see greater unmanned aircraft use in the months and years ahead. But Lee County was the only finalist recognized that intends to use broader drone approval for pest control.
"We've been doing this for 60 years with aircraft dealing with mosquito issues, so I'm thinking that might have played a part (in our selection)," Jackson says, describing the county's mosquito populations as a potential public health problem. "Our district relies heavily on aerial operations."
Lee County's mosquito control efforts – which span an area that Jackson says includes "a lot of mangrove habitat" – already involve helicopters and small aircraft that monitor and in some cases treat areas with particularly high pest activity.
Jackson says his department has also partnered with the Lee County Hyacinth Control District in the past, making limited use of unmanned drones to "take images of aquatic bodies to see where there's a lot of vegetation."
"Because where you have vegetation crowding out water, sometimes mosquitoes can grow in those," he says.
But with its new recognition from the FAA, the department hopes to make use of a much larger, 1,500-pound drone in its monitoring and pest treatment operations. Its status in the program will allow drone operators to fly at night, beyond visual line of sight and directly over people, potentially at lower altitudes. None of those three operations is permitted under current law.
"We potentially could be using it more for surveillance and in more isolated areas for treatment missions," Jackson says. "We're trying to be as innovative as we can and as efficient as we can. And if this can be used safely, we're open to anything."
But details of how and when an unmanned drone weighing more than an adult grizzly bear will be flying over Lee County are still up in the air. Jackson notes the control district's location in the Sunshine State subjects it to particularly stringent public records laws. He says mosquito control officials will hold a special commissioners meeting later this month to discuss how and when they will take their next steps with the program.
But he says he and his colleagues are honored and excited to have been selected for such a competitive program – even if that means his phone continues to ring off the hook for the time being.
"Really, the whole point of this program is to be able to expand beyond the current regulations to see how this can be used," he says. "We have pilots in the air, and as the airspace becomes more crowded and people start flying above 400 feet and out of line of sight, [we asked ourselves] how can we make sure we have a seat at the table to where we can help draft these regulations to keep our pilots safe."
Too bad the cars equipped with these automated driving features are expensive and therefore exclusive. It’ll take years for this tech to filter down to cheaper cars and the used market, and decades to find its way onto all of the 260 million vehicles already on US roads.
But don’t be too envious of your wealthy fellow drivers. In fact, consider thanking them. By rolling down the highway with their hands in their laps, they may be doing you a favor. New research from the University of Michigan shows that the presence of a single automated and connected car can make driving better for everyone.
It’s all about avoiding the “phantom traffic jams” where everyone gets bunched together. “We found that they’re related to our human behavior,” says Gabor Orosz, who led the research. If one driver hits the brakes for whatever reason, the driver behind does the same—likely harder, to make up for the time it took to notice the brake lights and move a foot to the brake. “That can lead to cascading effects where everyone is braking a little harder, eventually all traffic comes to a halt.” If a sole driver hits the brakes a little too aggressively, the person 10 cars back is forced to a complete stop.
Cruising to the rescue is the connected robo-driver, which uses a 5G connection or short-range radio to chat with the cars or infrastructure up ahead to know things are slowing down well before an eyeball-reliant human driver might.
For this experiment, published in the journal Transportation Research Part C: Emerging Technologies, Orosz and his team took eight cars out onto the quiet roads of southeast Michigan. The vehicles were unremarkable sedans, except for their ability to broadcast their position and velocity (speed plus direction). One car was picked to act as the autonomous car, its onboard computer wired into the brakes and able to apply them exactly as hard as necessary, as early as possible.
Then the team drove around as a convoy, cruising at 55 mph until one driver braked, stomping the pedal harder each time. The humans behind that car hit the brakes hard enough to be thrown against their seat belts. But the connected car in the pack got advance notice that a car several positions ahead was slowing, and it started slowing more gently, not even hard enough to spill a cup of coffee. The human drivers behind that car were also able to brake more gently—and they didn’t get bunched up.
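A toy car-following simulation captures both behaviors. This is not the Michigan team’s model, and every parameter is illustrative: human drivers here track only the car directly ahead with a sluggish response, while one connected car also reacts to a broadcast speed from several positions ahead.

```python
# Toy car-following simulation (not the Michigan team's model).
# All parameters are illustrative.
import numpy as np

DT, STEPS, N, L = 0.1, 600, 8, 5.0   # timestep (s), steps, cars, car length (m)
S0, T, VMAX = 2.0, 1.0, 24.6         # jam gap (m), time gap (s), free speed (m/s)

def desired_speed(gap):
    """Speed a driver is comfortable with at a given bumper-to-bumper gap."""
    return np.clip((gap - S0) / T, 0.0, VMAX)

def simulate(connected_idx=None, lookahead=3):
    v = np.full(N, 20.0)                        # everyone starts at 20 m/s
    x = -np.arange(N) * (L + S0 + T * 20.0)     # equilibrium spacing
    slowest = v[-1]
    for t in range(STEPS):
        lead_target = 10.0 if 100 <= t < 200 else 20.0  # lead brakes for 10 s
        a = np.zeros(N)
        a[0] = np.clip(2.0 * (lead_target - v[0]), -3.0, 2.0)
        for i in range(1, N):
            gap = x[i - 1] - x[i] - L
            a[i] = 0.6 * (desired_speed(gap) - v[i])    # sluggish human response
            if i == connected_idx:
                # Extra term: react to the broadcast speed of a car
                # `lookahead` positions ahead, well before the wave arrives.
                a[i] += 1.0 * (v[max(0, i - lookahead)] - v[i])
        v = np.maximum(v + a * DT, 0.0)
        x = x + v * DT
        slowest = min(slowest, v[-1])
    return slowest

print("last car's lowest speed, humans only:   %.1f m/s" % simulate())
print("last car's lowest speed, one connected: %.1f m/s" % simulate(connected_idx=3))
```

In this toy model, raising `lookahead` beyond a few cars changes little, which is consistent with Orosz’s observation below that three or four cars is plenty.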
Driving more smoothly saved energy too, by as much as 19 percent in the connected car, and 7 percent for the human-driven vehicles behind it. That’s useful for cutting gas consumption, or increasing the range of electric vehicles.
A similar experiment at the University of Illinois in May 2017 showed that if one in 20 cars was at least partially automated, it could eliminate these stop-and-go waves of traffic. Mixed in among the commuting masses, it would act like a Formula 1 pace car, keeping everyone in check. That study showed that even already common technologies like adaptive cruise control, which maintains a set distance from the car in front, benefit the broader driving public.
Orosz’s research shows the added benefits that location broadcasting brings: it gives cars the superhuman ability to see over the horizon, or through the truck in front, whereas a car with radar-based adaptive cruise control can usually react reliably only to the vehicle directly in front. (Some of the more sophisticated systems can see two cars ahead by picking up radar signals reflected off the road under other vehicles.)
“You don’t need to see everyone,” Orosz says. He found that looking three or four cars ahead was plenty—after that the effects don’t get any better.
The Trump administration isn’t mandating vehicle-to-vehicle communications in all new cars, as Barack Obama’s DOT proposed, but some automakers are pushing ahead. General Motors is already putting it into the Cadillac CTS, Mercedes is adding it to the S-Class and E-Class. Several Audi models use vehicle-to-infrastructure systems to tell a driver when a light will turn green, in cities like Las Vegas that have installed the necessary tech. Israeli company Autotalks is developing a communications system for connected motorbikes.
So people who opt to buy connected, automated cars will save hassle and energy, and so will the drivers who come up behind them. Finally, a version of trickle-down economics that really works.
JERUSALEM (Reuters) - Mobileye, Intel Corp’s Israel-based autonomous driving unit, has signed a contract to supply a European automaker with its self-driving technologies for eight million cars, a company official told Reuters.
A general view of a Mobileye autonomous driving test vehicle, at the Mobileye headquarters in Jerusalem, May 15, 2018. Picture taken May 15, 2018. REUTERS/Ronen Zvulun ---------------------------------------------------
Financial terms of the deal and the identity of the automaker were not disclosed.
The deal, one of the largest yet for Mobileye, is a sign of how carmakers and suppliers are accelerating the introduction of features that automate certain driving tasks – such as highway driving and emergency braking – to generate revenue while technology to enable fully automated driving in all conditions is still years away from mass-market deployment.
The deal for the advanced driver assistance systems will begin in 2021, when Intel’s EyeQ5 chip, which is designed for fully autonomous driving, is launched as an upgrade to the EyeQ4 that will be rolled out in the coming weeks, said Erez Dagan, senior vice president for advanced development and strategy at Mobileye.
Intel and Mobileye are competing with several rival chip and machine vision system manufacturers, including Nvidia Corp., to provide the brains and eyes of automated cars.
The future system will be available on a variety of the automaker’s car models that will have partial automation - where the car is automatically driven but the driver must stay alert - as well as models integrating a more advanced system of conditional automation.
Mobileye, bought by Intel last year for $15.3 billion, says some 27 million cars on the road from 25 automakers use some sort of driver assistance system, and that it holds a market share of more than 70 percent.
“By the end of 2019, we expect over 100,000 Level 3 cars with Mobileye installed,” said Amnon Shashua, Mobileye’s chief executive.
In Level 3, the car is self-driving but the driver has about 10 seconds to take over if the system is unable to continue.
Mobileye is working with a number of automakers, such as General Motors - for its Super Cruise system - Nissan, Audi, BMW, Honda, Fiat Chrysler and China’s Nio, to supply its Level 3 technologies by next year.
At its Jerusalem headquarters, Mobileye is also testing a more advanced Level 4 technology in Ford Fusion hybrids with 12 small cameras installed and four of the soon-to-be-released EyeQ4 chips in the trunk. In a test witnessed by Reuters reporters, the cars drove on Jerusalem highways in midday traffic with no driver intervention.
ROBO-TAXIS
Mobileye says that while its Level 4 systems will start production in 2021, many of its technologies are relevant to creating systems that may soon be purchased by consumers.
Shashua said that based on commitments from automakers, self-driving taxis - called robo-taxis - should start hitting roads around 2021.
“When designing our system we are looking at all that can be used today, in a year, in two years and then the robo-taxi,” Shashua said.
He noted that around that time, some of the more expensive luxury cars for personal use, and possibly some medium-priced vehicles, will use the same technologies - for an extra cost of about $12,000 per car.
As a result, in a few years’ time, roads will be shared by human drivers and self-driving cars - which is why safety is paramount, Shashua said. He added that while there are 40,000 fatalities on U.S. roads each year, society won’t accept that number from self-driving cars - though it might accept about 40.
As such, Shashua said, autonomous cars cannot rely on just cameras. To prevent accidents and for the system to make the best driving decisions, it needs to process data from a combination of cameras, high-definition maps, radar and laser scanners called lidar, he said.
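The redundancy argument can be sketched as a voting rule, with made-up detection flags rather than any real Mobileye interface: the planner trusts an object only when independent modalities agree.

```python
# A minimal sketch of sensor redundancy as a voting rule (not Mobileye's
# interface): trust an object only when independent modalities agree.
from dataclasses import dataclass

@dataclass
class Detection:
    camera: bool
    radar: bool
    lidar: bool
    on_hd_map: bool  # e.g., a mapped obstacle or construction zone

def confirmed(d: Detection, min_votes: int = 2) -> bool:
    """Require at least `min_votes` independent sources before the planner
    treats the object as real; single-sensor ghosts get filtered out."""
    return sum([d.camera, d.radar, d.lidar, d.on_hd_map]) >= min_votes

print(confirmed(Detection(camera=True, radar=False, lidar=True, on_hd_map=False)))  # True
```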
Shashua said test vehicles were made to drive like humans, and in Jerusalem they were assertive, given the “driving culture is very assertive.”
“On one hand you want to be safe but on the other hand assertive,” he said, noting that being too hesitant can cause impatience in other drivers and lead to accidents. “In the future, the system will observe other drivers on the road and, after a certain amount of time, adapt to driving conditions ... It’s not unlike a human experience.”
One issue in designing self-driving cars is how to define what is a dangerous situation. “When you look at driving laws, they are comprehensive but not formally defined,” Shashua said, adding that may ultimately be resolved by courts. “We would like to formalize these things in advance to allow machines not to get into dangerous situations to begin with.”
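Shashua and his colleagues have published exactly such a formalization, the Responsibility-Sensitive Safety (RSS) model. As a flavor of it, here is a sketch of RSS’s minimum safe following distance; the response time and braking limits below are illustrative, not Mobileye’s calibrated values.

```python
# Sketch of RSS's minimum safe longitudinal gap. Worst case assumed:
# during response time rho the rear car accelerates at a_max, then brakes
# at only b_min, while the car in front brakes at its hardest, b_max.
# Parameter values are illustrative, not Mobileye's calibrated numbers.
def rss_min_gap(v_rear, v_front, rho=0.5, a_max=3.0, b_min=4.0, b_max=8.0):
    v_peak = v_rear + rho * a_max
    gap = (v_rear * rho + 0.5 * a_max * rho ** 2
           + v_peak ** 2 / (2 * b_min)
           - v_front ** 2 / (2 * b_max))
    return max(gap, 0.0)  # a required gap can never be negative

print(f"{rss_min_gap(25.0, 25.0):.1f} m")  # both cars at 25 m/s (90 km/h)
```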
Reporting by Steven Scheer; Additional reporting by Joe White in Detroit; Editing by Mark Potter