Alphabet's Dream of an 'Everyday Robot' Is Just Out of Reach
Google's parent is infusing robots with artificial intelligence so they can help with tasks like lending a supporting arm to the elderly, or sorting trash.
November 21, 2019
These robots sort trash all day, every day, at Alphabet's X lab, in a project aimed at creating machines that help out at home. Photograph: Lauryn A. Hill
During a recent visit to Alphabet’s X lab, I drained my coffee and left the compostable cup on a tray marked “Cans & Bottles.” The transgression was soon mended. Twenty minutes later, a wheeled, one-armed, chest-high robot whirred along and inspected the cup with the 3D cameras inside its flattened head. Its arm reached out and used two sturdy yellow fingers to move the misplaced cup onto the adjacent green tray labeled “Compostables.”
The trash-literate robot—part of a project called Everyday Robot—has been in development for years, but X just began discussing it publicly. A few of the machines make the rounds of trash stations used by staff on the second floor of X’s home in a converted mall in Mountain View, practicing their navigation skills and sorting recycling from compostables and landfill waste. Other robots of the same design work at a second Alphabet building nearby.
Trash sorting is not the project’s final goal. “We're going to try to build robots that can, you know, live amongst us and help us out in our daily lives,” says Hans Peter Brondmo, the Norwegian executive with tousled iron-colored hair leading the project. That’s the project’s moonshot, to use the lab’s self-mythologizing lingo for projects such as stratospheric internet balloons, the ill-fated face computer Google Glass, and flying wind turbines.
Sorting trash was chosen as a convenient challenge to test the project’s approach to creating more capable robots. It’s using artificial intelligence software developed in collaboration with Google to make robots that learn complex tasks through on-the-job experience. The hope is to make robots less reliant on human coding for their skills, and capable of adapting quickly to complex new tasks and environments.
The robot that moved WIRED’s misplaced coffee cup used a control system honed by five months of experience from dozens of robots sorting trash five days a week. X says its moonshot-hunting employees usually put around 20 percent of trash in the wrong place. The robots can reduce that to less than 4 percent, helping Alphabet meet city of Mountain View recycling goals.
“We haven't solved the whole problem, but we've made enough progress that we have high confidence that we're onto something,” says Brondmo. As he speaks, robots occasionally buzz past his office on their rounds between trash stations—illustrating both progress and their limits; each is accompanied by at least one X employee, on hand to hit the red stop button on the robot’s neck if something goes wrong.
The Everyday Robot project originated from a multimillion-dollar mess. In 2013, Google executive Andy Rubin stepped down from leading the company’s Android mobile software division and went on a robot spending spree with the company checkbook. Google acquired a gaggle of startups with technologies ranging from full humanoids to industrial robotic arms to the prancing legged creations of MIT spinout Boston Dynamics.
Rubin never publicly articulated a clear strategy for that mechanical menagerie. The problem was left to others when he departed Google in late 2014, an exit later reported to have been caused by sexual assault allegations against him.
Brondmo joined X in 2016, after Alphabet’s leaders decided the lab was the best home for much of its disjointed robotics talent and technology. (Boston Dynamics was sold to Japanese conglomerate SoftBank in 2017.) X’s leadership created multiple moonshot projects from Google’s robot leftovers. Everyday Robot, led by Brondmo, is the first to become public.
The project’s heart, on the second floor of the X building, could be seen as a satire on office life. Mixed in with the desks of X engineers, in a prime spot near a window overlooking turning foliage, nearly 30 gray, one-armed robots toil at individual workstations. Each stands before three trays filled with trash, and spends the day sorting it into trays for recycling, compost, and landfill. When a robot has put everything in its place, it lifts a handle on each tray to tip the sorted trash into a bin below, and a human supervisor places a new collection of refuse to sort. X engineers call it the playpen.
The Sisyphean trash sorting is a test of X’s plan to make robots useful by having them learn from experience. Robots traditionally follow specific instructions written by human coders. That works in controlled environments such as factories, but a robot assisting people in a home or office faces too many varying circumstances for coders to anticipate and respond to them all. “It just becomes this game of whack-a-mole,” says Benjie Holson, a bearded software engineer with a tin robot on his shirt, as the robots sort away in the playpen. “Our big bet is to write programs that have the robot play whack-a-mole by practicing in the field.”
Google’s artificial intelligence research group helped stake that bet. It specializes in machine learning—algorithms that pick up skills from example data—and began applying it to robot control around five years ago. X engineers collaborated on the project and hosted the hardware.
The first fruit of the collaboration was dubbed the arm farm: Fourteen industrial robot arms with simple grippers in front of trays of miscellaneous items such as pens, plush toys, and paint brushes. Researchers wrote some initial code to direct the robots to grasp the objects and set them doing it over and over. Data from their successes and failures fed machine learning algorithms that gradually refined the robots’ abilities. After two months and 800,000 attempts to grab things, the system succeeded in grasping objects more than 80 percent of the time.
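The arm farm’s trial-and-error recipe can be sketched in miniature. The toy below is illustrative only, and every name in it is a hypothetical stand-in: the real system trained a deep network on camera images, while here a simple success-rate table does the learning, with grasp “angle buckets” standing in for the robot’s choices.

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

def attempt_grasp(angle_bucket):
    """Stand-in for one physical trial: in this toy world, some
    approach angles simply work better than others."""
    success_rate = [0.2, 0.5, 0.9, 0.4][angle_bucket]
    return random.random() < success_rate

# Phase 1: try grasps over and over and log every outcome,
# the way the arm farm logged its 800,000 attempts.
log = []  # (angle_bucket, succeeded) pairs
for _ in range(2000):
    a = random.randrange(4)
    log.append((a, attempt_grasp(a)))

# Phase 2: "learn" from the log by estimating the empirical
# success rate of each choice.
stats = {a: [0, 0] for a in range(4)}  # bucket -> [successes, tries]
for a, ok in log:
    stats[a][1] += 1
    stats[a][0] += int(ok)
best = max(stats, key=lambda a: stats[a][0] / stats[a][1])

# The learned policy now favors whatever worked most often.
print("best angle bucket:", best)
```

The essential loop is the same as the article describes: act, record success or failure, and let the accumulated record, not hand-written rules, decide what the robot does next.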
X and Google later added a technique called reinforcement learning, used in the company’s historic defeat of a champion at the board game Go, and combined data from the arm farm with experience earned by digital doubles of the robots in a simulated lab. That cut the amount of real-world data needed: with the simulated experience added, less than a day’s work by seven real robots was enough for the system to grasp objects more than 90 percent of the time.
The robots in X’s playpen now run a refined version of that approach. Each day, they sort and grasp trash over and over. Each night, virtual robots in simulated buildings, including a double of X’s lab, gather more experience, and the results from both efforts are used to tweak the control system’s algorithms. After quality control checks to avoid rogue robots, the fleet gets an upgraded control system every week or two.
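The day/night cycle described above has a simple overall shape, sketched below. This is a hypothetical skeleton, not X’s system: every function here (collect_real_episodes, retrain, and so on) is an invented placeholder, and a version counter stands in for the control network’s parameters.

```python
# Illustrative skeleton of a fleet-learning loop: real robots gather
# data by day, simulators gather far more by night, and a quality
# gate decides whether the retrained controller ships to the fleet.

def collect_real_episodes(policy):
    """Daytime: physical robots sort trash with the current policy,
    logging observations, actions, and success/failure."""
    return [("real_episode", policy)]  # placeholder data

def collect_sim_episodes(policy, n=10_000):
    """Nighttime: virtual robots in simulated buildings gather far
    more experience than the physical fleet could."""
    return [("sim_episode", policy)] * n

def retrain(policy, episodes):
    """Tweak the controller on the combined real + simulated data
    (stand-in for an actual learning update)."""
    return policy + 1

def passes_quality_checks(policy):
    """Gate deployment so a rogue controller never reaches robots
    (stand-in for offline evaluation)."""
    return True

policy = 0  # version counter standing in for network weights
for night in range(14):  # roughly the article's week-or-two cadence
    data = collect_real_episodes(policy) + collect_sim_episodes(policy)
    candidate = retrain(policy, data)
    if passes_quality_checks(candidate):
        policy = candidate

print("policy version after two weeks:", policy)
```

The point of the structure is the gate at the end: learning happens continuously, but an updated controller only reaches the physical fleet after it clears the quality checks.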
Since June, the robots have reduced misplaced trash in the items they sort to 3.5 percent. The process has seen the robots develop surer ways to place their fingers on items like cups and cans, and the ability to recover if they knock an item over. Spending time at the playpen shows some of their techniques have surprising sophistication. Robots in the playpen sometimes use swiping or stirring motions to move items around and make them easier to see and grab.
You don’t have to spend long with X’s robots to know that they’re not ready for everyday service. In the playpen, one grasps thin air instead of the bowl it appeared to be aiming for; undeterred, it goes through the motions of putting it down. Others bump at the trays' edges, or fumble objects. An engineer supervising the work jumps up wielding a screwdriver after one robot loses a finger.
The robot was custom-made for the project and combines high-end components, such as a lidar, or 3D laser scanner, developed by Alphabet’s self-driving unit Waymo, with broad use of plastic to make future commercial versions more affordable. It’s also still a work in progress. “Because we're early in the process, they don't always work the way we want them to,” says Justine Rembisz, a robot designer whose workshop one floor below the playpen is strewn with disembodied heads and collections of lone plastic fingers.
X has a robot forensics team that works full time on puzzling out the machines’ failures. One recent case required figuring out why the robots refused to move when they were introduced to the second Alphabet building for testing. It turned out light from the building’s skylights caused the machines’ sensors to hallucinate holes in the floor. “The robot often has symptoms that are kind of confusing,” says Sarah Coe, a lead on the forensics team, over the faint clunking of trash from the playpen nearby.
The biggest puzzle of all is whether the machine learning that X is betting on really can make robots capable of many different everyday tasks. “Everyone has the same intuition,” says Pieter Abbeel, a professor at UC Berkeley and cofounder of Covariant, a startup that seeks to apply robot learning in industrial and commercial settings. “You learn to sort trash and now you’re faster at learning the next thing, maybe setting the table.”
Despite promising results from X and others, no one has yet proven that intuition is true. “There is no hard evidence for a lot of transfer between tasks in robotics,” Abbeel says. “Maybe people haven’t set up a big enough experiment to make it materialize.”
Brondmo says one of his team’s priorities for 2020 is demonstrating that the months spent learning to sort trash will help his robots pick up other tasks more quickly. Asked how much longer after that it might be before Everyday Robot’s machines can be useful helpers, he talks about how they might one distant day assist people like his mother, who recently turned 81 and depends on four visits a day from caregivers.
“The first thing she'll say when I call her is ‘When are the robots coming,’” Brondmo says. The question is in jest and so is his answer. “I say, ‘Well it’ll probably be a few more years.’”