From: Glenn Petersen, 11/21/2019 11:12:17 AM
 
Google Wants to Do Business With the Military—Many of Its Employees Don’t

By Joshua Brustein and Mark Bergen
Bloomberg
November 21, 2019

In early November several dozen experts from the American military industrial complex—including senior officers, defense contracting executives, and think tank advisers—gathered at a hotel a few blocks from the Capitol to discuss artificial intelligence software. While everyone ate lunch, General Jack Shanahan, head of the Joint Artificial Intelligence Center—JAIC, or the “Jake,” as it’s known—sat onstage in his dress uniform and chatted with two civilians in suits: Eric Schmidt, Google’s former chief executive officer, and Kent Walker, its chief legal officer.

The appearance with a high-ranking military officer was a coup for Google. Over the previous two years, the company and its parent, Alphabet Inc., had been criticized relentlessly for being insufficiently patriotic. Its perceived infractions: One, the 2017 decision to open an artificial intelligence lab in Beijing when many in the U.S. had come to see the development of AI as a national priority on par with the Manhattan Project. Two, the 2018 decision, in the face of pressure from employees, to withdraw from Project Maven, a secret government program to use commercial AI software to analyze images from military drones.

In March, General Joseph Dunford Jr., the chairman of the Joint Chiefs of Staff, complained to a Senate panel that Google was “indirectly benefiting the Chinese military.” Then in July, Peter Thiel, President Trump’s most prominent Silicon Valley supporter, called Google “seemingly treasonous” and suggested it had been infiltrated by Chinese spies. The following day, Trump more or less endorsed this view, praising Thiel on Twitter and promising an investigation. (The administration walked this back later, saying it had no concerns about Google’s work in China.)

There was a time when Google might have worn its unpopularity in Washington as a badge of honor. But the company is hitting middle age now, with $140 billion in annual revenue and a desire to expand into new lines of business. That’s made military contracts enticing to Google’s leadership, which sees defense work as an important stepping stone to more business in the $200 billion market for cloud services. Google’s more idealistic employees are alarmed by this and see the company drifting from its old “don’t be evil” ethos.

Several months after walking away from Maven, Google declined to bid on a $10 billion contract, JEDI. (The gratuitous Star Wars reference stands for Joint Enterprise Defense Infrastructure.) But it was working furiously to repair its relationship with the Defense Department. This spring executives from its cloud computing division held a series of dinner parties in Washington, inviting current and former employees from national security agencies. “Their message was, ‘We’ve got a bad rap. We want to work with you,’” says James Lewis of the Center for Strategic & International Studies, who attended one of the dinners. Google also ran a Super Bowl ad highlighting a search engine feature that can help veterans find jobs.

Sitting next to General Shanahan onstage, Walker continued the charm offensive, recounting his experience growing up on military bases and expressing frustration that anyone would question his employer’s commitment to national security. “That was a decision focused on a discrete contract,” he said, referring to Google’s pulling out of Project Maven. It was “not a broader statement about our willingness or history of working with the Department of Defense.” Google declined to make Walker or other executives available for this article, which is based on interviews with a dozen current and former Google employees and 20 people close to the military’s work on AI, as well as other military contractors and activists at other companies.

Shanahan professed himself—and, by extension, the Pentagon as a whole—satisfied with Google, a message echoed privately by military figures. A senior Defense official says the company is actively pursuing contracts issued by the JAIC. But mistrust remains. Portions of the company’s employee base are in a state approaching open rebellion, and senior military officials worry that Google is susceptible to pressure. In passing conversation, officers joke about canceling their Gmail accounts to avoid aiding the enemy. “I don’t know who they’d put on a defense project,” says a Senate aide, expressing a concern that Google employees aren’t supportive enough of the U.S. government to be reliable. “Frankly, I don’t trust them.”


[Photos: Dunford, Walker, and Shanahan]
____________

It’s easy to trace any novel political controversy to Trump, but Google reached a subtler turning point a year and a half before the 2016 election. On April 23, 2015, Amazon.com Inc. first disclosed the financial performance of its cloud computing division. In a quarterly financial report, the company said Amazon Web Services brought in $1.6 billion in revenue, was growing at a 50% annual rate, and was much more profitable than Amazon’s retail business.

At the time, almost all of Google’s revenue came from advertising. Its futuristic forays into self-driving cars and smart cities had yet to produce much revenue, and its wilder projects—like the one to cure death—were basically glorified science experiments. Some veteran Googlers described Amazon’s cloud computing announcement as a reminder of how far behind the company was in a business that, by all rights, it should dominate.

Cloud computing involves building giant data centers and developing software to help big organizations automatically sort, share, and analyze data. Google had been making this kind of software since the late 1990s, when founders Larry Page and Sergey Brin began writing algorithms to create an index of web pages for their search engine. Over the years the company focused on AI—a trend that accelerated in 2015, when Sundar Pichai, who once called the technology “more profound” than fire or electricity, replaced Page as CEO.

The following year, Google formed its Cloud AI unit, hiring well-known scientists such as Fei-Fei Li, a computer vision whiz from Stanford, to promote the technology as vital for a wide range of industries. Google Cloud executives saw the U.S. Defense Department and its $700 billion annual budget as a potential marquee client—a way to signal it could build more than free search engines and web-based email software.

On the other hand, many Googlers were increasingly hostile to the idea of any kind of government contracting, especially after Trump suggested he might build a registry of Muslims, and then, after assuming office, issued an executive order denying entry to people from a handful of mostly Muslim countries. Employees signed pledges not to help build any technology to enable immigration crackdowns. And they rushed to public protests. Brin showed up at one demonstration at San Francisco International Airport, and Pichai seemed sympathetic, too. “It’s something you should never compromise on,” he told a group of 2,000 employees at a protest on Jan. 30, 2017.

Even as Google executives expressed public disapproval of the White House’s immigration policies, the company’s cloud division was redesigning its infrastructure with an eye toward winning over the military. One major challenge was that Google had integrated all its data centers into a single system. That was convenient for supporting its search engine, but it made working with classified government data impossible. The U.S. generally requires a computing architecture known as air gapping, which involves physically isolated servers with software written so it doesn’t interact with the broader network. When Amazon won a large CIA contract in 2013, it segregated agency data from that of other clients.

Retrofitting Google’s cloud to enable air gapping was a labor-intensive process that involved dozens of distinct teams, according to one person who worked on it. There was nothing inherently controversial about the changes, but some employees objected to doing so to enable military partnerships. A handful of senior Google engineers—the Group of Nine, as they came to be known—refused to work on the project, laying the groundwork for a broader revolt.

That would come when details of Project Maven started to leak throughout the company in January 2018. At first the project was discussed behind closed doors and on employee-only Google+ pages. (Google shut down the much-maligned consumer version of its social network in April, but a version for internal business communications lives on.) Then Liz Fong-Jones, a site reliability engineer for Google Cloud, asked in an internal post whether the military might use Google’s software to help orchestrate a drone strike on a particular person or group. The outrage among employees was swift. Fong-Jones declined to comment.

Google tried to quell concerns by arguing that it was just sifting through surveillance footage, not helping with combat decisions. Executives also cited the small value of the Maven contract, about $9 million. But subsequent reports in the tech blog Gizmodo showed that Google expected revenue from Maven to eventually rise to $250 million. And engineers examining the code found lines of software intended to identify cars, which they interpreted as evidence that Google was indeed helping target combat strikes.

Google and the military have maintained that Maven isn’t a weapons program; recently, Shanahan said the drones involved weren’t even armed. But Jack Poulson, then a computer scientist at the company, says these denials are meaningless because the intelligence produced by the program could contribute to combat operations. “The line from the company was that there was no lethal implication,” Poulson says. “After that, the goal post was shifted,” he says, adding that executives then argued that better data would reduce casualties in conflict situations.

The debate over Maven brought more scrutiny to Pichai’s unbridled enthusiasm for AI. Employees began pointing out—in internal message boards and, sometimes, in public on Twitter—all the ways AI could go wrong when used to determine who should get a bank loan, to surveil the public, or to categorize digital photographs of people with different skin tones. They saw Google trading in its original, idealistic mission—“organize the world’s information”—for something more mercenary.

The conflict centered largely on the company’s cloud unit, which had financial incentives that differed from those of the consumer businesses. “The question is, Who’s your user?” says Meredith Whittaker, who worked at Google for more than a decade and became one of its fiercest critics. “Back during search, the user was an individual human, and Google built its reputation around putting the user first. Now for the infrastructure business, which is a cloud business, the user is oil and gas companies, the user is the DoD. Those are lines of revenue that are going to be hard to leave on the table.” By April 2018, 4,000 people—roughly 5% of total full-time staff—had signed a petition denouncing Maven. A smaller number resigned.

Google attempted to appease its employees without backing away from the Pentagon. Staff members and Defense Department representatives held a series of meetings. Two people who attended one of them recall a squabble breaking out when Whittaker raised ethical objections to Maven. Other Google employees began to argue energetically in defense of the program. Watching a prospective contractor argue the ethics of defense contracting was new for the Pentagon staffers. “It was super awkward for everybody,” one attendee recalls. Whittaker declined to comment on any internal meetings.

Publicly, Google surrendered to its dissidents, announcing in June 2018 that it would stop work on Maven once its contract expired. Later that year it announced it wouldn’t pursue JEDI, the $10 billion cloud computing contract. Amazon, Microsoft, and Oracle competed fiercely for the business, which Microsoft won in October 2019.

It’s not clear Google could have put forth a serious bid, because the company now says it lacked security certifications that most of its competitors had already obtained. But the official reason it gave for the decision—that JEDI might violate its ethical principles—reinforced critics’ view of Google. “They basically acquiesced to a woke segment of their workforce,” complains Republican Senator Tom Cotton of Arkansas, a U.S. Army veteran who sits on the Senate’s Committee on Armed Services.

Cotton says his office has communicated with Google since it pulled out of Maven, but he doesn’t believe the company can convincingly commit to taking on other military contracts given its internal dynamics. He argues that civilian agencies should avoid dealing with Google as well. “I’d tell them to turn around and get the hell out,” he says.

Other companies seem to have taken such threats to heart. Employee protests have become a regular occurrence on tech campuses, but most major companies have chosen to ignore any blowback rather than cancel work on politically sensitive issues. Amazon still provides facial-recognition software to law enforcement, and Microsoft hasn’t retreated from a plan to build augmented-reality headsets for soldiers. To executives at Google’s competitors, its response to the Maven protesters served as a cautionary tale of what not to do.

By this summer, Google’s protest movement was showing signs of strain. In July, Whittaker and another prominent critic, Claire Stapleton, announced they were leaving the company after each had complained publicly that it was retaliating against internal critics. (Whittaker by then was working at AI Now, a nonprofit focused on ethical questions related to technology.) Google denies the accusation and says it’s always fostered technical and ethical debates. Yet its management has taken a number of steps to counteract employee protests, and has even instituted new guidelines discouraging workers from talking about politics at the office.

This has had the effect of further damaging trust between Google and the restive parts of its workforce. In November the company said it had fired an employee for leaking details about colleagues to the media and placed two others on leave for misusing internal data. But internal sources described the disciplinary actions as a way to punish employee activists.

There’s widespread suspicion among activists within Google that it continues to do this work in secret. Employees have tried to call attention to Alphabet-backed startups that may be pursuing government contracts and have attempted to cut off certain lines of work before they begin. This summer, a Bloomberg Businessweek reporter asked an employee activist whether Google was planning to respond to a proposal by U.S. Customs and Border Protection seeking new cloud services. The person said activists at Google weren’t even aware of the CBP project. Soon after the conversation, a group of employees set up an account on Medium and posted an open letter citing the CBP proposal and demanding that Google not pursue such contracts. “We have only to look to IBM’s role in working with the Nazis during the Holocaust to understand the role that technology can play in automating mass atrocity,” the letter argued. About 1,500 people at Google eventually signed it.

The company and CBP are currently in a trial period, and activists are hoping to pressure Google to reject a commercial contract when the pilot expires, which they believe will happen this spring, according to one employee involved. Workers have also begun pressuring the company to stop working with the fossil fuel industry.

Google’s leadership hasn’t responded directly to those calls. Instead, the company has introduced ethical principles governing AI, including a promise not to use it on “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.” It’s also set up panels to review the technology. Last month the Defense Innovation Board, a Pentagon-sponsored panel that Schmidt leads, released its own AI principles, which track closely with Google’s.

Poulson, who left the company in September 2018, says these lists of AI principles and the boards who debate them will always be insufficient, because they treat Google’s work with the military as something to be smoothed over with a few technical tweaks. Like many of the thorniest questions facing Silicon Valley today, Google’s relationship with the military doesn’t hinge on how its advanced technology is built but on the values that determine how it’s used.

There seems to be little chance that activists like Poulson will convert Google’s leaders, including Pichai, Page, and Brin, to accept their view. At the event in Washington, Walker, the chief legal officer, said Google was pursuing higher security certifications so it could work more closely with the Defense Department on other projects. “I want to be clear,” he said. “We are a proud American company.”

bloomberg.com