
Technology Stocks: Alphabet Inc. (Google)


To: JakeStraw who wrote (15211) 5/27/2019 3:15:43 AM
From: J.F. Sebastian
2 Recommendations   of 15297
 
I deleted all Google apps from my iPhone recently, save YouTube.

After reading an article about how long Google Maps keeps data and has abused user privacy, I'd had enough.

I do use Gmail, but I use a 3rd party app for it, not the native Gmail iOS client.



From: FUBHO 5/30/2019 6:27:00 AM
   of 15297
 
'A white-collar sweatshop': Google Assistant contractors allege wage theft
Julia Carrie Wong
www.theguardian.com

“Do you believe in magic?” Google asked attendees of its annual developer conference this May, playing the seminal Lovin’ Spoonful tune as an introduction. Throughout the three-day event, company executives repeatedly answered yes while touting new features of the Google Assistant, the company’s version of Alexa or Siri, that can indeed feel magical. The tool can book you a rental car, tell you what the weather is like at your mother’s house, and even interpret live conversations across 26 languages.

But to some of the Google employees responsible for making the Assistant work, the tagline of the conference – “Keep making magic” – obscured a more mundane reality: the technical wizardry relies on massive data sets built by subcontracted human workers earning low wages.

“It’s smoke and mirrors if anything,” said a current Google employee who, as with the others quoted in this story, spoke on condition of anonymity because they were not authorized to speak to the press. “Artificial intelligence is not that artificial; it’s human beings that are doing the work.”

The Google employee works on Pygmalion, the team responsible for producing linguistic data sets that make the Assistant work. And although he is employed directly by Google, most of his Pygmalion co-workers are subcontracted temps who have for years been routinely pressured to work unpaid overtime, according to seven current and former members of the team.

These employees, some of whom spoke to the Guardian because they said efforts to raise concerns internally were ignored, alleged that the unpaid work was a symptom of the workplace culture put in place by the executive who founded Pygmalion. That executive was fired by Google in March following an internal investigation.

But current and former employees also identified Google’s broad reliance on approximately 100,000 temps, vendors and contractors (known at Google as TVCs) for large amounts of the company’s work as a culprit. Google does not directly employ the workers who collect or create the data required for much of its technology, be they the drivers who capture photos for Google Maps’ Street View, the content moderators training YouTube’s filters to catch prohibited material, or the scanners flipping pages to upload the contents of libraries into Google Books.

Having these two tiers of workers – highly paid full-time Googlers and often low-wage and precarious workers contracted through staffing firms – is “corrosive”, “highly problematic”, and “permissive of exploitation”, the employees said.

“It’s like a white-collar sweatshop,” said one current Google employee. “If it’s not illegal, it’s definitely exploitative. It’s to the point where I don’t use the Google Assistant, because I know how it’s made, and I can’t support it.”

An ‘army’ of linguists

The study of language is at the very heart of current advancements in computing. For decades, people have had to work to learn the language of computers, whether they were trying to program a VCR or writing software. Technology such as the Google Assistant reverses the equation: the computer understands natural human speech, in all its variations.


Behind the technology that makes the Google Assistant work is an army of Google-contracted linguists. Photograph: Samuel Gibbs/The Guardian

Take, for example, the straightforward task of asking the Assistant to set a timer to go off in five minutes, a former employee on Pygmalion explained. There are infinite ways that users could phrase that request, such as “Set a timer for five minutes”; “Can you ring the buzzer in five minutes?”; or “Configurar una alarma para cinco minutos.” The Assistant has to be able to convert the spoken request into text, then interpret the user’s intended meaning to produce the desired outcome, all practically instantaneously.
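
As a toy illustration of that "interpret the intended meaning" step, the sketch below in Python collapses several surface phrasings into one structured intent. It is purely illustrative: Google's Assistant uses learned models, and these phrasing patterns and names are made up for the example.

    import re

    # Hypothetical phrasing patterns; a real assistant learns this mapping
    # from annotated data rather than from hand-written regexes.
    NUMBER_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}
    PATTERNS = [
        r"set a timer for (?P<n>\w+) minutes?",
        r"(?:can you )?ring the buzzer in (?P<n>\w+) minutes?",
    ]

    def parse(utterance):
        """Map many surface phrasings to one structured intent, or None."""
        text = utterance.lower().strip(" ?.!")
        for pattern in PATTERNS:
            m = re.fullmatch(pattern, text)
            if m:
                n = m.group("n")
                minutes = NUMBER_WORDS.get(n, int(n) if n.isdigit() else None)
                if minutes is not None:
                    return {"intent": "SET_TIMER", "minutes": minutes}
        return None

    print(parse("Set a timer for five minutes"))           # {'intent': 'SET_TIMER', 'minutes': 5}
    print(parse("Can you ring the buzzer in 5 minutes?"))   # same structured result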

The technology that makes this possible is a form of machine learning. For a machine learning model to “understand” a language, it needs vast amounts of text that has been annotated by linguists to teach it the building blocks of human language, from parts of speech to syntactic relationships.
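
A minimal sketch of what that annotated text can look like, using Universal Dependencies-style tags; the schema here is illustrative, not Google's actual format:

    # One annotated sentence: each token carries a part-of-speech tag and
    # the index of its syntactic head (-1 marks the sentence root).
    # Production data sets hold millions of rows like these.
    annotated_sentence = [
        # (index, token,   pos,    head)
        (0, "set",     "VERB", -1),  # root of the sentence
        (1, "a",       "DET",   2),  # determiner of "timer"
        (2, "timer",   "NOUN",  0),  # object of "set"
        (3, "for",     "ADP",   5),  # case marker on "minutes"
        (4, "five",    "NUM",   5),  # numeric modifier of "minutes"
        (5, "minutes", "NOUN",  0),  # oblique argument of "set"
    ]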

Enter Pygmalion. The team was born in 2014, the brainchild of longtime Google executive Linne Ha, to create the linguistic data sets required for Google’s neural networks to learn dozens of languages. The “painstaking” nature of the labor required to create this “handcrafted” data was featured in a 2016 Wired article about Pygmalion’s “massive team of PhD linguists”. (Ha did not respond to a request for comment.)

From the beginning, Google planned to build the team with just a handful of full-time employees while outsourcing the vast majority of the annotation work to an “army” of subcontracted linguists around the world, documents reviewed by the Guardian and interviews with staff show.

The appetite for Pygmalion’s hand-labeled data, and the size of the team, has only increased over the years. Today, it includes 40 to 50 full-time Googlers and approximately 200 temporary workers contracted through agencies, including Adecco, a global staffing firm. The contract workers include associate linguists, who are tasked with annotation, and project managers, who oversee their work.

All of the contract workers have at least a bachelor’s degree in linguistics, though many have master’s degrees and some have doctorates. In addition to annotating data, the temp workers write “grammars” for the Assistant, complex and technical work that requires considerable expertise and involves Google’s code base. Their situation is comparable to adjunct professors on US college campuses: they are highly educated and highly skilled, performing work crucial to the company’s mission, and shut out of the benefits and security that come with a tenured position.

“Imagine going from producing PhD-level research and pushing forward the state of knowledge in the world to going to an annotation type job, where all you’re doing all day is annotating data; it’s very click, click, click,” said a former project manager on Pygmalion. “Everyone was trying to prove themselves because everyone was trying to work for Google. The competitive edge that happened among colleagues as TVCs was severe.”

‘The definition of wage theft’

This dynamic created the incentive for temps to perform unpaid work. Managers took advantage by making it clear they wouldn’t approve overtime for contract workers, while also assigning unrealistic amounts of work, current and former employees said.

The pressure to complete assignments was “immense”, said one Googler. “In this mixed stream of messages, I think a lot of people had to make their own calls, and given the pressure, I think people made different calls.”

The Googler described the overall effect as “gaslighting”, and recalled receiving messages from management such as, “If the TVCs want to work more, let them work more.” All seven current and former employees interviewed by the Guardian said they had either experienced or witnessed contract workers performing unpaid overtime.

“To my knowledge, no one ever said, you need to work TVCs above their contracts, but it was set up so that it was the only way to get the expected work done, and if anyone raised concerns they would be openly mocked and belittled,” said another current Googler.

“The 40-hour thing was just not respected,” said a former associate linguist. “It was made clear to us that we were never to log more than 40 hours, but we were never told not to work more than 40 hours.

“The work that they assign often takes more than 40 hours,” they added. “Every week you fill out a timesheet. One person one time did submit overtime, and they were chastised. No punishment, but definitely told not to work overtime.”

A spokeswoman for Google said that it was company policy that temp workers must be paid for all hours worked, even if overtime was not approved in advance.

“Working off the clock is the very definition of wage theft,” said Beth Ross, a longtime labor and employment attorney. Ross said that both Google and Adecco could face liability for unpaid wages and damages under federal and state law.

‘They dangle that carrot’

The associate linguist was one of several who said that they took the position at Google in hopes that they could eventually convert to a full-time position. Several members of Pygmalion are former contract workers, including the current head of the team, who took over when Ha, the executive who founded the team, moved on to another project.

“People did [unpaid overtime] because they were dangled the opportunity of becoming a full-time employee, which is against company policy,” a current Googler said. “There’s a particular leveraging of people’s desire to become full time,” said another.

“When I was hired, I was very explicitly told that there is no ladder,” a current contract worker said. “‘This is not a temp-to-hire position. There is no moving up’ … But the reality on the team is very much one where there is clearly a ladder. A certain percentage of the associate linguists will get project manager. A certain percentage of project managers get converted to full time. We watch it happen, and they dangle that carrot.”


Google employees enjoy perks such as free meals, on-site yoga classes, free massages and generous benefits packages. Photograph: Lucy Nicholson/REUTERS

One Googler who successfully converted to a full-time position after working as a temp on Pygmalion said that at times the bargain was even made explicit. In April 2017, they recalled, Ha attended a meeting of outsourced Pygmalion project managers in London and “explain[ed] that the position was designed for conversion and that we should be proactive in asking for more work in order to achieve this”.

The Google spokeswoman said that it is company policy not to make any commitment about employment or conversion to temps, and that Googlers who manage temps are required to take a mandatory training on this and other policies related to TVCs.

‘Why do it?’

The disparity in wages and benefits between Google employees and contract workers is stark. Alphabet recently reported median pay of $246,804, and employees enjoy perks such as free meals, on-site yoga classes, free massages and generous benefits.

Amid increasing activism by Googlers and contract workers, Google recently announced improved minimum standards for US-based contract workers, including a minimum of eight paid sick days, “comprehensive” health insurance, and a minimum wage of at least $15 an hour by 2020. (A full-time job at that wage pays $31,200 a year; by comparison, Google charges its own employees $38,808 a year to place an infant in its onsite daycare facilities.)
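
The annual figure in that parenthetical is straight arithmetic on a standard full-time schedule:

    # $15/hour at full time: 40 hours/week for 52 weeks
    annual_pay = 15 * 40 * 52
    print(annual_pay)  # 31200, i.e. $31,200 a year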

Wages for contract workers on the Pygmalion team are well above the new minimum standard, usually starting around $25 an hour for associate linguists and going up to $35 an hour for project managers. But contractors complain about subpar benefits and other indignities.

The former project manager described Adecco’s benefits plan as “the worst health insurance I have ever had”. A current contract worker earning less than $60,000 annually said they were paying $180 each month in premiums for an individual plan with a $6,000 deductible. For families, the deductible is $12,000, according to documents reviewed by the Guardian. Google declined to comment on Adecco’s pay and benefits.


Google workers have walked out over the company’s handling of sexual misconduct claims and its treatment of temporary workers. Photograph: Noah Berger/AP

Googlers earn significantly more, and those on individual plans contribute between $0 and $53 for their health insurance and have a much lower deductible ($1,350), according to documents reviewed by the Guardian. Googlers with families pay up to $199 every two weeks, with a $2,700 deductible.

Others complained of a lack of trust and respect. In 2018, Google revoked the ability for contractors on Pygmalion to work while riding Google’s wifi-equipped commuter buses, creating frustration for those who spent three to four hours a day traveling to the company’s Mountain View campus and could no longer work and count that time toward their shift. Google said it works to ensure that temps, vendors and contractors do not have over-broad access to sensitive internal information for security reasons.

“Why do it?” a former associate linguist said of working unpaid overtime under these conditions. “I didn’t want to lose the job. Having Google on your résumé is important to a career … Later on, I came to find out that you can’t say ‘Google’ on your résumé. You have to say ‘Google by Adecco’.”

A weekend assignment

Both Google and Adecco recently launched investigations into the allegations of unpaid overtime in Pygmalion.

“Our policy is clear that all temporary workers must be paid for any overtime worked,” said Eileen Naughton, Google’s vice-president of people operations, in a statement to the Guardian. “If we find that anyone isn’t properly paid, we make sure they are compensated appropriately and take action against any Google employee who violates this policy.”

The current investigation was initiated after the company received a report of a possible policy violation in February 2019, the Google spokeswoman said. The company will provide appropriate compensation if need be and will take action up to and including terminations if policy violations are found, she added.

The spokeswoman also acknowledged that concerns about unpaid overtime were raised to human resources in 2017, but said that the company investigated and did not find any such cases at the time.

“We are committed to ensuring all employees are compensated for all time worked,” said Mary Beth Waddill, a spokeswoman for Adecco. “Our longstanding policy is that every employee is required to report time accurately – even if that time isn’t pre-approved – and they should feel encouraged to do so by their managers. If we learn that this is not the case, we will work with Google to take appropriate action.”

On Friday 17 May, Adecco sent emails to current and former Pygmalion temps. Recipients were asked whether they reported all the hours they worked, and, if not, to estimate how many hours they worked unpaid. The emails requested a response by Monday 20 May, though a Google spokeswoman said this week that the deadline has been extended.

A Google employee reacted: “They’re asking people to work on the weekend to recall unbilled overwork. It seems like it’s designed to discourage people from responding.”

Indeed, one former contract worker who left the company many months ago said they received the email but did not bother to respond. “After I left, I didn’t keep records of the hours I worked,” they said. “Even if I wanted to report overtime now, how could I?”

  • Do you work in the tech industry? Do you have concerns about workplace issues? Contact the author: julia.wong@theguardian.com or julia.carrie.wong@protonmail.com






From: Glenn Petersen 5/31/2019 9:21:18 PM
2 Recommendations   of 15297
 
Justice Department is reportedly preparing antitrust probe of Google

Jordan Novet @jordannovet
  • Alphabet has faced antitrust probes before, but not from the U.S. Justice Department.
  • In 2018 Alphabet had $136.8 billion in revenue.

The U.S. Justice Department is planning an antitrust investigation into Alphabet’s Google subsidiary, the Wall Street Journal reported on Friday. The effort will touch on web search and other parts of Google, the report said.

The report comes amid discussion from politicians and the public about whether large technology companies should be broken up. The Justice Department launched a major antitrust case against Microsoft in 1998 that led to several rules the company had to follow for years.

Alphabet, which racked up $136.8 billion in revenue in 2018, has faced antitrust pressure in the past.

In 2010, the company received an antitrust complaint from the European Commission regarding ranking of shopping search results and ads, which resulted in Google being fined $2.7 billion in 2017, according to Alphabet’s latest annual report. In 2016, the EC complained about practices related to Google’s Android operating system, leading to a $5.1 billion charge in 2018.

And in March the European Union ordered Google to pay around $1.7 billion because of advertising behavior.

Sen. Elizabeth Warren, who announced her presidential candidacy in December, has pressed for breaking up tech companies like Google. In a widely read post published on Medium in March, Warren said she was interested in appointing regulators who would be interested in undoing what she called “anti-competitive mergers,” including Google’s DoubleClick, Nest and Waze. “Current antitrust laws empower federal regulators to break up mergers that reduce competition,” she wrote.

Google and the Justice Department didn’t immediately respond to requests for comment.

cnbc.com



From: FUBHO 6/1/2019 9:54:56 AM
   of 15297
 
Google's PageRank patent has expired
(patents.google.com)

patents.google.com
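
For a refresher on what the now-expired patent covered: PageRank scores pages by repeatedly redistributing rank across the link graph until the values stabilize. A minimal sketch follows; the four-page graph is made up for illustration, and 0.85 is the damping factor from the original paper.

    # page -> pages it links to (toy graph)
    links = {
        "a": ["b", "c"],
        "b": ["c"],
        "c": ["a"],
        "d": ["c"],
    }
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    d = 0.85  # damping factor: chance a surfer follows a link vs. jumping anywhere

    for _ in range(50):  # power iteration until ranks stabilize
        new_rank = {p: (1 - d) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = d * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank

    print({p: round(r, 3) for p, r in rank.items()})  # "c", with the most inbound links, ranks highest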



To: Glenn Petersen who wrote (15229) 6/1/2019 8:51:44 PM
From: Sr K
1 Recommendation   of 15297
 
It's the lead story in the WSJ.

Justice Department Is Preparing Antitrust Investigation of Google

Probe would closely examine Google’s practices related to search, other businesses


Updated June 1, 2019 1:06 p.m. ET

WASHINGTON—The Justice Department is gearing up for an antitrust investigation of Alphabet Inc.’s Google, a move that could present a major new layer of regulatory scrutiny for the search giant, according to people familiar with the matter.

The department’s antitrust division in recent weeks has been laying the groundwork for the probe, the people said. The Federal Trade Commission, which shares antitrust authority with the department, previously conducted a broad investigation of Google but closed it in 2013 ...



From: Glenn Petersen 6/2/2019 11:03:59 AM
   of 15297
 
DeepMind Can Now Beat Us at Multiplayer Games, Too

Chess and Go were child’s play. Now A.I. is winning at capture the flag. Will such skills translate to the real world?

By Cade Metz
New York Times
May 30, 2019



Credit: DeepMind
___________________

Capture the flag is a game played by children across the open spaces of a summer camp, and by professional video gamers as part of popular titles like Quake III and Overwatch.

In both cases, it’s a team sport. Each side guards a flag while also scheming to grab the other side’s flag and bring it back to home base. Winning the game requires good old-fashioned teamwork, a coordinated balance between defense and attack.

In other words, capture the flag requires what would seem to be a very human set of skills. But researchers at an artificial intelligence lab in London have shown that machines can master this game, too, at least in the virtual world.

In a paper published on Thursday in Science (and previously available on the website arXiv before peer review), the researchers reported that they had designed automated “agents” that exhibited humanlike behavior when playing the capture the flag “game mode” inside Quake III. These agents were able to team up against human players or play alongside them, tailoring their behavior accordingly.

“They can adapt to teammates with arbitrary skills,” said Wojciech Czarnecki, a researcher with DeepMind, a lab owned by the same parent company as Google.

Through thousands of hours of game play, the agents learned very particular skills, like racing toward the opponent’s home base when a teammate was on the verge of capturing a flag. As human players know, the moment the opposing flag is brought to one’s home base, a new flag appears at the opposing base, ripe for the taking.

DeepMind’s project is part of a broad effort to build artificial intelligence that can play enormously complex, three-dimensional video games, including Quake III, Dota 2 and StarCraft II. Many researchers believe that success in the virtual arena will eventually lead to automated systems with improved abilities in the real world.

For instance, such skills could benefit warehouse robots as they work in groups to move goods from place to place, or help self-driving cars navigate en masse through heavy traffic. “Games have always been a benchmark for A.I.,” said Greg Brockman, who oversees similar research at OpenAI, a lab based in San Francisco. “If you can’t solve games, you can’t expect to solve anything else.”

Until recently, building a system that could match human players in a game like Quake III did not seem possible. But over the past several years, DeepMind, OpenAI and other labs have made significant advances, thanks to a mathematical technique called “reinforcement learning,” which allows machines to learn tasks by extreme trial and error.

By playing a game over and over again, an automated agent learns which strategies bring success and which do not. If an agent consistently wins more points by moving toward an opponent’s home base when a teammate is about to capture a flag, it adds this tactic to its arsenal of tricks.
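
In its simplest tabular form, that trial-and-error loop looks like the sketch below. It is illustrative only: DeepMind's agents use deep neural networks and a far richer training setup, and the state and action names here would come from the game.

    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1  # learning rate, discount, exploration
    q_values = defaultdict(float)           # (state, action) -> estimated value

    def choose_action(state, actions):
        """Mostly exploit the best-known action, sometimes explore at random."""
        if random.random() < EPSILON:
            return random.choice(actions)
        return max(actions, key=lambda a: q_values[(state, a)])

    def learn(state, action, reward, next_state, actions):
        """Nudge (state, action) toward the observed reward plus best future value."""
        best_next = max(q_values[(next_state, a)] for a in actions)
        target = reward + GAMMA * best_next
        q_values[(state, action)] += ALPHA * (target - q_values[(state, action)])

A tactic that keeps paying off, like racing toward the opponent's base when a teammate is about to score, accumulates a high value and gets chosen more and more often.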

In 2016, using the same fundamental technique, DeepMind researchers built a system that could beat the world’s top players at the ancient game of Go, the Eastern version of chess. Many experts had thought this would not be accomplished for another decade, given the enormous complexity of the game.

First-person video games are exponentially more complex, particularly when they involve coordination between teammates. DeepMind’s autonomous agents learned capture the flag by playing roughly 450,000 rounds of it, tallying about four years of game experience over weeks of training. At first, the agents failed miserably. But they gradually picked up the nuances of the game, like when to follow teammates as they raided an opponent’s home base.

Since completing this project, DeepMind researchers also designed a system that could beat professional players at StarCraft II, a strategy game set in space. And at OpenAI, researchers built a system that mastered Dota 2, a game that plays like a souped-up version of capture the flag. In April, a team of five autonomous agents beat a team of five of the world’s best human players.

Last year, William Lee, a professional Dota 2 player and commentator known as Blitz, played against an early version of the technology that could play only one-on-one, not as part of a team, and he was unimpressed. But as the agents continued to learn the game and he played them as a team, he was shocked by their skill.

“I didn’t think it would be possible for the machine to play five-on-five, let alone win,” he said. “I was absolutely blown away.”

As impressive as such technology has been among gamers, many artificial-intelligence experts question whether it will ultimately translate to solving real-world problems. DeepMind’s agents are not really collaborating, said Mark Riedl, a professor at Georgia Tech College of Computing who specializes in artificial intelligence. They are merely responding to what is happening in the game, rather than trading messages with one another, as human players do. (Even mere ants can collaborate by trading chemical signals.)

Although the result looks like collaboration, the agents achieve it because, individually, they so completely understand what is happening in the game.

“How you define teamwork is not something I want to tackle,” said Max Jaderberg, another DeepMind researcher who worked on the project. “But one agent will sit in the opponent’s base camp, waiting for the flag to appear, and that is only possible if it is relying on its teammates.”

Games like this are not nearly as complex as the real world. “3-D environments are designed to make navigation easy,” Dr. Riedl said. “Strategy and coordination in Quake are simple.”

Reinforcement learning is ideally suited to such games. In a video game, it is easy to identify the metric for success: more points. (In capture the flag, players earn points according to how many flags are captured.) But in the real world, no one is keeping score. Researchers must define success in other ways.

This can be done, at least with simple tasks. At OpenAI, researchers have trained a robotic hand to manipulate an alphabet block as a child might. Tell the hand to show you the letter A, and it will show you the letter A.

At a Google robotics lab, researchers have shown that machines can learn to pick up random items, such as Ping-Pong balls and plastic bananas, and toss them into a bin several feet away. This kind of technology could help sort through bins of items in huge warehouses and distribution centers run by Amazon, FedEx and other companies. Today, human workers handle such tasks.

As labs like DeepMind and OpenAI tackle bigger problems, they may begin to require ridiculously large amounts of computing power. As OpenAI’s system learned to play Dota 2 over several months — more than 45,000 years of game play — it came to rely on tens of thousands of computer chips. Renting access to all those chips cost the lab millions of dollars, Mr. Brockman said.

DeepMind and OpenAI, which is funded by various Silicon Valley kingpins including Khosla Ventures and the tech billionaire Reid Hoffman, can afford all that computing power. But academic labs and other small operations cannot, said Devendra Chaplot, an A.I. researcher at Carnegie Mellon University. The worry, for some, is that a few well-funded labs will dominate the future of artificial intelligence.

But even the big labs may not have the computing power needed to move these techniques into the complexities of the real world, which may require stronger forms of A.I. that can learn even faster. Though machines can now win capture the flag in the virtual world, they are still hopeless across the open spaces of summer camp — and will be for quite a while.

nytimes.com



To: Glenn Petersen who wrote (15232) 6/2/2019 1:25:31 PM
From: TimF
   of 15297
 
If they don't limit its actions, it's winning mainly by being faster. Computers can simulate clicks faster than whole teams of people can actually click. A relatively stupid AI could still win just by speed.

If they do limit clicks and awareness (some AIs in some games see the whole map, at least the scouted part of it, all at once, rather than just a limited view of one screen at a time, with some small delay to see another part of the map on your screen), then you have a better case that the AI is actually good, not just that it has a better interface than the person does.

I would think the computer would do well in 5-on-5 when it's playing all 5 (assuming it gets 5 times the click limits); that gets rid of a lot of the coordination problems. Maybe second best when 5 different computers are playing (they don't have to deal with the kind of human interactions/emotions that can happen in a multiplayer team game). It would be interesting to see how it does as 1 of the 5, especially in a game that typically requires more complex communication.
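
A sketch of the kind of cap TimF describes, holding an agent to a human-scale actions-per-second budget so wins come from decisions rather than interface speed. The class and numbers are made up for illustration:

    import time

    class ActionLimiter:
        """Drop agent actions that exceed a human-scale clicks-per-second budget."""

        def __init__(self, max_actions_per_second=5.0):
            self.min_interval = 1.0 / max_actions_per_second
            self.last_action = float("-inf")

        def try_act(self, action):
            """Run the action only if the rate budget allows; otherwise drop it."""
            now = time.monotonic()
            if now - self.last_action >= self.min_interval:
                self.last_action = now
                action()
                return True
            return False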



To: Sr K who wrote (15231) 6/3/2019 9:01:18 AM
From: Ron
   of 15297
 
More on the Justice Department probe of Google
DOJ takes step toward a bruising antitrust battle with Google
politico.com



To: Ron who wrote (15234) 6/5/2019 12:47:28 AM
From: Sr K
1 Recommendation   of 15297
 
from the NYT

6/5/2019

Kara Swisher

The People Screaming for Blood Have No Idea How Tech Actually Works

Suddenly, regulators' guns are blazing, but it looks thoughtless and is likely to prove pointless.



From: Glenn Petersen 6/5/2019 3:48:53 PM
1 Recommendation   of 15297
 
YouTube just banned supremacist content, and thousands of channels are about to be removed

YouTube is trying to reduce the prevalence of extremist content on the platform

By Casey Newton @CaseyNewton
The Verge
Jun 5, 2019, 12:00pm EDT

Illustration by Alex Castro / The Verge
________________________

YouTube is changing its community guidelines to ban videos promoting the superiority of any group as a justification for discrimination against others based on their age, gender, race, caste, religion, sexual orientation, or veteran status, the company said today. The move, which will result in the removal of all videos promoting Nazism and other discriminatory ideologies, is expected to result in the removal of thousands of channels across YouTube.

“The openness of YouTube’s platform has helped creativity and access to information thrive,” the company said in a blog post. “It’s our responsibility to protect that, and prevent our platform from being used to incite hatred, harassment, discrimination and violence.”

The changes announced on Wednesday attempt to improve its content moderation in three ways. First, the ban on supremacists will remove Nazis and other extremists who advocate segregation or exclusion based on age, gender, race, religion, sexual orientation, or veteran status. In addition to those categories, YouTube is adding caste, which has significant implications in India, and “well-documented violent events,” such as the Sandy Hook elementary school shooting and 9/11. Users are no longer allowed to post videos saying those events did not happen, YouTube said.

Second, YouTube said it would expand efforts announced in January to reduce the spread of what it calls “borderline content and harmful misinformation.” The policy, which applies to videos that flirt with violating the community guidelines but ultimately fall short, aims to limit the promotion of those videos through recommendations. YouTube said the policy, which affects videos including flat-earthers and peddlers of phony miracle cures, had already decreased the number of views that borderline videos receive by 50 percent. In the future, the company said, it will recommend videos from more authoritative sources, like top news channels, in its “next watch” panel.

Finally, YouTube said it would restrict channels from monetizing their videos if they are found to “repeatedly brush up against our hate speech policies.” Those channels will not be able to run ads or use Super Chat, which lets channel subscribers pay creators directly for extra chat features. The last change comes after BuzzFeed reported that the paid commenting system had been used to fund creators of videos featuring racism and hate speech.

In 2017, YouTube took a step toward reducing the visibility of extremists on the platform when it began placing warnings in front of some videos. But it has come under continued scrutiny for the way that it recruits followers for racists and bigots by promoting their work through recommendation algorithms and prominent placement in search results. In April, Bloomberg reported that videos made by far-right creators represented one of the most popular sections of YouTube, along with music, sports, and video games.

At the same time, YouTube and its parent company, Alphabet, are under growing political pressure to rein in the bad actors on the platform. The Christchurch attacks in March led to widespread criticism of YouTube and other platforms for failing to immediately identify and remove videos of the shooting, and several countries have proposed laws designed to force tech companies to act more quickly. Meanwhile, The New York Times found this week that YouTube algorithms were recommending videos featuring children in bathing suits to people who had previously watched sexually themed content — effectively generating playlists for pedophiles.

YouTube did not disclose the names of any channels that are expected to be affected by the change. The company declined to comment on a current controversy surrounding my Vox colleague Carlos Maza, who has repeatedly been harassed on the basis of his race and sexual orientation by prominent right-wing commentator Steven Crowder. (After I spoke with the company, it responded to Maza that it plans to take no action against Crowder’s channel.)

Still, the move is likely to trigger panic among right-wing YouTube channels. In the United States, conservatives have promoted the idea that YouTube and other platforms discriminate against them. Despite the fact that there is no evidence of systematic bias, Republicans have held several hearings over the past year on the subject. Today’s move from YouTube is likely to generate a fresh round of outrage, along with warnings that we are on the slippery slope toward totalitarianism.

Of course, as the Maza case has shown, YouTube doesn’t always enforce its own rules. It’s one thing to make a policy, and it’s another to ensure that a global workforce of underpaid contractors accurately understands and applies it. It will be fascinating to see how the new policy, which prohibits “videos alleging that a group is superior in order to justify ... segregation or exclusion,” will affect discussion of immigration on YouTube. The company says that political debates about the pros and cons of immigration are still allowed, but a video saying that “Muslims are diseased and shouldn’t be allowed to migrate to Europe” will be banned.

The changed policy goes into effect today, YouTube said, and enforcement will “ramp up” over the next several days.

theverge.com


