Technology Stocks: Alphabet Inc. (Google)


From: FUBHO 6/1/2019 9:54:56 AM
   of 15372
 
Google's PageRank patent has expired
(patents.google.com)

patents.google.com



To: Glenn Petersen who wrote (15229) 6/1/2019 8:51:44 PM
From: Sr K
1 Recommendation   of 15372
 
It's the lead story in the WSJ.

Justice Department Is Preparing Antitrust Investigation of Google

Probe would closely examine Google’s practices related to search, other businesses

[WSJ video] Despite new initiatives from Google and Facebook, messing with privacy controls is like playing a carnival game. Knock out one way for advertisers to track you, and they quickly find another way to do it. WSJ's Joanna Stern heads to Coney Island to explain. Photo: Kenny Wassus

Updated June 1, 2019 1:06 p.m. ET

WASHINGTON—The Justice Department is gearing up for an antitrust investigation of Alphabet Inc.’s Google, a move that could present a major new layer of regulatory scrutiny for the search giant, according to people familiar with the matter.

The department’s antitrust division in recent weeks has been laying the groundwork for the probe, the people said. The Federal Trade Commission, which shares antitrust authority with the department, previously conducted a broad investigation of Google but closed it in 2013 ...



From: Glenn Petersen 6/2/2019 11:03:59 AM
   of 15372
 
DeepMind Can Now Beat Us at Multiplayer Games, Too

Chess and Go were child’s play. Now A.I. is winning at capture the flag. Will such skills translate to the real world?

By Cade Metz
New York Times
May 30, 2019



Credit: DeepMind
___________________

Capture the flag is a game played by children across the open spaces of a summer camp, and by professional video gamers as part of popular titles like Quake III and Overwatch.

In both cases, it’s a team sport. Each side guards a flag while also scheming to grab the other side’s flag and bring it back to home base. Winning the game requires good old-fashioned teamwork, a coordinated balance between defense and attack.

In other words, capture the flag requires what would seem to be a very human set of skills. But researchers at an artificial intelligence lab in London have shown that machines can master this game, too, at least in the virtual world.

In a paper published on Thursday in Science (and previously available on the website arXiv before peer review), the researchers reported that they had designed automated “agents” that exhibited humanlike behavior when playing the capture the flag “game mode” inside Quake III. These agents were able to team up against human players or play alongside them, tailoring their behavior accordingly.

“They can adapt to teammates with arbitrary skills,” said Wojciech Czarnecki, a researcher with DeepMind, a lab owned by the same parent company as Google.

Through thousands of hours of game play, the agents learned very particular skills, like racing toward the opponent’s home base when a teammate was on the verge of capturing a flag. As human players know, the moment the opposing flag is brought to one’s home base, a new flag appears at the opposing base, ripe for the taking.

DeepMind’s project is part of a broad effort to build artificial intelligence that can play enormously complex, three-dimensional video games, including Quake III, Dota 2 and StarCraft II. Many researchers believe that success in the virtual arena will eventually lead to automated systems with improved abilities in the real world.

For instance, such skills could benefit warehouse robots as they work in groups to move goods from place to place, or help self-driving cars navigate en masse through heavy traffic. “Games have always been a benchmark for A.I.,” said Greg Brockman, who oversees similar research at OpenAI, a lab based in San Francisco. “If you can’t solve games, you can’t expect to solve anything else.”

Until recently, building a system that could match human players in a game like Quake III did not seem possible. But over the past several years, DeepMind, OpenAI and other labs have made significant advances, thanks to a mathematical technique called “reinforcement learning,” which allows machines to learn tasks by extreme trial and error.

By playing a game over and over again, an automated agent learns which strategies bring success and which do not. If an agent consistently wins more points by moving toward an opponent’s home base when a teammate is about to capture a flag, it adds this tactic to its arsenal of tricks.
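
A purely illustrative sketch of that trial-and-error loop, written as tabular Q-learning on a made-up one-dimensional "corridor" task (DeepMind's actual agents use deep networks, population-based training and large-scale self-play, so the environment, constants and hyperparameters below are assumptions for illustration only):

# Toy example of learning by trial and error -- NOT DeepMind's setup.
# The agent walks a tiny corridor; reaching the last cell stands in for
# grabbing the flag, and moves that earn points get reinforced.
import random

N_STATES = 8                  # positions in the corridor; the "flag" sits in the last cell
ACTIONS = [-1, +1]            # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    done = (nxt == N_STATES - 1)          # reached the flag
    reward = 1.0 if done else 0.0         # points only arrive on success
    return nxt, reward, done

def pick_action(state):
    # Mostly exploit what has worked before, occasionally explore.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for episode in range(2000):               # play the game over and over again
    state, done = 0, False
    while not done:
        action = pick_action(state)
        nxt, reward, done = step(state, action)
        # Nudge the value estimate toward reward + discounted future value.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# Learned greedy actions per cell (ideally all +1, i.e. straight toward the flag).
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])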

In 2016, using the same fundamental technique, DeepMind researchers built a system that could beat the world’s top players at the ancient game of Go, the Eastern version of chess. Many experts had thought this would not be accomplished for another decade, given the enormous complexity of the game.

First-person video games are exponentially more complex, particularly when they involve coordination between teammates. DeepMind’s autonomous agents learned capture the flag by playing roughly 450,000 rounds of it, tallying about four years of game experience over weeks of training. At first, the agents failed miserably. But they gradually picked up the nuances of the game, like when to follow teammates as they raided an opponent’s home base.

Since completing this project, DeepMind researchers also designed a system that could beat professional players at StarCraft II, a strategy game set in space. And at OpenAI, researchers built a system that mastered Dota 2, a game that plays like a souped-up version of capture the flag. In April, a team of five autonomous agents beat a team of five of the world’s best human players.

Last year, William Lee, a professional Dota 2 player and commentator known as Blitz, played against an early version of the technology that could play only one-on-one, not as part of a team, and he was unimpressed. But as the agents continued to learn the game and he played them as a team, he was shocked by their skill.

“I didn’t think it would be possible for the machine to play five-on-five, let alone win,” he said. “I was absolutely blown away.”

As impressive as such technology has been among gamers, many artificial-intelligence experts question whether it will ultimately translate to solving real-world problems. DeepMind’s agents are not really collaborating, said Mark Riedl, a professor at Georgia Tech College of Computing who specializes in artificial intelligence. They are merely responding to what is happening in the game, rather than trading messages with one another, as human players do. (Even mere ants can collaborate by trading chemical signals.)

Although the result looks like collaboration, the agents achieve it because, individually, they so completely understand what is happening in the game.

“How you define teamwork is not something I want to tackle,” said Max Jaderberg, another DeepMind researcher who worked on the project. “But one agent will sit in the opponent’s base camp, waiting for the flag to appear, and that is only possible if it is relying on its teammates.”

Games like this are not nearly as complex as the real world. “3-D environments are designed to make navigation easy,” Dr. Riedl said. “Strategy and coordination in Quake are simple.”

Reinforcement learning is ideally suited to such games. In a video game, it is easy to identify the metric for success: more points. (In capture the flag, players earn points according to how many flags are captured.) But in the real world, no one is keeping score. Researchers must define success in other ways.
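
To make that contrast concrete, a hypothetical sketch (both functions and their arguments are invented for illustration, not any lab's actual code): in a game the score is handed to the agent, while for a real-world task the researchers have to write the definition of success down themselves.

def game_reward(event):
    # Capture the flag keeps score for you: points arrive with game events.
    return {"flag_captured": 1.0, "flag_lost": -1.0}.get(event, 0.0)

def grasp_and_toss_reward(object_pos, bin_pos):
    # No scoreboard here: success has to be hand-defined by researchers,
    # e.g. "the object ended up inside (or at least near) the bin".
    dist = sum((o - b) ** 2 for o, b in zip(object_pos, bin_pos)) ** 0.5
    return 1.0 if dist < 0.05 else -0.1 * dist   # small penalty shaped by distance

# e.g. grasp_and_toss_reward((0.10, 0.20, 0.00), (0.11, 0.21, 0.00)) -> 1.0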

This can be done, at least with simple tasks. At OpenAI, researchers have trained a robotic hand to manipulate an alphabet block as a child might. Tell the hand to show you the letter A, and it will show you the letter A.

At a Google robotics lab, researchers have shown that machines can learn to pick up random items, such as Ping-Pong balls and plastic bananas, and toss them into a bin several feet away. This kind of technology could help sort through bins of items in huge warehouses and distribution centers run by Amazon, FedEx and other companies. Today, human workers handle such tasks.

As labs like DeepMind and OpenAI tackle bigger problems, they may begin to require ridiculously large amounts of computing power. As OpenAI’s system learned to play Dota 2 over several months — more than 45,000 years of game play — it came to rely on tens of thousands of computer chips. Renting access to all those chips cost the lab millions of dollars, Mr. Brockman said.

DeepMind and OpenAI, which is funded by various Silicon Valley kingpins including Khosla Ventures and the tech billionaire Reid Hoffman, can afford all that computing power. But academic labs and other small operations cannot, said Devendra Chaplot, an A.I. researcher at Carnegie Mellon University. The worry, for some, is that a few well-funded labs will dominate the future of artificial intelligence.

But even the big labs may not have the computing power needed to move these techniques into the complexities of the real world, which may require stronger forms of A.I. that can learn even faster. Though machines can now win capture the flag in the virtual world, they are still hopeless across the open spaces of summer camp — and will be for quite a while.

nytimes.com



To: Glenn Petersen who wrote (15232) 6/2/2019 1:25:31 PM
From: TimF
   of 15372
 
If they don't limit its actions, it's winning mainly by being faster. Computers can simulate clicks faster than whole teams of people can actually click. A relatively stupid AI could still win just by speed.

If they do limit clicks and awareness (some AIs in some games see the whole map, or at least the scouted part of it, all at once, rather than a limited view of one screen at a time with a small delay to pan to another part of the map), then you have a better case that the AI is genuinely good, not just that it has a better interface than the person does.

I would think the computer would do well in 5-on-5 when it's playing all 5 (assuming it gets 5 times the click limit), since that gets rid of a lot of the coordination problems. It would probably do second best when 5 different computers are playing (they don't have to deal with the kind of human interactions/emotions that can come up in a multiplayer team game). It would be interesting to see how it does as 1 of the 5, especially in a game that typically requires more complex communication.
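
A rough sketch of what such a click (action-rate) limit could look like, assuming a hypothetical agent object with an act() method; this is not an interface DeepMind or OpenAI has published:

import time

class ActionRateLimiter:
    # Caps an agent's actions per minute so it can't win on raw clicking speed.
    def __init__(self, agent, max_actions_per_minute=180):
        self.agent = agent
        self.min_interval = 60.0 / max_actions_per_minute
        self.last_action_time = float("-inf")

    def act(self, observation):
        now = time.monotonic()
        if now - self.last_action_time < self.min_interval:
            return None                      # forced to idle this tick
        self.last_action_time = now
        return self.agent.act(observation)   # assumes the wrapped agent exposes .act()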



To: Sr K who wrote (15231) 6/3/2019 9:01:18 AM
From: Ron
   of 15372
 
More on the Justice Department probe of Google
DOJ takes step toward a bruising antitrust battle with Google
politico.com



To: Ron who wrote (15234) 6/5/2019 12:47:28 AM
From: Sr K
1 Recommendation   of 15372
 
from the NYT

6/5/2019

Kara Swisher

The People Screaming for Blood Have No Idea How Tech Actually Works

Suddenly, regulators' guns are blazing, but it looks thoughtless and is likely to prove pointless.



From: Glenn Petersen 6/5/2019 3:48:53 PM
1 Recommendation   of 15372
 
YouTube just banned supremacist content, and thousands of channels are about to be removed

YouTube is trying to reduce the prevalence of extremist content on the platform

By Casey Newton @CaseyNewton
The Verge
Jun 5, 2019, 12:00pm EDT

Illustration by Alex Castro / The Verge
________________________

YouTube is changing its community guidelines to ban videos promoting the superiority of any group as a justification for discrimination against others based on their age, gender, race, caste, religion, sexual orientation, or veteran status, the company said today. The move, which will result in the removal of all videos promoting Nazism and other discriminatory ideologies, is expected to result in the removal of thousands of channels across YouTube.

“The openness of YouTube’s platform has helped creativity and access to information thrive,” the company said in a blog post. “It’s our responsibility to protect that, and prevent our platform from being used to incite hatred, harassment, discrimination and violence.”

The changes announced on Wednesday attempt to improve YouTube's content moderation in three ways. First, the ban on supremacists will remove Nazis and other extremists who advocate segregation or exclusion based on age, gender, race, religion, sexual orientation, or veteran status. In addition to those categories, YouTube is adding caste, which has significant implications in India, and “well-documented violent events,” such as the Sandy Hook elementary school shooting and 9/11. Users are no longer allowed to post videos saying those events did not happen, YouTube said.

Second, YouTube said it would expand efforts announced in January to reduce the spread of what it calls “borderline content and harmful misinformation.” The policy, which applies to videos that flirt with violating the community guidelines but ultimately fall short, aims to limit the promotion of those videos through recommendations. YouTube said the policy, which affects videos from flat-earthers and peddlers of phony miracle cures, had already decreased the number of views that borderline videos receive by 50 percent. In the future, the company said, it will recommend videos from more authoritative sources, like top news channels, in its “next watch” panel.

Finally, YouTube said it would restrict channels from monetizing their videos if they are found to “repeatedly brush up against our hate speech policies.” Those channels will not be able to run ads or use Super Chat, which lets channel subscribers pay creators directly for extra chat features. The last change comes after BuzzFeed reported that the paid commenting system had been used to fund creators of videos featuring racism and hate speech.

In 2017, YouTube took a step toward reducing the visibility of extremists on the platform when it began placing warnings in front of some videos. But it has come under continued scrutiny for the way that it recruits followers for racists and bigots by promoting their work through recommendation algorithms and prominent placement in search results. In April, Bloomberg reported that videos made by far-right creators represented one of the most popular sections of YouTube, along with music, sports, and video games.

At the same time, YouTube and its parent company, Alphabet, are under growing political pressure to rein in the bad actors on the platform. The Christchurch attacks in March led to widespread criticism of YouTube and other platforms for failing to immediately identify and remove videos of the shooting, and several countries have proposed laws designed to force tech companies to act more quickly. Meanwhile, The New York Times found this week that YouTube algorithms were recommending videos featuring children in bathing suits to people who had previously watched sexually themed content — effectively generating playlists for pedophiles.

YouTube did not disclose the names of any channels that are expected to be affected by the change. The company declined to comment on a current controversy surrounding my Vox colleague Carlos Maza, who has repeatedly been harassed on the basis of his race and sexual orientation by prominent right-wing commentator Steven Crowder. (After I spoke with the company, it responded to Maza that it plans to take no action against Crowder’s channel.)

Still, the move is likely to trigger panic among right-wing YouTube channels. In the United States, conservatives have promoted the idea that YouTube and other platforms discriminate against them. Despite the fact that there is no evidence of systematic bias, Republicans have held several hearings over the past year on the subject. Today’s move from YouTube is likely to generate a fresh round of outrage, along with warnings that we are on the slippery slope toward totalitarianism.

Of course, as the Maza case has shown, YouTube doesn’t always enforce its own rules. It’s one thing to make a policy, and it’s another to ensure that a global workforce of underpaid contractors accurately understands and applies it. It will be fascinating to see how the new policy, which prohibits “videos alleging that a group is superior in order to justify ... segregation or exclusion,” will affect discussion of immigration on YouTube. The company says that political debates about the pros and cons of immigration are still allowed, but a video saying that “Muslims are diseased and shouldn’t be allowed to migrate to Europe” will be banned.

The changed policy goes into effect today, YouTube said, and enforcement will “ramp up” over the next several days.

theverge.com





From: Sr K 6/6/2019 11:32:22 AM
   of 15372
 
Reuters
6/6/2019

Google to buy analytics software firm Looker for $2.6 bln



From: JakeStraw 6/7/2019 8:13:09 AM
1 Recommendation   of 15372
 
Google Announces Acquisition of Looker In A Move to Support Business Intelligence
forbes.com



From: JakeStraw 6/7/2019 8:13:34 AM
   of 15372
 
Google plans to press play on its Stadia cloud gaming service in November
usatoday.com
Google has shed some more light on its upcoming cloud-based video game service: an entry price, launch window and some of the games you will be able to play.

Google's Stadia will become available in November with an entry price of $129.99 for the Founders Edition package (pre-order on Google's Stadia site), which includes a game controller, Chromecast Ultra streaming device and a three-month subscription.

Cloud gaming promises to make it easier for consumers to play online games, as it sidesteps the need for pricey gaming PCs or console video game systems.
