|From: Glenn Petersen||6/2/2019 11:03:59 AM|
|DeepMind Can Now Beat Us at Multiplayer Games, Too|
Chess and Go were child’s play. Now A.I. is winning at capture the flag. Will such skills translate to the real world?
By Cade Metz
New York Times
May 30, 2019
Capture the flag is a game played by children across the open spaces of a summer camp, and by professional video gamers as part of popular titles like Quake III and Overwatch.
In both cases, it’s a team sport. Each side guards a flag while also scheming to grab the other side’s flag and bring it back to home base. Winning the game requires good old-fashioned teamwork, a coordinated balance between defense and attack.
In other words, capture the flag requires what would seem to be a very human set of skills. But researchers at an artificial intelligence lab in London have shown that machines can master this game, too, at least in the virtual world.
In a paper published on Thursday in Science (and previously available on the website arXiv before peer review), the researchers reported that they had designed automated “agents” that exhibited humanlike behavior when playing the capture the flag “game mode” inside Quake III. These agents were able to team up against human players or play alongside them, tailoring their behavior accordingly.
“They can adapt to teammates with arbitrary skills,” said Wojciech Czarnecki, a researcher with DeepMind, a lab owned by the same parent company as Google.
Through thousands of hours of game play, the agents learned very particular skills, like racing toward the opponent’s home base when a teammate was on the verge of capturing a flag. As human players know, the moment the opposing flag is brought to one’s home base, a new flag appears at the opposing base, ripe for the taking.
DeepMind’s project is part of a broad effort to build artificial intelligence that can play enormously complex, three-dimensional video games, including Quake III, Dota 2 and StarCraft II. Many researchers believe that success in the virtual arena will eventually lead to automated systems with improved abilities in the real world.
For instance, such skills could benefit warehouse robots as they work in groups to move goods from place to place, or help self-driving cars navigate en masse through heavy traffic. “Games have always been a benchmark for A.I.,” said Greg Brockman, who oversees similar research at OpenAI, a lab based in San Francisco. “If you can’t solve games, you can’t expect to solve anything else.”
Until recently, building a system that could match human players in a game like Quake III did not seem possible. But over the past several years, DeepMind, OpenAI and other labs have made significant advances, thanks to a mathematical technique called “reinforcement learning,” which allows machines to learn tasks by extreme trial and error.
By playing a game over and over again, an automated agent learns which strategies bring success and which do not. If an agent consistently wins more points by moving toward an opponent’s home base when a teammate is about to capture a flag, it adds this tactic to its arsenal of tricks.
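The trial-and-error loop described above can be sketched in miniature. The toy below is tabular Q-learning on a five-cell corridor where the agent scores a point only by reaching the "flag"; it is purely illustrative and is not DeepMind's method, which uses deep neural networks and population-based training at vastly larger scale. All names and parameters here are invented for the example.

```python
import random

# Toy stand-in for "capture the flag": a 1-D corridor where the agent
# starts at cell 0 and scores only by reaching the flag at cell 4.
# Illustrative tabular Q-learning, NOT DeepMind's actual system.
N_STATES = 5
ACTIONS = [-1, +1]          # move left or right
GOAL = N_STATES - 1
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q[(state, action)] estimates the long-run value of each move.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply a move; reward 1.0 only when the flag cell is reached."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

for episode in range(500):
    s, done = 0, False
    while not done:
        # Explore occasionally; otherwise exploit the best-known move.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # Nudge the estimate toward reward + discounted future value.
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy heads straight for the flag.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES)]
print(policy)
```

Strategies that consistently earn points (moving right, here) end up with higher Q-values and are chosen more often, which is the same mechanism, scaled up enormously, by which the Quake III agents learned when to rush the opposing base.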
In 2016, using the same fundamental technique, DeepMind researchers built a system that could beat the world’s top players at Go, the ancient East Asian board game. Many experts had thought this would not be accomplished for another decade, given the enormous complexity of the game.
First-person video games are vastly more complex, particularly when they involve coordination between teammates. DeepMind’s autonomous agents learned capture the flag by playing roughly 450,000 rounds of it, tallying about four years of game experience over weeks of training. At first, the agents failed miserably. But they gradually picked up the nuances of the game, like when to follow teammates as they raided an opponent’s home base.
Since completing this project, DeepMind researchers also designed a system that could beat professional players at StarCraft II, a strategy game set in space. And at OpenAI, researchers built a system that mastered Dota 2, a game that plays like a souped-up version of capture the flag. In April, a team of five autonomous agents beat a team of five of the world’s best human players.
Last year, William Lee, a professional Dota 2 player and commentator known as Blitz, played against an early version of the technology that could play only one-on-one, not as part of a team, and he was unimpressed. But as the agents continued to learn the game and he played them as a team, he was shocked by their skill.
“I didn’t think it would be possible for the machine to play five-on-five, let alone win,” he said. “I was absolutely blown away.”
As impressive as such technology has been among gamers, many artificial-intelligence experts question whether it will ultimately translate to solving real-world problems. DeepMind’s agents are not really collaborating, said Mark Riedl, a professor at Georgia Tech College of Computing who specializes in artificial intelligence. They are merely responding to what is happening in the game, rather than trading messages with one another, as human players do. (Even mere ants can collaborate by trading chemical signals.)
Although the result looks like collaboration, the agents achieve it because, individually, they so completely understand what is happening in the game.
“How you define teamwork is not something I want to tackle,” said Max Jaderberg, another DeepMind researcher who worked on the project. “But one agent will sit in the opponent’s base camp, waiting for the flag to appear, and that is only possible if it is relying on its teammates.”
Games like this are not nearly as complex as the real world. “3-D environments are designed to make navigation easy,” Dr. Riedl said. “Strategy and coordination in Quake are simple.”
Reinforcement learning is ideally suited to such games. In a video game, it is easy to identify the metric for success: more points. (In capture the flag, players earn points according to how many flags are captured.) But in the real world, no one is keeping score. Researchers must define success in other ways.
This can be done, at least with simple tasks. At OpenAI, researchers have trained a robotic hand to manipulate an alphabet block as a child might. Tell the hand to show you the letter A, and it will show you the letter A.
At a Google robotics lab, researchers have shown that machines can learn to pick up random items, such as Ping-Pong balls and plastic bananas, and toss them into a bin several feet away. This kind of technology could help sort through bins of items in huge warehouses and distribution centers run by Amazon, FedEx and other companies. Today, human workers handle such tasks.
As labs like DeepMind and OpenAI tackle bigger problems, they may begin to require ridiculously large amounts of computing power. As OpenAI’s system learned to play Dota 2 over several months — more than 45,000 years of game play — it came to rely on tens of thousands of computer chips. Renting access to all those chips cost the lab millions of dollars, Mr. Brockman said.
DeepMind and OpenAI, which is funded by various Silicon Valley kingpins including Khosla Ventures and the tech billionaire Reid Hoffman, can afford all that computing power. But academic labs and other small operations cannot, said Devendra Chaplot, an A.I. researcher at Carnegie Mellon University. The worry, for some, is that a few well-funded labs will dominate the future of artificial intelligence.
But even the big labs may not have the computing power needed to move these techniques into the complexities of the real world, which may require stronger forms of A.I. that can learn even faster. Though machines can now win capture the flag in the virtual world, they are still hopeless across the open spaces of summer camp — and will be for quite a while.
|To: Glenn Petersen who wrote (15232)||6/2/2019 1:25:31 PM|
If they don't limit its actions, it's winning mainly by being faster. Computers can simulate clicks faster than whole teams of people can actually click. A relatively stupid AI could still win just by speed.
If they do limit clicks and awareness, then you have a better case that the AI is actually good, not just that it has a better interface than the person does. (Some AIs in some games see the whole map, at least the scouted part of it, all at once, rather than a limited view of one screen at a time with a small delay to pan to another part of the map.)
I would think the computer would do well in 5 vs. 5 when it's playing all 5 (assuming it gets 5 times the click limits), since that gets rid of a lot of the coordination problems. Maybe second best when 5 different computers are playing (no need to deal with the kind of human interactions/emotions that can happen in a multiplayer team game). It would be interesting to see how it does as 1 of the 5, especially in a game that typically requires more complex communication.
|To: Ron who wrote (15234)||6/5/2019 12:47:28 AM|
|From: Sr K|
|from the NYT|
The People Screaming for Blood Have No Idea How Tech Actually Works.
Suddenly, regulators' guns are blazing, but it looks thoughtless and is likely to prove pointless.
|From: Glenn Petersen||6/5/2019 3:48:53 PM|
|YouTube just banned supremacist content, and thousands of channels are about to be removed|
YouTube is trying to reduce the prevalence of extremist content on the platform
By Casey Newton @CaseyNewton
Jun 5, 2019, 12:00pm EDT
YouTube is changing its community guidelines to ban videos promoting the superiority of any group as a justification for discrimination against others based on their age, gender, race, caste, religion, sexual orientation, or veteran status, the company said today. The move, which will result in the removal of all videos promoting Nazism and other discriminatory ideologies, is expected to result in the removal of thousands of channels across YouTube.
“The openness of YouTube’s platform has helped creativity and access to information thrive,” the company said in a blog post. “It’s our responsibility to protect that, and prevent our platform from being used to incite hatred, harassment, discrimination and violence.”
The changes announced on Wednesday attempt to improve YouTube’s content moderation in three ways. First, the ban on supremacists will remove Nazis and other extremists who advocate segregation or exclusion based on age, gender, race, religion, sexual orientation, or veteran status. In addition to those categories, YouTube is adding caste, which has significant implications in India, and “well-documented violent events,” such as the Sandy Hook elementary school shooting and 9/11. Users are no longer allowed to post videos saying those events did not happen, YouTube said.
Second, YouTube said it would expand efforts announced in January to reduce the spread of what it calls “borderline content and harmful misinformation.” The policy, which applies to videos that flirt with violating the community guidelines but ultimately fall short, aims to limit the promotion of those videos through recommendations. YouTube said the policy, which affects videos including flat-earthers and peddlers of phony miracle cures, had already decreased the number of views that borderline videos receive by 50 percent. In the future, the company said, it will recommend videos from more authoritative sources, like top news channels, in its “next watch” panel.
Finally, YouTube said it would restrict channels from monetizing their videos if they are found to “repeatedly brush up against our hate speech policies.” Those channels will not be able to run ads or use Super Chat, which lets channel subscribers pay creators directly for extra chat features. The last change comes after BuzzFeed reported that the paid commenting system had been used to fund creators of videos featuring racism and hate speech.
In 2017, YouTube took a step toward reducing the visibility of extremists on the platform when it began placing warnings in front of some videos. But it has come under continued scrutiny for the way that it recruits followers for racists and bigots by promoting their work through recommendation algorithms and prominent placement in search results. In April, Bloomberg reported that videos made by far-right creators represented one of the most popular sections of YouTube, along with music, sports, and video games.
At the same time, YouTube and its parent company, Alphabet, are under growing political pressure to rein in the bad actors on the platform. The Christchurch attacks in March led to widespread criticism of YouTube and other platforms for failing to immediately identify and remove videos of the shooting, and several countries have proposed laws designed to force tech companies to act more quickly. Meanwhile, The New York Times found this week that YouTube algorithms were recommending videos featuring children in bathing suits to people who had previously watched sexually themed content — effectively generating playlists for pedophiles.
YouTube did not disclose the names of any channels that are expected to be affected by the change. The company declined to comment on a current controversy surrounding my Vox colleague Carlos Maza, who has repeatedly been harassed on the basis of his race and sexual orientation by prominent right-wing commentator Steven Crowder. (After I spoke with the company, it responded to Maza that it plans to take no action against Crowder’s channel.)
Still, the move is likely to trigger panic among right-wing YouTube channels. In the United States, conservatives have promoted the idea that YouTube and other platforms discriminate against them. Despite the fact that there is no evidence of systematic bias, Republicans have held several hearings over the past year on the subject. Today’s move from YouTube is likely to generate a fresh round of outrage, along with warnings that we are on the slippery slope toward totalitarianism.
Of course, as the Maza case has shown, YouTube doesn’t always enforce its own rules. It’s one thing to make a policy, and it’s another to ensure that a global workforce of underpaid contractors accurately understands and applies it. It will be fascinating to see how the new policy, which prohibits “videos alleging that a group is superior in order to justify ... segregation or exclusion,” will affect discussion of immigration on YouTube. The company says that political debates about the pros and cons of immigration are still allowed, but a video saying that “Muslims are diseased and shouldn’t be allowed to migrate to Europe” will be banned.
The changed policy goes into effect today, YouTube said, and enforcement will “ramp up” over the next several days.
|From: JakeStraw||6/7/2019 8:13:34 AM|
|Google plans to press play on its Stadia cloud gaming service in November |
Google has shed some more light on its upcoming cloud-based video game service: an entry price, a launch window and some of the games you will be able to play.
Google's Stadia will become available in November with an entry price of $129.99 for the Founders Edition package (pre-order on Google's Stadia site), which includes a game controller, Chromecast Ultra streaming device and a three-month subscription.
Cloud gaming promises to make it easier for consumers to play online games, as it sidesteps the need for pricey gaming PCs or console video game systems.
|From: Glenn Petersen||6/8/2019 10:21:09 AM|
|Not Your Daddy’s Regulation: Tech Giants Face A Complicated Reckoning In Washington|
Old rules and moving targets create new challenges for regulators and Congress.
By Alex Kantrowitz
Posted on June 6, 2019, at 2:01 p.m. ET
As federal regulators and Congress zero in on Apple, Google, Facebook, and Amazon, they’re about to encounter one of the most difficult rulemaking challenges in US history. The tech giants don’t fit neatly into the existing model for antitrust action since many of their services are available for free, making any consumer harm they may or may not have done difficult to grasp and quantify. And perhaps more vexingly, they are constantly shifting shape, adding new business lines with regularity to keep pace with a fast-changing technology industry. In Washington, it’s going to be hard to figure out where to even begin.
“One of the things we’ve seen in the past with regulation is by the time the courts catch up to regulating the industry often new industries emerge,” Rep. Ro Khanna, who represents a large slice of Silicon Valley, told BuzzFeed News. “Of course, we need regulation, but it has to be thoughtful regulation and well-crafted regulation.”
Consider the moving target the Federal Trade Commission will encounter when it examines Amazon. The FTC won’t just be dealing with a retailer, but an outsourced logistics provider, a grocer, a cloud services clearinghouse, a hardware manufacturer, and a voice search company. And as the FTC works to get its head around these business lines, Amazon will inevitably expand into more — the company is rumored to be debuting a home robot this year.
Amazon is ferociously aggressive in many of its business lines, yet it faces fierce competition in nearly all of them. There’s Amazon.com vs. Walmart, Whole Foods vs. the broader grocery industry, AWS vs. Microsoft Azure, Amazon Echo vs. Google Home.
With such a diverse set of businesses, Amazon will make it hard for regulators to rein in the “bigness” many are hoping they will tackle. Amazon and its fellow tech giants are nothing like the Bell Telephone Company or Standard Oil, which grew dominant by finding a core advantage and defending it at all costs. They have instead built their empires through continual reinvention, and they are far more nimble than their corporate predecessors. Regulators will therefore have to comb through each business line, consider the market dynamics in each, and walk the line between policing anti-competitive behavior and picking winners and losers.
“I think there is a lot of appetite for letting the market sort lots of things out,” Robert Seamans, an associate professor at NYU’s Stern School of Business who spent a year as a senior economist at the White House Council of Economic Advisers, told BuzzFeed News. “We don’t want to pick a China model where the government decides everything that should happen.”
While some are advocating for a breakup of the tech giants, such a move is likely politically infeasible. Breaking up these companies would create more competition, but it would open up the door for Chinese companies to enter the void, a fact that worries both Silicon Valley executives and the federal government. And public servants in Washington who follow poll numbers closely know well that Big Tech is popular among the general population. “They have the most precious asset, their approval ratings are in the 70s and 80s,” Khanna said of the tech giants. “Everyone in Congress, we celebrate when we get in the 40s or 50s.”
A meek federal regulatory body has long resisted sinking its teeth into this messy situation. But now that the proceedings are underway, the most likely outcome — if any rules are made — is one in which small changes are enacted. “My sense is that the animating principle behind regulation should be very simple,” Khanna said. “A company shouldn’t be able to privilege its own products, you shouldn’t be able to have anti-competitive platform privileges.”
The FTC, according to a report by Vox, is indeed already asking how Amazon competes with third-party sellers on its platform. The agency is also expected to fine Facebook a few billion dollars for privacy violations, a sum that sent the company’s stock up when Facebook told investors about it. The Department of Justice will investigate Google’s search and ad-tech businesses, according to reports, but it would be a major surprise if the department takes a more aggressive approach than European regulators and goes beyond examining whether Google privileges its own products, as Khanna laid out. Rulemaking along these lines would be meaningful, but would do little to slow down the tech giants overall.
Much of the coverage on the new set of investigations by the FTC, DOJ, and Congress has focused on the prospect of hearings Big Tech will likely now have to sit through. These hearings would be unpleasant for the tech giants, as would fines and other restrictions. But for this new set of corporate giants, time and competition — either among themselves or from the outside — are the most probable forces to check their power.
|From: JakeStraw||6/13/2019 8:10:09 AM|
|Alphabet's stake in the 2019 IPO boom jumps to $5 billion thanks to CrowdStrike|
Alphabet owns significant stakes in Uber, Lyft and CrowdStrike, three of the most high-profile tech IPOs of the year.