|From: JakeStraw||5/21/2019 10:21:44 AM|
|Google sister-company Verily is teaming with big pharma on clinical trials|
The company announced Tuesday strategic alliances with the pharmaceutical companies Novartis, Sanofi, Otsuka and Pfizer to help it move more deeply into the medical studies market. The goals for Verily, and its pharma partners, are to reach patients in new ways, make it easier to enroll and participate in trials, and aggregate data across a variety of sources, including the electronic medical record or health-tracking wearable devices.
|RecommendKeepReplyMark as Last Read|
|From: Glenn Petersen||5/25/2019 11:51:39 AM|
|Maciej Ceglowski on gdpr|
by Tyler Cowen
May 25, 2019 at 12:23 a.m.
The plain language of the GDPR is so plainly at odds with the business model of surveillance advertising that contorting the real-time ad brokerages into something resembling compliance has required acrobatics that have left essentially everybody unhappy.
Here is the full Senate testimony; there are many interesting points in the piece. I thank an MR reader for the pointer.
The leading ad networks in the European Union have chosen to respond to the GDPR by stitching together a sort of Frankenstein’s monster of consent, a mechanism whereby a user wishing to visit, say, a weather forecast page is first prompted to agree to share data with a consortium of 119 entities, including the aptly named “A Million Ads” network. The user can scroll through this list of intermediaries one by one, or give or withhold consent en bloc, but either way she must wait a further two minutes for the consent collection process to terminate before she is allowed to find out whether or not it is going to rain.
This majestically baroque consent mechanism also hinders Europeans from using the privacy-preserving features built into their web browsers, or from turning off invasive tracking technologies like third-party cookies, since the mechanism depends on their being present.
For the average EU citizen, therefore, the immediate effect of the GDPR has been to add friction to their internet browsing experience, along the lines of the infamous 2011 EU Privacy Directive (“EU cookie law”) that added consent dialogs to nearly every site on the internet.
The GDPR rollout has also demonstrated to what extent the European ad market depends on Google, which has assumed the role of de facto technical regulatory authority due to its overwhelming market share. Google waited until the night before the regulation went into effect to announce its intentions, leaving ad networks scrambling.
It is significant that Google and Facebook also took advantage of the US-EU privacy shield to move 1.5 billion non-EU user records out of EU jurisdiction to servers in the United States. Overall, the GDPR has significantly strengthened Facebook and Google at the expense of smaller players in the surveillance economy.
The data protection provisions of the GDPR, particularly the right to erasure, imposed significant compliance costs on internet companies. In some cases, these compliance costs just show the legislation working as intended. Companies that were not keeping adequate track of personal data were forced to retrofit costly controls, and that data is now safer for it.
But in other cases, companies with a strong commitment to privacy also found themselves expending significant resources on retooling. Personally identifying information has a way of seeping into odd corners of computer systems (for example, users will sometimes accidentally paste their password into a search box), and tracking down all of these special cases can be challenging in a complex system. The requirements around erasure, particularly as they interact with backups, also impose a special burden, as most computer systems are designed with a bias toward never losing data, rather than making it easy to expunge.
|To: Sr K who wrote (15205)||5/27/2019 2:17:14 AM|
|From: J.F. Sebastian|
|Facebook is trying to develop its own streaming video platform, I'm sure they don't want Reed Hastings in the boardroom being privy to any plans they might talk about there.|
Apple made the mistake of keeping then-CEO of Google, Eric Schmidt, on their board when Apple was developing the iPhone and iPad at the same time Google was working on Android.
Probably an honest mistake at that point, because Google was very secretive about Android, but I can only imagine the things Eric Schmidt found out about the iPhone that he passed on to the Android engineers.
|To: JakeStraw who wrote (15211)||5/27/2019 3:15:43 AM|
|From: J.F. Sebastian|
|I deleted all Google apps from my iPhone recently, save YouTube.|
After reading an article about how long Google Maps keeps data and has abused user privacy, I'd had enough.
I do use Gmail, but I use a 3rd party app for it, not the native Gmail iOS client.
|From: FUBHO||5/30/2019 6:27:00 AM|
|'A white-collar sweatshop': Google Assistant contractors allege wage theft|
Julia Carrie Wong
“Do you believe in magic?” Google asked attendees of its annual developer conference this May, playing the seminal Lovin’ Spoonful tune as an introduction. Throughout the three-day event, company executives repeatedly answered yes while touting new features of the Google Assistant, the company’s version of Alexa or Siri, that can indeed feel magical. The tool can book you a rental car, tell you what the weather is like at your mother’s house, and even interpret live conversations across 26 languages.
But to some of the Google employees responsible for making the Assistant work, the tagline of the conference – “Keep making magic” – obscured a more mundane reality: the technical wizardry relies on massive data sets built by subcontracted human workers earning low wages.
“It’s smoke and mirrors if anything,” said a current Google employee who, as with the others quoted in this story, spoke on condition of anonymity because they were not authorized to speak to the press. “Artificial intelligence is not that artificial; it’s human beings that are doing the work.”
The Google employee works on Pygmalion, the team responsible for producing linguistic data sets that make the Assistant work. And although he is employed directly by Google, most of his Pygmalion co-workers are subcontracted temps who have for years been routinely pressured to work unpaid overtime, according to seven current and former members of the team.
These employees, some of whom spoke to the Guardian because they said efforts to raise concerns internally were ignored, alleged that the unpaid work was a symptom of the workplace culture put in place by the executive who founded Pygmalion. That executive was fired by Google in March following an internal investigation.
But current and former employees also identified Google’s broad reliance on approximately 100,000 temps, vendors and contractors (known at Google as TVCs) for large amounts of the company’s work as a culprit. Google does not directly employ the workers who collect or create the data required for much of its technology, be they the drivers who capture photos for Google Maps’ Street View, the content moderators training YouTube’s filters to catch prohibited material, or the scanners flipping pages to upload the contents of libraries into Google Books.
Having these two tiers of workers – highly paid full-time Googlers and often low-wage and precarious workers contracted through staffing firms – is “corrosive”, “highly problematic”, and “permissive of exploitation”, the employees said.
“It’s like a white-collar sweatshop,” said one current Google employee. “If it’s not illegal, it’s definitely exploitative. It’s to the point where I don’t use the Google Assistant, because I know how it’s made, and I can’t support it.”
An ‘army’ of linguists
The study of language is at the very heart of current advancements in computing. For decades, people have had to work to learn the language of computers, whether they were trying to program a VCR or writing software. Technology such as the Google Assistant reverses the equation: the computer understands natural human speech, in all its variations.
Behind the technology that makes the Google Assistant work is an army of Google-contracted linguists. Photograph: Samuel Gibbs/The Guardian
Take, for example, the straightforward task of asking the Assistant to set a timer to go off in five minutes, a former employee on Pygmalion explained. There are infinite ways that users could phrase that request, such as “Set a timer for five minutes”; “Can you ring the buzzer in five minutes?”; or “Configurar una alarma para cinco minutos.” The Assistant has to be able to convert the spoken request into text, then interpret the user’s intended meaning to produce the desired outcome, all practically instantaneously.
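The intent-mapping step described above can be illustrated with a toy sketch. This is an assumption for illustration only, not Google's actual pipeline: the `parse_timer_intent` function, its pattern list, and the `NUMBER_WORDS` table are all hypothetical names, and a real system would use learned models rather than regular expressions.

```python
import re

# Hypothetical word-to-number table and surface patterns (illustration only).
NUMBER_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5, "ten": 10}

PATTERNS = [
    r"set a timer for (\w+) minutes?",
    r"(?:can you )?ring the buzzer in (\w+) minutes?",
]

def parse_timer_intent(utterance: str):
    """Map many spoken-style phrasings onto one structured action."""
    text = utterance.lower().strip(" ?.!")
    for pattern in PATTERNS:
        match = re.search(pattern, text)
        if match:
            raw = match.group(1)
            minutes = int(raw) if raw.isdigit() else NUMBER_WORDS.get(raw)
            if minutes is not None:
                # Different phrasings collapse to the same structured intent.
                return {"intent": "set_timer", "minutes": minutes}
    return None
```

The point of the sketch is that “Set a timer for five minutes” and “Can you ring the buzzer in 5 minutes?” both resolve to the same `set_timer` intent, which is the normalization the article describes.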
The technology that makes this possible is a form of machine learning. For a machine learning model to “understand” a language, it needs vast amounts of text that has been annotated by linguists to teach it the building blocks of human language, from parts of speech to syntactic relationships.
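The hand-annotated “building blocks” the article refers to might look like the record below. This is an assumed, simplified format for illustration; the field names (`pos`, `head`, `intent`) and the specific tag set are hypothetical, not Google's internal schema.

```python
# One illustrative annotation record: each token tagged with a part of
# speech and its syntactic head, so a model can learn sentence structure.
annotated = {
    "text": "Set a timer for five minutes",
    "tokens": [
        {"word": "Set",     "pos": "VERB", "head": None},     # root of the clause
        {"word": "a",       "pos": "DET",  "head": "timer"},
        {"word": "timer",   "pos": "NOUN", "head": "Set"},    # direct object
        {"word": "for",     "pos": "ADP",  "head": "Set"},
        {"word": "five",    "pos": "NUM",  "head": "minutes"},
        {"word": "minutes", "pos": "NOUN", "head": "for"},
    ],
    "intent": "set_timer",
}

# Thousands of such hand-labeled records become model training data.
verbs = [t["word"] for t in annotated["tokens"] if t["pos"] == "VERB"]
```

Producing records like this, one utterance at a time across dozens of languages, is the “painstaking” annotation work the article attributes to the contracted linguists.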
Enter Pygmalion. The team was born in 2014, the brainchild of longtime Google executive Linne Ha, to create the linguistic data sets required for Google’s neural networks to learn dozens of languages. The “painstaking” nature of the labor required to create this “handcrafted” data was featured in a 2016 article about Pygmalion’s “massive team of PhD linguists” by Wired. (Ha did not respond to a request for comment.)
From the beginning, Google planned to build the team with just a handful of full-time employees while outsourcing the vast majority of the annotation work to an “army” of subcontracted linguists around the world, documents reviewed by the Guardian and interviews with staff show.
The appetite for Pygmalion’s hand-labeled data, and the size of the team, has only increased over the years. Today, it includes 40 to 50 full-time Googlers and approximately 200 temporary workers contracted through agencies, including Adecco, a global staffing firm. The contract workers include associate linguists, who are tasked with annotation, and project managers, who oversee their work.
All of the contract workers have at least a bachelor’s degree in linguistics, though many have master’s degrees and some have doctorates. In addition to annotating data, the temp workers write “grammars” for the Assistant, complex and technical work that requires considerable expertise and involves Google’s code base. Their situation is comparable to adjunct professors on US college campuses: they are highly educated and highly skilled, performing work crucial to the company’s mission, and shut out of the benefits and security that come with a tenured position.
“Imagine going from producing PhD-level research and pushing forward the state of knowledge in the world to going to an annotation type job, where all you’re doing all day is annotating data; it’s very click, click, click,” said a former project manager on Pygmalion. “Everyone was trying to prove themselves because everyone was trying to work for Google. The competitive edge that happened among colleagues as TVCs was severe.”
‘The definition of wage theft’
This dynamic created the incentive for temps to perform unpaid work. Managers took advantage by making it clear they wouldn’t approve overtime for contract workers, while also assigning unrealistic amounts of work, current and former employees said.
The pressure to complete assignments was “immense”, said one Googler. “In this mixed stream of messages, I think a lot of people had to make their own calls, and given the pressure, I think people made different calls.”
The Googler described the overall effect as “gaslighting”, and recalled receiving messages from management such as, “If the TVCs want to work more, let them work more.” All seven current and former employees interviewed by the Guardian said they had either experienced or witnessed contract workers performing unpaid overtime.
“To my knowledge, no one ever said, you need to work TVCs above their contracts, but it was set up so that it was the only way to get the expected work done, and if anyone raised concerns they would be openly mocked and belittled,” said another current Googler.
“The 40-hour thing was just not respected,” said a former associate linguist. “It was made clear to us that we were never to log more than 40 hours, but we were never told not to work more than 40 hours.
“The work that they assign often takes more than 40 hours,” they added. “Every week you fill out a timesheet. One person one time did submit overtime, and they were chastised. No punishment, but definitely told not to work overtime.”
A spokeswoman for Google said that it was company policy that temp workers must be paid for all hours worked, even if overtime was not approved in advance.
“Working off the clock is the very definition of wage theft,” said Beth Ross, a longtime labor and employment attorney. Ross said that both Google and Adecco could face liability for unpaid wages and damages under federal and state law.
‘They dangle that carrot’
The associate linguist was one of several who said that they took the position at Google in hopes that they could eventually convert to a full-time position. Several members of Pygmalion are former contract workers, including the current head of the team, who took over when Ha, the executive who founded the team, moved on to another project.
“People did [unpaid overtime] because they were dangled the opportunity of becoming a full-time employee, which is against company policy,” a current Googler said. “There’s a particular leveraging of people’s desire to become full time,” said another.
“When I was hired, I was very explicitly told that there is no ladder,” a current contract worker said. “‘This is not a temp-to-hire position. There is no moving up’ … But the reality on the team is very much one where there is clearly a ladder. A certain percentage of the associate linguists will get project manager. A certain percentage of project managers get converted to full time. We watch it happen, and they dangle that carrot.”
Google employees enjoy perks such as free meals, on-site yoga classes, free massages and generous benefits packages. Photograph: Lucy Nicholson/REUTERS
One Googler who successfully converted to a full-time position after working as a temp on Pygmalion said that at times the bargain was even made explicit. In April 2017, they recalled, Ha attended a meeting of outsourced Pygmalion project managers in London and “explain[ed] that the position was designed for conversion and that we should be proactive in asking for more work in order to achieve this”.
The Google spokeswoman said that it is company policy not to make any commitment about employment or conversion to temps, and that Googlers who manage temps are required to take a mandatory training on this and other policies related to TVCs.
‘Why do it?’
The disparity in wages and benefits between Google employees and contract workers is stark. Alphabet recently reported median pay of $246,804, and employees enjoy perks such as free meals, on-site yoga classes, free massages and generous benefits.
Amid increasing activism by Googlers and contract workers, Google recently announced improved minimum standards for US-based contract workers, including a minimum of eight paid sick days, “comprehensive” health insurance, and a minimum wage of at least $15 an hour by 2020. (A full-time job at that wage pays $31,200 a year; by comparison, Google charges its own employees $38,808 a year to place an infant in its onsite daycare facilities.)
Wages for contract workers on the Pygmalion team are well above the new minimum standard, usually starting around $25 an hour for associate linguists and going up to $35 an hour for project managers. But contractors complain about subpar benefits and other indignities.
The former project manager described Adecco’s benefits plan as “the worst health insurance I have ever had”. A current contract worker earning less than $60,000 annually said they were paying $180 each month in premiums for an individual plan with a $6,000 deductible. For families, the deductible is $12,000, according to documents reviewed by the Guardian. Google declined to comment on Adecco’s pay and benefits.
Google workers have walked out over the company’s handling of sexual misconduct claims and its treatment of temporary workers. Photograph: Noah Berger/AP
Googlers earn significantly more, and those on individual plans contribute between $0 and $53 for their health insurance and have a much lower deductible ($1,350), according to documents reviewed by the Guardian. Googlers with families pay up to $199 every two weeks, with a $2,700 deductible.
Others complained of a lack of trust and respect. In 2018, Google revoked the ability for contractors on Pygmalion to work while riding Google’s wifi-equipped commuter buses, creating frustration for those who spent three to four hours a day traveling to the company’s Mountain View campus and could no longer work and count that time toward their shift. Google said it works to ensure that temps, vendors and contractors do not have over-broad access to sensitive internal information for security reasons.
“Why do it?” a former associate linguist said of working unpaid overtime under these conditions. “I didn’t want to lose the job. Having Google on your résumé is important to a career … Later on, I came to find out that you can’t say ‘Google’ on your résumé. You have to say ‘Google by Adecco’.”
A weekend assignment
Both Google and Adecco recently launched investigations into the allegations of unpaid overtime in Pygmalion.
“Our policy is clear that all temporary workers must be paid for any overtime worked,” said Eileen Naughton, Google’s vice-president of people operations, in a statement to the Guardian. “If we find that anyone isn’t properly paid, we make sure they are compensated appropriately and take action against any Google employee who violates this policy.”
The current investigation was initiated after the company received a report of a possible policy violation in February 2019, the Google spokeswoman said. The company will provide appropriate compensation if need be and will take action up to and including terminations if policy violations are found, she added.
The spokeswoman also acknowledged that concerns about unpaid overtime were raised to human resources in 2017, but said that the company investigated and did not find any such cases at the time.
“We are committed to ensuring all employees are compensated for all time worked,” said Mary Beth Waddill, a spokeswoman for Adecco. “Our longstanding policy is that every employee is required to report time accurately – even if that time isn’t pre-approved – and they should feel encouraged to do so by their managers. If we learn that this is not the case, we will work with Google to take appropriate action.”
On Friday 17 May, Adecco sent emails to current and former Pygmalion temps. Recipients were asked whether they reported all the hours they worked, and, if not, to estimate how many hours they worked unpaid. The emails requested a response by Monday 20 May, though a Google spokeswoman said this week that the deadline has been extended.
A Google employee reacted: “They’re asking people to work on the weekend to recall unbilled overwork. It seems like it’s designed to discourage people from responding.”
Indeed, one former contract worker who left the company many months ago said they received the email but did not bother to respond. “After I left, I didn’t keep records of the hours I worked,” they said. “Even if I wanted to report overtime now, how could I?”
- Do you work in the tech industry? Do you have concerns about workplace issues? Contact the author: firstname.lastname@example.org or email@example.com
|From: Glenn Petersen||5/31/2019 9:21:18 PM|
|Justice Department is reportedly preparing antitrust probe of Google|
Jordan Novet @jordannovet
- Alphabet has faced antitrust probes before, but not from the U.S. Justice Department.
- In 2018 Alphabet had $136.8 billion in revenue.
The U.S. Justice Department is planning an antitrust investigation into Alphabet’s Google subsidiary, the Wall Street Journal reported on Friday. The effort will touch on web search and other parts of Google, the report said.
The report comes amid discussion from politicians and the public about whether large technology companies should be broken up. The Justice Department launched a major antitrust case against Microsoft in 1998 that led to several rules the company had to follow for years.
Alphabet, which racked up $136.8 billion in revenue in 2018, has faced antitrust pressure in the past.
In 2010, the company received an antitrust complaint from the European Commission regarding ranking of shopping search results and ads, which resulted in Google being fined $2.7 billion in 2017, according to Alphabet’s latest annual report. In 2016, the EC complained about practices related to Google’s Android operating system, leading to a $5.1 billion charge in 2018.
And in March the European Union ordered Google to pay around $1.7 billion because of advertising behavior.
Sen. Elizabeth Warren, who announced her presidential candidacy in December, has pressed for breaking up tech companies like Google. In a widely read post published on Medium in March, Warren said she was interested in appointing regulators who would be interested in undoing what she called “anti-competitive mergers,” including Google’s DoubleClick, Nest and Waze. “Current antitrust laws empower federal regulators to break up mergers that reduce competition,” she wrote.
Google and the Justice Department didn’t immediately respond to requests for comment.
|To: Glenn Petersen who wrote (15229)||6/1/2019 8:51:44 PM|
|From: Sr K|
|It's the lead story in the WSJ.|
Justice Department Is Preparing Antitrust Investigation of Google
Probe would closely examine Google’s practices related to search, other businesses
Updated June 1, 2019 1:06 p.m. ET
WASHINGTON—The Justice Department is gearing up for an antitrust investigation of Alphabet Inc.’s Google, a move that could present a major new layer of regulatory scrutiny for the search giant, according to people familiar with the matter.
The department’s antitrust division in recent weeks has been laying the groundwork for the probe, the people said. The Federal Trade Commission, which shares antitrust authority with the department, previously conducted a broad investigation of Google but closed it in 2013 ...
|From: Glenn Petersen||6/2/2019 11:03:59 AM|
|DeepMind Can Now Beat Us at Multiplayer Games, Too|
Chess and Go were child’s play. Now A.I. is winning at capture the flag. Will such skills translate to the real world?
By Cade Metz
New York Times
May 30, 2019
Capture the flag is a game played by children across the open spaces of a summer camp, and by professional video gamers as part of popular titles like Quake III and Overwatch.
In both cases, it’s a team sport. Each side guards a flag while also scheming to grab the other side’s flag and bring it back to home base. Winning the game requires good old-fashioned teamwork, a coordinated balance between defense and attack.
In other words, capture the flag requires what would seem to be a very human set of skills. But researchers at an artificial intelligence lab in London have shown that machines can master this game, too, at least in the virtual world.
In a paper published on Thursday in Science (and previously available on the website arXiv before peer review), the researchers reported that they had designed automated “agents” that exhibited humanlike behavior when playing the capture the flag “game mode” inside Quake III. These agents were able to team up against human players or play alongside them, tailoring their behavior accordingly.
“They can adapt to teammates with arbitrary skills,” said Wojciech Czarnecki, a researcher with DeepMind, a lab owned by the same parent company as Google.
Through thousands of hours of game play, the agents learned very particular skills, like racing toward the opponent’s home base when a teammate was on the verge of capturing a flag. As human players know, the moment the opposing flag is brought to one’s home base, a new flag appears at the opposing base, ripe for the taking.
DeepMind’s project is part of a broad effort to build artificial intelligence that can play enormously complex, three-dimensional video games, including Quake III, Dota 2 and StarCraft II. Many researchers believe that success in the virtual arena will eventually lead to automated systems with improved abilities in the real world.
For instance, such skills could benefit warehouse robots as they work in groups to move goods from place to place, or help self-driving cars navigate en masse through heavy traffic. “Games have always been a benchmark for A.I.,” said Greg Brockman, who oversees similar research at OpenAI, a lab based in San Francisco. “If you can’t solve games, you can’t expect to solve anything else.”
Until recently, building a system that could match human players in a game like Quake III did not seem possible. But over the past several years, DeepMind, OpenAI and other labs have made significant advances, thanks to a mathematical technique called “reinforcement learning,” which allows machines to learn tasks by extreme trial and error.
By playing a game over and over again, an automated agent learns which strategies bring success and which do not. If an agent consistently wins more points by moving toward an opponent’s home base when a teammate is about to capture a flag, it adds this tactic to its arsenal of tricks.
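The trial-and-error loop described above can be sketched in miniature. This is a toy illustration of reinforcement learning under assumed reward probabilities, not DeepMind's actual agent (which uses deep neural networks and a vastly richer environment): an agent repeatedly tries two tactics and learns, from reward alone, which one wins more points.

```python
import random

random.seed(0)  # make the toy run reproducible

def play(tactic: str) -> float:
    """Simulated environment: 'rush_base' pays off more often than 'wander'."""
    win_prob = 0.8 if tactic == "rush_base" else 0.3
    return 1.0 if random.random() < win_prob else 0.0

values = {"rush_base": 0.0, "wander": 0.0}  # estimated value of each tactic
alpha = 0.1      # learning rate
epsilon = 0.2    # exploration probability

for episode in range(2000):
    if random.random() < epsilon:
        tactic = random.choice(list(values))   # explore a random tactic
    else:
        tactic = max(values, key=values.get)   # exploit the best estimate
    reward = play(tactic)
    # Nudge the value estimate toward the observed reward.
    values[tactic] += alpha * (reward - values[tactic])

best = max(values, key=values.get)  # the tactic the agent came to prefer
```

After enough episodes the estimate for the higher-paying tactic dominates, which is the sense in which an agent “adds this tactic to its arsenal”: no one tells it the rule, it simply accumulates evidence that one behavior earns more reward.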
In 2016, using the same fundamental technique, DeepMind researchers built a system that could beat the world’s top players at the ancient game of Go, the Eastern version of chess. Many experts had thought this would not be accomplished for another decade, given the enormous complexity of the game.
First-person video games are exponentially more complex, particularly when they involve coordination between teammates. DeepMind’s autonomous agents learned capture the flag by playing roughly 450,000 rounds of it, tallying about four years of game experience over weeks of training. At first, the agents failed miserably. But they gradually picked up the nuances of the game, like when to follow teammates as they raided an opponent’s home base.
Since completing this project, DeepMind researchers also designed a system that could beat professional players at StarCraft II, a strategy game set in space. And at OpenAI, researchers built a system that mastered Dota 2, a game that plays like a souped-up version of capture the flag. In April, a team of five autonomous agents beat a team of five of the world’s best human players.
Last year, William Lee, a professional Dota 2 player and commentator known as Blitz, played against an early version of the technology that could play only one-on-one, not as part of a team, and he was unimpressed. But as the agents continued to learn the game and he played them as a team, he was shocked by their skill.
“I didn’t think it would be possible for the machine to play five-on-five, let alone win,” he said. “I was absolutely blown away.”
As impressive as such technology has been among gamers, many artificial-intelligence experts question whether it will ultimately translate to solving real-world problems. DeepMind’s agents are not really collaborating, said Mark Riedl, a professor at Georgia Tech College of Computing who specializes in artificial intelligence. They are merely responding to what is happening in the game, rather than trading messages with one another, as human players do. (Even mere ants can collaborate by trading chemical signals.)
Although the result looks like collaboration, the agents achieve it because, individually, they so completely understand what is happening in the game.
“How you define teamwork is not something I want to tackle,” said Max Jaderberg, another DeepMind researcher who worked on the project. “But one agent will sit in the opponent’s base camp, waiting for the flag to appear, and that is only possible if it is relying on its teammates.”
Games like this are not nearly as complex as the real world. “3-D environments are designed to make navigation easy,” Dr. Riedl said. “Strategy and coordination in Quake are simple.”
Reinforcement learning is ideally suited to such games. In a video game, it is easy to identify the metric for success: more points. (In capture the flag, players earn points according to how many flags are captured.) But in the real world, no one is keeping score. Researchers must define success in other ways.
This can be done, at least with simple tasks. At OpenAI, researchers have trained a robotic hand to manipulate an alphabet block as a child might. Tell the hand to show you the letter A, and it will show you the letter A.
At a Google robotics lab, researchers have shown that machines can learn to pick up random items, such as Ping-Pong balls and plastic bananas, and toss them into a bin several feet away. This kind of technology could help sort through bins of items in huge warehouses and distribution centers run by Amazon, FedEx and other companies. Today, human workers handle such tasks.
As labs like DeepMind and OpenAI tackle bigger problems, they may begin to require ridiculously large amounts of computing power. As OpenAI’s system learned to play Dota 2 over several months — more than 45,000 years of game play — it came to rely on tens of thousands of computer chips. Renting access to all those chips cost the lab millions of dollars, Mr. Brockman said.
DeepMind and OpenAI, which is funded by various Silicon Valley kingpins including Khosla Ventures and the tech billionaire Reid Hoffman, can afford all that computing power. But academic labs and other small operations cannot, said Devendra Chaplot, an A.I. researcher at Carnegie Mellon University. The worry, for some, is that a few well-funded labs will dominate the future of artificial intelligence.
But even the big labs may not have the computing power needed to move these techniques into the complexities of the real world, which may require stronger forms of A.I. that can learn even faster. Though machines can now win capture the flag in the virtual world, they are still hopeless across the open spaces of summer camp — and will be for quite a while.