
Politics › The Surveillance State

From: Glenn Petersen, 10/15/2019 8:33:37 PM
Student tracking, secret scores: How college admissions offices rank prospects before they apply

Before many schools even look at an application, they comb through prospective students’ personal data, such as web-browsing habits and financial history

By Douglas MacMillan and Nick Anderson
The Washington Post
October 14, 2019

Washington Post illustration/iStock

To learn more about prospective students, admissions officers at the University of Wisconsin-Stout turned to a little-known but increasingly common practice: They installed tracking software on their school website.

When one student visited the site last year, the software automatically recognized who she was based on a piece of code, called a cookie, which it had placed on her computer during a prior visit. The software sent an alert to the school’s assistant director of admissions containing the student’s name, contact information and details about her life and activities on the site, according to internal university records reviewed by The Washington Post. The email said she was a graduating high school senior in Little Chute, Wis., of Mexican descent who had applied to UW-Stout.

The admissions officer also received a link to a private profile of the student, listing all 27 pages she had viewed on the school’s website and how long she spent on each one. A map on this page showed her geographical location, and an “affinity index” estimated her level of interest in attending the school. Her score of 91 out of 100 predicted she was highly likely to accept an admission offer from UW-Stout, the records showed.

Colleges are collecting more data about prospective students than ever before — part of an effort, administrators say, to make better predictions about which students are the most likely to apply, accept an offer and enroll. Records reviewed by The Post show that at least 44 public and private universities in the United States work with outside consulting companies to collect and analyze data on prospective students, by tracking their Web activity or formulating predictive scores to measure each student’s likelihood of enrolling.

The practices may raise a hidden barrier to a college education for underprivileged students. While colleges have used data for many years to decide which regions and high schools to target their recruiting, the latest tools let administrators build rich profiles on individual students and quickly determine whether they have enough family income to help the school meet revenue goals.

The Post identified colleges with data operations by reviewing the customer lists of two top admissions consulting firms: Capture Higher Ed and Ruffalo Noel Levitz. The Post interviewed admissions staffers at 23 colleges, examined contracts and emails obtained from 26 public universities through open-records laws, and used a Web privacy tool to confirm the presence of Capture Higher Ed’s tracking software on the websites of 33 universities.

Records and interviews show that colleges are building vast repositories of data on prospective students — scanning test scores, Zip codes, high school transcripts, academic interests, Web browsing histories, ethnic backgrounds and household incomes for clues about which students would make the best candidates for admission. At many schools, this data is used to give students a score from 1 to 100, which determines how much attention colleges pay them in the recruiting process.

Scoring and tracking are popular at schools that are struggling to survive. Faced with shrinking sources of funding and growing competition for high school graduates, cash-strapped colleges are experimenting with new ways to identify and attract students who can afford to pay tuition, said Lloyd Thacker, a former admissions counselor and founder of the Education Conservancy, a nonprofit research group.

“An admission dean is more and more a businessperson charged with bringing in revenue,” Thacker said. “The more fearful they are about survival, the more willing they are to embrace new strategies.”

Admissions consulting companies charge schools tens of thousands of dollars a year to collect and analyze the data of millions of students. In emails reviewed by The Post, employees of Louisville-based Capture Higher Ed urged school administrators to hand over all data they felt comfortable sharing.

“We love data, so the more the merrier,” one of Capture’s consultants wrote in a 2017 email to the admissions director at UW-Stout.

Capture Higher Ed spokesman Jim Davidson said the company helps schools provide relevant information to students who have chosen to receive that information. Students can opt out of Web tracking by contacting schools directly, he said.

Doug Mell, a spokesman for UW-Stout, said in an email that the school used Capture’s Web tracking for a one-year trial and did not renew the contract this year. The female student who was tracked last year voluntarily gave the school her background information when she applied, he said. She enrolled in the school last year.

Consultants are expanding their influence on college campuses. Ruffalo Noel Levitz, based in Cedar Rapids, Iowa, has hired the top admissions officers at more than two dozen universities — including Vanderbilt, Creighton and Marquette — to do paid consulting work on the side, according to interviews and records. Some university officials received compensation from Ruffalo Noel Levitz at the same time that their schools were paying customers of the company — raising questions about potential conflicts of interest, Thacker said.

The vast majority of universities reviewed by The Post do not tell students the schools are collecting their information. In a review of the online privacy policies of all 33 schools using Web tracking software, only three disclosed the purpose of the tracking. The other 30 omitted any explanation or did not explain the full extent or purpose of their tracking.

The State University of New York’s College of Environmental Science and Forestry said in its online privacy policy that it “does not use cookies.” However, a representative from the school said in an email that the school does use Capture Higher Ed’s tracking cookies to show relevant pop-up ads to students but deletes the cookies from its databases “within four hours.”

The privacy policy for SUNY College of Environmental Science and Forestry. (College's website)
Some privacy experts say colleges’ failure to disclose the full extent of how they share data with outside consultants may violate the spirit if not the letter of the Family Educational Rights and Privacy Act, or FERPA, a federal law protecting the privacy of student education records at schools that receive federal education funds. FERPA generally requires that schools ask for students’ permission before sharing their personal data with any outside parties.

Rather than getting permission, some schools have classified the consulting companies as “school officials,” a legal designation that exempts them from FERPA if certain conditions are met.

Zachary Greenberg, a program officer at the Foundation for Individual Rights in Education, a student advocacy group, said colleges that do this risk undermining one of the goals of FERPA — to make the management of records more transparent. “Students deserve to know where their information is going,” Greenberg said.

The Education Department can suspend all federal funding to any school it finds in violation of FERPA but has never imposed that penalty in the 45 years since the law was created. The agency has other enforcement measures and works with offenders to voluntarily come into compliance, said Angela Morabito, a spokeswoman for the Education Department. She declined to say whether colleges may be violating the law by sharing data with consulting companies.

Many schools do not give students the ability to opt out of data collection. Jacquelyn Malcolm, chief information officer at the State University of New York’s Buffalo State College, said that if prospective students do not want their Web browsing tracked, they should not visit her school’s website.

“You have a choice of not interacting at all,” Malcolm said in an interview, adding that applicants can get information by calling the school, visiting its social media accounts or visiting other websites with information about different colleges.

In an email, a spokesman for SUNY Buffalo State later said that the school is exploring new ways to inform students about its privacy practices and that anyone can request not to be tracked by sending an email directly to Malcolm.

Filtering recruits with socioeconomic data

Data tools appeal to schools that are trying to increase revenue by recruiting students who can afford to pay tuition.

At Mississippi State University, a state school with more than 18,000 undergraduates, administrators use data to filter a large number of potential applicants down to a select pool of recruits who are a good fit for the school’s academic programs and do not need much financial aid.

Each year, Mississippi State buys data on thousands of high school students from testing firms including the College Board, which owns the SAT, said John Dickerson, assistant vice president for enrollment. These students all gave permission to have their data shared by checking a box when they took the SAT. The nonprofit testing company says on its website that it licenses the names and data of each student for 47 cents apiece.

The section of SAT instructions that explains how the College Board may collect and share student data.
Next, Mississippi State shares its list of prospects with Ruffalo Noel Levitz, which uses a formula to assign each one a score. According to Dickerson, the formula for out-of-state students gives the most weight (30 percent) to a student’s desired major; someone choosing agriculture or veterinary sciences, areas where the school is strong, will score higher than a student who wants to major in music. The formula also weighs their distance from campus (7.9 percent), income level (7.2 percent) and consumer purchasing behavior (6.8 percent), among other factors.

The formula is an example of predictive analytics, a field of computer science that attempts to predict the likelihood of future events by looking for patterns in data. Similar to software that tries to predict what movies or music someone will like, these formulas attempt to guess which students are a good match for a college based on how many attributes they have in common with students who previously enrolled in the school.
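The weighting scheme Dickerson describes can be sketched as a simple linear scoring model. In the sketch below, the weights for major fit, distance, income and purchasing behavior are the figures cited in the article; the "other" bucket, the attribute names, and every student value are hypothetical illustrations, not Ruffalo Noel Levitz's actual formula.

```python
# Toy sketch of a weighted "likelihood to enroll" score on a 1-100 scale.
# Only the four named weights come from the article; everything else here
# is a made-up placeholder for illustration.
WEIGHTS = {
    "major_fit": 0.30,    # desired major matches a program strength
    "distance": 0.079,    # proximity to campus
    "income": 0.072,      # estimated household income level
    "purchasing": 0.068,  # consumer purchasing behavior
    "other": 0.481,       # remaining factors, unspecified in the article
}

def enrollment_score(attributes: dict) -> int:
    """Combine normalized attributes (each 0.0-1.0) into a 1-100 score."""
    raw = sum(WEIGHTS[k] * attributes.get(k, 0.0) for k in WEIGHTS)
    return max(1, round(raw * 100))

# A hypothetical out-of-state prospect interested in veterinary science:
student = {"major_fit": 1.0, "distance": 0.4, "income": 0.7,
           "purchasing": 0.6, "other": 0.5}
print(enrollment_score(student))  # 66
```

The point of the sketch is the structure, not the numbers: a student's major alone moves the score far more than income or purchasing behavior, matching the relative weights Dickerson describes.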

A predictive formula may also be adjusted to favor the types of people a college wants more of, such as ethnic minorities or students of financial means.

Mississippi State uses socioeconomic data in its admissions algorithm to recruit more high-income students from outside the state, Dickerson said. Like many public universities, Mississippi State has ramped up out-of-state recruiting because those students pay higher tuition. The university drew 42 percent of its freshmen from out of state in 2018, up from 26 percent a decade earlier, federal data shows.

“From a practical standpoint,” Dickerson said, “you would want to know if folks have an ability to pay.”

The vast majority of Mississippi State students still receive some form of financial aid, and the school says it does not use financial information to determine who gets an offer of admission. However, focusing recruiting resources on higher-income students means lower-income students may receive less encouragement to apply for college.

Shaquilla Wordlaw, a junior at Mississippi State, said she thinks it is a good idea for college recruiters to use more data to target messages to the right students. But Wordlaw, who is from Starkville, where Mississippi State is based, says the school should not discriminate against students based on their income.

“They’re choosing those who are a part of the upper class rather than middle or lower, because they want money,” Wordlaw said. “They’re not focused on the education they are providing.”

Consulting companies may estimate a student’s financial position by checking their Zip codes against U.S. Census data for estimated household incomes in that area. Ruffalo Noel Levitz and Capture Higher Ed also buy information from third-party data brokers, which gather consumer data from public and private databases on property holders, magazine subscribers and supermarket loyalty-card members.
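The ZIP-code technique described above amounts to a simple join against census income tables: a student's household income is proxied by the median income of their ZIP code. A minimal sketch, with invented figures standing in for published U.S. Census data:

```python
# Sketch of estimating a prospect's household income from a ZIP code.
# The median-income figures below are invented placeholders; a real
# consultant would join against published U.S. Census (ACS) tables.
CENSUS_MEDIAN_INCOME = {
    "54140": 68000,    # hypothetical ZIP -> median household income ($)
    "39759": 35000,
    "22030": 110000,
}

def estimate_income(zip_code: str, default: int = 60000) -> int:
    """Proxy a student's household income with their ZIP's median income."""
    return CENSUS_MEDIAN_INCOME.get(zip_code, default)

print(estimate_income("22030"))  # 110000
```

Note what the proxy gets wrong: every student in a ZIP code inherits the same estimate, which is one reason the practice can misjudge low-income students in affluent areas and vice versa.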

Some schools say data analysis can help them find students who might not have applied in the first place. George Mason University, in Northern Virginia, uses data analysis tools to look for nontraditional prospects who might have working-class parents or be the first in their family to go to college, said David Burge, the school’s vice president for enrollment management.

Consulting companies woo college officials

As they pursue student data, colleges have embraced an industry of consultants.

Hundreds of school administrators filled the ballroom of a Nashville convention center in late July for a keynote speech by Sumit Nijhawan, a former tech executive who became CEO of Ruffalo Noel Levitz last year. He paced before a large screen and discussed how data is helping colleges tailor their pitch to individual students, similar to how tech companies such as Spotify and Netflix surface music and videos based on the user, he said. “Usually the solution to problems is lurking somewhere around in data,” Nijhawan told the crowd. “And there’s a lot of data in higher ed — no doubt about that.”

The three-day conference, replete with lunch buffets, PowerPoint presentations and free coffee mugs with company logos, was an example of how consulting companies are trying to win over school officials as they negotiate for larger contracts and more access to student data.

At least 30 admissions officers have taken part in Ruffalo Noel Levitz’s associate consulting program over the past decade, according to interviews and records posted on the company’s website. Emails reviewed by The Post show the program is helping Ruffalo Noel Levitz build closer ties to campus decision-makers.

Cecilia Castellano, vice provost of strategic enrollment planning at Bowling Green State University in Ohio, became an associate consultant for Ruffalo Noel Levitz around the same time her school signed a three-year, $48,000 contract, which was obtained by The Post through a public-records request. Castellano, who was listed as the “primary contact” on that business deal in October 2016, received emails from Ruffalo Noel Levitz a few days later, asking her to sign up for a “new associate training workshop” later in the year.

In an email to Castellano the same year, Ruffalo Noel Levitz asked her to help pitch a $13,590-per-person certification program to potential customers. “Please encourage the teams on your client campuses to consider the program,” the email said.

Dave Kielmeyer, a university spokesman, said Castellano attended the consultant training in 2016 and has since done two paid consulting projects for the company. She got approval from the school, which did not see the work as a conflict, Kielmeyer said. Castellano “has a role in hiring vendors,” he said, but the school’s provost or chief financial officer must approve consulting contracts.

Admissions officers at Vanderbilt, Creighton and Marquette universities say that they have disclosed their consulting roles with their colleges and are careful not to work with competing schools. Nijhawan, the Ruffalo Noel Levitz CEO, said in an interview that the program is aimed at helping school administrators “share knowledge across the industry.”

Matching ‘cookies’ to student identities

Some of the same technologies that big companies use to track users and show ads to consumers are gaining traction in college admissions. One example is Capture Higher Ed’s behavioral tracking service, which relies on cookies to record every click students make when they visit a university website.

Each visitor to the university site gets a cookie, which sends Capture information including that person’s Internet protocol address, the type of computer and browser they are using, what time of day they visited the site and which pages within the site they clicked on, according to Patrick Jackson, chief technology officer for digital privacy firm Disconnect, who reviewed college websites on behalf of The Post.

Every time that person returns to the site, Capture learns more information about them, such as their interest in athletics or the amount of time they spend on financial aid pages, according to promotional videos on the company’s website.

Initially, the cookies identify each visitor by the IP address, a unique code associated with a computer’s Internet connection, but Capture also offers software tools to match the cookie data with people’s real identities, according to the company’s promotional videos. Colleges do this by sending marketing emails to thousands of prospective students, inviting them to click on a hyperlink inside the message for more information about a particular topic, according to the videos.

When a student clicks on the link, Capture learns which email address is associated with which IP address, connecting the student’s real identity to the college’s snapshot of the student’s Web browsing history, Capture executives said in one of the videos.
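The link-matching technique the videos describe can be sketched as follows: each outgoing marketing email carries a unique token in its links, and the click handler ties that token's email address to the visitor's anonymous tracking cookie. All names and the in-memory stores below are illustrative assumptions, not Capture Higher Ed's actual code.

```python
# Sketch of tying an anonymous tracking cookie to an email identity via
# per-recipient email links, as described in the promotional videos.
import secrets

token_to_email = {}   # token embedded in each outgoing email link
cookie_to_email = {}  # resolved identities, keyed by tracking-cookie ID

def make_tracking_link(email: str, base="https://example.edu/visit") -> str:
    """Generate a link whose token uniquely identifies this recipient."""
    token = secrets.token_urlsafe(8)
    token_to_email[token] = email
    return f"{base}?t={token}"

def handle_click(token: str, cookie_id: str) -> None:
    """Called when a recipient clicks; links their cookie to their email."""
    if token in token_to_email:
        cookie_to_email[cookie_id] = token_to_email[token]

link = make_tracking_link("student@example.com")
token = link.split("t=")[1]
handle_click(token, cookie_id="cookie-abc123")
print(cookie_to_email["cookie-abc123"])  # student@example.com
```

Once that mapping exists, every page view already logged against the cookie, and every future one, attaches to a named student, which is what makes a single click on a marketing email so consequential.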

“We are embedding links in every email,” Billy Pierce, then director of undergraduate admission at the University of Toledo, a Capture customer, said onstage at a college admissions conference in 2016. “You want more of the identified visitors coming to your website because those are the kids that you have their name, their address, their email, sometimes their phone number — any information you have in your system now gets tied to their behavior,” Pierce said at the conference, a video of which was posted to YouTube.

Meghan Cunningham, a spokeswoman for the University of Toledo, said the school uses Capture’s software code on its website and in some — not all — of its marketing emails in an effort to give students information relevant to them. In an email, Pierce added that students choose to give their names and contact information to the school.

Admissions officers say behavioral tracking helps them serve students in the application process. When a college sees that a qualified student is serious about applying based on the student’s Web behavior, it can dedicate more staffers to follow up.

“An admissions counselor may only have an hour in a given day to make contact with prospective students,” Chrissy Holliday, vice president of enrollment at Colorado State University at Pueblo — a Capture Higher Ed client — said in an email. “The web data allows the counselor to know which students are currently most engaged and might benefit most from that contact.”

But Web tracking may unfairly provide an advantage to students with better access to technology, said Bradley Shear, a Maryland lawyer who has pushed for better regulation of students’ online privacy. A low-income student may be a strong academic candidate but receive less attention from recruiters because the student does not own a smartphone or have high-speed Internet access at home, he said.

“I don’t think the algorithm should run the admissions department,” Shear said.


To: richardred who wrote (56), 10/15/2019 8:44:24 PM
From: Glenn Petersen
Interesting. I put it on my Watch List. Sitting on $49 million in cash with a market cap of approximately $62 million.



From: Glenn Petersen, 10/16/2019 11:03:41 AM
Without encryption, we will lose all privacy. This is our new battleground

The US, UK and Australia are taking on Facebook in a bid to undermine the only method that protects our personal information

Edward Snowden is a US surveillance whistleblower
The Guardian
Tue 15 Oct 2019 01.00 EDT
Last modified on Tue 15 Oct 2019 21.06 EDT

‘If internet traffic is unencrypted, any government, company, or criminal that happens to notice it can – and, in fact, does – steal a copy of it, secretly recording your information for ever.’ Photograph: Kacper Pempel/Reuters

In every country of the world, the security of computers keeps the lights on, the shelves stocked, the dams closed, and transportation running. For more than half a decade, the vulnerability of our computers and computer networks has been ranked the number one risk in the US Intelligence Community’s Worldwide Threat Assessment – that’s higher than terrorism, higher than war. Your bank balance, the local hospital’s equipment, and the 2020 US presidential election, among many, many other things, all depend on computer safety.

And yet, in the midst of the greatest computer security crisis in history, the US government, along with the governments of the UK and Australia, is attempting to undermine the only method that currently exists for reliably protecting the world’s information: encryption. Should they succeed in their quest to undermine encryption, our public infrastructure and private lives will be rendered permanently unsafe.

In the simplest terms, encryption is a method of protecting information, the primary way to keep digital communications safe. Every email you write, every keyword you type into a search box – every embarrassing thing you do online – is transmitted across an increasingly hostile internet. Earlier this month the US, alongside the UK and Australia, called on Facebook to create a “backdoor”, or fatal flaw, into its encrypted messaging apps, which would allow anyone with the key to that backdoor unlimited access to private communications. So far, Facebook has resisted this.

If internet traffic is unencrypted, any government, company, or criminal that happens to notice it can – and, in fact, does – steal a copy of it, secretly recording your information for ever. If, however, you encrypt this traffic, your information cannot be read: only those who have a special decryption key can unlock it.

I know a little about this, because for a time I operated part of the US National Security Agency’s global system of mass surveillance. In June 2013 I worked with journalists to reveal that system to a scandalised world. Without encryption I could not have written the story of how it all happened – my book Permanent Record – and got the manuscript safely across borders that I myself can’t cross. More importantly, encryption helps everyone from reporters, dissidents, activists, NGO workers and whistleblowers, to doctors, lawyers and politicians, to do their work – not just in the world’s most dangerous and repressive countries, but in every single country.

When I came forward in 2013, the US government wasn’t just passively surveilling internet traffic as it crossed the network, but had also found ways to co-opt and, at times, infiltrate the internal networks of major American tech companies. At the time, only a small fraction of web traffic was encrypted: six years later, Facebook, Google and Apple have made encryption-by-default a central part of their products, with the result that today close to 80% of web traffic is encrypted. Even the former director of US national intelligence, James Clapper, credits the revelation of mass surveillance with significantly advancing the commercial adoption of encryption. The internet is more secure as a result. Too secure, in the opinion of some governments.

Donald Trump’s attorney general, William Barr, who authorised one of the earliest mass surveillance programmes without reviewing whether it was legal, is now signalling an intention to halt – or even roll back – the progress of the last six years. WhatsApp, the messaging service owned by Facebook, already uses end-to-end encryption (E2EE): in March the company announced its intention to incorporate E2EE into its other messaging apps – Facebook Messenger and Instagram – as well. Now Barr is launching a public campaign to prevent Facebook from climbing this next rung on the ladder of digital security. This began with an open letter co-signed by Barr, UK home secretary Priti Patel, Australia’s minister for home affairs and the US secretary of homeland security, demanding Facebook abandon its encryption proposals.

If Barr’s campaign is successful, the communications of billions will remain frozen in a state of permanent insecurity: users will be vulnerable by design. And those communications will be vulnerable not only to investigators in the US, UK and Australia, but also to the intelligence agencies of China, Russia and Saudi Arabia – not to mention hackers around the world.

End-to-end encrypted communication systems are designed so that messages can be read only by the sender and their intended recipients, even if the encrypted – meaning locked – messages themselves are stored by an untrusted third party, for example, a social media company such as Facebook.

The central improvement E2EE provides over older security systems is in ensuring the keys that unlock any given message are only ever stored on the specific devices at the end-points of a communication – for example the phones of the sender or receiver of the message – rather than the middlemen who own the various internet platforms enabling it. Since E2EE keys aren’t held by these intermediary service providers, they can no longer be stolen in the event of the massive corporate data breaches that are so common today, providing an essential security benefit. In short, E2EE enables companies such as Facebook, Google or Apple to protect their users from their scrutiny: by ensuring they no longer hold the keys to our most private conversations, these corporations become less of an all-seeing eye than a blindfolded courier.
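The key property described above — decryption keys that exist only at the endpoints — can be illustrated with a toy Diffie-Hellman exchange: each side combines its own private key with the other's public value, both arrive at the same shared secret, and the relaying server only ever sees public values and ciphertext. This is a teaching sketch under toy parameters, not production cryptography; real E2EE systems use vetted protocols such as the Signal protocol.

```python
# Toy end-to-end encryption sketch: two endpoints derive a shared secret
# the relaying server never sees. The modulus is a small demo value and
# the XOR cipher is a teaching device -- do not use this for real security.
import hashlib
import secrets

P = 0xFFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1  # demo modulus
G = 2

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)          # (private, public)

def shared_key(priv, other_pub) -> bytes:
    secret = pow(other_pub, priv, P)      # same value at both endpoints
    return hashlib.sha256(str(secret).encode()).digest()

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Symmetric toy cipher: XOR with a hash-derived keystream."""
    stream = (hashlib.sha256(key + bytes([i // 32])).digest()[i % 32]
              for i in range(len(data)))
    return bytes(b ^ s for b, s in zip(data, stream))

a_priv, a_pub = keypair()                 # Alice's endpoint
b_priv, b_pub = keypair()                 # Bob's endpoint
# Only a_pub, b_pub and the ciphertext ever transit the server.
key = shared_key(a_priv, b_pub)
ciphertext = xor_cipher(key, b"meet at noon")
plaintext = xor_cipher(shared_key(b_priv, a_pub), ciphertext)
print(plaintext)  # b'meet at noon'
```

The server in the middle holds the ciphertext and both public values, yet cannot recover the message without one of the private keys, which never leave the endpoints — the "blindfolded courier" property.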

It is striking that when a company as potentially dangerous as Facebook appears to be at least publicly willing to implement technology that makes users safer by limiting its own power, it is the US government that cries foul. This is because the government would suddenly become less able to treat Facebook as a convenient trove of private lives.

To justify its opposition to encryption, the US government has, as is traditional, invoked the spectre of the web’s darkest forces. Without total access to the complete history of every person’s activity on Facebook, the government claims it would be unable to investigate terrorists, drug dealers, money launderers and the perpetrators of child abuse — bad actors who, in reality, prefer not to plan their crimes on public platforms, especially not on US-based ones that employ some of the most sophisticated automatic filters and reporting methods available.

The true explanation for why the US, UK and Australian governments want to do away with end-to-end encryption is less about public safety than it is about power: E2EE gives control to individuals and the devices they use to send, receive and encrypt communications, not to the companies and carriers that route them. This, then, would require government surveillance to become more targeted and methodical, rather than indiscriminate and universal.

What this shift jeopardises is strictly nations’ ability to spy on populations at mass scale, at least in a manner that requires little more than paperwork. By limiting the amount of personal records and intensely private communications held by companies, governments are returning to classic methods of investigation that are both effective and rights-respecting, in lieu of total surveillance. In this outcome we remain not only safe, but free.

• Edward Snowden is former CIA officer and whistleblower, and author of Permanent Record. He is president of the board of directors of the Freedom of the Press Foundation


From: Glenn Petersen, 10/20/2019 1:09:32 PM
How Photos of Your Kids Are Powering Surveillance Technology

Millions of Flickr images were sucked into a database called MegaFace. Now some of those faces may have the ability to sue.

By Kashmir Hill and Aaron Krolik
New York Times

The pictures of Chloe and Jasper Papa as kids are typically goofy fare: grinning with their parents; sticking their tongues out; costumed for Halloween. Their mother, Dominique Allman Papa, uploaded them to Flickr after joining the photo-sharing site in 2005.

None of them could have foreseen that 14 years later, those images would reside in an unprecedentedly huge facial-recognition database called MegaFace. Containing the likenesses of nearly 700,000 individuals, it has been downloaded by dozens of companies to train a new generation of face-identification algorithms, used to track protesters, surveil terrorists, spot problem gamblers and spy on the public at large.

“It’s gross and uncomfortable,” said Mx. Papa, who is now 19 and attending college in Oregon. “I wish they would have asked me first if I wanted to be part of it. I think artificial intelligence is cool and I want it to be smarter, but generally you ask people to participate in research. I learned that in high school biology.”

By law, most Americans in the database don’t need to be asked for their permission — but the Papas should have been.

As residents of Illinois, they are protected by one of the strictest state privacy laws on the books: the Biometric Information Privacy Act, a 2008 measure that imposes financial penalties for using an Illinoisan’s fingerprints or face scans without consent. Those who used the database — companies including Google, Amazon, Mitsubishi Electric, Tencent and SenseTime — appear to have been unaware of the law, and as a result may have huge financial liability, according to several lawyers and law professors familiar with the legislation.

How MegaFace was born

How did the Papas and hundreds of thousands of other people end up in the database? It’s a roundabout story.

In the infancy of facial-recognition technology, researchers developed their algorithms with subjects’ clear consent: In the 1990s, universities had volunteers come to studios to be photographed from many angles. Later, researchers turned to more aggressive and surreptitious methods to gather faces at a grander scale, tapping into surveillance cameras in coffee shops, college campuses and public spaces, and scraping photos posted online.

According to Adam Harvey, an artist who tracks the data sets, there are probably more than 200 in existence, containing tens of millions of photos of approximately one million people. (Some of the sets are derived from others, so the figures include some duplicates.) But these caches had flaws. Surveillance images are often low quality, for example, and gathering pictures from the internet tends to yield too many celebrities.

In June 2014, seeking to advance the cause of computer vision, Yahoo unveiled what it called “the largest public multimedia collection that has ever been released,” featuring 100 million photos and videos. Yahoo got the images — all of which had Creative Commons or commercial use licenses — from Flickr, a subsidiary.

The database creators said their motivation was to even the playing field in machine learning. Researchers need enormous amounts of data to train their algorithms, and workers at just a few information-rich companies — like Facebook and Google — had a big advantage over everyone else.

“We wanted to empower the research community by giving them a robust database,” said David Ayman Shamma, who was a director of research at Yahoo until 2016 and helped create the Flickr project. Users weren’t notified that their photos and videos were included, but Mr. Shamma and his team built in what they thought was a safeguard.

They didn’t distribute users’ photos directly, but rather links to the photos; that way, if a user deleted the images or made them private, they would no longer be accessible through the database.

But this safeguard was flawed. The New York Times found a security vulnerability that allows a Flickr user’s photos to be accessed even after they’ve been made private. (Scott Kinzie, a spokesman for SmugMug, which acquired Flickr from Yahoo in 2018, said the flaw “potentially impacts a very small number of our members today, and we are actively working to deploy an update as quickly as possible.” Ben MacAskill, the company’s chief operating officer, added that the Yahoo collection was created “years before our engagement with Flickr.”)

Additionally, some researchers who accessed the database simply downloaded versions of the images and then redistributed them, including a team from the University of Washington. In 2015, two of the school’s computer science professors — Ira Kemelmacher-Shlizerman and Steve Seitz — and their graduate students used the Flickr data to create MegaFace.

Containing more than four million photos of some 672,000 people, it held deep promise for testing and perfecting face-recognition algorithms.

Monitoring Uighurs and outing porn actors

Importantly to the University of Washington researchers, MegaFace included children like Chloe and Jasper Papa. Face-recognition systems tend to perform poorly on young people, but Flickr offered a chance to improve that with a bonanza of children’s faces, for the simple reason that people love posting photos of their kids online.

In 2015 and 2016, the University of Washington ran the “MegaFace Challenge,” inviting groups working on face-recognition technology to use the data set to test how well their algorithms were working.

The school asked people downloading the data to agree to use it only for “noncommercial research and educational purposes.” More than 100 organizations participated, including Google, Tencent, SenseTime and NtechLab. In all, according to a 2016 university news release, “more than 300 research groups” have worked with the database. It has been publicly cited by researchers from Amazon and, according to Mr. Harvey, Mitsubishi Electric and Philips.

Some of these companies have been criticized for the way clients have deployed their algorithms: SenseTime’s technology has been used to monitor the Uighur population in China, while NtechLab’s has been used to out pornography actors and identify strangers on the subway in Russia.

SenseTime’s chief marketing officer, June Jin, said that company researchers used the MegaFace database only for academic purposes. “Researchers have to use the same data set to ensure their results are comparable like-for-like,” Ms. Jin wrote in an email. “As MegaFace is the most widely recognized database of its kind, it has become the de facto facial-recognition training and test set for the global academic and research community.”

NtechLab spokesman Nikolay Grunin said the company deleted MegaFace after taking part in the challenge, and added that “the main build of our algorithm has never been trained on these images.” Google declined to comment.

A spokeswoman for the University of Washington declined to make MegaFace’s lead researchers available for interviews, saying they “have moved on to other projects and don’t have the time to comment on this.” Efforts to contact them individually were unsuccessful.

MegaFace’s creation was financed in part by Samsung, by a Google Faculty Research Award, and by the National Science Foundation and Intel.

In recent years, Ms. Kemelmacher-Shlizerman has sold a face-swapping image company to Facebook and advanced deep-fake technology by converting audio clips of Barack Obama into a realistic, synthetic video of him giving a speech. She is now working on a “moonshot project” at Google.

‘What the hell? That is bonkers’

MegaFace remains publicly available for download. When The New York Times recently requested access, it was granted within a minute.

MegaFace doesn’t contain people’s names, but its data is not anonymized. A spokesman for the University of Washington said researchers wanted to honor the images’ Creative Commons licenses. As a result, each photo includes a numerical identifier that links back to the original Flickr photographer’s account. In this way, The Times was able to trace many photos in the database to the people who took them.
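The article doesn’t say exactly how The Times performed that trace, but Flickr’s documented short-URL scheme illustrates how a bare numeric photo identifier resolves back to a photo page, and thus to the photographer’s account. A minimal sketch, assuming Flickr’s published base-58 alphabet; `short_url` is an illustrative helper, not code from the investigation:

```python
# Flickr's flic.kr short URLs encode the numeric photo ID in base 58,
# using this specific alphabet (no 0, O, I, or l, to avoid ambiguity).
ALPHABET = "123456789abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ"

def encode_base58(photo_id: int) -> str:
    """Encode a numeric Flickr photo ID with Flickr's base-58 alphabet."""
    digits = []
    while photo_id > 0:
        photo_id, rem = divmod(photo_id, 58)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits)) or ALPHABET[0]

def short_url(photo_id: int) -> str:
    """Build the flic.kr redirect URL that leads to the photo's page."""
    return "https://flic.kr/p/" + encode_base58(photo_id)
```

Following such a URL redirects to the original photo page, which carries the uploader’s account name — which is why an “anonymized” ID of this kind is not anonymous at all.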

“What the hell? That is bonkers,” said Nick Alt, an entrepreneur in Los Angeles, when told his pictures were in the database, including photos he took of children at a public event in Playa Vista, Calif., a decade ago.

“The reason I went to Flickr originally was that you could set the license to be noncommercial. Absolutely would I not have let my photos be used for machine-learning projects. I feel like such a schmuck for posting that picture. But I did it 13 years ago, before privacy was a thing.”

Another subject, who asked to be identified as J., is now a 15-year-old high school sophomore in Las Vegas. Photos of him as a toddler are in the MegaFace database, thanks to his uncle’s posting them to a Flickr album after a family reunion a decade ago. J. was incredulous that it wasn’t illegal to put him in the database without his permission, and he is worried about the repercussions.

Since middle school, he has been part of an Air Force Association program called CyberPatriot, which tries to steer young people with programming skills toward careers in intelligence and the military. “I’m very protective of my digital footprint because of it,” he said. “I try not to post photos of myself online. What if I decide to work for the N.S.A.?”

For J., Mr. Alt and most other Americans in the photos, there is little recourse. Privacy law is generally so permissive in the United States that companies are free to use millions of people’s faces without their knowledge to power the spread of face-recognition technology. But there is an exception.

In 2008, Illinois passed a prescient law protecting the “biometric identifiers and biometric information” of its residents. Two other states, Texas and Washington, went on to pass their own biometric privacy laws, but they aren’t as robust as the one in Illinois, which strictly forbids private entities to collect, capture, purchase or otherwise obtain a person’s biometrics — including a scan of their “face geometry” — without that person’s consent.

“Photos themselves are not covered by the Biometric Information Privacy Act, but the scan of the photos should be. The mere use of biometric data is a violation of the statute,” said Faye Jones, a law professor at the University of Illinois. “Using that in an algorithmic contest when you haven’t notified people is a violation of the law.”

Illinois residents like the Papas whose faceprints are used without their permission have the right to sue, said Ms. Jones, and are entitled to $1,000 per use, or $5,000 if the use was “reckless.” The Times attempted to measure how many people from Illinois are in the MegaFace database; one approach, using self-reported location information, suggested 6,000 individuals, and another, using geotagging metadata, indicated as many as 13,000.

Their biometrics have likely been processed by dozens of companies. According to multiple legal experts in Illinois, the combined liability could add up to more than a billion dollars, and could form the basis of a class action.
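The billion-dollar figure follows from straightforward arithmetic under the statute’s damages schedule. A back-of-the-envelope sketch; `total_liability` is a hypothetical helper, and the company counts of 24 and 100 are illustrative readings of “dozens” and the “more than 100 organizations” figure above, not precise data:

```python
# BIPA statutory damages: $1,000 per negligent violation, $5,000 per reckless one.
PER_USE = 1_000
PER_RECKLESS_USE = 5_000

def total_liability(illinois_subjects: int, companies: int, reckless: bool = False) -> int:
    """One statutory award per subject, per company that processed the faceprints."""
    rate = PER_RECKLESS_USE if reckless else PER_USE
    return illinois_subjects * companies * rate

# The Times' two estimates of Illinoisans in MegaFace, times an assumed company count:
low = total_liability(6_000, 24)     # $144 million
high = total_liability(13_000, 100)  # $1.3 billion -- past the billion-dollar mark
```

Even the conservative end runs to nine figures, which is why the legal experts quoted see class-action potential.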

“We have plenty of ambitious class-action lawyers here in Illinois,” said Jeffrey Widman, the managing partner at Fox Rothschild in Chicago. “The law’s been on the books in Illinois since 2008 but was basically ignored for a decade. I guarantee you that in 2014 or 2015, this potential liability wasn’t on anyone’s radar. But the technology has now caught up with the law.”

A $35 billion case against Facebook

It’s remarkable that the Illinois law even exists. According to Matthew Kugler, a law professor at Northwestern University who has researched the Illinois act, it was inspired by the 2007 bankruptcy of a company called Pay by Touch, which had the fingerprints of many Americans, including Illinoisans, on file; there were worries that it could sell them during its liquidation.

No one from the technology industry weighed in on the bill, according to legislative and lobbying records.

“When the law was passed, no one who is now concerned about it was thinking about the issue,” Mr. Kugler said. Silicon Valley is aware of the law now. Bloomberg News reported in April 2018 that lobbyists for Google and Facebook were trying to weaken its provisions.

More than 200 class-action lawsuits alleging misuse of residents’ biometrics have been filed in Illinois since 2015, including a $35 billion case against Facebook for using face recognition to tag people in photos. That lawsuit gained momentum in August, when the United States Court of Appeals for the Ninth Circuit rejected the company’s arguments that the people did not suffer “concrete harm.”

In recent years, technology companies have been treading more lightly in states with biometric legislation. When Google released a feature in 2018 that matched selfies to famous works of art, people in Illinois and Texas couldn’t use it. And Google’s Nest security cameras don’t offer an otherwise standard feature for recognizing familiar faces in Illinois.

“It’s creepy that you found me. I always lived with the philosophy that what I put out there was public, but I couldn’t have imagined this,” said Wendy Piersall, a publisher and City Council member in Woodstock, Ill., whose photos, along with those of her three children, were in the MegaFace database.

“We can’t use the fun art app; why are you using our kids’ faces to test your software?” she added. “My photos there are geotagged to Illinois. It’s not hard to figure out where these pictures were taken. I’m not a sue-happy person, but I would cheer someone else on to go after this.”

Privacy nihilism

Some of the Illinois lawsuits have been settled or dismissed, but most are active, and Mr. Kugler, the Northwestern law professor, noted that basic legal questions remained unanswered. It’s unclear what the legal liability would be for a company that takes photos uploaded in Illinois but processes the facial data in another state, or even another country.

“Defendants are going to be creative in searching for arguments, because no one wants to be stuck holding this expensive hot potato,” he said.

A spokesman for Amazon Web Services said its use of the data set was “compliant with B.I.P.A.,” while declining to explain how. Mario Fante, a spokesman for Philips, wrote in an email that the company “was never aware of any Illinois residents included in the above-mentioned data set.”

Victor Balta, a spokesman for the University of Washington, said, “All uses of photos in the researchers’ database are lawful. The U.W. is a public research university, not a private entity, and the Illinois law targets private entities.”

Some of the Illinoisans we found in MegaFace and contacted were indifferent about the use of their faces.

“I do know that when you upload information online, it can be used in unexpected ways, so I guess I’m not surprised,” said Chris Scheufele, a web developer in Springfield. “When you upload information to the internet and make it available for public consumption, you should expect it to be scraped.”

What about the subjects of his photos? Mr. Scheufele laughed. “I haven’t talked to my wife about it,” he said.

“Privacy nihilism” is an increasingly familiar term for giving up on trying to control data about oneself in the digital era. What happened to Chloe Papa could, depending on your perspective, argue for extreme vigilance or total resignation: Who could have possibly predicted that a snapshot of a toddler in 2005 would contribute, a decade and a half later, to the development of bleeding-edge surveillance technology?

“We have become accustomed to trading convenience for privacy, so that has dulled our senses as to what is happening with all the data gathered about us,” said Ms. Jones, the law professor. “But people are starting to wake up.”

Blacki Migliozzi contributed reporting. Design and production by Michael Beswetherick, Amanda Cordero, Jim DeMaria and Jessica White.


To: Glenn Petersen who wrote (61)10/24/2019 12:53:27 PM
From: DinoNavarre

From: DinoNavarre10/24/2019 1:16:11 PM
I wonder how companies like SAIL will fit into the Surveillance state...??? Part of DRM/TPM business.

The company offers on-premises software and cloud-based solutions, which empower organizations to govern the digital identities of employees, contractors, business partners, and other users, as well as manage their constantly changing access rights to enterprise applications and data across hybrid IT environments.

Will DRM/TPM guys morph into systems that access people's entire online/electronic history and then assign a Digital Identity Score...???


From: DinoNavarre11/1/2019 8:54:59 AM
“Using video footage, emotion recognition technology can rapidly identify criminal suspects by analysing their mental state …”


From: Glenn Petersen11/1/2019 2:23:54 PM
Motorola, known for cellphones, is fast becoming a major player in government surveillance and artificial intelligence

Motorola Solutions is among the tech firms racing to deliver new ways of monitoring the public.

Oct. 2, 2019, 3:30 AM CDT
By Jon Schuppe

Motorola Solutions has invested in powerful surveillance tools, including automated license plate readers, police body cameras equipped with appearance-detection technology and security cameras that can find people. (Erik Carter / for NBC News)

The surveillance tools have been installed in schools and public housing, deployed on roads and public transit, and worn by police officers.

They’ve been developed by an array of technology firms competing for government business.

And many are now owned by a company seeking to grab a bigger piece of a booming market.

Motorola, a brand typically associated with cellphones and police radios, has joined the race among tech firms to deliver new ways of monitoring the public.

Since 2017, the Chicago-based tech company now known as Motorola Solutions (after Motorola Inc. spun off its mobile phone business) has invested $1.7 billion to support or acquire companies that build police body cameras; train cameras to spot certain faces or behavior; sift through video for suspicious people; and track the movement of cars by their license plates. By consolidating these tools within a single corporation, and potentially combining them into a single product, Motorola Solutions is boosting its stature in the surveillance industry, and amplifying concerns about the government’s growing power to watch people almost anywhere they go.

“Your privacy is more protected when information about you is scattered among agencies and entities. When all that is unified under one roof, that sharpens the privacy issues,” said Jay Stanley, a senior policy analyst for the American Civil Liberties Union, where he researches technology’s impact on privacy. “I don’t know exactly what kind of synergies a company like Motorola Solutions might get from assembling all these pieces, but in general it’s a scary prospect.”

A Vigilant Solutions camera that scans license plates. Motorola Solutions now owns the company. (Vigilant Solutions)

Motorola Solutions did not make executives available to answer questions about its acquisitions or plans for them. The company provided a statement that described its plan to add artificial intelligence products, including object detection and unusual motion detection, to a package it sells to public safety agencies. The systems can help flag a potential trespasser or the appearance of smoke, the company said. The company emphasized that the new tools are not meant to make automatic policing decisions but to help officers decide how to act.

Stanley first noticed signs of Motorola Solutions’ new direction while walking through the exhibition hall at the International Association of Chiefs of Police annual conference two years ago. In a blog post about his visit, he included a photo of a Motorola Solutions booth promoting its new line of police body cameras alongside new ways of analyzing video.

Motorola Solutions' move into high-tech surveillance hasn't attracted much scrutiny from privacy researchers. But that is changing as the company continues to assemble powerful surveillance tools.

They include:

Police body cameras that learn what people look like: In 2017, Motorola Solutions invested in and partnered with a Boston artificial intelligence startup called Neurala to develop police body cameras that can search in real time for people or objects based on their appearances. Motorola Solutions says the partnership was limited to "early technology feasibility studies, prototypes and customer research." In July 2019, Motorola Solutions acquired WatchGuard, a Texas body-camera maker, which will help it add to its existing line of body cameras.

Surveillance cameras that can track people's movements: In March 2018, Motorola Solutions acquired Avigilon, a Canadian company that sells surveillance cameras along with software to scan footage for a particular person or vehicle. Avigilon has contracts to provide these artificial intelligence-driven applications to several school districts, including Wilson County, Tennessee; Fulton County, Georgia; and Broward County, Florida. Clients also include the New Bedford Housing Authority in Massachusetts, the Elk Grove Police Department in California and the city of Berkeley, California, according to Avigilon's promotional materials, public contract documents and interviews with local government officials.

Automated license plate readers: In January, Motorola Solutions bought VaaS International Holdings, owner of California-based Vigilant Solutions, which works with government agencies, including hundreds of police departments in the United States and Immigration and Customs Enforcement, to collect and share data on the location of cars captured by its automated license plate readers. VaaS also owns Digital Recognition Network, which shares license plate reader data with private companies, including car-repossession agents.

In an Aug. 1 conference call with stock market analysts, Motorola Solutions Chairman and CEO Greg Brown described how his company has combined these technologies into suites of products, with a particular focus on police and schools.

“We are about building platforms,” Brown said. “It’s not just video. … It’s the storage, it’s the management, it’s the analytics, it’s the machine learning and the AI that take all of that end-to-end experience and provide use cases around specific verticals to differentiate. This is a hot market.”

This strategy isn't unique to Motorola Solutions. Much bigger players in the surveillance market, including Amazon, Microsoft and IBM, have expanded their holdings in artificial intelligence companies. So have rising competitors like Axon, formerly known as Taser, which has become the nation's largest provider of police body cameras and is exploring new uses of artificial intelligence. (It established an ethics board for advice on how to develop these technologies.)

Using artificial intelligence to analyze footage from the nation’s ever-growing networks of surveillance cameras helps police agencies do their jobs more efficiently: it saves time sifting for evidence, and allows easy redaction of people’s faces from footage that is released to the public. Authorities now use facial recognition to identify suspects in all kinds of crimes, from murder to shoplifting. License plate readers have been used to recover stolen cars, solve drive-by shootings and track down serial burglars, police say.

Among the fastest-growing surveillance markets are public schools. Motorola Solutions has taken advantage of that trend, with Avigilon winning about a dozen contracts over the last two years to install state-of-the-art video networks. Buyers include Wilson County Schools in Tennessee, which outfitted two of its newest schools with cameras that can automatically track people through rooms and hallways based on their appearance.

Avigilon Appearance Search technology allows users to search for a person by selecting physical descriptions, including hair and clothing color, gender and age. (Avigilon)

“Whatever tools are available to provide a safe environment for our students and staff, I think it’s our obligation to explore,” said Mickey Hall, Wilson County Schools deputy director.

The next time the district builds a school, Hall said, it may include systems that read the license plates of all cars that drive past, and that identify visitors by their faces.

But the technology also raises the risk of the government and private corporations amassing too much power to peer into people's lives, Stanley, of the ACLU, said. Once a person is targeted for this kind of surveillance, whether they are suspected in a crime, hold unpopular political views or have been mistaken for someone else, authorities can examine where they've been and may find something incriminating, Stanley said.

“It turns scattered cameras into mass surveillance machines that allow companies and agencies to use them to rewind someone’s life,” he said.

Keith Housum, an analyst who follows Motorola Solutions at the equity research firm Northcoast Research, said its recent surveillance-related acquisitions comprise a small fraction of the company's operations but have helped boost its stock price, as investors see it as pursuing new revenue sources.

“AI is very important to their product portfolio because so much information is being gathered, so much video is being recorded, that there’s no way you can go through it with the naked eye,” Housum said, adding, “It’s where the new future is going to be.”

Dave Maass, a senior investigative researcher at the Electronic Frontier Foundation, a nonprofit organization that advocates for limits on government surveillance and has documented law enforcement’s widespread sharing of data collected by Vigilant Solutions license plate readers, said he'd like to see more local governments pass ordinances requiring deeper public scrutiny of the technology.

He also said Motorola Solutions should follow other companies, including Axon and Microsoft, in seeking ways to make sure their products are used responsibly by their customers. Axon has pledged not to outfit its body cameras with facial recognition technology, and Microsoft President Brad Smith has said his company won’t sell facial recognition to governments for use in mass surveillance.

“There’s a potential for positive change if Motorola decides to reform Vigilant as it incorporates it into its other business. However, the merging of all these technologies underlines that surveillance is big business,” Maass said in an email. “And it’s important to ensure that decisions about surveillance in our society aren’t driven by sales and profit, but by what the community actually wants and needs.”


From: Ron11/3/2019 10:04:24 AM
Porch Pirates. Surveillance and Social Media


From: Glenn Petersen11/3/2019 10:56:36 AM
h/t Old_Sparky

Joe Rogan talks to Edward Snowden. Very interesting and articulate young man. What he says ties into the deep state mess going on in the US today.
