|From: Ron||11/1/2019 5:52:12 PM|
|70% of videos watched on YouTube are reportedly from recommendation algorithms. Site by former Google employee tracks recommendations: |
Tech companies have long pitched algorithmic suggestions as giving users what they want, but there are clear downsides even beyond wasted hours online. Researchers have found evidence that recommendation algorithms used by YouTube and Amazon can amplify conspiracy theories and pseudoscience.
Guillaume Chaslot, who previously worked on recommendations at YouTube and now works to document their flaws, says those problems stem from companies designing systems primarily to maximize the time users spend on their services. It works: YouTube has said more than 70 percent of viewing time stems from recommendations. But the results aren't always pretty. "The AI is optimized to find clickbait," he says.
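Chaslot's point can be illustrated with a minimal sketch (the function, field names, and numbers below are hypothetical, not YouTube's actual system): a recommender whose only objective is predicted watch time will surface whatever holds attention, with nothing in the objective rewarding accuracy.

```python
# Hypothetical sketch of a watch-time-maximizing ranker; not YouTube's
# real system. Each candidate carries a model-predicted expected watch
# time in minutes, and the ranker simply sorts by it.
def rank_by_watch_time(candidates):
    """Return candidates sorted by predicted watch time, longest first."""
    return sorted(candidates,
                  key=lambda v: v["predicted_watch_minutes"],
                  reverse=True)

videos = [
    {"title": "Measured documentary", "predicted_watch_minutes": 4.0},
    {"title": "Outrage clickbait",    "predicted_watch_minutes": 9.5},
    {"title": "How-to tutorial",      "predicted_watch_minutes": 6.2},
]

for v in rank_by_watch_time(videos):
    print(v["title"])
# The clickbait ranks first: the objective never asks whether it is true.
```

A system optimized this way is working exactly as designed; the failure Chaslot describes is in the choice of objective, not a bug in the ranking.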
|From: Glenn Petersen||11/2/2019 9:37:18 AM|
|Russia just brought in a law to try to disconnect its internet from the rest of the world|
Published Fri, Nov 1 2019 3:30 AM EDT
Updated Fri, Nov 1 2019 6:00 AM EDT
Elizabeth Schulze @eschulze9
- Russia’s “sovereign internet” law went into effect on Friday.
- The law tightens Moscow’s control over the country’s internet infrastructure and aims to provide a way for Russia to disconnect its networks from the rest of the world.
- Experts doubt whether such a move is technically possible.
[Photo: Participants in an opposition rally in central Moscow on March 10, 2019, protest against tightening state control over the internet in Russia.]
It’s been called an online Iron Curtain.
On Friday, a controversial law went into force that enables Russia to try to disconnect its internet from the rest of the world, worrying critics who fear the measure will promote online censorship.
The Kremlin says its “sovereign internet” law, which was signed by President Vladimir Putin in May, is a security measure to protect Russia in the event of an emergency or foreign threat like a cyberattack. The law will allow Moscow to tighten control over the country’s internet by routing web traffic through state-controlled infrastructure and creating a national system of domain names.
In theory, the measure would allow Russia to operate its own internal networks that could run independently from the rest of the World Wide Web.
Experts doubt whether such a move is technically possible and say the law is, instead, an attempt by the Russian government to censor information online.
“To be able to manage the information flow in their favor, they have to have a system in place beforehand,” said Sergey Sanovich of Princeton University’s Center for Information Technology Policy in a CNBC interview.
Thousands of protesters took to the streets in Russia to protest the measure earlier this year, while human rights advocates warned the law threatens free speech and media.
“The ‘sovereign internet’ law purports to provide a legal basis for mass surveillance and allows the government to effectively enforce online existing legislation that undermines freedom of expression and privacy,” Human Rights Watch said in a blog post Thursday.
Putin has taken a series of other steps to try to curb online freedoms, such as banning encrypted messaging service Telegram, but many of those attempts have proven to be unsuccessful.
“The goal is to be able to block what they don’t want without harming the network overall,” Sanovich said.
Russia not like China
Unlike China’s Great Firewall, which was built on a tight concentration of state-run network operators, Russia allowed its internet to develop freely over the past three decades. Undoing global network connections is tricky, according to Andrew Sullivan, president and CEO of the Internet Society.
“You can think of the network connectivity like water that is trying to get to the lower ground; it’s going to keep trying to flow,” he told CNBC. “You have to do a whole lot of work to make sure that the traffic won’t flow.”
Sullivan said Russia has tried to carry out tests to block its internet in the past but the networks proved to be resilient. The new law, he said, will end up making the internet less reliable for users in Russia.
“By using this regulatory model for the internet when the internet isn’t really designed to work that way, we risk doing damage,” he said.
|From: Glenn Petersen||11/2/2019 9:56:29 AM|
|Zuckerberg’s power to hurt Trump|
Mike Allen, Sara Fischer
November 2, 2019
Top Republicans are privately worried about a new threat to President Trump’s campaign: the possibility of Facebook pulling a Twitter and banning political ads.
Why it matters: Facebook says it won't, but future regulatory pressure could change that. If Facebook were to ban — or even limit — ads, it could upend Trump’s fundraising and re-election plan, GOP officials tell Axios.
-- Trump relies heavily — much more so than Democrats — on targeted Facebook ads to shape views and raise money.
Red flag: Kara Swisher, of Recode, the super plugged-in tech writer, predicted on CNBC's "Squawk Box" that Mark Zuckerberg will ultimately buckle on allowing demonstrably false political ads on Facebook: "He's going to change his mind — 100% ... [H]e's done it before."
-- Twitter this week announced a ban on political and advocacy ads. ("Platforms give pols a free pass to lie," by Scott Rosenberg)
Why it would hurt Trump: His campaign has mastered the art of using Facebook’s precision-targeting of people to raise money, stir opposition to impeachment, move voters and even sell Trump shirts and hats.
-- Trump campaign manager Brad Parscale ridiculed the decision ("yet another attempt by the left to silence Trump and conservatives"), signaling the wicked backlash that would hit Zuckerberg.
-- The Trump campaign often uses highly emotional appeals to get clicks and engagement, which provides valuable data on would-be voters and small-dollar donors. Trump campaign communications director Tim Murtaugh told Axios: "We’ve always known that President Trump was too successful online and that Democrats would one day seek to wipe him off the Internet."
-- "That’s why we’ve invested so heavily in building up our data to allow us to communicate with millions of voters away from any third-party platforms like Facebook."
By the numbers: The Trump campaign has spent $15.7 million on Facebook ads this year, according to data from progressive advertising firm Bully Pulpit Interactive.
-- "Democrats demanding internet platforms shut down political advertising will guarantee Trump’s victory in 2020. They’re idiots."
-- The next closest Democratic spender is billionaire Tom Steyer, who has so far spent less than half of that.
-- Those numbers don't include millions of dollars of additional Facebook ad spending from outside groups. The conservative non-profit Judicial Watch, for example, has spent $2.5 million on issue ads since the beginning of the year.
|To: Glenn Petersen who wrote (6562)||11/4/2019 3:38:20 PM|
|From: Glenn Petersen|
|The Internet Archive Is Making Wikipedia More Reliable|
The operator of the Wayback Machine allows Wikipedia's users to check citations from books as well as the web.
November 3, 2019
Wikipedia is the arbiter of truth on the internet. It's what settles arguments at bars. It supplies answers for the information snippets you see on your Google or Bing search results. It's the first stop for nearly everyone doing online research.
The reason people rely on Wikipedia, despite its imperfections, is that every claim is supposed to have citations. Any sentence that isn't backed up with a credible source risks being slapped with the dreaded "citation needed" label. Anyone can check out those citations to learn more about a subject, or verify that those sources actually say what a particular Wikipedia entry claims they do—that is, if you can find those sources.
It's easy enough when the sources are online. But many Wikipedia articles rely on good old-fashioned books. The entry on Martin Luther King Jr., for example, cites 66 different books. Until recently, if you wanted to verify that those books say what the article says they say, or if you just wanted to read the cited material, you'd need to track down a copy of the book.
Now, thanks to a new initiative by the Internet Archive, you can click the name of the book and see a two-page preview of the cited work, so long as the citation specifies a page number. You can also borrow a digital copy of the book for two weeks, so long as no one else has checked it out, much the same way you'd borrow a book from your local library. (Some groups of authors and publishers have challenged the archive's practice of allowing users to borrow unauthorized scanned books. The Internet Archive says it seeks to widen access to books in “balanced and respectful ways.”)
So far the Internet Archive has turned 130,000 references in Wikipedia entries in various languages into direct links to 50,000 books that the organization has scanned and made available to the public. The organization eventually hopes to allow users to view and borrow every book cited by Wikipedia, with the ultimate goal being to digitize every book ever published.
“Our goal is to be a library that’s useful and reachable by more people,” says Mark Graham, director of the Internet Archive's Wayback Machine service.
If successful, the Internet Archive's project would be a boon to students, journalists, or anyone who wants to check the references of a Wikipedia entry. Google Books also has a massive collection of digitized print books, but it tends to only show small snippets of a text.
"I've tried to verify Wikipedia pages by searching blurbs in Google Books but it's an unpredictable link, and you often don't have enough surrounding context to evaluate the use," says Mike Caulfield, a digital literacy expert and director of blended and networked learning at Washington State University Vancouver. "The ability to read a page or two of context around a quote is crucial to both editors trying to protect the integrity of articles, and to readers who need to get to that next step of verification."
You could, of course, verify the information the traditional way by tracking down a physical copy of a book. But students working late into the night on term papers, or reporters on tight deadlines, might not have time to order a book on Amazon or wait for a library book to become available. In other cases, books might be hard to come by. The Wikipedia entry on the internment of Japanese-Americans during World War II, for example, cites hard-to-find titles, says Internet Archive director of partnerships Wendy Hanamura. But thanks to the Internet Archive's Digital Library of Japanese-American Incarceration, created with the Seattle-based organization Densho, many of those rare books are now available online.
The Internet Archive embarked on its effort to weave digital books into Wikipedia after the 2016 election. "No matter who you wanted to be president, I would say almost everyone would agree the whole process was a train wreck," Internet Archive founder Brewster Kahle said in a speech in San Francisco last week. From fake news and inauthentic social media campaigns waged by foreign nations to concerns about voting systems themselves being rigged, there were plenty of ways that technology and information systems failed the public. So Kahle convened a group of people to discuss how to improve the information ecosystem. One issue that came up was the fragility of Wikipedia citations. Books and academic journals supply some of the best, most reliable information for Wikipedia editors, but those sources frequently are either unavailable online or are behind paywalls. And even freely available internet content often disappears.
|From: Ron||11/11/2019 4:14:18 PM|
|Google’s Secret ‘Project Nightingale’ Gathers Personal Health Data on Millions of Americans|
Search giant is amassing health records from Ascension facilities in 21 states; patients not yet informed
Google is engaged with one of the country’s largest health-care systems to collect and crunch the detailed personal health information of millions of Americans across 21 states.
The initiative, code-named “Project Nightingale,” appears to be the largest in a series of efforts by Silicon Valley giants to gain access to personal health data and establish a toehold in the massive health-care industry. Amazon.com Inc., Apple Inc. and Microsoft Corp. are also aggressively pushing into health care, though they haven’t yet struck deals of this scope.
Google began the effort in secret last year with St. Louis-based Ascension, the second-largest health system in the U.S., with the data sharing accelerating since summer, internal documents show.
The data involved in Project Nightingale encompasses lab results, doctor diagnoses and hospitalization records, among other categories, and amounts to a complete health history, including patient names and dates of birth.
Neither patients nor doctors have been notified. At least 150 Google employees already have access to much of the data on tens of millions of patients, according to a person familiar with the matter and documents.
Some Ascension employees have raised questions about the way the data is being collected and shared, both from a technological and ethical perspective, according to the people familiar with the project. But privacy experts said it appeared to be permissible under federal law. That law, the Health Insurance Portability and Accountability Act of 1996, generally allows hospitals to share data with business partners without telling patients, as long as the information is used “only to help the covered entity carry out its health care functions.”
Google in this case is using the data, in part, to design new software, underpinned by advanced artificial intelligence and machine learning, that zeroes in on individual patients to suggest changes to their care. Staffers across Alphabet Inc., Google’s parent, have access to the patient information, documents show, including some employees of Google Brain, a research science division credited with some of the company’s biggest breakthroughs.
In a press release issued after the Journal’s article was published, the companies said the project is compliant with federal health law and includes robust protections for patient data.
|From: Glenn Petersen||11/16/2019 9:21:23 AM|
|Wikipedia co-founder wants to give you an alternative to Facebook and Twitter|
WT:Social will be funded by user donations, not advertising.
Christine Fisher, @cfisherwrites
Two years ago, Wikipedia co-founder Jimmy Wales launched Wikitribune, an online publication meant to combat fake news with original stories by reporters and "citizen journalists." Wikitribune never really caught on, so now, Wales is shifting gears. Wikitribune is relaunching as WT:Social, a social-networking site and news sharing platform. He hopes it will be an alternative to Facebook and Twitter.
Like those platforms, WT:Social will let users share articles. But WT:Social will be funded by donations, rather than advertising. "The business model of social media companies, of pure advertising, is problematic," Wales told the Financial Times. "It turns out the huge winner is low-quality content."
Unlike Facebook and Twitter, which use algorithms to bump posts with the most comments or likes to the top, WT:Social will show the newest links first. It may add an "upvote" button in the future.
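The difference Wales is describing can be sketched in a few lines (a toy illustration with invented posts and field names, not either platform's actual code): the same items, sorted two ways.

```python
from datetime import datetime

# Toy feed items; the data and schema are hypothetical.
posts = [
    {"text": "Quiet news analysis", "likes": 3,
     "posted": datetime(2019, 11, 16, 9, 0)},
    {"text": "Viral hot take",      "likes": 950,
     "posted": datetime(2019, 11, 15, 8, 0)},
    {"text": "Beekeeping update",   "likes": 12,
     "posted": datetime(2019, 11, 16, 11, 0)},
]

# Engagement ranking (the Facebook/Twitter-style approach):
# most-liked posts bubble to the top regardless of age.
by_engagement = sorted(posts, key=lambda p: p["likes"], reverse=True)

# Reverse-chronological feed (WT:Social's stated approach):
# newest links first, with no popularity signal at all.
by_recency = sorted(posts, key=lambda p: p["posted"], reverse=True)
```

Under engagement ranking the day-old viral post leads the feed; under recency ranking it falls behind anything posted since, which is the trade-off Wales is betting on.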
WT:Social will also support small, niche communities. Those sound wholesome now (think: beekeeping), but we've seen how small communities can fester online. WT:Social promises, "We will foster an environment where bad actors are removed because it is right, not because it suddenly affects our bottom-line."
WT:Social will be free to join, but at the moment, you either have to sign up for a waitlist, donate or invite friends. Just a month old, it already has 50,000 users, Wales told FT, adding "Obviously the ambition is not 50,000 or 500,000 but 50m and 500m."
|From: Ron||11/18/2019 12:40:36 PM|
|‘Low-Code’ Becomes High Priority as Automation Demands Soar |
CIOs are expanding the use of tools that let noncoders create applications
Chief information officers, on the hook to automate manual and repetitive business processes, are increasingly turning to tools designed to create applications quickly, without the sweat of writing and debugging lines of code.
Collectively known as “low-code,” these tools have been available in some form for decades. But they have grown more popular with information-technology staff and other departments as workplace automation grows and young, mobile-savvy people join the workforce.
With low-code, employees can quickly make apps by picking, dragging and dropping from a collection of ready-made software building blocks.
Johnson Controls International PLC, an Ireland-based industrial and technology conglomerate that makes heating, ventilation, and air conditioning systems, tapped nontech employees like engineers to create low-code dashboards that track installations, record project metrics and manage service calls, said Chief Information Officer Nancy Berce.
The company, which has about 105,000 employees across more than 100 countries, set up guardrails so the low-code apps don’t disrupt the resiliency of its central systems, she said.
“A lot of people are creating a lot of good things; how do we start to share that and make that more available to broader users? We haven’t quite figured that one out yet. That’s the next level of maturity,” Ms. Berce said.
Freeing up staff to focus on core technology issues was one of the reasons St. Luke’s University Health Network in Pennsylvania started using low-code, said CIO Chad Brisendine.
“There’s always a bigger appetite for IT than what we’re able to provide. I see this as helping meet that demand,” Mr. Brisendine said.
IT employees turned to low-code to build more than 20 applications using Microsoft Corp. tools. None of them took more than 20 hours to create.
It took eight hours to make an app that pulls information from the hospital’s systems, including a Workday Inc. platform, to track and send reminders to staff on continuing medical training, a requirement for doctors to retain their license. The author, an analyst in the IT department, didn’t know how to code, Mr. Brisendine said.
Mr. Brisendine next year plans to expand low-code training to more business units within St. Luke’s, which has about 15,000 employees.
Companies including Siemens AG, Appian Corp., Pegasystems Inc. and Salesforce.com Inc. also provide low-code tools.
Forms of low-code have been around for decades, but combining it with the use of application programming interfaces, chunks of code designed to connect systems and platforms and share data, has made it easier for those not conversant in C++ or Java to create applications with a punch, said Jason Wong, senior director at research and advisory company Gartner Inc.
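One way to picture what a low-code connector block assembles behind the drag-and-drop interface is a small API-client step like the sketch below (the payload shape and field names are invented for illustration, loosely modeled on the training-reminder app described above): fetch JSON from one system's API, reshape it, and hand rows to a dashboard widget.

```python
import json

# Hypothetical raw JSON as an HR system's API might return it;
# the endpoint's schema and field names are invented for illustration.
api_response = json.dumps({
    "employees": [
        {"name": "A. Patel", "training_due": "2019-12-01"},
        {"name": "B. Jones", "training_due": "2020-03-15"},
    ]
})

def build_reminder_rows(raw_json):
    """Reshape an API payload into (name, due date) rows for a
    dashboard widget -- roughly the glue a low-code connector
    block generates behind the scenes."""
    data = json.loads(raw_json)
    return [(e["name"], e["training_due"]) for e in data["employees"]]

rows = build_reminder_rows(api_response)
```

The point of low-code is that the user never writes this glue; picking a connector and a widget generates the equivalent, which is why pairing the tools with APIs made them useful to people not conversant in C++ or Java.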
Gartner is projecting that low-code will account for more than 65% of application development activity by 2024.
David Hoag, CIO at Chicago-based Options Clearing Corp., a central clearinghouse serving as a backstop for trades in the options market, said making low-code applications is as easy as dragging and dropping widgets.
The company used low-code to develop a visitor-registration system as part of an “app a day” program, where technology teams work with other departments to create applications to solve pressing business problems. The system, created in less than a day, registers visitors, logs arrival and departure times, captures visitor and badge information, and helps the facilities team generate reports on visitor activity.
Similar commercial software was quoted at costing between $30,000 and $50,000 a year, Mr. Hoag said.
OCC started building low-code apps in 2015 and today uses about 30 of them. Mr. Hoag sees low-code’s use spreading beyond IT.