Weblogs and Twitter
From: Ron, 11/8/2016 5:59:35 PM
Data from Twitter can be used by repressive governments to put a black bag over a citizen's head and make them disappear:
The Twitter Firehose: This is not the Twitter that Jack Dorsey Usually Talks About

bloomberg.com



From: Glenn Petersen, 3/20/2017 9:33:10 AM
The Tweet That Caused A Seizure: How Twitter Was Weaponized

When Kurt Eichenwald, a senior writer at Newsweek, publicly disclosed that he suffered from epilepsy, he hardly imagined that someone would send a tweet as an online weapon–a GIF image of a strobe light–to provoke a seizure.

Robert Glatter, MD Contributor
Forbes
March 20, 2017

Courtesy: Chris Ratcliffe, Bloomberg
__________________________________

That is exactly what happened to him late in 2016, and now the FBI has arrested John Rayne Rivello, accusing him of sending the tweet containing the potentially deadly file. The FBI has subsequently charged him with criminal cyberstalking with the intent to kill or cause bodily harm.

While we have long known about the profound psychological effects of online bullying, this case illustrates how tweets can be weaponized, leading to bodily harm or even death.

Such electronic transmission of a deadly image is akin to sending pipe bombs or toxic substances such as anthrax via the US postal service: the net effect can be bodily harm or death.

Photosensitivity Epilepsy

Patients with photosensitivity epilepsy can have seizures triggered by flashing lights, as well as by bold, contrasting visual patterns such as stripes or checks. Playing video games to excess can also trigger a seizure. There appears to be a genetic component to developing this type of epilepsy, which can be treated with anti-epileptic medications.

Patients with photosensitivity epilepsy generally experience a generalized tonic-clonic seizure, more commonly known as a convulsive seizure, as opposed to a seizure involving only staring and no muscular contraction. During this time, there is loss of consciousness, jerking of the muscles as they stiffen and relax, abnormal breathing, tongue biting, and urinary incontinence.

As the seizure abates, the muscles relax and there is a gradual return of consciousness. Upon awakening, a person often is tired, sluggish, and sleepy, does not remember the event, and may experience a headache with generalized muscle aches from the muscle contractions.

Approximately 1 in 100 people in the U.S. suffers from epilepsy, and an estimated 3-5% of them have photosensitivity epilepsy. Children and teens are more apt to have photosensitivity epilepsy, with girls more often affected. Yet boys actually have more seizures, because they typically spend more time playing video games, a known and frequent trigger.

If a seizure is prolonged, it can starve brain cells of necessary oxygen, leading to cell death and injury, with potential long-term neurologic consequences: impaired cognition and speech, and weakness. Bodily injury as a result of seizures may include blunt head trauma, lacerations, neck injuries, and joint dislocations.

A seizure develops when there is disordered, chaotic electrical activity in the brain; epilepsy is defined as a condition of recurrent seizures.

forbes.com



From: Glenn Petersen, 4/1/2017 9:35:44 AM
Inside Twitter’s Obsessive Quest To Ditch The Egg

An icon that came to represent Twitter’s dark side is giving way to one designed not to turn anyone off, without looking like something you’d want to stick with.

By Harry McCracken

Fast Company
03.31.17 | 12:00 pm

Once upon a time, the designers responsible for Twitter’s look and feel reveled in the service’s association with birds. For instance, they not only stuck their winged mascot–known at the time as “Larry the Bird”–into the interface itself, but gave him a field of clouds along the top edge to soar through. And when they unveiled a major redesign in 2010, they provided every new user with a default profile picture that depicted fresh beginnings in a decidedly Twitter-esque way: as an egg.

The idea was that “eventually you’d crack out of an egg and become an amazing Twitter user,” says senior manager of product design Bryan Haggerty, who worked on the project and recalls toying with the idea of even showing the hatching in progress.

A lot has changed since the Twitter egg debuted almost seven years ago. For one thing, the company’s design philosophy has evolved. Quirky is out; straightforward is in. Nowadays, “the playfulness of Twitter is in the content our users are creating, versus how much the brand steps forward in the UI,” says product designer Jen Cotton.

More significantly, the egg has taken on cultural associations that nobody could have anticipated in 2010. Rather than suggesting the promise of new life, it’s become universal shorthand for Twitter’s least desirable accounts: trolls (and bots) engaged in various forms of harassment and spam, created by people so eager to wreak anonymous havoc that they can’t be bothered to upload a portrait image.

The egg’s unsavory reputation has been hard on Twitter’s image. It also hasn’t done any favors for users who stuck with the default avatar out of innocence rather than malevolence. Some members have grown emotionally attached to their eggs or want to maintain a low profile; others simply haven’t gotten around to changing them, or have had trouble figuring out how to do so. (Uploading a profile photo is enough of a stumbling block for newbies that Twitter removed the step from the initial sign-up procedure last year.)

“These regular users would be using a troll’s clothing in some ways, not realizing that they probably should be changing that,” says Haggerty.



A Twitter egg (with one of its seven background-color variants) and the new image that will represent a user who hasn’t chosen a profile picture.

Starting today, however, the egg is history. Twitter is dumping the tarnished icon for a new default profile picture–a blobby silhouette of a person’s head and shoulders, intentionally designed to represent a human without being concrete about gender, race, or any other characteristic. Everyone who’s been an egg until now, whatever their rationale, will automatically switch over.

Now, if Twitter ditched the egg in isolation, critics might reasonably take the move as an unserious response to the serious problem behind the visual. Troublemakers, after all, can hide behind the new human silhouette just as freely as they’ve used the egg as a cloak of anonymity. But the timing of the new image isn’t random. Twitter chose to roll it out only after taking multiple steps in recent months to deal with its abuse issue. The company has implemented technology to make it harder for suspended users to open new accounts, and to prevent abusive tweets from spreading virally. It also lets members hide tweets sent by people who are using the default profile picture–an option that will remain useful even though that default is no longer an egg.

“Our safety team has done a lot of really meaningful work in this space,” says Cotton. The new default profile picture “is one effort that design can play a part in.”

Unlike the egg, which was originally meant to be eye-catching and appealing–it even came with a variety of candy-colored backgrounds!–the new avatar aims to be an anodyne placeholder that users will recognize as representing themselves, but quickly want to eliminate by uploading a personal image. “We put words to design to: generic, universal, serious, inclusive, unbranded, and temporary,” says product designer Jeremy Reiss. “An empty state, essentially, is what we wanted it to be,” adds Haggerty.

Designing something to look generic and temporary, it turned out, was surprisingly tricky–especially given that Twitter wanted to be generic and temporary in a way that would make sense to almost anybody. Though the amount of real estate the default picture consumes is tiny, the design team spent about a month figuring out how best to use it.



The default profile pictures that predated the egg.

How Do You Represent Nothingness?

In its early years, Twitter cycled through a variety of images as the default profile picture. At first, it showed a clip-art drawing of a man with a briefcase–a decision that’s less mysterious if you remember that even the beloved Twitter Fail Whale originated as a stock image by artist Yiying Lu. Briefcase guy eventually gave way to a googley-eyed emoticon-like face, who was displaced by Larry the Bird himself. And then the egg.



Some of the graphic directions that Twitter explored as egg replacements.

None of these past works of design proved instructive when the egg-replacement project kicked off. The Twitter design team considered a variety of ways to convey the idea of a default image, says Reiss: “We looked at what other services did, what people expect.” A silhouette of a person was one obvious early contender, but so were tiny drawings of landscapes and patterns that didn’t depict anything in particular.

After pondering its options, the team came back to the silhouetted head as the most logical choice: Twitter, after all, is about people. And then the real work began.



The folks responsible for the redesign spent time pondering precedents such as the “man” and “woman” symbols used on restroom signs, which mostly served to show the pitfalls of grinding human beings down to iconic representation. The male symbol looked like the default; the female one, decked out in a billowing dress that made her shoulders seem narrower than they actually were, came off as “other.”

Twitter isn’t the only social network to wrestle with these sorts of issues: Facebook’s Caitlin Winner has written about her redesign of the company’s “friends” icon, which previously depicted a helmet-haired female lurking behind a male with a pronounced cowlick. But Twitter’s challenge was all the more thorny because the network doesn’t require members to disclose their gender. Rather than creating an avatar that was recognizably female or male, the company needed one that could represent absolutely anyone who might sign up for the service.

Instead of defaulting to the perfectly spherical head of a restroom-signage figure, the designers began playing with other approaches. They gravitated toward a gumdrop-like shape and found it had Rorschach Test-like qualities. “The second you start playing with head shape, you start thinking, ‘Oh, this might not just be a single gender,’” says Cotton. “Is that a man with a beard? Is that a woman with a bob?” Rounding off the shoulders, they found, also helped them create a symbol for “human being” that wasn’t freighted with any specific characteristics.



From round head to gumdrop head.

Color themes were another matter of debate. Instead of the egg’s expressive backgrounds, the design team wanted something utilitarian. It also couldn’t be interpreted as indicating a particular race. (Even emoji with screaming-yellow skin can seemingly depict a caucasian person.) The scheme the company settled on–a dark gray figure on a light gray background–had the bonus virtue of being easily discernible by users with impaired vision, accessibility being a current Twitter initiative.

Vague, Mundane, and Noticeable

The gumdrop-headed human passed one test when a variety of Twitter employees outside the design department split almost 50/50 on whether it was male or female. Popped into mockups of the app in place of the egg, it was even more effective. “The eggs were all these vibrant colors, and you didn’t pick up that something was missing,” says Haggerty. “When we put [the new image] in there, it really highlighted the absence: ‘Oh, this person doesn’t have a profile pic.’ Or ‘Oh, I probably should put my picture on here. I don’t look like I’m actually on this platform.’”

When Twitter showed me the final design of the new portrait picture, it occurred to me that the symbol’s head–an oval that’s pointier on the top than the bottom–could be construed as evoking the egg that it’s replacing. When I asked Haggerty, Reiss, and Cotton about that, they told me in startled unison that any resemblance was purely coincidental. Like I said, it’s a Rorschach Test.



Twitter’s 2010 egg designs, and the revised 2014 version, which reduced the color variety and eliminated the 3-D shadow effect.

The amount of care that Twitter’s designers put into finessing the new default picture may seem extreme, but they did so in hopes that they could build something to last. “We want to put this out there, and we also don’t want to have to come back and change it in a year or two years,” says Haggerty. “We want it to have longevity.”

Which is not to say that they want it to be anywhere near as recognizable and pervasive an element of the service as the egg has been. In fact, as part of the new image’s arrival, Twitter is launching a campaign to encourage members to get rid of it. “We’ll be prompting people who do have eggs to upload a picture of themselves, to show their best selves,” Cotton explains. In other words: The less we see of this new profile picture, the bigger a success it will be.

fastcodesign.com



From: Glenn Petersen, 5/18/2017 11:55:52 AM
What you should know about Twitter’s latest privacy policy update

As it begins to store people’s off-Twitter browsing data for longer, the company adds new data controls and ends Do Not Track support.

Tim Peterson
Marketing Land
on May 17, 2017 at 8:16 pm

When you visit a site that features a tweet button or an embedded tweet, Twitter is able to recognize that you’re on that site and use that information to target you with ads. And now it’s going to hang onto that information for a bit longer but give you more control over it.

Twitter updated its privacy policy on Wednesday so that it can use the information it collects about people’s off-Twitter web browsing for up to 30 days, as opposed to the previous 10-day maximum, according to the updated document that takes effect on June 18. The extension could help Twitter when it comes to making sure its ads are aimed at enough of the right people, which could aid its struggle to attract direct-response advertisers and reverse its advertising revenue declines.

Coinciding with the update, Twitter has also added a new section to the settings menu on its site and in its mobile apps. The new section details the information Twitter uses to target a person with ads, lets that person deselect individual interest categories, and lets them request a list of the companies that use Twitter’s Tailored Audiences option to target them with ads based on information like their email address, their Twitter handle, or whether they visited the advertiser’s site or used its mobile app.

At the same time Twitter is giving people more control over how they are targeted, it is removing support for Do Not Track, which people can use to ask every website they visit not to track their behavior in order to target them with ads. Twitter made a big deal about supporting Do Not Track in May 2012, so its reversal is a surprise — unless you’ve been following the wave of major ad-supported digital platforms opting to ignore Do Not Track requests. When Hulu announced last July that it would no longer support Do Not Track, it joined nine other major digital platforms that do not respond to these opt-out requests. Now Twitter has joined that list.

Twitter explained its change in position in an update to the Do Not Track entry on its help site. “While we had hoped that our support for Do Not Track would spur industry adoption, an industry-standard approach to Do Not Track did not materialize,” according to the company.

That’s pretty much the same reason that Hulu, Facebook, Google and others have cited for not supporting Do Not Track, though the standard is slated to become an official recommendation by the World Wide Web Consortium (W3C) in August 2017.
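
For context on the mechanics: Do Not Track is simply an HTTP request header ("DNT: 1") that a browser attaches to every request it sends; honoring the signal is entirely voluntary on the site's side, which is why adoption stalled. Here is a minimal sketch of how a server could read it (our illustration, assuming the Flask framework; this is not Twitter's code):

```python
# Minimal sketch: reading the Do Not Track signal, which browsers
# send as the HTTP request header "DNT: 1". Flask is used purely
# for illustration; honoring the header is voluntary.
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    dnt = request.headers.get("DNT")  # None if the browser sent no header
    if dnt == "1":
        # A site honoring the standard would skip behavioral targeting.
        return "DNT requested: serving untargeted ads."
    return "No DNT signal: ads may be personalized."
```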

While Twitter will no longer support Do Not Track once its new privacy policy takes effect on June 18, the company still offers options for people to disable ads targeted based on information collected off Twitter. People can pull up Twitter’s settings menu, select “Privacy and Safety,” then “Personalization and data,” and then toggle off “Personalize ads.” That menu also includes an option to prevent Twitter from seeing when a person visits a site that features a tweet button or an embedded tweet, as well as a nuclear option that additionally prevents Twitter from sharing a person’s data with other companies, from using location-based data to personalize content on Twitter, and from connecting data across the different devices a person may use to log in to Twitter.

marketingland.com



From: Glenn Petersen, 1/29/2018 9:35:22 PM
The Follower Factory

Everyone wants to be popular online.

Some even pay for it.

Inside social media’s black market.

By NICHOLAS CONFESSORE, GABRIEL J.X. DANCE, RICHARD HARRIS and MARK HANSEN
New York Times
JAN. 27, 2018

The real Jessica Rychly is a Minnesota teenager with a broad smile and wavy hair. She likes reading and the rapper Post Malone. When she goes on Facebook or Twitter, she sometimes muses about being bored or trades jokes with friends. Occasionally, like many teenagers, she posts a duck-face selfie.

But on Twitter, there is a version of Jessica that none of her friends or family would recognize. While the two Jessicas share a name, photograph and whimsical bio — “I have issues” — the other Jessica promoted accounts hawking Canadian real estate investments, cryptocurrency and a radio station in Ghana. The fake Jessica followed or retweeted accounts using Arabic and Indonesian, languages the real Jessica does not speak. While she was a 17-year-old high school senior, her fake counterpart frequently promoted graphic pornography, retweeting accounts called Squirtamania and Porno Dan.

All these accounts belong to customers of an obscure American company named Devumi that has collected millions of dollars in a shadowy global marketplace for social media fraud. Devumi sells Twitter followers and retweets to celebrities, businesses and anyone who wants to appear more popular or exert influence online. Drawing on an estimated stock of at least 3.5 million automated accounts, each sold many times over, the company has provided customers with more than 200 million Twitter followers, a New York Times investigation found.

The accounts that most resemble real people, like Ms. Rychly, reveal a kind of large-scale social identity theft. At least 55,000 of the accounts use the names, profile pictures, hometowns and other personal details of real Twitter users, including minors, according to a Times data analysis.



Jessica Rychly, whose social identity was stolen by a Twitter bot when she was in high school.
____________________

“I don’t want my picture connected to the account, nor my name,” Ms. Rychly, now 19, said. “I can’t believe that someone would even pay for it. It is just horrible.”

These accounts are counterfeit coins in the booming economy of online influence, reaching into virtually any industry where a mass audience — or the illusion of it — can be monetized. Fake accounts, deployed by governments, criminals and entrepreneurs, now infest social media networks. By some calculations, as many as 48 million of Twitter’s reported active users — nearly 15 percent — are automated accounts designed to simulate real people, though the company claims that number is far lower.

In November, Facebook disclosed to investors that it had at least twice as many fake users as it previously estimated, indicating that up to 60 million automated accounts may roam the world’s largest social media platform. These fake accounts, known as bots, can help sway advertising audiences and reshape political debates. They can defraud businesses and ruin reputations. Yet their creation and sale fall into a legal gray zone.

“The continued viability of fraudulent accounts and interactions on social media platforms — and the professionalization of these fraudulent services — is an indication that there’s still much work to do,” said Senator Mark Warner, the Virginia Democrat and ranking member of the Senate Intelligence Committee, which has been investigating the spread of fake accounts on Facebook, Twitter and other platforms.

Despite rising criticism of social media companies and growing scrutiny by elected officials, the trade in fake followers has remained largely opaque. While Twitter and other platforms prohibit buying followers, Devumi and dozens of other sites openly sell them. And social media companies, whose market value is closely tied to the number of people using their services, make their own rules about detecting and eliminating fake accounts.

Devumi’s founder, German Calas, denied that his company sold fake followers and said he knew nothing about social identities stolen from real users. “The allegations are false, and we do not have knowledge of any such activity,” Mr. Calas said in an email exchange in November.

The Times reviewed business and court records showing that Devumi has more than 200,000 customers, including reality television stars, professional athletes, comedians, TED speakers, pastors and models. In most cases, the records show, they purchased their own followers. In others, their employees, agents, public relations companies, family members or friends did the buying. For just pennies each — sometimes even less — Devumi offers Twitter followers, views on YouTube, plays on SoundCloud, the music-hosting site, and endorsements on LinkedIn, the professional-networking site.

The actor John Leguizamo has Devumi followers. So do Michael Dell, the computer billionaire, and Ray Lewis, the football commentator and former Ravens linebacker. Kathy Ireland, the onetime swimsuit model who today presides over a half-billion-dollar licensing empire, has hundreds of thousands of fake Devumi followers, as does Akbar Gbajabiamila, the host of the show “American Ninja Warrior.” Even a Twitter board member, Martha Lane Fox, has some.

At a time when Facebook, Twitter and Google are grappling with an epidemic of political manipulation and fake news, Devumi’s fake followers also serve as phantom foot soldiers in political battles online. Devumi’s customers include both avid supporters and fervent critics of President Trump, and both liberal cable pundits and a reporter at the alt-right bastion Breitbart. Randy Bryce, an ironworker seeking to unseat Representative Paul Ryan of Wisconsin, purchased Devumi followers in 2015, when he was a blogger and labor activist. Louise Linton, the wife of the Treasury secretary, Steven Mnuchin, bought followers when she was trying to gain traction as an actress.

Devumi’s products serve politicians and governments overseas, too. An editor at China’s state-run news agency, Xinhua, paid Devumi for hundreds of thousands of followers and retweets on Twitter, which the country’s government has banned but sees as a forum for issuing propaganda abroad. An adviser to Ecuador’s president, Lenín Moreno, bought tens of thousands of followers and retweets for Mr. Moreno’s campaign accounts during last year’s elections.

Kristin Binns, a Twitter spokeswoman, said the company did not typically suspend users suspected of buying bots, in part because it is difficult for the business to know who is responsible for any given purchase. Twitter would not say whether a sample of fake accounts provided by The Times — each based on a real user — violated the company’s policies against impersonation.

“We continue to fight hard to tackle any malicious automation on our platform as well as false or spam accounts,” Ms. Binns said.

Unlike some social media companies, Twitter does not require accounts to be associated with a real person. It also permits more automated access to its platform than other companies, making it easier to set up and control large numbers of accounts.
___________________________________

Three Types of Twitter Bots

A scheduled bot posts messages based on the time. The Big Ben bot tweets every hour.

Watcher bots monitor other Twitter accounts or websites and tweet when something changes. When the United States Geological Survey posts about earthquakes in the San Francisco Bay Area, the SF QuakeBot tweets the relevant information.

Amplification bots, like those sold by Devumi, follow, retweet and like tweets sent by clients who have bought their services.
______________________________________
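
To make the first category concrete, here is a minimal sketch of a scheduled bot in the spirit of the Big Ben example (our illustration, assuming the tweepy library and placeholder credentials; it is not the actual bot's code):

```python
# Minimal sketch of a scheduled bot: it posts on a fixed interval,
# like the Big Ben bot tweeting every hour. Assumes the tweepy
# library; the credentials below are placeholders.
import time
import tweepy

client = tweepy.Client(
    consumer_key="...", consumer_secret="...",
    access_token="...", access_token_secret="...",
)

while True:
    client.create_tweet(text="BONG")  # one tweet per loop iteration
    time.sleep(3600)                  # wait an hour before the next one
```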

“Social media is a virtual world that is filled with half bots, half real people,” said Rami Essaid, the founder of Distil Networks, a cybersecurity company that specializes in eradicating bot networks. “You can’t take any tweet at face value. And not everything is what it seems.”

Including, it turns out, Devumi itself.

The Influence Economy

Last year, three billion people logged on to social media networks like Facebook, WhatsApp and China’s Sina Weibo. The world’s collective yearning for connection has not only reshaped the Fortune 500 and upended the advertising industry but also created a new status marker: the number of people who follow, like or “friend” you. For some entertainers and entrepreneurs, this virtual status is a real-world currency. Follower counts on social networks help determine who will hire them, how much they are paid for bookings or endorsements, even how potential customers evaluate their businesses or products.

High follower counts are also critical for so-called influencers, a budding market of amateur tastemakers and YouTube stars where advertisers now lavish billions of dollars a year on sponsorship deals. The more people influencers reach, the more money they make. According to data collected by Captiv8, a company that connects influencers to brands, an influencer with 100,000 followers might earn an average of $2,000 for a promotional tweet, while an influencer with a million followers might earn $20,000.

Genuine fame often translates into genuine social media influence, as fans follow and like their favorite movie stars, celebrity chefs and models. But shortcuts are also available: On sites like Social Envy and DIYLikes.com, it takes little more than a credit-card number to buy a huge following on almost any social media platform. Most of these sites offer what they describe as “active” or “organic” followers, never quite stating whether real people are behind them. Once purchased, the followers can be a powerful tool.

“You see a higher follower count, or a higher retweet count, and you assume this person is important, or this tweet was well received,” said Rand Fishkin, the founder of Moz, a company that makes search engine optimization software. “As a result, you might be more likely to amplify it, to share it or to follow that person.”

Twitter and Facebook can be similarly influenced. “Social platforms are trying to recommend stuff — and they say, ‘Is the stuff we are recommending popular?’” said Julian Tempelsman, the co-founder of Smyte, a security firm that helps companies combat online abuse, bots and fraud. “Follower counts are one of the factors social media platforms use.”

Search on Google for how to buy more followers, and Devumi often turns up among the first results. Visitors are greeted by a polished website listing a Manhattan address, displaying testimonials from customers and a money-back guarantee. Best of all, Devumi claims, the company’s products are blessed by the platform for which they are selling followers. “We only use promotion techniques that are Twitter approved so your account is never at risk of getting suspended or penalized,” Devumi’s website promises.

To better understand Devumi’s business, we became a customer. In April, The Times set up a test account on Twitter and paid Devumi $225 for 25,000 followers, or about a penny each. As advertised, the first 10,000 or so looked like real people. They had pictures and full names, hometowns and often authentic-seeming biographies. One account looked like that of Ms. Rychly, the young Minnesota woman.

But on closer inspection, some of the details seemed off. The account names had extra letters or underscores, or easy-to-miss substitutions, like a lowercase “L” in place of an uppercase “I.”

The next 15,000 followers from Devumi were more obviously suspect: no profile pictures, and jumbles of letters, numbers and word fragments instead of names.
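
Substitutions like these can be caught mechanically by normalizing easily confused characters before comparing handles. A toy heuristic (ours, not the method The Times used; the handles below are hypothetical):

```python
# Toy heuristic for spotting handles that imitate a real one through
# easy-to-miss substitutions: extra underscores, or a lowercase "l"
# standing in for an uppercase "I". Not The Times's methodology.
def normalize(handle: str) -> str:
    h = handle.lower().replace("_", "")
    # Collapse characters that render near-identically in many fonts.
    for lookalike, canonical in [("l", "i"), ("1", "i"), ("0", "o")]:
        h = h.replace(lookalike, canonical)
    return h

def looks_like_imitation(candidate: str, genuine: str) -> bool:
    return candidate != genuine and normalize(candidate) == normalize(genuine)

print(looks_like_imitation("jess1ca_r", "jessicar"))  # True (hypothetical handles)
```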

In August, a Times reporter emailed Mr. Calas, asking if he would answer questions about Devumi. Mr. Calas did not respond. Twitter forbids selling or buying followers or retweets, and Devumi promises customers absolute discretion. “Your info is always kept confidential,” the company’s website reads. “Our followers look like any other followers and are always delivered naturally. The only way anyone will know is if you tell them.”

Buying Bots

But company records reviewed by The Times revealed much of what Devumi and its customers prefer to conceal.

Most of Devumi’s best-known buyers are selling products, services or themselves on social media. In interviews, their explanations varied. They bought followers because they were curious about how it worked, or felt pressure to generate high follower counts for themselves or their customers. “Everyone does it,” said the actress Deirdre Lovejoy, a Devumi customer.

While some said they believed Devumi was supplying real potential fans or customers, others acknowledged that they knew or suspected they were getting fake accounts. Several said they regretted their purchases.

“It’s fraud,” said James Cracknell, a British rower and Olympic gold medalist who bought 50,000 followers from Devumi. “People who judge by how many likes or how many followers, it’s not a healthy thing.”

Ms. Ireland has over a million followers on Twitter, which she often uses to promote companies with whom she has endorsement deals. The Wisconsin-based American Family Insurance, for example, said that the former model was one of its most influential Twitter “brand ambassadors,” celebrities who are paid to help promote products.

But in January last year, Ms. Ireland had only about 160,000 followers. The next month, an employee at the branding agency she owns, Sterling/Winters, spent about $2,000 for 300,000 more followers, according to Devumi records. The employee later made more purchases, he acknowledged in an interview. Much of Ms. Ireland’s Twitter following appears to consist of bots, a Times analysis found.

A spokeswoman said that the employee had acted without Ms. Ireland’s authorization and had been suspended after The Times asked about the purchases. “I’m sure he thought he was fulfilling his duties, but it’s not something he should have done,” said the spokeswoman, Rona Menashe.

Similarly, Ms. Lane Fox, a British e-commerce pioneer, member of Parliament and Twitter board member, blamed a “rogue employee” for a series of follower purchases spanning more than a year. She declined to name the person.

Several Devumi customers or their representatives contacted by The Times declined to comment, among them Mr. Leguizamo, whose followers were bought by an associate. Many more did not respond to repeated efforts to contact them.

A few denied making Devumi purchases. They include Ashley Knight, Mr. Lewis’s personal assistant, whose email address was listed on an order for 250,000 followers, and Eric Kaplan, a friend to Mr. Trump and motivational speaker whose personal email address was associated with eight orders. A Twitter account belonging to Paul Hollywood, the celebrity baker, was deleted after The Times emailed him with questions. Mr. Hollywood then sent a reply: “Account does not exist.”

Devumi’s Web

Many of these celebrities, business leaders, sports stars and other Twitter users bought their own followers, records show. In other cases, the purchases were made by their employees, agents, family members or other associates.

Over two years, the Democratic public relations consultant and CNN contributor Hilary Rosen bought more than a half-million fake followers from Devumi. Ms. Rosen previously spent more than a decade as head of the Recording Industry Association of America. In an interview, she described the purchases as “an experiment I did several years ago to see how it worked.” She made more than a dozen purchases of followers from 2015 to 2017, according to company records.

Other buyers said they had faced pressure from employers to generate social media followers. Marcus Holmlund, a young freelance writer, was at first thrilled when Wilhelmina, the international modeling agency, hired him to manage its social media efforts. But when Wilhelmina’s Twitter following didn’t grow fast enough, Mr. Holmlund said, a supervisor told him to buy followers or find another job. In 2015, despite misgivings, he began making monthly Devumi purchases out of his own pocket.

“I felt stuck with the threat of being fired, or worse, never working in fashion again,” said Mr. Holmlund, who left in late 2015. “Since then, I tell anyone and everyone who ever asks that it’s a total scam — it won’t boost their engagement.” (A Wilhelmina spokeswoman declined to comment.)

Several Devumi customers acknowledged that they bought bots because their careers had come to depend, in part, on the appearance of social media influence. “No one will take you seriously if you don’t have a noteworthy presence,” said Jason Schenker, an economist who specializes in economic forecasting and has purchased at least 260,000 followers.

Not surprisingly, Devumi has sold millions of followers and retweets to entertainers on the lower and middle rungs of Hollywood, such as the actor Ryan Hurst, a star of the television series “Sons of Anarchy.” In 2016 and 2017, he bought a total of 750,000 followers, about three-quarters of his current count. It cost less than $4,000, according to company records. Mr. Hurst did not respond to multiple requests for comment.

Devumi also sells bots to reality television stars, who can parlay fame into endorsement and appearance fees. Sonja Morgan, a cast member on the Bravo show “The Real Housewives of New York City,” uses her Devumi-boosted Twitter feed to promote her fashion line, a shopping app and a website that sells personalized “video shout-outs.” One former “American Idol” contestant, Clay Aiken, even paid Devumi to spread a grievance: his customer service complaint against Volvo. Devumi bots retweeted his complaint 5,000 times.

Mr. Aiken and Ms. Morgan did not respond to requests for comment.

More than a hundred self-described influencers — whose market value is even more directly linked to their follower counts on social media — have purchased Twitter followers from Devumi. Justin Blau, a popular Las Vegas-based D.J. who performs as 3LAU, acquired 50,000 followers and thousands of retweets. In an email, Mr. Blau said a former member of his management team bought them without his approval.

At least five Devumi influencer customers are also contractors for HelloSociety, an influencer agency owned by The New York Times Company. (A Times spokeswoman said the company sought to verify that the audience of each contractor was legitimate and would not do business with anyone who violated that standard.) Lucas Peterson, a freelance journalist who writes a travel column for The Times, also bought followers from Devumi.

Influencers need not be well known to rake in endorsement money. According to a recent profile in the British tabloid The Sun, two young siblings, Arabella and Jaadin Daho, earn a combined $100,000 a year as influencers, working with brands such as Amazon, Disney, Louis Vuitton and Nintendo. Arabella, who is 14, tweets under the name Amazing Arabella.

But her Twitter account — and her brother’s — are boosted by thousands of retweets purchased by their mother and manager, Shadia Daho, according to Devumi records. Ms. Daho did not respond to repeated attempts to reach her by email and through a public relations firm.

While Devumi sells millions of followers directly to celebrities and influencers, its customers also include marketing and public relations agencies, which buy followers for their own customers. Phil Pallen, a brand strategist based in Los Angeles, offers customers “growth & ad campaigns” on social media. At least a dozen times, company records show, Mr. Pallen has paid Devumi to deliver those results. Beginning in 2014, for example, he purchased tens of thousands of followers for Lori Greiner, the inventor and “Shark Tank” co-host.

Mr. Pallen at first denied buying those followers. After The Times contacted Ms. Greiner, Mr. Pallen said he had “experimented” with the company but “stopped using it long ago.” A lawyer for Ms. Greiner said she had asked him to stop after learning of the first purchases.

Still, records show, Mr. Pallen bought Ms. Greiner more Devumi followers in 2016.

Marketing consultants sometimes buy followers for themselves, too, in effect purchasing the evidence of their supposed expertise. In 2015, Jeetendr Sehdev, a former adjunct professor at the University of Southern California who calls himself “the world’s leading celebrity branding authority,” began buying hundreds of thousands of fake followers from Devumi.

He did not respond to requests for comment. But in his recent best-selling book, “The Kim Kardashian Principle: Why Shameless Sells,” he had a different explanation for his rising follower count. “My social media following exploded,” Mr. Sehdev claimed, because he had discovered the true secret to celebrity influence: “Authenticity is the key.”

Stolen and Sold

Among the followers delivered to Mr. Sehdev was Ms. Rychly — or at least, a copy of her. The fake Rychly account, created in 2014, was included in the purchase orders of hundreds of Devumi customers. It was retweeted by Mr. Schenker, the economist, and Arabella Daho, the teenage influencer. Clive Standen, star of the show “Taken,” ended up with Ms. Rychly’s stolen social identity. So did the television baker Mr. Hollywood, the French entertainer DJ Snake and Ms. Ireland. (DJ Snake’s followers were purchased by a former manager, and Mr. Standen did not respond to requests for comment.)

The fake Ms. Rychly also retweeted at least five accounts linked to a prolific American pornographer named Dan Leal, who is based in Hungary and tweets as @PornoDan. Mr. Leal, who has bought at least 150,000 followers from Devumi in recent years, is one of at least dozens of customers who work in the adult film industry or as escorts, according to a review of Devumi records.

In an email, Mr. Leal said that buying followers for his business generated more than enough new revenue to pay for the expense. He was not worried about being penalized by Twitter, Mr. Leal said. “Countless public figures, companies, music acts, etc. purchase followers,” he wrote. “If Twitter was to purge everyone who did so there would be hardly any of them on it.”

Devumi has sold at least tens of thousands of similar high-quality bots, a Times analysis found. In some cases, a single real Twitter user was transformed into hundreds of different bots, each a minute variation on the original.

Michael Symon, a celebrity chef and Devumi client, has almost a million followers.

Some followers have been identified as bots by a New York Times investigation, including Ms. Rychly’s stolen profile. Others appear to be human.

On an individual basis, bot detection can be tricky, but when bots are examined as a group, distinct patterns emerge.

The first @chefsymon follower was Corey Cova, a real person who worked with Mr. Symon in Cleveland, Ohio. Mr. Cova joined Twitter in February 2009.

The second was Zappos, the online retailer, which joined the platform in June 2007.

Over time more people followed @chefsymon.

But in early 2013 a distinct pattern emerged.

Mysterious “families” of accounts appeared — all created within a short period of time, and all following Mr. Symon almost simultaneously.

Here's @lwanttobejes, the bot impersonating Ms. Rychly.

These families contain thousands of similar accounts using stolen profiles. Patterns like these are clear evidence of bot activity.
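
The group-level signal described here can be sketched in a few lines: sort a user's followers by account-creation time and flag unusually dense runs. A rough illustration of the idea with made-up thresholds (not The Times's actual analysis code):

```python
# Rough sketch of the "family" signal: batches of follower accounts
# created within a short window of one another. Thresholds are
# invented for illustration.
from datetime import timedelta

def burst_families(creation_times, window=timedelta(hours=1), min_size=50):
    """Return runs of at least min_size accounts, each created within
    `window` of the previous account in the run."""
    if not creation_times:
        return []
    times = sorted(creation_times)
    families, current = [], [times[0]]
    for t in times[1:]:
        if t - current[-1] <= window:
            current.append(t)
        else:
            if len(current) >= min_size:
                families.append(current)
            current = [t]
    if len(current) >= min_size:
        families.append(current)
    return families
```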

Records reviewed by The Times show Mr. Symon bought 100,000 Twitter followers from Devumi in September 2014, and another 500,000 in November 2015. An earlier tranche of bots appears to have been purchased in early 2013.

“I thought it would drive traffic,” said Mr. Symon. “I thought it was going to be influencers and people in my field. It’s embarrassing.”

Many clients exhibit the same pattern:
___________________________________

Martha Lane Fox, a businesswoman and member of Britain's House of Lords, blamed a rogue employee for at least seven Devumi purchases made using Ms. Lane Fox’s email address. The biggest — 25,000 followers — was made days after she became a Twitter board member in April 2016.

Aaron Klein, a radio talk show host and the Jerusalem bureau chief for Breitbart News, bought at least 35,000 followers from Devumi, according to records. A Times analysis found that the majority of his followers were bots, as demonstrated by the unusual patterns seen here.

James Cracknell is a rowing world champion and Olympic gold medalist for Britain who made a series of Devumi purchases over 2016. Mr. Cracknell expressed regret, saying, “I don’t want anybody following me who is not interested in me.”

Hilary Rosen, a political commentator and CNN contributor, paid for over a half-million Twitter followers. Many of those accounts have since vanished, but nearly half of her followers — including the fake Jessica Rychly — also follow Mr. Symon.
_________________________

These fake accounts borrowed social identities from Twitter users in every American state and dozens of countries, from adults and minors alike, from highly active users and those who hadn’t logged in to their accounts for months or years.

Sam Dodd, a college student and aspiring filmmaker, set up his Twitter account as a high school sophomore in Maryland. Before he even graduated, his Twitter details were copied onto a bot account.

The fake account remained dormant until last year, when it suddenly began retweeting Devumi customers continuously. This summer, the fake Mr. Dodd promoted various pornographic accounts, including Mr. Leal’s Immoral Productions, as well as a link to a gambling website.

“I don’t know why they’d take my identity — I’m a 20-year-old college student,” Mr. Dodd said. “I’m not well known.” But even unknown, Mr. Dodd’s social identity has value in the influence economy. At prices posted in December, Devumi was selling high-quality followers for under two cents each. Sold to about 2,000 customers — the rough number that many Devumi bot accounts follow — his social identity could bring Devumi around $30.

The stolen social identities of Twitter users like Mr. Dodd are critical to Devumi’s brand. The high-quality bots are usually delivered to customers first, followed by millions of cheaper, low-quality bots, like sawdust mixed in with grated Parmesan.

Some of Devumi’s high-quality bots, in effect, replace an idle Twitter account — belonging to someone who stopped using the service — with a fake one. Whitney Wolfe, an executive assistant who lives in Florida, opened a Twitter account in 2008, when she was a wedding planner. By the time she stopped using it regularly in 2014, a fake account copying her personal information had been created. In recent months, it has retweeted adult film actresses, several influencers and an escort turned memoirist.

“The content — pictures of women in thongs, pictures of women’s chests — it’s not anything I want to be represented with my faith, my name, where I live,” said Ms. Wolfe, who is active in her local Southern Baptist congregation.

Other victims were still active on Twitter when Devumi-sold bots began impersonating them. Salle Ingle, a 40-year-old engineer who lives in Colorado, said she worried that a potential employer would come across the fake version of her while vetting her social media accounts.

“I’ve been applying for new jobs, and I’m really grateful that no one saw this account and thought it was me,” Ms. Ingle said. Once contacted by The Times, Ms. Ingle reported the account to Twitter, which deactivated it.

After emailing Mr. Calas last year, a Times reporter visited Devumi’s Manhattan address, listed on its website. The building has dozens of tenants, including a medical clinic and a labor union. But Devumi and its parent company, Bytion, do not appear to be among them. A spokesman for the building’s owner said neither Devumi nor Bytion had ever rented space there.

Like the followers Devumi sold, the office was an illusion.



Devumi lists a Manhattan building as its address, but the property owner said it had never rented space there. Dave Sanders for The New York Times
______________________________

Man of Mysteries

In real life, Devumi is based in a small office suite above a Mexican restaurant in West Palm Beach, Fla., overlooking an alley crowded with Dumpsters and parked cars. Mr. Calas lives a short commute away, in a penthouse apartment.

On his LinkedIn profile, Mr. Calas is described as a “serial entrepreneur,” with a long record in the tech business and an advanced degree from the Massachusetts Institute of Technology. But Mr. Calas’s persona, too, is a mixture of fact and fantasy.

Mr. Calas, who is 27, grew up in South Florida, where as a teenager he learned web design and built sites for local businesses, according to earlier versions of his personal web page available on the Internet Archive.

Eventually he taught himself techniques for search engine optimization — the art of pushing a web page higher in search results. While in high school, he began taking classes at Palm Beach State College, where he earned an associate degree in 2012, according to a school spokeswoman. Within a few years, Mr. Calas was claiming to have built dozens of online businesses serving 10 million customers, now under the Bytion umbrella.



German Calas, the founder of Devumi. His persona is a mixture of fact and fantasy.
_________________________

“I started this company with a thousand dollars in the bank, without investors, and only the burning passion for success,” Mr. Calas wrote last year on the job-listing site Glassdoor.

As Mr. Calas’s ambitions grew, so did his embroidery. A copy of his résumé posted online in 2014 claimed that he earned a physics degree from Princeton University in 2000, when he would have been about 10 years old, and a Ph.D. in computer science from M.I.T. Representatives for both schools said they had no record of Mr. Calas’s attending their institution. His current LinkedIn page says that he has a master’s degree in “international business” from M.I.T., a degree it does not offer.

According to former employees interviewed by The Times, turnover was high at Devumi, and Mr. Calas kept his operation tightly compartmentalized. Employees sometimes had little idea what their colleagues were doing, even if they were working on the same project.

The ex-employees asked for anonymity for fear of lawsuits or because they were subject to nondisclosure agreements with Mr. Calas’s companies. But their comments are echoed in reviews on Glassdoor, where some former employees said that Mr. Calas was uncommunicative and demanded that they install monitoring software on their personal devices.

Dozens of Devumi’s customer service and order fulfillment personnel are based in the Philippines, according to company records. Employing overseas contractors may have helped Mr. Calas hold down costs. But it also appears to have left him vulnerable to a kind of social identity theft himself.

Last August, Mr. Calas sued Ronwaldo Boado, a Filipino contractor who previously worked for Devumi as an assistant customer support manager. After being fired for squabbling with other members of his team, Mr. Boado took control of a Devumi email account listing more than 170,000 customer orders, Mr. Calas alleged in court papers. Then Mr. Boado created a fake Devumi. (Some details of the lawsuit and of Devumi were previously reported by the Bureau of Investigative Journalism.)

His copycat company used a similar name — DevumiBoost — and copied the design of Devumi’s website, Mr. Calas alleged. The fake Devumi even listed the same phantom Manhattan address. Over a stretch of days last July, Mr. Boado, posing as a Devumi employee, emailed hundreds of Devumi customers to inform them that their orders needed to be reprocessed on DevumiBoost. Then he impersonated the customers, too, emailing Devumi under different aliases to ask that Devumi cancel the original orders. Mr. Boado, according to Mr. Calas, was trying to steal his customers. (Mr. Boado did not respond to emails seeking a response to Mr. Calas’s claims.)

Mr. Calas’s lawsuit also revealed something else: Devumi doesn’t appear to make its own bots. Instead, the company buys them wholesale — from a thriving global market of fake social media accounts.



Devumi is actually based in a small office suite in West Palm Beach, Fla. Scott McIntyre for The New York Times
_________________________

The Social Supply Chain

Scattered around the web is an array of obscure websites where anonymous bot makers around the world connect with retailers like Devumi. While individual customers can buy from some of these bare-bones sites — Peakerr, CheapPanel and YTbot, among others — they are less user-friendly. Some, for example, do not accept credit cards, only cryptocurrencies like Bitcoin.

But each site sells followers, likes and shares in bulk, for a variety of social media platforms and in different languages. The accounts they sell may change hands repeatedly. The same account may even be available from more than one seller.

Devumi, according to one former employee, sourced bots from different bot makers depending on price, quality and reliability. On Peakerr, for example, 1,000 high-quality, English-language bots with photos cost a little more than a dollar. Devumi charges $17 for the same quantity.

The price difference has allowed Mr. Calas to build a small fortune, according to company records. In just a few years, Devumi sold about 200 million Twitter followers to at least 39,000 customers, accounting for a third of more than $6 million in sales during that period.
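
The economics are easy to verify against the figures reported here (a back-of-the-envelope check using only numbers from the article):

```python
# Back-of-the-envelope check of the figures reported above.
wholesale_per_1000 = 1.0   # Peakerr: about $1 per 1,000 high-quality bots
retail_per_1000 = 17.0     # Devumi's price for the same quantity
print(retail_per_1000 / wholesale_per_1000)  # 17x markup

followers_sold = 200_000_000
follower_revenue = 6_000_000 / 3  # "a third of more than $6 million"
print(follower_revenue / followers_sold)  # ~$0.01 per follower
# That penny-per-follower figure matches what The Times paid in its
# own test purchase ($225 for 25,000 followers).
```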

Last month, Mr. Calas asked for examples of bots The Times found that copied real users. After receiving the names of 10 accounts, Mr. Calas, who had agreed to an interview, asked for more time to analyze them. Then he stopped responding to emails.

Ms. Binns, the Twitter spokeswoman, said the company did not proactively review accounts to see if they were impersonating other users. Instead, the company’s efforts are focused on identifying and suspending accounts that violate Twitter’s spam policies. In December, for example, the company identified an average of 6.4 million suspicious accounts each week, she said.

All of the sample accounts provided by The Times violated Twitter’s anti-spam policies and were shut down, Ms. Binns said. “We take the action of suspending an account from the platform very seriously,” she said. “At the same time, we want to aggressively fight spam on the platform.”

The company also suspended Devumi’s account on Saturday after the Times article was published online.

Yet Twitter has not imposed seemingly simple safeguards that would help throttle bot manufacturers, such as requiring anyone signing up for a new account to pass an anti-spam test, as many commercial sites do. As a result, Twitter now hosts vast swaths of unused accounts, including what are probably dormant accounts controlled by bot makers.

Former employees said the company’s security team for many years was more focused on abuse by real users, including racist and sexist content and orchestrated harassment campaigns. Only recently, they said, after revelations that Russia-aligned hackers had deployed networks of Twitter bots to spread divisive content and junk news, has Twitter turned more attention to weeding out fake accounts.

Leslie Miley, an engineer who worked on security and user safety at Twitter before leaving in late 2015, said, “Twitter as a social network was designed with almost no accountability.”

Some critics believe Twitter has a business incentive against weeding out bots too aggressively. Over the past two years, the company has struggled to generate the user growth seen by rivals like Facebook and Snapchat. And outside researchers have disputed the company’s estimates for how many of its active users are actually bots.
___________________________________

The Evolution of Twitter’s Timeline

September 2013 Twitter does not have retweet or favorite buttons visible seven years after its launch.

October 2013 Shortly thereafter, icons for replies, retweets and favorites are introduced. They do not display counts for those actions, however.

February 2014 In early 2014, Twitter begins displaying the number of replies, retweets and favorites for each tweet in a user’s timeline.

June 2017 In June of last year, Twitter began updating the counts of replies, retweets and favorites in real time, drawing even more attention to them.
__________________________

“We’re working with completely unregulated, closed ecosystems that aren’t reporting on these things. They have a perverse incentive to let it happen,” said Mr. Essaid, the cybersecurity expert. “They want to police it to the extent it doesn’t seem obvious, but they make money off it.”

In January, after almost two years of promoting hundreds of Devumi customers, the fake Jessica Rychly account was finally flagged by Twitter’s security algorithms. It was recently suspended.

But the real Ms. Rychly may soon leave Twitter for good.

“I am probably just going to delete my Twitter account,” she said.

Reporting was contributed by Manuela Andreoni, Jeremy Ashkenas, Laurent Bastien Corbeil, Nic Dias, Elise Hansen, Michael Keller, Manuel Villa and Felipe Villamor. Research was contributed by Susan C. Beachy, Doris Burke and Alain Delaquérière.

Design and development by Danny DeBelius and Richard Harris. Art direction by Antonio De Luca and Jason Fujikuni. Illustration photo credits: Ireland, Rodin Eckenroth/Getty Images; Lewis, D Dipasupil/FilmMagic; Leguizamo, Amanda Edwards/WireImage, via Getty Images; Ingle, Morgan Rachel Levy for The New York Times.

nytimes.com



From: Glenn Petersen, 4/6/2018 3:50:15 PM
“Did We Create This Monster?”

How Twitter Turned Toxic

For years, the company’s zeal for free speech blinded it to safety concerns. Now it’s scrambling to make up for lost time.

By Austin Carr and Harry McCracken
Fast Company
April 4, 2018

Yair Rosenberg wanted to troll the trolls.

Rosenberg, a senior writer for Jewish-focused news-and-culture website Tablet Magazine, had become a leading target of anti-Semitic Twitter users during his reporting on the 2016 U.S. presidential campaign. Despite being pelted with slurs, he wasn’t overly fixated on the Nazis who had embraced the service. “For the most part I found them rather laughable and easily ignored,” he says.

But one particular type of Twitter troll did gnaw at him: the ones who posed as minorities–using stolen photos of real people–and then infiltrated high-profile conversations to spew venom. “Unsuspecting readers would see this guy who looks like an Orthodox Jew or a Muslim woman saying something basically offensive,” he explains. “So they think, Oh, Muslims are religious. Jews are religious. And they are horrifically offensive people.”

Rosenberg decided to fight back. Working with Neal Chandra, a San Francisco–based developer he’d never met, he created an automated Twitter bot called Imposter Buster. Starting in December 2016, it inserted itself into the same Twitter threads as the hoax accounts and politely exposed the trolls’ masquerade (“FYI, this account is a racist impersonating a Jew to defame Jews”).
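
The reply mechanic behind such a bot is simple to sketch against Twitter's public API (our illustration, assuming the tweepy library and a hand-curated list of hoax accounts; Imposter Buster's actual detection logic is not described in this article):

```python
# Minimal sketch of an Imposter Buster-style reply bot: when a known
# hoax account tweets, reply in the same thread to flag it. Assumes
# tweepy; the handle below is hypothetical, and the real bot's
# troll-detection logic is not public here.
import tweepy

client = tweepy.Client(
    bearer_token="...", consumer_key="...", consumer_secret="...",
    access_token="...", access_token_secret="...",
)

KNOWN_IMPOSTERS = {"hypothetical_troll_handle"}  # curated by hand

for handle in KNOWN_IMPOSTERS:
    recent = client.search_recent_tweets(query=f"from:{handle}")
    for tweet in recent.data or []:
        client.create_tweet(
            text="FYI, this account is a racist impersonating a Jew to defame Jews",
            in_reply_to_tweet_id=tweet.id,
        )
```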

Imposter Buster soon came under attack itself–by racists who reported it to Twitter for harassment. Unexpectedly, the company sided with the trolls: It suspended the bot for spammy behavior the following April. With assistance from the Anti-Defamation League, Rosenberg and Chandra got that decision reversed three days later. But their targets continued to file harassment reports, and last December Twitter once again blacklisted Imposter Buster, this time for good.

Rosenberg, who considers his effort good citizenship rather than vigilantism, still isn’t sure why Twitter found it unacceptable; he never received an explanation directly from the company. But the ruling gave racists a win by technical knockout.

For all the ways in which the Imposter Buster saga is unique, it’s also symptomatic of larger issues that have long bedeviled Twitter: abuse, the weaponizing of anonymity, bot wars, and slow-motion decision making by the people running a real-time platform. These problems have only intensified since Donald Trump became president and chose Twitter as his primary mouthpiece. The platform is now the world’s principal venue for politics and outrage, culture and conversation–the home for both #MAGA and #MeToo.

This status has helped improve the company’s fortunes. Daily usage is up a healthy 12% year over year, and Twitter reported its first-ever quarterly profit in February, capping a 12-month period during which its stock doubled. Although the company still seems unlikely ever to match Facebook’s scale and profitability, it’s not in danger of failing. The occasional cries from financial analysts for CEO Jack Dorsey to sell Twitter or from critics for him to shut it down look more and more out of step.

Despite Twitter’s more comfortable standing, Dorsey has been increasingly vocal about his service’s problems. “We are committed to making Twitter safer,” the company pledged in its February shareholder letter. On the accompanying investor call, Dorsey outlined an “information quality” initiative to improve content and accounts on the service. Monthly active users have stalled at 330 million–a fact that the company attributes in part to its ongoing pruning of spammers. Twitter’s cleanup efforts are an admission, albeit an implicit one, that the array of troublemakers who still roam the platform–the hate-mongers, fake-news purveyors, and armies of shady bots designed to influence public opinion–are impeding its ability to grow. (Twitter did not make Dorsey, or any other executive, available to be interviewed for this story. Most of the more than 60 sources we spoke to, including 44 former Twitter employees, requested anonymity.)

Though the company has taken significant steps in recent years to remove bad actors, it hasn’t shaken the lingering impression that it isn’t trying hard enough to make the service a safer space. Twitter’s response to negative incidents is often unsatisfying to its users and more than a trifle mysterious–its punishment of Rosenberg, instead of his tormentors, being a prime example. “Please can someone smart make a new website where there’s only 140 characters and no Nazis?” one user tweeted shortly after Twitter introduced 280-character tweets in November.

Twitter is not alone in wrestling with the fact that its product is being corrupted for malevolence: Facebook and Google have come under heightened scrutiny since the presidential election, as more information comes to light revealing how their platforms have been used to manipulate citizens, from Cambridge Analytica to conspiracy videos. The companies’ responses have been timid, reactive, or worse. “All of them are guilty of waiting too long to address the current problem, and all of them have a long way to go,” says Jonathon Morgan, founder of Data for Democracy, a team of technologists and data experts who tackle governmental social-impact projects.

The stakes are particularly high for Twitter, given that enabling breaking news and global discourse is key to both its user appeal and business model. Its challenges, increasingly, are the world’s.

How did Twitter get into this mess? Why is it only now addressing the malfeasance that has dogged the platform for years? “Safety got away from Twitter,” says a former VP at the company. “It was Pandora’s box. Once it’s opened, how do you put it all back in again?”

In Twitter’s early days, as the microblogging platform’s founders were figuring out its purpose, its users showed them Twitter’s power for good. As dissidents, activists, and whistle-blowers in global social movements embraced the service, free expression became the startup’s guiding principle. “Let the tweets flow,” said Alex Macgillivray, Twitter’s first general counsel, who later served as deputy CTO in the Obama administration. Internally, Twitter thought of itself as “the free-speech wing of the free-speech party.”

This ideology proved naive. “Twitter became so convinced of the virtue of its commitment to free speech that the leadership utterly misunderstood how it was being hijacked and weaponized,” says a former executive.

The first sign of trouble was spam. Child pornography, phishing attacks, and bots flooded the tweetstream. Twitter, at the time, seemed distracted by other challenges. When the company appointed Dick Costolo as CEO in October 2010, his first priority was fixing Twitter’s underlying infrastructure–the company had become synonymous with its “fail whale” server-error page, which exemplified its weak engineering foundation. Though Twitter was rocketing toward 100 million users during 2011, its antispam team included just four dedicated engineers. “Spam was incredibly embarrassing, and they built these stupidly bare-minimum tools to [fight it],” says a former senior engineer, who remembers “goddamn bot wars erupting” as fake accounts fought each other for clicks.

Twitter’s trust and safety group, responsible for safeguarding users, was run by Del Harvey, Twitter employee No. 25. She had an atypical résumé for Silicon Valley: Harvey had previously worked with Perverted Justice, a controversial volunteer group that used web chat rooms to ferret out apparent sexual predators, and partnered with NBC’s To Catch a Predator, posing as a minor to lure in pedophiles for arrest on TV. Her lack of traditional technical and policy experience made her a polarizing figure within the organization, though allies found her passion for safety issues inspiring. In the early days, “she personally responded to individual [affected] users–Del worked tirelessly,” says Macgillivray. “[She] took on some of the most complex issues that Twitter faced. We didn’t get everything right, but Del’s leadership was very often a factor when we did.”

Harvey’s view, championed by Macgillivray and other executives, was that bad speech could ultimately be defeated with more speech, a belief that echoed Supreme Court Justice Louis Brandeis’s landmark 1927 First Amendment concurrence arguing that this remedy is always preferable to “enforced silence.” Harvey occasionally used as an example the phrase “Yo bitch,” which bad actors intend as invective but others perceive as a sassy hello. Who was Twitter to decide? The marketplace of ideas would figure it out.

By 2012, spam was mutating into destructive trolling and hate speech. The few engineers in Harvey’s group had built some internal tools to enable her team to more quickly remove illegal content such as child pornography, but they weren’t prepared for the proliferation of harassment on Twitter. “Every time you build a wall, someone is going to build a higher ladder, and there are always more people outside trying to fuck you over than there are inside trying to stop them,” says a former platform engineer. That year, Australian TV personality Charlotte Dawson was subjected to a rash of vicious tweets–e.g., “go hang yourself”–after she spoke out against online abuse. Dawson attempted suicide and was hospitalized. The following summer, in the U.K., after activist Caroline Criado-Perez campaigned to get a woman’s image featured on the 10-pound note, her Twitter feed was deluged with trolls sending her 50 rape threats per hour.

The company responded by creating a dedicated button for reporting abuse within tweets, yet trolls only grew stronger on the platform. Internally, Costolo complained that the “abuse economics” were “backward.” It took just seconds to create an account to harass someone, but reporting that abuse required filling out a time-consuming form. Harvey’s team, earnest about reviewing the context of each reported tweet but lacking a large enough support staff, moved slowly. Multiple sources say it wasn’t uncommon for her group to take months to respond to backlogged abuse tickets. Because they lacked the necessary language support, team members had to rely on Google Translate for answering many non-English complaints. User support agents, who manually evaluated flagged tweets, were so overwhelmed by tickets that if banned users appealed a suspension, agents would sometimes simply release the offenders back onto the platform. “They were drowning,” says a source who worked closely with Harvey. “To this day, it’s shocking to me how bad Twitter was at safety.”

Twitter’s leadership, meanwhile, was focused on preparing for the company’s November 2013 IPO, and as a result it devoted the bulk of its engineering resources to the team overseeing user growth, which was key to Twitter’s pitch to Wall Street. Harvey didn’t have the technical support she needed to build scalable solutions to Twitter’s woes.

Toxicity on the platform intensified during this time, especially in international markets. Trolls organized to spread misogynist messages in India and anti-Semitic ones in Europe. In Latin America, bots began infecting elections. Hundreds of them spread propaganda during Brazil’s 2014 presidential race, prompting a company executive to meet with government officials; according to a source, “pretty much every member of the Brazilian house and senate asked, ‘What are you doing about bots?'” (Around this time, Russia reportedly began testing bots of its own to sway public opinion through disinformation. Twitter largely tolerated automated accounts on the platform; a knowledgeable source recalls the company once sending a cease-and-desist letter to a bot farmer, which was disregarded, a symbol of its anemic response to the issue.) Twitter’s leadership seemed deaf to cries from overseas offices. “It was such a Bay Area company,” says a former international employee, echoing a common grievance that Twitter fell victim to Silicon Valley myopia. “Whenever [an incident] happened in the U.S., it was a company-wide tragedy. We would be like, ‘But this happens to us every day!’”

It wasn’t until mid-2014, around the time that trolls forced comedian Robin Williams’s daughter, Zelda, off the service in the wake of her father’s suicide–she later returned–that Costolo had finally had enough. Costolo, who had been the victim of abuse in his own feed, lost faith in Harvey, multiple sources say. He put a different department in charge of responding to user-submitted abuse tickets, though he left Harvey in charge of setting the company’s trust and safety guidelines.

Soon, the threats morphed again: ISIS began to leverage Twitter to radicalize followers. Steeped in free-speech values, company executives struggled to respond. Once beheading videos started circulating, “there were brutal arguments with Dick,” recalls a former top executive. “He’d say, ‘You can’t show people getting killed on the platform! We should just erase it!’ And [others would argue], ‘But what about a PhD student posting a picture of the Kennedy assassination?’” They decided to allow imagery of beheadings, but only until the knife touches the neck, and, according to two sources, the company assigned support agents to search for and report beheading content–so the same team could then remove it. “It was the stupidest thing in the world,” says the source who worked closely with Harvey. “[Executives] already made the policy decision to take down the content, but they didn’t want to build the tools to [proactively] enforce the policy.” (Twitter has since purged hundreds of thousands of ISIS-related accounts, a muscular approach that has won the platform praise.)

Costolo, frustrated with the company’s meager efforts in tackling these problems, sent a company-wide memo in February 2015 complaining that he was “ashamed” by how much Twitter “sucked” at dealing with abuse. “If I could rewind the clock, I’d get more aggressive earlier,” Costolo tells Fast Company, stressing that the “blame” lies on nobody “other than the CEO at the time: me.”

“I often hear people in Silicon Valley talking about fake news and disinformation as problems we can engineer our way out of,” says Brendan Nyhan, codirector of Bright Line Watch, a group that monitors threats to democratic processes. “That’s wrong. People are looking for a solution that doesn’t exist.”

The Valley may be coming around to this understanding. Last year, Facebook and YouTube announced initiatives to expand their content-policing teams to 20,000 and 10,000 workers, respectively. Twitter, meanwhile, had just 3,317 employees across the entire company at the end of 2017, a fraction of whom are dedicated to improving “information quality.”

Putting mass quantities of human beings on the job, though, isn’t a panacea either. It introduces new issues, from personal biases to having to make complicated calls on content in a matter of seconds. “These reviewers use detailed rules designed to direct them to make consistent decisions,” says Susan Benesch, faculty associate at Harvard’s Berkman Klein Center for Internet and Society and director of the Dangerous Speech Project. “That’s a hard thing to do, especially at scale.”

Human reviewers are often to blame for overly broad purges that sweep up benign content, as when YouTube combed through extremist and gun-related videos after the Parkland shooting and deleted specific clips, and even entire channels, that shouldn’t have been removed. A YouTube spokesperson admitted to Bloomberg, “Newer members may misapply some of our policies resulting in mistaken removals.”

The sheer scale of this quality-control conundrum helps explain why Twitter frequently fails, at least initially, to remove tweets that users report for harassment–some including allusions to death or rape–even though they would appear to violate its community standards. The company also catches flak for taking action against tweets that do violate these rules but have an extraordinary context, as when it temporarily suspended actress Rose McGowan for including a private phone number in a flurry of tweets excoriating Hollywood notables in the wake of the Harvey Weinstein sexual harassment scandal. “You end up going down a slippery slope on a lot of these things,” says a former C-level Twitter executive. “‘Oh, the simple solution is X!’ That’s why you hear now, ‘Why don’t you just get rid of bots?!’ Well, lots of [legitimate media] use automated [accounts] to post headlines. Lots of these easy solutions are a lot more complex.”

Five months after his February 2015 lament, Costolo resigned from Twitter. Cofounder Jack Dorsey, who had run the company until he was fired in 2008, replaced Costolo as CEO (while retaining the same job at his payments company, Square, headquartered one block away in San Francisco). Dorsey, an English major in a land of computer scientists, had deep thoughts about Twitter’s future, but he couldn’t always articulate them in a way that translated to engineers. “I’d be shocked if you found somebody [to whom] Jack gave an extremely clear articulation of his thesis for Twitter,” says the former top executive, noting that Dorsey has described the service by using such metaphors as the Golden Gate Bridge and an electrical outlet for a toaster. Once, he gathered the San Francisco office for a meeting where he told employees he wanted to define Twitter’s mission–and proceeded to play the Beatles’ “Blackbird” as attendees listened in confused silence.

There was no doubt, though, that he believed in Twitter’s defining ethos. “Twitter stands for freedom of expression. We stand for speaking truth to power,” Dorsey tweeted on his first official day back as Twitter’s CEO, in October 2015.

By the time Dorsey’s tenure got under way, Twitter had gotten a better handle on some of the verbal pollution plaguing the service. The company’s anti-abuse operations had been taken over by Tina Bhatnagar, a no-nonsense veteran of Salesforce who had little patience for free-speech hand-wringing. Bhatnagar dramatically increased the number of outsourced support agents working for the company and was able to reduce the average response time on abuse-report tickets to just hours, though some felt the process became too much of a numbers game. “She was more like, ‘Just fucking suspend them,'” says a source who worked closely with her. If much of the company was guided by Justice Brandeis’s words, Bhatnagar represented Justice Potter Stewart’s famous quote about obscenity: “I know it when I see it.”

This ideological split was reflected in the company’s organizational hierarchy, which kept Harvey and Bhatnagar in separate parts of the company–legal and engineering, respectively–with separate managers. “They often worked on the exact same things but with very different approaches–it was just bonkers,” says a former high-level employee who felt ricocheted between the two factions. Even those seemingly on the same team didn’t always see eye to eye: According to three sources, Colin Crowell, Twitter’s VP of public policy, at one point refused to report to Harvey’s boss, general counsel Vijaya Gadde (Macgillivray’s successor), due in part to disagreements about how best to approach free-speech issues.

Contentiousness grew common: Bhatnagar’s team would want to suspend users it found abusive, only to be overruled by Gadde and Harvey. “That drove Tina crazy,” says a source familiar with the dynamic. “She’d go looking for Jack, but Jack would be at Square, so the next day he’d listen and take notes on his phone and say, ‘Let me think about it.’ Jack couldn’t make a decision without either upsetting the free-speech people or the online-safety people, so things were never resolved.”

Dorsey’s supporters argue that he wasn’t necessarily indecisive–there were simply no easy answers. Disputes that bubbled up to Dorsey were often bizarre edge cases, which meant that any decision he made would be hard to generalize to a wide range of instances. “You can have a perfectly written rule, but if it’s impossible to apply to 330 million users, it’s as good as having nothing,” says a source familiar with the company’s challenges.

Dorsey had other business demands to attend to at the time. When he returned as CEO, user growth had stalled, the stock had declined nearly 70% since its high following the IPO, the company was on track to lose more than $500 million in 2015 alone, and a number of highly regarded employees were about to leave. Although Twitter made some progress in releasing new products, including Moments and its live-video features, it struggled to refresh its core experience. In January 2016, Dorsey teased users with hints at an expansion of Twitter’s long-standing 140-character limit, but it took another 22 months to launch 280-character tweets. “Twitter was a hot mess,” says Leslie Miley, who managed the engineering group responsible for safety features until he was laid off in late 2015. “When you switch product VPs every year, it’s hard to keep a strategy in place.”

Then the U.S. presidential election arrived. All of Twitter’s warts were about to be magnified on the world stage. Twitter’s support agents, the ones reviewing flagged content and wading through the darkest muck of social media, witnessed the earliest warning signs as Donald Trump started sweeping the primaries. “We saw this radical shift,” recalls one agent from that period. Discrimination seemed more flagrant, the propaganda and bots more aggressive. Says another: “You’d remove it and it’d come back within minutes, supporting Nazis, hating Jews, [memes featuring] ovens, and oh, the frog…the green frog!” (That would be Pepe, a crudely drawn cartoon that white supremacists co-opted.)

A July 2016 troll attack on SNL and Ghostbusters star Leslie Jones, incited by alt-right provocateur Milo Yiannopoulos, proved to be a seminal moment for Twitter’s anti-harassment efforts. After Jones was bombarded with racist and sexist tweets, Dorsey met with her personally to apologize and declared an “abuse emergency” internally. The company banned Yiannopoulos. It also enhanced its muting and blocking features and introduced an opt-in tool that allows users to filter out what Twitter has determined to be “lower-quality content.” The idea was that Twitter wouldn’t be suppressing free speech–it would merely not be shoving unwanted tweets into its users’ faces.

But these efforts weren’t enough to shield users from the noxiousness of the Clinton–Trump election cycle. During the Jones attack, screenshots of fake, Photoshopped tweets purporting to show divisive things Jones had shared spread virally across the platform. This type of disinformation gambit would become a hallmark of the 2016 election and beyond, and Twitter did not appreciate the strength of this new front in the information wars.

Of the two presidential campaigns, Trump’s better understood how to use the service to amplify its candidate’s voice. When Twitter landed massive ad deals from the Republican nominee, left-leaning employees complained to the sales team that it should stop accepting Trump’s “bullshit money.”

The ongoing, unresolved disputes over what Twitter should allow on its platform continued to flare into the fall. In October, the company reneged on a $5 million deal with the Trump campaign for a custom #CrookedHillary emoji. “There was vicious [internal] debate and back-channeling to Jack,” says a source involved. “Jack was conflicted. At the eleventh hour, he pulled the plug.” Trump allies later blasted Twitter for its perceived political bias.

On November 8, employees were shocked as the election returns poured in, and the morning after Trump’s victory, Twitter’s headquarters were a ghost town. Employees had finally begun to take stock of the role their platform had played not only in Trump’s rise but in the polarization and radicalization of discourse.

“We all had this ‘holy shit’ moment,” says a product team leader at the time, adding that everyone was asking the same question: “Did we create this monster?”

In the months following Trump’s win, employees widely expected Dorsey to address Twitter’s role in the election head-on, but about a dozen sources indicate that the CEO remained mostly silent on the matter internally. “You can’t take credit for the Arab Spring without taking responsibility for Donald Trump,” says Leslie Miley, the former safety manager.

Over time, though, Dorsey’s thinking evolved, and he seems to be less ambivalent about what he’ll allow on the platform. Sources cite Trump’s controversial immigration ban and continued alt-right manipulation as influences. At the same time, Twitter began to draw greater scrutiny from the public, and the U.S. Congress, for its role in spreading disinformation.

Dorsey empowered engineering leaders Ed Ho and David Gasca to go after Twitter’s problems full bore, and in February 2017, as part of what some internally called an “abuse sprint,” the company rolled out more aggressive measures to permanently bar bad actors on the platform and better filter out potentially abusive or low-quality content. “Jack became a little bit obsessed,” says a source. “Engineering in every department was asked to stop working on whatever they were doing and focus on safety.”

Twitter’s safety operations, previously siloed, became more integrated with the consumer-product side of the company. The results have been positive. In May 2017, for example, after learning how much abuse users were being subjected to via Twitter’s direct messages feature, the team overseeing the product came up with the idea of introducing a secondary inbox to capture bad content, akin to a spam folder. “They’re starting to get things right,” says a former manager at the company, “addressing these problems as a combination of product and policy.”

During a live video Q&A Dorsey hosted in March, he was asked why trust and safety didn’t work with engineering much earlier. The CEO laughed, then admitted, “We had a lot of historical divisions within the company where we weren’t as collaborative as we could be. We’ve been recognizing where that lack of collaboration has hurt us.”

Even previous victims of Twitter abuse have recognized that the company’s new safety measures have helped. “I think Twitter is doing a better job than they get public credit for,” says Brianna Wu, the developer who became a principal target of Gamergate, the loose-knit collective of trolls whose 2014 attacks on prominent women in the gaming industry were a canary in the Twitter-harassment coal mine. “Most of the death threats I get these days are either sent to me on Facebook or through email, because Twitter has been so effective at intercepting them before I can even see them,” she adds, sounding surprisingly cheery. (Wu’s encounters with the dark side of social networking helped inspire her current campaign for a U.S. House seat in the Boston area, with online safety as one of her principal issues.)

Twitter has also been more proactive since the election in banning accounts and removing verifications, particularly of white nationalists and alt-right leaders such as Richard Spencer. (The blue check mark signifying a verified user was originally designed to confirm identity but has come to be interpreted as an endorsement.) According to three sources, Dorsey himself has personally directed some of these decisions.

Twitter began rolling out a series of policy and feature changes last October that prioritized civility and truthfulness over free-speech absolutism. For instance, while threatening murder has always been unacceptable, now even speaking of it approvingly in any context will earn users a suspension. The company has also made it more difficult to bulk-tweet misinformation.

Such crackdowns haven’t yet eliminated the service’s festering problems: After February’s Parkland mass shooting, some surviving students became targets of harassment, and Russia-linked bots reportedly spread pro-gun sentiments and disinformation. Nobody, though, can accuse Twitter of not confronting its worst elements. The pressure on Dorsey to keep this momentum going is coming from Wall Street, too: On a recent earnings call, a Goldman Sachs analyst pressed Dorsey about the company’s progress toward eliminating bots and enforcing safety policies. “Information quality,” Dorsey responded, is now Twitter’s “core job.”

This past Valentine’s Day, Senator Mark Warner entered his stately corner suite in Washington, D.C.’s Hart Senate Office Building, poured himself a Vitaminwater, and rushed into an explanation of why Silicon Valley needs to be held accountable for its role in the 2016 election. As the Democratic vice chairman of the Senate Intelligence Committee, Warner is swamped with high-profile hearings and classified briefings, but the topic is also personal for the self-described “tech guy” who made a fortune in the 1980s investing in telecoms.

Warner is coleading the committee’s investigation into Russian election interference, which has increasingly centered on the growing, unfettered power of technology giants, whom he believes need to get over their “arrogance” and fix their platforms. “One of the things that really offended me was the initial reaction from the tech companies to blow us off,” he began, leaning forward in his leather chair. “’Oh no! There’s nothing here! Don’t look!’ Only with relentless pressure did they start to come clean.”

He saved his harshest words for Twitter, which he said has dragged its feet far more than Facebook or Google. “All of Twitter’s actions were in the wake of Facebook’s,” Warner complained in his gravelly voice, his face reddening. “They’re drafting!” The company was the only one to miss the January 8 deadline for providing answers to the Intelligence Committee’s inquiries, and, making matters worse, Twitter disclosed weeks later that Kremlin-linked bots managed to generate more than 450 million impressions, a figure substantially higher than the company had previously reported. “There’s been this [excuse of], ‘Oh, well, that’s just Twitter.’ That’s not a long-term viable answer.”

Warner stated that he has had offline conversations directly with Mark Zuckerberg, but never with Dorsey. Throwing shade, Warner smiled as he suggested that the company may not be able to commit as many resources as Facebook and Google can because it has a “more complicated, less lucrative business model.”

The big question now is what government intervention might look like. Warner suggested several broad policy prescriptions, including antitrust and data privacy regulations, but the one with the greatest potential effect on Twitter and its rivals would be to make them liable for the content on their platforms. When asked if the European Union, which has been more forceful in its regulation of the technology industry, could serve as a model, the senator replied, “[I’m] glad the EU is acting. I think they’re bolder than we are.”

If the U.S. government does start taking a more activist role in overseeing social networks, it will unleash some of the same nettlesome issues that Europe is already working through. On January 1, for instance, Germany began enforcing a law known as (deep breath) Netzwerkdurchsetzungsgesetz, or NetzDG for short. Rather than establish new restrictions on hate speech, it mandates that large social networks remove material that violates the country’s existing speech laws–which are far more stringent than their U.S. equivalents–within 24 hours of being notified of its existence. “Decisions that would take months in a regular court are now [made] by social media companies in just minutes,” says Mirko Hohmann, a Berlin-based project manager for the Global Public Policy Institute.

As evidence of how this approach can create unintended outcomes, he points to an instance in which Twitter temporarily shut down the account of a German humor magazine after it tweeted satirically in the voice of Beatrix von Storch, a leader of a far-right party. “No court would have judged these tweets illegal, but a Twitter employee under pressure did,” Hohmann says. (The company apparently even deleted an old tweet by one of NetzDG’s architects, Heiko Maas, in which he called another politician an idiot.)

In the U.S., rather than wait for federal action or international guidance, state lawmakers in Maryland, New York, and Washington are already working to regulate political ads on social networks. As Warner said, the era of Silicon Valley self-policing is over.

Whether or not the federal government steps in, hardening the big social networks against abuse will involve implementing solutions which haven’t even been invented yet. “If there was a magical wand that they could wave to solve this challenge, with the substantial resources and expertise that they have, then they absolutely would,” says Graham Brookie, deputy director of the Atlantic Council’s Digital Forensic Research Lab.

Still, there are many things Twitter can do to protect its platform. Using technology to identify nefarious bots is a thorny matter, but Twitter could label all automated accounts as such, which wouldn’t hobble legitimate feeds but would make it tougher for Russian bots to pose as heartland Trump supporters.

“The issue here is not that there is automation on Twitter,” says Renée DiResta, head of policy for Data for Democracy and a founding advisor for the Center for Humane Technology. “The issue is that there are automated accounts that are trying to be treated as real people, that are acting like real people, that are manipulating people.”
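
As a toy illustration of the labeling idea, the sketch below assigns a crude automation score and appends a visible tag. Every signal, threshold, and field name here is an assumption invented for the example; Twitter has not disclosed how it detects automation, and a production system would weigh far more signals.

from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    tweets_per_day: float        # sustained posting rate
    source_is_api: bool          # posts come from an automation client
    identical_text_ratio: float  # share of tweets duplicating other accounts' tweets

def automation_score(a: Account) -> float:
    """Crude 0-to-1 score; the weights and cutoffs are placeholders."""
    score = 0.0
    if a.tweets_per_day > 100:   # humans rarely sustain this pace
        score += 0.4
    if a.source_is_api:
        score += 0.3
    score += 0.3 * a.identical_text_ratio
    return min(score, 1.0)

def label(a: Account) -> str:
    # The point of labeling rather than banning: legitimate bots (news
    # headlines, earthquake alerts) keep running, but readers can see at
    # a glance that no human is behind the account.
    return f"{a.handle} [automated]" if automation_score(a) >= 0.5 else a.handle

print(label(Account("quake_alerts", 200, True, 0.1)))  # -> quake_alerts [automated]
print(label(Account("jane_doe", 12, False, 0.0)))      # -> jane_doe

This is also why labeling would not hobble the legitimate automated feeds the former executive mentions above: the label discloses automation without suppressing it, while a bot posing as a heartland Trump supporter loses its human disguise.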

Twitter could also do more to discourage people from creating objectionable content in the first place by making its rules more visible and digestible. Susan Benesch, whose Dangerous Speech Project is a member of Twitter’s Trust and Safety Council, says she’s implored executives to raise the visibility of the “Twitter Rules” policies, which outline what you can’t say on the service. “They say, ‘Nobody reads the rules,'” she recounts. “And I say ‘That’s right. And nobody reads the Constitution, but that doesn’t mean we shouldn’t have civics classes and try to get people to read it.'”

The company could also build trust by embracing transparency as more than a buzzword, sharing more with users and marketers about how exactly Twitter works and collaborating with outside researchers. Compared to other social-media behemoths, its business model is far less reliant on using secretive algorithms to monetize its users’ data and behaviors, giving it an opportunity to be open in ways that the rest seldom are. “The way that people use Twitter, it becomes a little bit easier to see things and understand things,” says Jason Kint, CEO of publisher trade group Digital Content Next. “Whereas it’s incredibly difficult with YouTube and I’d say with Facebook it’s fairly difficult.”

Toward this more collaborative end, and inspired by research conducted by nonprofit Cortico and MIT’s Laboratory for Social Machines, Twitter announced in March that it will attempt to measure its own “conversational health.” It invited other organizations to participate in this process, and Twitter says it will reveal its first partners in July.

The effort is intriguing, but the crowdsourced initiative also sounds eerily similar to Twitter’s Trust and Safety Council, whose mission since it was convened in February 2016 has been for advocates, academics, and grassroots organizations to provide input on the company’s safety approach.

Many people who worked for Twitter want not a metric but a mea culpa. According to one source who has discussed these issues with the company’s leadership, “Their response to everything was basically, ‘Look, we hear you, but you can’t blame Twitter for what happened. If it wasn’t us, it would’ve been another medium.’ The executives didn’t own up to the fact that we are responsible, and that was one of the reasons why I quit.”

Even Senator Warner believes that before his colleagues consider legislation, the tech companies’ CEOs ought to testify before Congress. “I want them all, not just Dorsey. I want Mark and I want [Google cofounders] Sergey [Brin] and Larry [Page],” he said. “Don’t send your lawyers, don’t send the policy guys. They owe the American public an explanation.”

When Twitter debuted its new health metrics initiative, the American public seemed to finally get one, after Dorsey tweeted about Twitter, “We didn’t fully predict or understand the real-world negative consequences. We acknowledge that now.” He continued: “We aren’t proud of how people have taken advantage of our service, or our inability to address it fast enough. . . . We’ve focused most of our efforts on removing content against our terms, instead of building a systemic framework to help encourage more healthy debate, conversations, and critical thinking. This is the approach we now need.”

One week later, Dorsey continued to acknowledge past missteps during a 47-minute live video broadcast on Twitter. “We will make mistakes–I will certainly make mistakes,” he said. “I have done so in the past around this entire topic of safety, abuse, misinformation, [and] manipulation on the platform.”

The point of the live stream was to talk more about measuring discourse, and Dorsey tried to answer user-submitted questions. But the hundreds of real-time comments scrolling by on the screen illustrated the immense challenge ahead. As the video continued, his feed filled with anti-Semitic and homophobic insults, caustic complaints from users who fear Twitter is silencing their beliefs, and plaintive cries for the company to stop racism. Stroking his beard, Dorsey squinted at his phone, watching the bad speech flow as he searched for the good.

A version of this article appeared in the May 2018 issue of Fast Company magazine.

fastcompany.com



To: Glenn Petersen who wrote (1267) 7/17/2018 6:37:35 PM
From: Ron
   of 1269
 
Twitter says it 'doesn't have the bandwidth' to fix verification right now
theverge.com



From: TimF 6/23/2020 6:07:41 PM
   of 1269
 
This is unfortunate. Slate Star Codex was a good blog, though not quite as good as it had been earlier, before the blogger began shying away from some potentially controversial topics (a decision made for somewhat similar reasons, though not in response to a newspaper story).

----------

NYT Is Threatening My Safety By Revealing My Real Name, So I Am Deleting The Blog
So, I kind of deleted the blog. Sorry. Here’s my explanation.

Last week I talked to a New York Times technology reporter who was planning to write a story on Slate Star Codex. He told me it would be a mostly positive piece about how we were an interesting gathering place for people in tech, and how we were ahead of the curve on some aspects of the coronavirus situation. It probably would have been a very nice article.

Unfortunately, he told me he had discovered my real name and would reveal it in the article, i.e., doxx me. “Scott Alexander” is my real first and middle name, but I’ve tried to keep my last name secret. I haven’t always done great at this, but I’ve done better than “have it get printed in the New York Times”.

I have a lot of reasons for staying pseudonymous. First, I’m a psychiatrist, and psychiatrists are kind of obsessive about preventing their patients from knowing anything about who they are outside of work. You can read more about this in this Scientific American article – and remember that the last psychiatrist blogger to get doxxed abandoned his blog too. I am not one of the big sticklers on this, but I’m more of a stickler than “let the New York Times tell my patients where they can find my personal blog”. I think it’s plausible that if I became a national news figure under my real name, my patients – who run the gamut from far-left anarchists to far-right gun nuts – wouldn’t be able to engage with me in a normal therapeutic way. I also worry that my clinic would decide I am more of a liability than an asset and let me go, which would leave hundreds of patients in a dangerous situation as we tried to transition their care.

The second reason is more prosaic: some people want to kill me or ruin my life, and I would prefer not to make it too easy. I’ve received various death threats. I had someone on an anti-psychiatry subreddit put out a bounty for any information that could take me down (the mods deleted the post quickly, which I am grateful for). I’ve had dissatisfied blog readers call my work pretending to be dissatisfied patients in order to get me fired. And I recently learned that someone on SSC got SWATted, in a way they link to having used their real name on the blog. I live with ten housemates including a three-year-old and an infant, and I would prefer this not happen to me or to them. Although I realize I accept some risk of this just by writing a blog with imperfect anonymity, getting doxxed on national news would take it to another level.

When I expressed these fears to the reporter, he said that it was New York Times policy to include real names, and he couldn’t change that. After considering my options, I decided on the one you see now. If there’s no blog, there’s no story. Or at least the story will have to include some discussion of NYT’s strategy of doxxing random bloggers for clicks.

I want to make it clear that I’m not saying I believe I’m above news coverage, or that people shouldn’t be allowed to express their opinion of my blog. If someone wants to write a hit piece about me, whatever, that’s life. If someone thinks I am so egregious that I don’t deserve the mask of anonymity, then I guess they have to name me, the same way they name criminals and terrorists. This wasn’t that. By all indications, this was just going to be a nice piece saying I got some things about coronavirus right early on. Getting punished for my crimes would at least be predictable, but I am not willing to be punished for my virtues.

I’m not sure what happens next. In my ideal world, the New York Times realizes they screwed up, promises not to use my real name in the article, and promises to rethink their strategy of doxxing random bloggers for clicks. Then I put the blog back up (of course I backed it up! I’m not a monster!) and we forget this ever happened.

Otherwise, I’m going to lie low for a while and see what happens. Maybe all my fears are totally overblown and nothing happens and I feel dumb. Maybe I get fired and keeping my job stops mattering. I’m not sure. I’d feel stupid if I caused the amount of ruckus this will probably cause and then caved and reopened immediately. But I would also be surprised if I never came back. We’ll see.

I’ve gotten an amazing amount of support the past few days as this situation played out. You don’t need to send me more – message very much received. I love all of you so much. I realize I am making your lives harder by taking the blog down. At some point I’ll figure out a way to make it up to you.

In the meantime, you can still use the r/slatestarcodex subreddit for sober non-political discussion, the not-officially-affiliated-with-us r/themotte subreddit for crazy heated political debate, and the SSC Discord server for whatever it is people do on Discord. Also, my biggest regret is I won’t get to blog about Gwern’s work with GPT-3, so go over and check it out.

There’s a SUBSCRIBE BY EMAIL button on the left – put your name there if you want to know if the blog restarts or something else interesting happens. I’ll make sure all relevant updates make it onto the subreddit, so watch that space.

There is no comments section for this post. The appropriate comments section is the feedback page of the New York Times. You may also want to email the New York Times technology editor Pui-Wing Tam at pui-wing.tam@nytimes.com, contact her on Twitter at @puiwingtam, or phone the New York Times at 844-NYTNEWS.

(please be polite – I don’t know if Ms. Tam was personally involved in this decision, and whoever is stuck answering feedback forms definitely wasn’t. Remember that you are representing me and the SSC community, and I will be very sad if you are a jerk to anybody. Please just explain the situation and ask them to stop doxxing random bloggers for clicks. If you are some sort of important tech person who the New York Times technology section might want to maintain good relations with, mention that.)

If you are a journalist who is willing to respect my desire for pseudonymity, I’m interested in talking to you about this situation (though I prefer communicating through text, not phone). My email is scott@slatestarcodex.com.


This entry was posted in Uncategorized on June 22, 2020.
slatestarcodex.com
