Google Bombs Are Our New Normal
Author: Karen Wickre
Google had a problem. Beginning in 2003, a group of users had figured out how to game the site’s search results. This phenomenon was known as a “Google bomb”—a trick played by toying with Google’s algorithm. Because Google treated the text of a link as a signal of what the linked page was about, a coordinated group could make an unrelated page rank highly for a phrase simply by linking to it, en masse, using that phrase as the link text. The cons were often elaborate, like when a search for “miserable failure” turned up the official biography of then-president George W. Bush. It seemed like the result represented Google’s editorial viewpoint; instead, it was a prank.
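The mechanics are easy to see in miniature. The toy ranker below is not Google’s actual algorithm, and every site name in it is invented; it simply scores pages for a query by counting inbound links whose anchor text matches the query, which is enough to show how a coordinated link campaign outvotes organic links:

```python
# Toy illustration of anchor-text ranking (not Google's real algorithm).
# A page's score for a query is just the number of inbound links whose
# anchor text equals the query. All site names below are hypothetical.
from collections import Counter

def rank(links, query):
    """links: list of (source, anchor_text, target) tuples.
    Returns targets ordered by inbound links using `query` as anchor text."""
    scores = Counter(
        target for _source, anchor, target in links if anchor == query
    )
    return [page for page, _count in scores.most_common()]

# An "honest" web: a couple of organic links.
web = [
    ("blog-a", "miserable failure", "satire-page"),
    ("blog-b", "election results", "news-site"),
]

# A Google bomb: many sites coordinate on one anchor phrase and one
# unrelated target, swamping the organic signal.
bomb = [(f"prankster-{i}", "miserable failure", "whitehouse-bio")
        for i in range(50)]

print(rank(web, "miserable failure"))         # ['satire-page']
print(rank(web + bomb, "miserable failure"))  # ['whitehouse-bio', 'satire-page']
```

Fifty coordinated links beat one organic link, which is roughly why Google’s eventual fix involved recognizing and discounting exactly this kind of coordinated anchor-text pattern.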
By early 2007, Google had all but vanquished the problem with the usual triage. A phalanx of technology and product people would huddle with the PR team to uncover the technical issue causing the bad outcome. They would work on a fix, or a workaround, and issue an apologetic explanation. The engineers might tackle a long-term adjustment to the algorithm addressing the root cause. Then it was back to business as usual.
These problems—often caused by hackers or pranksters, and occasionally triggered by people with truly bad intentions—weren’t everyday situations. They were edge cases.
But now, we have a new normal. Manipulating search results today seems more like an invasion than a joke. As the October 1 massacre in Las Vegas unfolded, Google displayed “news” results from rumor mills like 4Chan, and Facebook promulgated rumors and conspiracy theories, sullying the service on which, according to Pew Research, 45 percent of American adults get their news. Meanwhile, the rapid-fire nature of Twitter led users to pass along false information about missing people in the aftermath.
All of these cases signify the central place a number of digital services have staked out in our lives. We trust our devices: to surface credible sources in our information feeds, to deliver our news, to relay the opinions of our friends. So when the biggest and most influential platforms fall prey to manipulation, it upsets that trust, and the order of things.
It’s hard to square the global power, reach, and ubiquity of these massive platforms with their youth: Google just turned 19. Facebook is 13. Twitter is 11 and a half. (None, in other words, out of their teens!) Until recently, widespread digital malfeasance was relatively rare on these young platforms. But in a world that increasingly seems dystopian, we now expect security breaches, hacks, purposeful fakery, all of it occurring more or less constantly across the online services and tools we use. Whether the aim is financial, political, or even just hacking for hacking’s sake, the fact that so many of us live and work online means we are, collectively, an attractive and very large target.
If the companies providing the services we rely on want to keep or regain our trust, this new normal warrants a good deal more of their attention. When a problem occurs, the explanations, as I’ve written, have to reach us quickly and be forthright. And as for the technological fixes, a short-lived war room and an apologetic statement no longer do the trick.
Now that we seem to be in a never-ending arms race with miscreants ranging from lone rangers to state-run disinformation machines, we’re going to need more than an army of brilliant engineers patching holes and building workarounds. Companies need to build an ongoing approach—something like a Federation, through which the massive platforms and services we rely on routinely communicate and coordinate, despite the fact that they are also competitors.
These massive global platforms are always an attractive target for sophisticated hackers and state-sponsored bad actors, which is why I’ve been told that it’s not unusual for security engineers from rival businesses to stay in touch when they see unusual behavior or patterns; they share the information. This is one area where a federation approach is working, however informally.
Now, we need companies to extend it and stay on top of misdeeds and indicators of odd patterns from the get-go. If Twitter sees spikes in new account signups from, say, Macedonia or in Tagalog, it should note that to the federation, so others can review their systems. If Google sees an unusual spike in search queries for an uncommon phrase, it’s worth reporting, so perhaps Facebook can look for recent posts that use similar language. And so on.
One reason why the federation plan is necessary: No single company, no matter how massive and wealthy, can hire its way out of a steady gusher of bad information or false and manipulative ads. Mark Zuckerberg’s announcements in May and early October that Facebook will hire a total of 4,000 people to “monitor content”—flag, escalate, and remove problematic items—don’t seem very savvy for a tech company priding itself on its ability to scale. These attempts at a solution remind me of the US military’s repeated decisions to send many thousands more troops over the years in order to “win” in Vietnam. (We know how that turned out.) Another issue with throwing people at this kind of problem: The 4,000 content flaggers (or however many Facebook ultimately hires) are likely to be slotted as fairly low-level employees, because the nature of such work is repetitive and thankless. It’s not unusual for people in these roles to experience burnout and even PTSD from the awfulness of what they see. As I say, this doesn’t seem like a useful approach.
Recent calls for these companies to take greater responsibility and add human intervention make a lot of sense, and I’d suggest these are also elements of a Federation strategy. I hope for a time when, for example, experienced editors—who know how to assess content for accuracy, research, and presentation—are an integral part of product and engineering teams across all of these platforms, all the time.
I don’t know if these sorts of human and technical adjustments are enough to stem the tide of all the digital mishegas out there. But I do know that throwing more bodies at the miasma of disinformation doesn’t work. Neither does saying “we’re just a platform.”
The era of the edge case—the exception, the outlier—is over. Welcome to our time, where trouble is forever brewing.