Mark Zuckerberg: ‘Facebook was built to accomplish a social mission – to make the world more open and connected.’ Photograph: Nam Y. Huh/AP

Dawn of the techlash

Rachel Botsman

Once seen as saviours of democracy, tech giants are now viewed as threats to truth. But how did our faith in all things digital turn into an erosion of trust, particularly in the arena of information and politics?

Outside, the air was a crisp minus two degrees, with six feet of snow piled high, as heads of state and global business leaders gathered for the World Economic Forum last month. Inside, a different chill hung in the air, as a frosty backlash against the social media companies made itself felt on and off stage.

The one-time darlings of free, open and trusted communication were taking a battering. Marc Benioff, the larger-than-life Salesforce CEO, suggested Facebook should be regulated like a tobacco company, because of the harmful and addictive properties of social media. But the most scathing attack came from George Soros, the billionaire investor, who said that Facebook and Google have become “obstacles to innovation”, a “menace” to society whose “days are numbered”.

Davos picked up on snowballing public criticisms of social media. It’s hard to pinpoint when things started to go sour, but certainly the digital “misconduct” that characterised the Brexit referendum and the election of Donald Trump, with its allegations of Russian meddling, tarnished the reputations of the world’s tech titans and their platforms.

Once seen as saviours of democracy, those titans are now just as likely to be viewed as threats to truth or, at the very least, impassive billionaires falling down on the job of monitoring their own backyards.

It wasn’t always this way. Remember the early catchy slogans that emerged from those ping-pong-tabled tech temples in Silicon Valley? “A place for friends”, “Don’t be evil” or “You can make money without being evil” (rather poignant, given what was to come). Users were enchanted by the sudden, handheld power of a smartphone to voice anything, access anything; grassroots activist movements revelled in these new tools for spreading their cause. The idealism of social media – democracy, friction-free communication, one-button socialising – proved infectious.

So how did that unbridled enthusiasm for all things digital morph into a critical erosion of trust in technology, particularly in politics? Was 2017 the year of reckoning, when technology suddenly crossed to the dark side, or had it been heading that way for some time? It might be useful to recall how social media first discovered its political muscle.

On 11 September 2001, 29-year-old Scott Heiferman watched two planes crash into the World Trade Center. He ran up to the roof of his apartment building a couple of blocks away and was joined by a group of neighbours, many of whom he’d never met before. They watched in astonishment as the twin towers collapsed. In the following days, he noticed people, total strangers, across the city warmly saying hello and helping each other. “I started thinking deeply about community. What really brings people together?” says Heiferman.

Eight months later, in 2002, he founded Meetup, a social networking platform to help people with a common interest find each other and arrange to meet, face to face. “The core idea was to figure out how to help people use the internet to get off the internet,” he says. Pug owners who wanted to walk together, cancer survivors who needed support, even witches who wanted to form covens, quickly started using the platform.

“The things that people were using Meetup for were not the things that we imagined. But I didn’t really think we were building a political organising tool,” he says.

In early 2003, however, the start-up began to become just that, when more than 140,000 Howard Dean grassroots supporters used Meetup to mobilise support. This was in the pre-iPhone era, before Facebook had 2 billion people on it and before Trump had become a formidable late-night tweeter. Dean, who started as a long-shot candidate, went on to become a frontrunner for the 2004 Democratic presidential nomination, raising more than $50m online, largely through small donations. Politics had discovered the power of the internet.

A couple of years after the Dean campaign, a then-unknown Illinois state senator called Barack Obama embraced the platform, promising to go to any Meetup where 100 people had signed up. Shortly afterwards, my.barackobama.com, known as MyBO, was born. During his first 2008 election campaign, Obama built a following of 3.2 million-plus Facebook supporters and raised more than $500m online.

Teddy Goff, a 23-year-old digital strategist, was in the thick of it all, overseeing social media, blogs and email campaigns in the battleground states. “In the pre-internet age, Fox had an over-representation of toxic voices and I thought you could drown out those voices by empowering the 80% of the country not watching Fox News,” Goff says.

He went on to run a team of 250 people, overseeing everything digital for the 2012 re-election. It was a gargantuan role. The Obama campaign raised more than $690m, registered 1.1 million voters online and garnered 24 million Twitter followers and 34 million Facebook friends. It was a dewy-eyed era for technology.

At around the same time, Facebook filed its prospectus for a $5bn initial public offering (IPO). Mark Zuckerberg, its founder, told investors that the platform wanted to help create “a more honest and transparent dialogue around government”.

Zuckerberg, now one of the world’s richest men, had a touching vision, as he would later explain: “Facebook was not originally created to be a company. It was built to accomplish a social mission – to make the world more open and connected.”

From attempting to aid revolutions in the Arab spring, to co-ordinating the Occupy Wall Street movement, social networks soon brimmed with ambitions to level the playing field. It was all wildly promising. The internet would be a transparent environment that made it easier for people to hold political leaders accountable and even strengthen people’s capacity to relate to one another. On it went, the golden dream of the digital age, before the invaders arrived.

Were we naive? As unprecedented numbers of people channelled their political energies and beliefs into social media, shouldn’t we have foreseen the way the platforms could become vulnerable to manipulation and the spread of misinformation? Probably, but most of us failed to imagine the imaginable.

Barack Obama: ‘One of the biggest challenges we have to our democracy is the degree to which we don’t share a common baseline of facts.’ Photograph: Netflix

In a recent interview on the Netflix show My Next Guest Needs No Introduction with David Letterman, Obama reflected on his first year out of office. He never mentioned Trump by name, but he did candidly discuss the “pervasive divisiveness” in society and how it is exacerbated by social media.

“One of the biggest challenges we have to our democracy is the degree to which we don’t share a common baseline of facts,” Obama said. “What the Russians exploited – but it was already here – is we are operating in completely different information universes. If you watch Fox News, you are living on a different planet than you are if you listen to NPR.” Put another way, we don’t have a shared sense of reality and that can be seriously played on.

“Fake news” has become a game of accusation and counter-accusation. If it started out as a useful identifier of misinformation, it is now an unhelpful catch-all term hurled at all kinds of uncomfortable truths a president, say, might not like. Likewise, many people, overwhelmed by the pace of change and the sheer amount of knowledge available, are beating a retreat to media echo chambers.

Often, the cocoon is self-spun. A recent Reuters Institute Digital News Report found that 44% of people in the US who use social media for news end up seeing sources from both the left and the right, at more than twice the rate of people who don’t use social media. However, that’s not to say they necessarily pay attention to any contrary views. When Facebook rolled out its “related articles” feature last year, users continued to ignore information that undermined their favoured narrative.

Separating truth from fiction is set to get even harder. Artificial intelligence and augmented reality, for example, will mean that we’ll have to forensically question everything we see, hear or read to decide if it’s the real deal or clever fakery. In July 2017, a team of computer scientists at the University of Washington generated highly convincing videos of Obama. Using many hours of pre-existing footage, researchers applied machine learning techniques to realistically mimic how Obama moves his mouth, right down to his tics and mannerisms.

In that case, he had expressed those views, but it’s easy to see how this new breed of voice- and video-morphing tools could convincingly put any words into the mouths of public figures. Putin proclaiming that Clinton is supported by Al-Qaida. Boris Johnson announcing he is prime minister. Trump declaring war on North Korea. If trust in what politicians say is already low, it could soon be non-existent.

Trust is a tricky thing to define and measure. It’s a balancing act and it doesn’t take much to tip it off beam. As the social psychologist Morton Deutsch wrote in his seminal 1973 book The Resolution of Conflict: “Trust involves the delicate juxtaposition of people’s loftiest hopes and aspirations with their deepest worries and darkest fears.” Trust is the remarkable force that pulls you over that gap between certainty and uncertainty; the bridge between the known and the unknown. And that’s why my definition of it is simple: trust is a confident relationship with the unknown. It is trust that has allowed the internet to flourish and take off in ways that were unimaginable when it first started. Who could have predicted not so long ago that we’d be hiring babysitters online, swiping right to date, revealing our bank details or getting into cars with strangers?

With its greater transparency and speedier fixes, the internet and the digital world also looked like the answer to the trust crisis happening in the old institutions.

For the past 18 years, the communications group Edelman has been measuring public trust in institutions. In recent decades, public trust in banks, media, government and NGOs has plummeted to an all-time low – no surprise there. Now, almost overnight it seems, a trust crisis of similar proportions is besetting the digital world.

“Don’t believe everything you read in the newspapers,” the old saying used to go. Now it is “don’t believe everything you read on Twitter or Facebook”. According to the Edelman report published on 22 January 2018, 63% of the 33,000 respondents said they no longer knew how to tell good journalism from rumour or falsehoods. In the UK, around 70% of Britons believe social media companies are not doing enough to stop extremist content being shared or to tackle illegal behaviours on their platforms. “Trust is only going to be regained when the truth moves back to centre stage,” commented Richard Edelman in the report’s summary.

Foreign trolls and politically motivated bots, accused of sowing discord through “computational propaganda”, aren’t helping. According to a recent study conducted by the Oxford Internet Institute, a third of Twitter traffic prior to the EU referendum appears to have come from scripted bots, mainly spreading pro-Leave content.

According to the research paper, Social media, sentiment and public opinions: Evidence from #Brexit and #USElection, written by three data scientists from Swansea University and the University of California, for every original tweet created by a bot, seven retweets were made by humans. In the 48 hours around the referendum, Russian-linked accounts posted more than 45,000 tweets encouraging people to vote for Brexit.

‘Brexit, however, soon began to look like just a dry run for Russian-linked trolls and political ads.’ Photograph: Hannah McKay/Reuters

Brexit, however, soon began to look like just a dry run for Russian-linked trolls and political ads. As part of the US Senate intelligence committee’s ongoing investigation into how Russia used social media to influence the result of the election, representatives from Facebook, Google and Twitter have been obliged to submit evidence about relevant activity on their platforms. Twitter provided a list, 65 pages long, with the handles of some 36,746 Russian-linked bots that tweeted a total of 1.4m times. The company estimates these tweets were viewed 288m times.

Facebook also admitted to lawmakers that between June 2015 and August 2017, 11.4 million Americans definitely saw advertisements from the Internet Research Agency, the Russian troll farm. These ranged from “like and share if you want Burqa banned in America” to claims that “Hillary is Satan, and her crimes and lies have proved just how evil she is. And even though Donald Trump is not a saint by any means, at least he is an honest man, and he cares deeply for his country.” At the bottom of that ad was a prompt to press “like” to help Jesus (Trump) win. The most successful ads were clicked on and shared by almost a quarter of the people who saw them. This means, according to Facebook, that up to 126 million Americans (or almost half of the US population) were likely to have seen a Russian-linked post.

A study by BuzzFeed found that the top five fake news items in the last weeks of the election were all damaging to the Clinton campaign. In other words, the Facebook algorithm picked a side – it’s not neutral.

“Because of algorithms, social media has never really been a forum for the best ideas to rise organically to the top,” says Goff. “What’s been shown in the past couple of years is not so much that people are inherently racist, sexist or hard right-wing, but that these platforms are engineered to be vulnerable to propaganda campaigns, among other interventions.” Bot accounts furiously shared messages on such a grand scale that it’s hard to believe the platforms didn’t notice. Algorithms “left to their own devices” mean that content generated by any random individual – with no journalistic track record, no fact-checking and no significant third-party filtering – can reach as many readers as, say, the BBC. And that’s a critical problem.

I’m a huge advocate of free speech, open democracy and online dialogue that mirrors the exchange of opinions that happens offline – at work, at home and even in classrooms. The issue is that it has become a free-for-all, a corruptible beast that we can’t control, or haven’t yet learned to. We don’t have the tools to deal with the scale of fraught challenges created by a new world of self-reference and “hands-off” proprietors.

Platforms are trying to figure out their role in it all – are they mere facilitators in bringing people together or something more? Who’s in charge? And who’s responsible when things go wrong?

Facebook insists it is not a media company but merely a “neutral technology pathway” facilitating connections between people. It is a misconceived and dangerous position. It is a media company with enormous influence in shaping people’s worldviews about whom to trust. And it is profit-driven. “Facebook makes money if the advertiser pays, regardless of whether people’s lives are being improved,” says Heiferman. In May 2017, Facebook reported that 98% of its quarterly revenue came from advertising, up from 85% in 2012. In other words, it’s in the company’s interests to keep our eyes glued to the screen, no matter what the content.

The tech leaders, rightly, are under fire for not doing nearly enough to detect and stem the flow of false information. Stung into action, in October 2017, Facebook made some initial “transparency and authenticity efforts” to force advertisers to verify their identity and label their ads more clearly.

“We’re making it possible to visit an advertiser’s page and see the ads they’re currently running,” says Samidh Chakrabarti, Facebook’s product manager for civic engagement. “We’ll soon also require organisations running election-related ads to confirm their identities so we can show viewers who exactly paid for them.”

In February 2017, in the run-up to the French presidential election, Facebook and Google News announced they were part of CrossCheck, an industry coalition of local media companies, including AFP, Le Monde and 15 others, to identify and fact-check dubious content. Stories flagged by two fact-checking companies as making false claims or pushing myths, such as Emmanuel Macron plotting a new tax on homeowners, were labelled as “contested”. Facebook also cracked down on more than 30,000 fake accounts in France, including those created by Russian intelligence agencies attempting to spy on Macron’s election campaign by posing as friends.

Many critics, however, and even other business leaders believe that social media companies should be doing more, a lot more, regardless of cost. “Protecting our community is more important than maximising our profits,” Zuckerberg said after the criticism. It won’t come cheap. According to David Wehner, the company’s chief financial officer, Facebook’s operating expenses could increase by 45% to 60% if the platform were to invest significantly in security features, or – horror of horrors – hire a lot more human beings to keep an eye on the algorithms.

Another problem, says Goff, even more insidious than straight-out fake news, is factual news presented out of context and designed to play on our brain’s tendency to jump to conclusions. Imagine, he says, a true story breaking about a Syrian refugee committing a murder.

“I can guarantee that Breitbart and Fox will spend the next 96 hours talking about nothing but this,” says Goff, “and it’s not fake news. In this scenario, it’s true. What they won’t say is that it’s the first murder committed by a Syrian refugee and that, statistically, murder by refugees is far [less frequent] than murder by those born in the United States and so this isn’t really a problem at all. The brain is vulnerable to these kinds of stories, stories that may not be a Russia-misinformation campaign.”

Technology is only the means. We also need to ask why our political ideologies have become so polarised, and take a hard look at our own behaviour, as well as that of the politicians themselves and the partisan media outlets who use these platforms, with their vast reach, to sow the seeds of distrust. Why are we so easily duped? Are we unwilling or unable to discern what’s true and what isn’t, or to look for the boundaries between opinion, fact and misinformation? And what part are our own prejudices playing?

Luciano Floridi, of the Digital Ethics Lab at Oxford University, points out that technology alone can’t save us from ourselves. “The potential of technology to be a powerful positive force for democracy is huge and is still there. The problems arise when we ignore how technology can accentuate or highlight less attractive sides of human nature,” he says. “Prejudice. Jealousy. Intolerance of different views. Our tendency to play zero sum games. We against them. Saying technology is a threat to democracy is like saying food is bad for you because it causes obesity.”

It’s not enough to blame the messenger. Social media merely amplifies human intent – both good and bad. We need to be honest about our own, age-old appetite for ugly gossip and spreading half-baked information, about our own blind spots.

Is there a solution to it all? Plenty of smart people are working on technical fixes, if for no other reason than the tech companies know it’s in their own best interests to stem the haemorrhaging of trust. Whether they’ll go far enough remains to be seen.

We sometimes forget how uncharted this new digital world remains – it’s a work in progress. We forget that social media, for all its flaws, still brings people together, gives a voice to the voiceless, opens vast wells of information, exposes wrongdoing, sparks activism, allows us to meet up with unexpected strangers. The list goes on. It’s inevitable that there will be falls along the way, deviousness we didn’t foresee. Perhaps the present danger is that in our rush to condemn the corruption of digital technologies, we will unfairly condemn the technologies themselves.

Rachel Botsman is the author of Who Can You Trust? How Technology Brought Us Together and Why It Might Drive Us Apart.
