Facebook keeps promising to do better but is still a massive vector for political disinformation and pandemic conspiracies
In 2016, a widespread disinformation campaign coordinated by Russian intelligence agencies wreaked havoc on the integrity of our elections. Facebook’s newsfeed became the primary focus of the attacks, as users were invited to join groups created by Russian agents and fed inflammatory posts designed to suppress the vote.
Facebook would not reveal the full scope of the attack until forced to do so by ongoing investigative reporting. In the United States alone, Russian-backed content reached as many as 126 million people.
Mark Zuckerberg, Facebook’s CEO, promised that the company would do better.
In 2018, Facebook conceded its platform had been used to incite sectarian violence against the Rohingya Muslim minority in Myanmar. In 2019, Facebook was used to amplify hate speech and calls for violence in India.
Again, Zuckerberg promised that the company would do better.
Enter COVID-19. There’s never a good time to be bombarded by misinformation, but at present, acting on misinformation could be fatal. While people across the United States rely heavily on Facebook to check up on self-isolated friends and family, scam artists are pushing miracle cures, conspiracy theories, and outrageous lies about the outbreak. Some offer charitable relief like free groceries “due to the coronavirus” in an effort to get people to hand over their credit card numbers.
Facebook says it’s taking the problem seriously. But Consumer Reports was recently able to purchase a series of ads that included one which read, “Coronavirus is a HOAX. We’re being manipulated with fear. Don’t give in to the propaganda—just live your life like you always have.”
In response to a report recently released by the advocacy group Avaaz, Facebook just announced that it will start warning users who engaged with harmful misinformation. According to the report’s findings, an estimated 117 million users saw false or misleading content related to the pandemic, much of it already tagged by Facebook’s own fact checkers, and despite Facebook executives’ claims of having fixed the issue.
After the report came out, Zuckerberg again promised that Facebook would do better.
Functional democracy requires an informed populace. Candidates tell you who they are and what they stand for, and you pick the one who most closely aligns with your values. Facebook, in principle, provides a neutral platform for this vital discourse. The behemoth of social media boasts over 190 million users in the United States, who spend an average of 27 minutes on the app daily. Thanks to the COVID-19 outbreak and shelter-in-place orders, the site saw huge growth in users in the first quarter of 2020. Candidates for higher office can’t help but go through the social media giant.
Platforms like Facebook, at their core, are agnostic to what you’re sharing. It could be a cat video; it could be a skinhead expounding upon the virtues of the white race. Facebook has one goal, and one goal only: to keep you on Facebook, for as long as possible, sharing content, clicking likes, revealing your preferences, and thereby generating data that can be packaged and sold to third parties. Engagement is the only metric that matters, and a white supremacist recruiting video or a live-streamed mass shooting certainly drives it. In fact, there’s some data to indicate that hate and lies spread faster than other content on the site, thanks to the very nature of these social media platforms--not to mention human nature.
Because of repeated failures on the part of Facebook executives to squarely address the problems on the site, there has never existed in human history such a powerful engine for spreading misleading information. If you happen to be a candidate who lies all the time without shame, these systemic weaknesses are easy to exploit.
During the impeachment trial, Donald Trump spent millions trying to muddy the waters, including running a blatantly misleading ad claiming Joe Biden “promised Ukraine a billion dollars if they fired the prosecutor investigating his son’s company.”
CNN refused to air the blatantly false ad. Facebook? No problem.
Facebook’s content is one part of the problem. The troves of data available for serving the ads are another. Microtargeting allows ad buyers to segment their buys based on an ocean of data generated every time a user scrolls through their feed, likes something, or even browses the web on other sites, thanks to today’s tracking technology.
Advertisers, including political campaigns, can rely on Facebook to share everything it knows about users, down to whether the battery is running low on their phones. The company doesn’t have a great history with the categories it has made available to ad buyers, either. Notable target segments over the past few years have included “Jew Haters,” along with tools that let housing advertisers discriminate by race. Facebook says it will clamp down on coronavirus misinformation, but until recently, anyone wishing to peddle snake oil could buy ads targeting users with an “interest in pseudoscience.”
In 2016, the Trump campaign and Russian agents pulled every psychological lever they could get their hands on. For the most part, those levers are still available.
Most Americans would rather they weren’t. A recent Gallup poll shows a solid majority -- 72 percent -- of Americans believe that social media companies should not make information available to political campaigns for targeting purposes. It’s an issue enjoying widespread support, with little variation among Democrats, Republicans, and Independents. (A sizeable minority, 20 percent, believe no campaign ads should be shown online at all.)
There is no shortage of ideas on how to fix Facebook. Senator Warren has called for using antitrust powers to break up tech giants like Amazon, Google, and Facebook. Others favor regulating it like a utility. And Avaaz projects that the company could cut its users’ tendency to believe false or misleading information in half by notifying users when they have seen content tagged as false or misleading by fact-checkers.
Many other social media giants have taken active steps to prevent the spread of misinformation from politicians. Twitter has banned political ads altogether. Google has banned false political ads and eliminated microtargeting. Both moves were seen as shots across Facebook’s bow.
While it may be impossible to disentangle the thorniest free speech vs. freedom-from-Nazis debates, some of these calls shouldn’t be difficult. Bleach can’t cure coronavirus, and Joe Biden didn’t crow about getting his son off the hook in Ukraine. And no, the Pope did not endorse Trump (or anyone else for that matter). No one should be able to buy Facebook ads filled with lies. Consumer Reports’ coronavirus hoax ads shouldn’t have been allowed to run. Neither should Donald Trump’s ads lying about his opponents.
Big-ticket ad buys from political campaigns, dark money political action committees, and huge bot farms shouldn’t be able to microtarget ads so that only people susceptible to the content can see them. People give Facebook access to their interests, opinions, associations, and choices, and that data should not be weaponized to deliver customized disinformation back to them--for profit.
Mark Zuckerberg and Facebook have made a lot of promises. It’s long past time the company did better. Or maybe Facebook’s fact-checkers should start tagging the company’s own PR releases as misinformation.