Google and Facebook Are Major Outlets for Media - So Why Aren't They Held Accountable for Spreading Fake News?
In Part 1, we looked at the consequences of two laws—the Digital Millennium Copyright Act and the Communications Decency Act, enacted 20 years ago—that allow Silicon Valley giants like YouTube and Facebook to act as platforms rather than publishers. These laws release them from legal responsibility for copyright infringement, slander and libel, provided they follow take-down procedures in a reasonable amount of time.
The effects on legitimate copyright holders, in music and today in video media, are clear.
Now we will explore the effect on news and journalism.
Algorithm Is Curation and Editing
Using the internet service provider "pipe" argument, tech companies like Google and Facebook claim they have little or no editorial control over the content on their platforms. Silicon Valley companies liken themselves to phone companies rather than media publishers, arguing that AT&T doesn’t edit, censor, prioritize or sequence the content or call participants. Like a phone company, Silicon Valley companies contend, they manage content with "algorithms"—and, reluctantly and only because of cost, with human moderators—and still maintain ISP status.
The algorithm is a mysterious-sounding word that cloaks tech companies’ editorial control. I think of the algorithm as analogous to Coca-Cola’s secret formula—which turned out to be sugar. Algorithms are closely guarded, with tech’s go-to rationale for secrecy: We can’t tell anyone, because users will game the algorithm. Translation: leave the algorithm gaming to us.
An algorithm, when used to deliver digital content, news or information, is a set of rules written by human beings and executed by a machine rapidly and repetitively. Content metadata and keywords are matched to user data and behavior to determine what content users see—and importantly, what advertisement appears with the content.
The content each user sees on YouTube’s homepage has been curated by the algorithm based on their previous behavior, or content an advertiser paid to reach them, using their profile data. What one person sees is different from what another person sees. It is editing and curation on a massive scale by a machine, but based on rules established by human beings.
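This curation can be sketched in a few lines of code. The sketch below is purely illustrative—the data, field names and scoring rules are hypothetical, not any platform's actual system—but it shows the essential point: each "rule" a human wrote becomes a scoring term a machine applies to every item for every user, at scale, and paid promotion competes directly with organic interest.

```python
# Hypothetical sketch of algorithmic curation. The rules below are
# illustrative assumptions, not any real platform's ranking system.

def score(item, user):
    # Rule: boost content whose keywords match the user's interests.
    overlap = len(set(item["keywords"]) & set(user["interests"]))
    # Rule: boost content an advertiser paid to promote to this profile.
    return overlap + item.get("paid_boost", 0.0)

def curate(items, user, n=3):
    # The "editor": rank the whole catalog by score, show the top n.
    return sorted(items, key=lambda it: score(it, user), reverse=True)[:n]

items = [
    {"title": "Cat video", "keywords": ["cats"], "paid_boost": 0.0},
    {"title": "Sneaker ad", "keywords": ["shoes"], "paid_boost": 2.0},
    {"title": "News clip", "keywords": ["politics"], "paid_boost": 0.0},
]
user = {"interests": ["politics", "cats"]}
print([it["title"] for it in curate(items, user)])
# The paid item outranks both items the user actually cares about.
```

No human ever looks at the result; the sort order is the front page.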
Now imagine New York Times executive editor Dean Baquet waking up tomorrow morning and saying: To hell with human editors reading and correcting every article, employing fact-checkers, crafting headlines and curating the paper’s layout. It’s just too damn slow and expensive. We need content at scale!
Baquet would inform Peter Baker and other New York Times White House reporters that content is now their responsibility. If Baker messes up, he indemnifies the Times and gets sued for defamation or libel himself. And to replace costly human editors, Baquet writes "rules" for engineers to convert into algorithms. Rule One: When ISIS attacks inside Syria, publish when the death count is at 25 or more. Rule Two: When ISIS attacks in Europe, publish in the International section when there are two or more deaths. Send the rules to tech and write more rules tomorrow. If ISIS uses a new method, publish in the Science section. And so on.
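The thought experiment above, rendered literally, is only a few lines of code—which is the point. The thresholds and section names below are the article's own hypotheticals, not anyone's real editorial system:

```python
# Editorial judgment reduced to rules an engineer converts into code.
# All thresholds and sections are the article's hypotheticals.

def assign_section(event):
    if event["new_method"]:
        return "Science"        # Rule: new attack method -> Science section
    if event["region"] == "Syria" and event["deaths"] >= 25:
        return "World"          # Rule One: publish at 25+ deaths
    if event["region"] == "Europe" and event["deaths"] >= 2:
        return "International"  # Rule Two: publish at 2+ deaths
    return None                 # Below threshold: not published at all

print(assign_section({"region": "Europe", "deaths": 3, "new_method": False}))
# → International
```

No fact-checking, no sourcing, no judgment call—just thresholds, executed rapidly and repetitively.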
It sounds crazy, but what is happening on the platforms (Facebook, etc.) many people rely on for news is worse.
Now imagine a Russian company calls Dean Baquet of the Times and ponies up some money to push ISIS stories up a few pages when the attacks are in Eastern Europe. And Wendy’s will pay to move up good news on cardiovascular disease, so consumers eat more bacon-cheese burgers. And Baquet decides to call it all a newsfeed for good measure.
The fine line between editorial and advertising has long since been crossed, and most news organizations now maintain brand-content creation divisions to survive. But only the platforms also enjoy full indemnification for the content they deliver.
News Without Responsibility
The DMCA and CDA have birthed a media consumption environment where the bulk of advertising revenue and the bulk of content consumption funnel through Google search and Facebook, with no responsibility for the content they deliver. Given that power, even a modest algorithm change can be devastating for the publishers doing the investigative work.
AlterNet.org built an audience close to 6 million unique visitors over a 20-year period, and in one month this past June, its traffic dropped by 50 percent from what it was at the beginning of the year. Why? The algorithm, of course.
While tech companies skim advertising dollars from legitimate publishers, even modest changes to their algorithms can doom an independent website or YouTube contributor overnight. Advertisers rely on "programmatic" advertising—another algorithm-driven Silicon Valley invention—to reach audiences on thousands of sites. When bad publicity arose about advertising on so-called alt-right websites, corporate execs imagined Tide ads adjacent to “How to Keep your KKK Hoodie White” articles.
YouTube advertisers cut bait, and Google was swift in its response. Unfortunately, algorithms are clumsy at identifying context, so scores of YouTube contributors and websites presenting legitimate news content were caught in the net. The platforms that control the pipeline have no skin in the game, yet secure most of the ad revenue at scale.
Algorithm and Moderation: Blunt and Hard to Control
There is another reason to keep algorithms secret: they can’t really do the job professional editors and fact-checkers do. With each revelation, the companies add human moderators, which threatens their billion-dollar business models and inches them closer to losing the DMCA and CDA status holding up their house of cards. And algorithms certainly can’t do the job at the scale of content ingestion the DMCA and CDA unleashed for Facebook and Google in the first place.
As Zeynep Tufekci pointed out in the New York Times, “Human employees are expensive, and algorithms are cheap. Facebook directly employs only about 20,658 people, roughly one employee per 100,000 users. With so little human oversight and so much automation, public relations crises like the one that surrounded ads for hate groups are inevitable.”
In May 2017, when the Guardian published Facebook moderation guidelines, Facebook announced an increase in moderators from 4,500 to 7,500. Facebook had to assess over 50,000 cases of so-called revenge porn in a single month. And when announcing the increase, Mark Zuckerberg admitted Facebook reviews millions of reports every week. In August, according to CNBC, “Facebook closes more than 1 million accounts every day, with most of those created by spammers and fraudsters, security chief Alex Stamos says.”
Thomas Friedman in the New York Times noted, “One reason Facebook was slow to respond is that its business model was to absorb all of the readers of the mainstream media newspapers and magazines and to absorb all their advertisers—but as few of their editors as possible. An editor is a human being you have to pay to bring editorial judgment to content on your website, to make sure things are accurate and to correct them if they’re not. Social networks preferred to use algorithms instead, but these are easily gamed.”
At the recent congressional judiciary subcommittee hearings, Sen. John Kennedy, R-La., pressed Facebook general counsel Colin Stretch: “I’m trying to get us down from la la land here.” He continued, “The truth of the matter is, you have five million advertisers that change every month, every minute. Probably every second. You don’t have the ability to know who every one of those advertisers is, do you?”
Stretch reluctantly admitted it was true. Facebook can’t possibly evaluate the shell companies and identities of every advertiser each month—the advertising that accompanies your Newsfeed content. In its 2016 annual report, Facebook reported that only 1 percent of its monthly active users were fake. Let me put this another way: 20 million accounts are fake.
Google reports only 0.25 percent of its daily search results are false or misleading. Again, let me translate this for you: 22.5 million search results every single day are fake.
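The back-of-envelope arithmetic behind those translations is worth making explicit. The base figures below are assumptions implied by the article's totals—roughly 2 billion Facebook monthly active users and roughly 9 billion Google search results served per day—not official company numbers:

```python
# Back-of-envelope check of the percentages above. The base figures
# are assumptions implied by the article's totals, not official data.

facebook_mau = 2_000_000_000     # assumed ~2 billion monthly active users
fake_share = 0.01                # "only 1 percent" fake
print(int(facebook_mau * fake_share))          # 20 million fake accounts

daily_results = 9_000_000_000    # assumed ~9 billion results served daily
misleading_share = 0.0025        # "only 0.25 percent" false or misleading
print(int(daily_results * misleading_share))   # 22.5 million per day
```

Small percentages of enormous denominators are still enormous numbers—that is the whole trick of the "only 1 percent" framing.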
In the Russia probe, Twitter recently reported to Congress it was removing 300 accounts identified as fake. But a study by Alessandro Bessi and Emilio Ferrara, researchers at the University of Southern California, analyzed 20.7 million tweets posted by nearly 2.8 million distinct users from Sept. 16 to Oct. 21, 2016, and estimated “the presence of at least 400 thousand bots, accounting for roughly 15 percent of the total Twitter population active in the U.S. presidential election discussion, and responsible for about 3.8 million tweets, roughly 19 percent of the total volume.”
Twitter's own Public Policy statement about the Russian interference in the election noted, “On average, our automated systems catch more than 3.2 million suspicious accounts globally per week—more than double the amount we detected this time last year.”
They want you to believe they can solve this, but they can’t.
In the Verge article about Facebook’s moderation problem, Hany Farid, professor and chair of computer science at Dartmouth and senior adviser to the Counter Extremism Project, developed the PhotoDNA technology used to detect child-exploitation images; even he was not optimistic about machine learning saving the day any time soon. He said, “But a better algorithm can’t fix the mess Facebook’s currently in. This promise is still—at best—many years away, and we can’t wait until this technology progresses far enough to do something about the problems that we are seeing online.”
Things are going to get worse on the misinformation and news front before they get better, as new digital face and voice technologies roll out. It is now possible to alter a video clip of President Trump speaking and have him say, “We are bombing North Korea,” in perfect voice, with perfect facial movements that are impossible to detect with the naked eye.
Stop Calling It a Newsfeed
It is one thing for this massive, imperfect, Silicon Valley money-making machine to stumble when we are searching for new sneakers or sharing recent baby pictures. It is quite another when they deliver news, content-marketing disguised as news, or fake issue-ads by Russian trolls in a presidential campaign.
Imagine if 1 percent of the articles in your local, regional or national newspaper were fake. That would be one a day for most newspapers. Newspapers print minor corrections and retractions, but as publishers, they would lose subscribers, go out of business, and be buried in lawsuits.
Ponder how far we have traveled when discussing news and journalism standards. I grew up watching All the President's Men, with Washington Post executive editor Ben Bradlee frustrating Bob Woodward and Carl Bernstein as he demanded more sources before he would publish Watergate allegations. Publishers, in part because of legal exposure, maintain rigorous editorial standards and first-person sources with back-up and verification.
Today, 40 percent of Americans are getting their news from Facebook, a company that won’t legally stand behind its news content or admit it is a publisher. With Google and Facebook as the conduits, news publishers that accept responsibility for their content are now in the Silicon Valley version of the Roman Colosseum, racing to be first to deliver tabloid-headline-juiced news stories into Google search and Facebook feeds or suffer the consequences. It’s a demoralizing way to run the fourth estate.
Democrats would be wise to champion new versions of the DMCA and CDA. At a minimum they could promote one change: define news, and if you deliver news, you’re a publisher.
It would go a long way toward cleaning up fake news and protecting legitimate news organizations.
Democrats could frame their argument on bedrock conservative principles of personal responsibility. It was Mitt Romney, after all, who insisted that corporations are people, mostly to ensure free-speech protections and the Citizens United brand of “money is speech” corporate involvement in politics. Democrats could press Facebook and Google to agree to be publishers when providing news, and admit they edit and curate content. Democrats could demand they stand behind content on their sites under dedicated news banners—change Facebook's Newsfeed to Feed, and isolate the news somewhere else on the page.
Silicon Valley can make changes; accepting a designation as publishers for news and tightening the reins on intellectual property in search and social media are eminently doable. When it is in their interest, they control for child pornography and terrorist beheadings. Content ID, YouTube's content-matching system, primarily benefits larger media companies that can submit their content and have the staff and legal expertise to navigate the revenue-sharing and take-down options. As currently constructed, the system places too great a burden on the copyright owner: after the first notification of a particular piece of content, the burden should shift to the platform, not the copyright holder.
News and Content Straight—No Silicon Valley Chaser
Harvard political philosopher Michael Sandel recently said of tech companies, “They can’t have it both ways. If they claim they are neutral pipes and wires, like the phone company or the electric company, they should be regulated as public utilities. But if, on the other hand, they want to claim the freedoms associated with news media, they can’t deny responsibility for promulgating fake news.”
In copyright parlance, a print newspaper is a “fixed” work, put to bed the night before by human editors, writers and fact-checkers. The names of those making the editorial decisions are listed on the masthead on page two. Newspapers are publishers that accept responsibility for what they print, and even vet advertisers. You may not like the New York Times, but when a reporter makes a mistake, the editors print a correction. When it’s a big mistake, like faulty Iraq war coverage, they issue a full-blown investigation.
Publishers and content creators worldwide need to band together and fight for changes to the DMCA and CDA. Consumers need to break the Silicon Valley spell and go directly to publishers that edit, curate and legally stand behind their work.
If you buy sushi in a gas station, get your taxes done at the deli or receive your news from a platform that accepts no responsibility, you get what you deserve—and he’s in the White House.
I get my news straight from a publisher, no Silicon Valley chaser. If you want to save journalism and democracy, maybe you should, too.