The Question Isn’t If Facebook Can Fix Its Problems—It’s How Much It Wants To

By Cale Guthrie Weissman

Facebook is in a quagmire that only it can solve. Every day, new information comes to light about ways foreign entities used its ad platform, and others, to create tailored, targeted campaigns with the seeming intent of swaying elections. Facebook, for its part, says it is looking into it and, until earlier this year at least, had no idea this sort of thing was happening.

CEO Mark Zuckerberg says the company is now working hard on the problem, fighting misinformation in German elections and cooperating with government investigators. It is set to appear in front of congressional investigators on Nov. 1. In a Facebook post this week, he also recanted his ridicule, last year, of the idea that “fake news” can influence elections.

In the uproar, there is also a sense that there’s only so much Facebook can do. With the digital advertising genie out of the bottle, policing a giant system of algorithmically distributed content for nefarious intent is ultimately impossible. The idea is that, given the technical limitations, no guidelines, rules, or regulations–including those currently being drafted in Congress–could really stem the spread of fake news and inflammatory advertising. Last week, when Zuckerberg described how Facebook would commit itself to the task of “catching” the “bad” stuff, he added a caveat: “I’m not going to sit here and tell you that we’re going to catch all bad content in our system.”

But amid the controversies and the angry demands of lawmakers and the press, perhaps we aren’t asking the right questions. As an ad tech veteran explained it to me recently: The issue isn’t whether Facebook can control the content on its platform–it’s why it would want to. What would motivate it to really change a formula that, up until now, has been good for its $27 billion business?

Facebook’s entire business model is predicated on users sharing things and engaging with ads. More than 1 billion people take to the platform and share their thoughts and links every day. What works best are posts that are sensational, even inflammatory. And despite its work with some of the biggest companies in the world, some of Facebook’s most important customers are the millions of small advertisers who spend anything from $1,000 to $10,000 to publish targeted advertising.

Other big self-serve, targeted, programmatic ad platforms like Google and Twitter also rely on these small-to-medium-size advertisers to rake in the big ad dollars—and they, too, have their own soul-searching (and congressional hearings) to do around Russian misinformation campaigns in 2016. The digital political ad business is particularly hot; one estimate puts digital political advertising during the 2016 election at $1.4 billion.

The idea that these problems are just the unfortunate side effects of a giant, sprawling system–one that currently falls outside the U.S. laws governing the rest of advertising–mirrors Mark Zuckerberg’s longstanding argument about his company, an idea echoed across Silicon Valley, from the venture capitalists to the Ubers: Facebook is simply a platform. That framing handily suggests that a company like Facebook should not be seen as, say, one of the world’s largest media companies, a new kind of public utility, or even an advertiser. A “platform” is convenient: It doesn’t bear the limitations of those other entities, yet it connotes openness and freedom of speech (even though speech on Facebook is not protected by the U.S. First Amendment).


And on a technical level, in any case, “platform” suggests its own limitations: The system is designed in such a way–automated, with millions of users sending billions of pieces of content every second–that policing some of its emergent problems is practically impossible.

But as Erin Griffith argues at Wired, the idea that Facebook can’t control its algorithm is empirically wrong. A few examples illustrate this. One: Zynga. In 2011, Facebook decided the company’s popular games were hurting the user experience and began to limit their spread; Zynga and other Facebook games soon saw a rapid decline. Two: clickbait-y sites like Upworthy and ViralNova. Facebook became wary of these viral-minded sites and changed its algorithm so that certain headline constructions wouldn’t spread nearly as quickly as they had before. Facebook’s intent was clear: mitigate the spread of content it deemed potentially threatening to its business. (And this is to say nothing of the hateful, terrorist, violent, pornographic, and other unwelcome content that the company struggles to police.)

Of course, playing “whack-a-mole” with controversial political speech sent by foreign agents isn’t the same problem, for Facebook’s humans or machines, as preventing, say, alcohol ads. But with serious investment–and some of the measures Facebook says it’s starting to implement–policing these ads better is hardly impossible. The company is now at a juncture where it could truly do that.

Beyond handing over the details of what has already transpired on the platform, there are a few bigger steps the company could take:

  • It should consult the experts. Quartz reported (September 30, 2017) that there are a number of people who could help Facebook fix this platform problem. There are lawyers who have worked on campaigns for decades, as well as experts who know the ins and outs of the FEC, who could help the company understand the best ways to crack down on this kind of electioneering.
  • Beyond listening to the outside, insiders could also help (especially more of them). The Verge spoke to a number of former Facebook ad moderators who described a busy, rapid process and instances where ads may have been foreign attempts at inflaming opinions. Because the company prioritizes quantity and speed, these employees said, even suspicious ad campaigns like these could easily slip through. “I know exactly what these guys did,” one former worker said about Russia-sponsored ad buyers. “It’s not hard to do.” (The company says it’s stepping up its human moderation efforts.)
  • Facebook should re-think its ethos of secrecy. It’s a company made up of hackers breaking things, yet it has become a no-doored, windowless lockbox of information. Despite the tens of thousands of employees at the company, external perspective is what builds technology that works for the world at large; that’s why peer review exists in the academy. This, as another Wired article notes, is one of Facebook’s big problems: its secrecy. “No outside researcher has ever had that kind of access,” a former Facebook ads product manager told Wired. Though Facebook said it would give investigators the 3,000 Russian-linked ads it found—from 2015 and early 2016—it would not release them to the public, citing privacy rules.
  • Perhaps the biggest change Facebook can make immediately is to implement solutions at the technological level. Beyond the above steps, which would make the platform more accountable for the millions of advertisers on it, people familiar with these systems emphatically say there are technical ways to stop the spread of fake news and inflammatory or foreign ads on the platform.

In February, the tech giant announced that its systems—built by some of the world’s finest minds in artificial intelligence—can now identify specific actions and find objects in your and your friends’ photos. As one ad executive told me, if Facebook is able to build a system that analyzes the text in images in a matter of seconds, it is absolutely able to create a system that analyzes what sort of content is being shared and automatically scrutinizes questionable posts from questionable sources.
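
To make the executive’s point concrete, here is a minimal, purely illustrative Python sketch of the kind of automated scrutiny he is describing: combine a crude content signal with a source-reputation signal, and flag a post for human review when the combined score crosses a threshold. The keyword list, domains, weights, and threshold are all invented assumptions; this is not Facebook’s system, and a real classifier would rely on learned models rather than hand-tuned lists.

    # Toy sketch (not Facebook's system): flag a post for review by combining
    # a crude content signal with a source-reputation signal. The keyword
    # list, domains, weights, and threshold are illustrative assumptions.
    INFLAMMATORY_TERMS = {"traitor", "invasion", "rigged", "destroy", "enemy"}
    LOW_REPUTATION_SOURCES = {"example-fakenews.com", "example-propaganda.net"}

    def content_score(text: str) -> float:
        """Fraction of words that match the (hypothetical) inflammatory lexicon."""
        words = [w.strip(".,!?").lower() for w in text.split()]
        if not words:
            return 0.0
        return sum(1 for w in words if w in INFLAMMATORY_TERMS) / len(words)

    def source_score(domain: str) -> float:
        """1.0 for a known low-reputation source, 0.0 otherwise."""
        return 1.0 if domain.lower() in LOW_REPUTATION_SOURCES else 0.0

    def flag_for_review(text: str, domain: str, threshold: float = 0.5) -> bool:
        # Weight the two signals equally; a real system would learn these weights.
        return 0.5 * content_score(text) + 0.5 * source_score(domain) >= threshold

    print(flag_for_review("The election is rigged and the enemy will destroy us",
                          "example-fakenews.com"))   # True
    print(flag_for_review("Lovely weather for a picnic today", "example.org"))  # False

The point is not that a few dozen lines solve the problem; it is that scoring content and sources automatically is well within reach of a company that already classifies billions of photos.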

There are other, tricky loopholes to consider policing. Even as Facebook has announced moves to improve its election ads and limit the spread of fake news–and, in a separate move, barred advertisers from targeting according to terms like “Jew hater”–its ad targeting systems are still ripe for manipulation. Facebook bars advertisers from targeting according to political views, but it also allows advertisers to upload their own chosen lists of voters and then look for “lookalike” audiences.
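
The lookalike mechanism itself is easy to picture. Below is a toy, hedged sketch–emphatically not Facebook’s actual implementation–of the underlying idea: average the feature vectors of the advertiser’s uploaded seed list, then rank everyone else by cosine similarity to that centroid. The user names and “interest affinity” features are invented for illustration.

    import math

    # Toy sketch of a "lookalike audience": rank users by cosine similarity to
    # the centroid of an advertiser-supplied seed list. The feature vectors
    # (interest affinities) are invented; this is not Facebook's system.

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na, nb = math.sqrt(sum(x * x for x in a)), math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    def lookalikes(seed_ids, all_users, top_k=1):
        """Return the top_k non-seed users most similar to the seed centroid."""
        seed_vecs = [all_users[u] for u in seed_ids]
        centroid = [sum(col) / len(seed_vecs) for col in zip(*seed_vecs)]
        candidates = [(u, cosine(v, centroid))
                      for u, v in all_users.items() if u not in seed_ids]
        return sorted(candidates, key=lambda c: c[1], reverse=True)[:top_k]

    # Hypothetical affinity vectors: [history, politics, sports]
    users = {
        "seed_1": [0.9, 0.8, 0.1],
        "seed_2": [0.8, 0.9, 0.2],
        "alice":  [0.85, 0.75, 0.15],  # resembles the seed list
        "bob":    [0.10, 0.20, 0.95],  # does not
    }
    print(lookalikes({"seed_1", "seed_2"}, users))  # [('alice', ...)]

Because the seed list is whatever the advertiser chooses to upload, similarity matching like this can quietly reproduce targeting–by political view, for instance–that the platform’s explicit rules forbid.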


And Facebook’s own tools can be effective enough for nefarious and inflammatory advertisers: As the New York Times points out, an interest in the Facebook ad category “Confederate,” or in any number of German military terms, may mean someone is a history buff, but it might also be a handy way of targeting people with racist views. In a way, Facebook has built an opaque system whose supposedly hard-line rules can sometimes be easily circumvented, and one for which Facebook can eschew responsibility.

Hacker Ways

This is the ultimate issue Facebook faces: If it wants to address the threat, which everyone acknowledges is only going to get worse, it has to uproot the core tenets of its ad business. What works on Facebook are clickable things that get people fired up–the very kindling that made fake news such an everyday occurrence. Despite its insistence that it wants to be a platform for communication and connection, Facebook’s EdgeRank algorithm–which dictates which posts become popular–determines how shareable a link is. At the moment, Facebook makes hundreds of millions of dollars from this system, whatever the message and whoever is sending it.

Facebook’s quandary is ethical. It has fought the flow of other “toxic” information before, from porn to clickbait; in the case of political ads, the lucrative business makes the case for better internal regulation a harder sell. But the case is being made, by a lot of people, and it’s getting louder every day.

Facebook—no doubt hoping to avoid new government regulations—now wants to fix its problems. To do that, it needs to think like the groups who have been gaming the platform–and like the hacker-focused startup that Mark Zuckerberg founded: It needs to “hack” itself. Facebook is already good at hacking. The question is, does it want to be the hacker with the white hat, or does it want to keep hiding under a grayer one?

Fast Company