Facebook really doesn’t want another conversation about user data.
The social media platform, which has built its business model on the unique collection and use of personal information for advertising, is having to answer for how it allowed a third-party company access to user data that could have been used to influence the 2016 election.
As the upstart voter-profiling company Cambridge Analytica prepared to wade into the 2014 American midterm elections, it had a problem.
The firm had secured a $15 million investment from Robert Mercer, the wealthy Republican donor, and wooed his political adviser, Stephen K. Bannon, with the promise of tools that could identify the personalities of American voters and influence their behavior. But it did not have the data to make its new products work.
So the firm harvested private information from the Facebook profiles of more than 50 million users without their permission, according to former Cambridge employees, associates and documents, making it one of the largest data leaks in the social network’s history. The breach allowed the company to exploit the private social media activity of a huge swath of the American electorate, developing techniques that underpinned its work on President Trump’s campaign in 2016.
In the report, the Times cast the data access error as a “breach” and revealed that Cambridge Analytica still had access to the data.
Details of Cambridge’s acquisition and use of Facebook data have surfaced in several accounts since the business began working on the 2016 campaign, setting off a furious debate about the merits of the firm’s so-called psychographic modeling techniques.
But the full scale of the data leak involving Americans has not been previously disclosed — and Facebook, until now, has not acknowledged it. Interviews with a half-dozen former employees and contractors, and a review of the firm’s emails and documents, have revealed that Cambridge not only relied on the private Facebook data but still possesses most or all of the trove.
More damning for the company is that it knew about the breach as early as 2015, but took only limited action to resolve the problem.
Documents seen by the Observer, and confirmed by a Facebook statement, show that by late 2015 the company had found out that information had been harvested on an unprecedented scale. However, at the time it failed to alert users and took only limited steps to recover and secure the private information of more than 50 million individuals.
Facebook disputes the suggestion that it was breached.
The company claims that it wasn’t breached, and that while it has suspended Cambridge Analytica from its service, the social giant is not at fault. Facebook contends that its technology worked exactly as it was built to work, but that bad actors like Cambridge Analytica violated the company’s terms of service.
On the other hand, Facebook has since tightened those terms of service to limit the information third parties can collect, a tacit admission that its earlier terms were too permissive.
Executives took to Twitter to counter some of The New York Times’ narrative—but later deleted their tweets.
Facebook’s chief security officer, Alex Stamos, tweeted a lengthy defense of the company, which also included a helpful explanation of how this came about. (He later deleted the tweets, saying he “should have done a better job weighing in.”)
“Kogan did not break into any systems, bypass any technical controls, or use a flaw in our software to gather more data than allowed. He did, however, misuse that data after he gathered it, but that does not retroactively make it a ‘breach.’”
The latest scandal has prompted some to question Facebook higher-ups’ intentions and competence.
Whether it doesn’t see the disasters coming, makes a calculated gamble that the growth or mission benefits of something will far outweigh the risks, or purposefully makes a dangerous decision while obscuring the consequences, Facebook is responsible for its significant shortcomings. The company has historically cut corners in pursuit of ubiquity that left it, potentially knowingly, vulnerable to exploitation.
Combating its media coverage hasn’t helped its case.
And increasingly, Facebook is going to lengths to fight the news cycle surrounding its controversies instead of owning up early and getting to work. Facebook had known about Cambridge Analytica’s data policy violations since at least August 2016, but did nothing beyond sending a legal notice to delete the information. It suspended the Facebook accounts of Cambridge Analytica and other guilty parties only this week, announcing the move in hopes of muting forthcoming New York Times and Guardian articles about the issue (articles it also tried to prevent from running via legal threats). Since then, representatives of the company have quibbled with reporters on Twitter over describing the data misuse as a “breach,” instead of explaining why it didn’t inform the public about it for years.
TechCrunch listed other Facebook snafus, reinforcing its depiction of a company facing a public reckoning.
Facebook might have kept the story alive by moving to block whistleblower Christopher Wylie.
His attorney, Tamsin Allen, has stated that Facebook took a two-faced approach to Wylie’s revelations. It “privately welcomed” Wylie’s help, Allen said, but publicly suspended his account and criticized him. That was indicative of a company focused more on “damage limitation” than sincerely addressing the problem at hand, according to the attorney. We’ve asked Facebook for its response, but the accusations certainly don’t help its case.
Jake Tapper (@jaketapper) was among the journalists who tweeted about Facebook’s actions on March 19, 2018.
Other users suggested they don’t trust Facebook’s leaders:
Facebook said in a statement Saturday: “We reject any suggestion of violation of the consent decree.”

“Zuckerberg and Trump are one and the same: practitioners of the Bold Faced BIG LIE.”

— Frank Schaeffer (@Frank_Schaeffer) March 19, 2018
Facebook faces an authenticity problem as experts suggest it could face legal penalties.
“I would not be surprised if at some point the FTC looks at this. I would expect them to,” said David Vladeck, a former director of the FTC’s Bureau of Consumer Protection. In that role, he oversaw the investigation of alleged privacy violations by Facebook and the resulting consent decree.
Vladeck said the law allows fines up to $40,000 per violation. With a reported 50 million people affected, he said, the “maximum exposure” could reach into the billions of dollars. It is more likely that, if the FTC found violations, Facebook would face far smaller but still substantial fines as well as other consequences.
As the charges mount, fewer people are buying Facebook’s attempts to brush off criticism.
Facebook tried to downplay the story as The New York Times was preparing its report and then attempted to preempt the news with its own press release. Reporters did not appreciate the move.
Facebook tried to get out ahead of the NYT and Observer investigations. The reporters were ticked off, and the trick didn’t work. Details in @ReliableSources: https://t.co/hAmjIBP6il pic.twitter.com/6CMcRYNb5G
— Brian Stelter (@brianstelter) March 19, 2018
How would you assess Facebook’s strategy and response, Ragan/PR Daily readers?