Hildegard von Bingen (1098-1179), German artist, philosopher, composer, mystic – Cosmic Tree
All of a sudden, politicians in the EU, UK, and USA all want to talk to Mark Zuckerberg. That’s a bad enough sign all by itself. It means they have all either been asleep, been complicit, or are simply not very bright. The media tries to convince us the Facebook ‘scandal’ is about Trump, Russia (yawn..) and elections. It’s not. Not even close.
If Zuckerberg ever shows up for any of these meetings with ‘worried’ politicians, he’ll come with a cabal of lawyers in tow, and they’ll put the blame on anyone but Facebook and say the company was tricked by devious parties who didn’t live up to their legal agreements.
After that, the argument won’t be whether Facebook broke any laws by allowing data breaches, but whether their data use policy itself is, and always was, illegal. Facebook has been around for years with these policies, and nobody ever really raised their voice.
And then it’ll all fizzle out, amid some additional grandstanding from all involved, face-saving galore, and more blame for Trump and Russia.
The new European Parliament chief Antonio Tajani said yesterday: “We’ve invited Mark Zuckerberg to the European Parliament. Facebook needs to clarify before the representatives of 500 million Europeans that personal data is not being used to manipulate democracy.”
That’s all you need to know, really. Personal data can be used to manipulate anything as long as it’s not democracy. Or at least democracy as the Brussels elite choose to define it.
First: this is not about Cambridge Analytica, it’s about Facebook. Or rather, it’s about the entire social media and search industry, as well as its connections to the intelligence community. Don’t ever again see Google or Facebook as not being part of that.
What Facebook enabled Cambridge Analytica to do, it will do itself, ten times bigger. And it probably sells licences to do it to thousands of other ‘developers’. The CIA and NSA may have unlimited powers, but prior to Alphabet and Facebook, they never had the databases. They do now, and they’re using them. ‘Manipulate democracy’? What democracy?
Then: 50 million is nothing. Once the six-degrees-of-separation giant squid gets going, there’s no stopping it. The Cambridge Analytica thing supposedly started with a few hundred thousand people who consented to having their data used for ‘academic’ purposes. From there it’s easy to get to 50 million. In fact, it’s harder to stop at 50 million than it is to go on to hundreds of millions. That’s six degrees of separation for you.
Facebook allegedly has over 2 billion user accounts, and their algorithms don’t stop there either. If anything, 50 million is a bit of a failure. What you should understand in this is that Cambridge Analytica are a bunch of loose cannons (yeah, yeah, those dark videos look so incriminating..) and nobody knows what they ever captured.
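The friend-list multiplier can be sketched with a toy simulation. All numbers below are assumptions for illustration (seed size, average friend count, population), not Facebook’s real figures, and the function is invented for this sketch:

```python
import random

# Toy model: how a modest seed of consenting users exposes a far larger
# network once each seed user's entire friend list comes along for the ride.

def harvested_profiles(seed_count, avg_friends, population, rng_seed=0):
    """Estimate unique profiles reachable one hop from a consenting seed.

    Each consenting user exposes all of their friends; overlap between
    friend lists is modeled by drawing friends from a shared population.
    """
    rng = random.Random(rng_seed)
    exposed = set()
    for _ in range(seed_count):
        exposed.update(rng.sample(range(population), avg_friends))
    return len(exposed)

# 1,000 consenting users with ~200 friends each, drawn from a population
# of one million, already expose well over a hundred thousand profiles.
reach = harvested_profiles(1_000, 200, 1_000_000)
print(reach)
```

The point the simulation makes is that each additional consenting user mostly adds profiles nobody has exposed yet, so the harvested set keeps growing until the population itself starts to saturate; stopping at any particular number requires a deliberate choice.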
The real issue lies elsewhere. And we can figure it out. All we need is a few glances into the past. This first article is from June 30, 2014. It contains all you read today, and more. Just a bit less Russia and Trump.
It already knows whether you are single or dating, the first school you went to and whether you like or loathe Justin Bieber. But now Facebook, the world’s biggest social networking site, is facing a storm of protest after it revealed it had discovered how to make users feel happier or sadder with a few computer key strokes. It has published details of a vast experiment in which it manipulated information posted on 689,000 users’ home pages and found it could make people feel more positive or negative through a process of “emotional contagion”.
In a study with academics from Cornell and the University of California, Facebook filtered users’ news feeds – the flow of comments, videos, pictures and web links posted by other people in their social network. One test reduced users’ exposure to their friends’ “positive emotional content”, resulting in fewer positive posts of their own. Another test reduced exposure to “negative emotional content” and the opposite happened.
The study concluded: “Emotions expressed by friends, via online social networks, influence our own moods, constituting, to our knowledge, the first experimental evidence for massive-scale emotional contagion via social networks.”
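As a rough illustration of the mechanism the study describes – suppressing one emotional polarity in a user’s feed – here is a minimal sketch. The posts, hand-assigned sentiment scores, and function name are all invented for illustration; this is in no way Facebook’s actual ranking code:

```python
# Minimal sketch of polarity-based feed filtering (invented posts and
# hand-assigned sentiment scores; not Facebook's actual system).

POSTS = [
    ("Got the job!!", 0.9),
    ("Worst day ever.", -0.8),
    ("Lunch was ok.", 0.0),
    ("So proud of my sister", 0.7),
    ("Everything is falling apart", -0.9),
]

def filter_feed(posts, suppress):
    """Hide posts whose sentiment matches `suppress`.

    suppress='positive' hides positive posts, suppress='negative' hides
    negative ones -- the two conditions in the experiment.
    """
    kept = []
    for text, score in posts:
        if suppress == "positive" and score > 0:
            continue
        if suppress == "negative" and score < 0:
            continue
        kept.append(text)
    return kept

print(filter_feed(POSTS, "positive"))  # the 'reduced positive exposure' condition
print(filter_feed(POSTS, "negative"))  # the 'reduced negative exposure' condition
```

The unsettling part is how little machinery this takes: a sentiment score per post and one comparison is enough to tilt what an entire feed feels like.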
The question is simple, isn’t it? Do you want to provide a bunch of, well, geeks, with the ability to change how you feel, just so their employers can make -more- money off of you? That is 1984. That is thought control. And Facebook is some modern honey trap.
Lawyers, internet activists and politicians said this weekend that the mass experiment in emotional manipulation was “scandalous”, “spooky” and “disturbing”. On Sunday evening, a senior British MP called for a parliamentary investigation into how Facebook and other social networks manipulated emotional and psychological responses of users by editing information supplied to them.
Jim Sheridan, a member of the Commons media select committee, said the experiment was intrusive. “This is extraordinarily powerful stuff and if there is not already legislation on this, then there should be to protect people,” he said. “They are manipulating material from people’s personal lives and I am worried about the ability of Facebook and others to manipulate people’s thoughts in politics or other areas. If people are being thought-controlled in this kind of way there needs to be protection and they at least need to know about it.”
Um, so 4 years ago, there was a call for a parliamentary investigation in Britain and a member of the Commons media select committee proclaimed there should be legislation to protect people. Wonder how that panned out? Read the news today. Time stood still.
But there’s of course much more going on. You can claim that people should know about their thoughts being controlled, but that’s nonsense. Nobody in their right mind would, provided the arguments are honestly laid out, permit any such thing.
Moreover, it’s not just their own emotions that are being manipulated, it’s those of their friends and family too. If you are deeply unhappy, they may never see you expressing your distress; it can easily be filtered out so that you appear to be in great spirits. Someone wants your friends to feel good while you stay sad? No problem.
And there’s yet another aspect, one that Facebook may try to use for legal reasons: ever since the days of Edward Bernays, advertisements, and media in a broader sense, are shaped to influence what you think and feel. It sells soda, it sells cars, and it sells wars.
So yeah, people should know about all this, but the role of politicians and parliaments must also be to eradicate it altogether and forever from the societies that vote them in power. Or to tell their voters that they think it’s acceptable, and by the way, they too use deception to get more votes.
A Facebook spokeswoman said the research, published this month in the journal Proceedings of the National Academy of Sciences in the US, was carried out “to improve our services and to make the content people see on Facebook as relevant and engaging as possible”. She said: “A big part of this is understanding how people respond to different types of content, whether it’s positive or negative in tone, news from friends, or information from pages they follow.”
But other commentators voiced fears that the process could be used for political purposes in the runup to elections or to encourage people to stay on the site by feeding them happy thoughts and so boosting advertising revenues. In a series of Twitter posts, Clay Johnson, the co-founder of Blue State Digital, the firm that built and managed Barack Obama’s online campaign for the presidency in 2008, said: “The Facebook ‘transmission of anger’ experiment is terrifying.”
He asked: “Could the CIA incite revolution in Sudan by pressuring Facebook to promote discontent? Should that be legal? Could Mark Zuckerberg swing an election by promoting Upworthy [a website aggregating viral content] posts two weeks beforehand? Should that be legal?”
The ‘transmission of anger’ experiment. This is the world you live in.
Well, no, none of it should be legal. And none of it would be if people knew what was going on.
It was claimed that Facebook may have breached ethical and legal guidelines by not informing its users they were being manipulated in the experiment, which was carried out in 2012. The study said altering the news feeds was “consistent with Facebook’s data use policy, to which all users agree prior to creating an account on Facebook, constituting informed consent for this research”.
But Susan Fiske, the Princeton academic who edited the study, said she was concerned. “People are supposed to be told they are going to be participants in research and then agree to it and have the option not to agree to it without penalty.”
James Grimmelmann, professor of law at the University of Maryland, said Facebook had failed to gain “informed consent” as defined by the US federal policy for the protection of human subjects, which demands explanation of the purposes of the research and the expected duration of the subject’s participation, a description of any reasonably foreseeable risks and a statement that participation is voluntary. “This study is a scandal because it brought Facebook’s troubling practices into a realm – academia – where we still have standards of treating people with dignity and serving the common good,” he said on his blog.
Ah, academia, you unblemished child. We never knew you. Incidentally, what appears to be creeping through between the lines here is that Facebook’s data use policy was prepared from the start, 14+ years ago, for exactly these kinds of ‘experiments’. Which gives a whole new dimension to the discussion today.
It is not new for internet firms to use algorithms to select content to show to users, and Jacob Silverman, author of Terms of Service: Social Media, Surveillance, and the Price of Constant Connection, told Wired magazine on Sunday the internet was already “a vast collection of market research studies; we’re the subjects”.
“What’s disturbing about how Facebook went about this, though, is that they essentially manipulated the sentiments of hundreds of thousands of users without asking permission,” he said. “Facebook cares most about two things: engagement and advertising.
If Facebook, say, decides that filtering out negative posts helps keep people happy and clicking, there’s little reason to think that they won’t do just that. As long as the platform remains such an important gatekeeper – and their algorithms utterly opaque – we should be wary about the amount of power and trust we delegate to it.”
Robert Blackie, director of digital at Ogilvy One marketing agency, said the way internet companies filtered information they showed users was fundamental to their business models, which made them reluctant to be open about it.
“To guarantee continued public acceptance they will have to discuss this more openly in the future,” he said. “There will have to be either independent reviewers of what they do or government regulation. If they don’t get the value exchange right then people will be reluctant to use their services, which is potentially a big business problem.”
Feel a bit more awake now? Remember, that study dates from 2012. Let’s move on to 2016, when Shoshana Zuboff penned the following for the German paper Frankfurter Allgemeine. Just in case you thought it was all about Facebook. This is a bit more abstract, but worth it, in all its length (of which I only have space for excerpts).
[..] The game is no longer about sending you a mail order catalogue or even about targeting online advertising. The game is selling access to the real-time flow of your daily life – your reality – in order to directly influence and modify your behavior for profit. This is the gateway to a new universe of monetization opportunities: restaurants who want to be your destination. Service vendors who want to fix your brake pads.
Shops who will lure you like the fabled Sirens. The “various people” are anyone, and everyone who wants a piece of your behavior for profit. Small wonder, then, that Google recently announced that its maps will not only provide the route you search but will also suggest a destination.
This is just one peephole, in one corner, of one industry, and the peepholes are multiplying like cockroaches. Among the many interviews I’ve conducted over the past three years, the Chief Data Scientist of a much-admired Silicon Valley company that develops applications to improve students’ learning told me:
“The goal of everything we do is to change people’s actual behavior at scale. When people use our app, we can capture their behaviors, identify good and bad behaviors, and develop ways to reward the good and punish the bad.”
[..] There was a time when we laid responsibility for the assault on behavioral data at the door of the state and its security agencies. Later, we also blamed the cunning practices of a handful of banks, data brokers, and Internet companies. Some attribute the assault to an inevitable “age of big data,” as if it were possible to conceive of data born pure and blameless, data suspended in some celestial place where facts sublimate into truth.
I’ve come to a different conclusion: The assault we face is driven in large measure by the exceptional appetites of a wholly new genus of capitalism, a systemic coherent new logic of accumulation that I call surveillance capitalism. Capitalism has been hijacked by a lucrative surveillance project that subverts the “normal” evolutionary mechanisms associated with its historical success and corrupts the unity of supply and demand that has for centuries, however imperfectly, tethered capitalism to the genuine needs of its populations and societies, thus enabling the fruitful expansion of market democracy.
[..] the application of machine learning, artificial intelligence, and data science for continuous algorithmic improvement constitutes an immensely expensive, sophisticated, and exclusive twenty-first century “means of production.” [..] the new manufacturing process converts behavioral surplus into prediction products designed to predict behavior now and soon.
[..] these prediction products are sold into a new kind of meta-market that trades exclusively in future behavior. The better (more predictive) the product, the lower the risks for buyers, and the greater the volume of sales. Surveillance capitalism’s profits derive primarily, if not entirely, from such markets for future behavior.
And then we get to today, for more examples of the same, and for confirmation that even though all of this stuff was known – not merely knowable – years ago, nothing has changed.
Hundreds of millions of Facebook users are likely to have had their private information harvested by companies that exploited the same terms as the firm that collected data and passed it on to Cambridge Analytica, according to a new whistleblower.
Sandy Parakilas, the platform operations manager at Facebook responsible for policing data breaches by third-party software developers between 2011 and 2012, told the Guardian he warned senior executives at the company that its lax approach to data protection risked a major breach. “My concerns were that all of the data that left Facebook servers to developers could not be monitored by Facebook, so we had no idea what developers were doing with the data,” he said.
[..] That feature, called friends permission, was a boon to outside software developers who, from 2007 onwards, were given permission by Facebook to build quizzes and games – like the wildly popular FarmVille – that were hosted on the platform. The apps proliferated on Facebook in the years leading up to the company’s 2012 initial public offering, an era when most users were still accessing the platform via laptops and computers rather than smartphones.
Facebook took a 30% cut of payments made through apps, but in return enabled their creators to have access to Facebook user data. Parakilas does not know how many companies sought friends permission data before such access was terminated around mid-2014. However, he said he believes tens or maybe even hundreds of thousands of developers may have done so. Parakilas estimates that “a majority of Facebook users” could have had their data harvested by app developers without their knowledge.
[..] During the time he was at Facebook, Parakilas said the company was keen to encourage more developers to build apps for its platform and “one of the main ways to get developers interested in building apps was through offering them access to this data”. Shortly after arriving at the company’s Silicon Valley headquarters he was told that any decision to ban an app required the personal approval of the chief executive, Mark Zuckerberg…
OK, to summarize: Mark Zuckerberg will be fine, apart from some stock losses. Facebook’s data use policies may not conform to every single piece of legislation in every country Facebook operates in, but they’ve been there since 2004. So lawmakers are as culpable as the company is.
There’ll be big words, lots of them. And there may be people leaving Facebook. But the platform is addictive, and 2 billion addicts is a very large target group. Some other company may develop a competitor and promise ‘better’ policies and conditions, but the big money is in the very thing discussed today: manipulating people’s data, and thereby manipulating their behavior.
Perhaps, if news media and advertisers were so inclined, they’d explain exactly that to their readers and viewers, but in the end they A) all do it to some extent, and B) are all connected to Facebook and Google to some extent.
But the main driving force is, and will remain, the intelligence agencies, who have come to depend on ‘social media’ for the one thing they themselves were incapable of building, but which Alphabet and Facebook enticed gullible people into providing themselves: an artificial-intelligence-driven database that knows more about you than you know about yourself.
That the intelligence community today is powered by artificial intelligence is pretty out there to start with. That AI would give it the means to predict your future behavior, and manipulate you into that behavior seemingly at will, is something that warrants reflection.
George Orwell could not have foreseen this.