Social Media’s Endgame

Sharing is Caring!

by bch2478595


Ultimately, what started out as open platforms for ideas, collaboration, and dissent has devolved into curated echo chambers where independent thought is wholly unimportant. The only true purpose of these platforms now is for some entities to push narratives and for others to induce consumption through clandestine means, while users stay happily entwined as imaginary points (shares, likes, etc.) create a dopamine feedback loop. And it’s going to get worse (much, much worse).

“The short-term, dopamine-driven feedback loops that we have created are destroying how society works: no civil discourse, no cooperation, misinformation, mistruth. And it’s not an American problem. This is not about Russian ads. This is a global problem.” – Chamath Palihapitiya

To understand this transformation, I think you have to view it from the perspective of how modern marketing techniques have rapidly evolved over the last decade, and how advertising dollars are now being spent. Recall that for most products, advertising dollars were traditionally spent on radio, cable television, film, print, and endorsement deals. These are mediums that are easy to mentally filter.

Now that advertising dollars go toward acquiring more user data (“Get the app!”), pushing highly targeted ads based on data aggregated from users’ online profiles, paying “influencers”, bloggers, streamers, bot farms, and all kinds of sophisticated astroturfing efforts, it’s getting harder to mentally filter the messages designed to push a narrative or induce consumption.

Still, up until recently it has been possible to spot the shills. Recall how obvious the CTR shills were (they literally clocked in and out). But the human operators are being phased out. Now every post on social media is scrubbed for sentiment analysis, and these massive data sets are being used to further refine the language behavior of bots. Soon it won’t be possible to spot the shills anymore. Pre-trained NLP models are becoming so good that their creators consider the code too dangerous to make public.
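To make the “scrubbed for sentiment analysis” step concrete, here is a minimal, hypothetical sketch of the simplest possible approach: lexicon-based scoring, where a post is rated by counting positive and negative words. The word lists and function name are my own illustration; real platform pipelines use trained models on far larger data, not a toy list like this.

```python
# Hypothetical illustration of lexicon-based sentiment scoring.
# The word lists below are illustrative stand-ins, not a real lexicon.

POSITIVE = {"great", "love", "amazing", "happy", "best"}
NEGATIVE = {"terrible", "hate", "awful", "angry", "worst"}

def sentiment_score(post: str) -> float:
    """Return a score in [-1, 1]: >0 positive, <0 negative, 0 neutral."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    hits = [1 for w in words if w in POSITIVE] + \
           [-1 for w in words if w in NEGATIVE]
    return sum(hits) / len(hits) if hits else 0.0

print(sentiment_score("I love this, best thing ever!"))  # positive
print(sentiment_score("Terrible. I hate it."))           # negative
```

Aggregated over millions of posts, even crude scores like these become the labeled training data that lets bot operators tune generated text until it matches the sentiment and register of a given community.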

“We have the technology to totally fill Twitter, email, and the web up with reasonable-sounding, context-appropriate prose, which would drown out all other speech and be impossible to filter.” – Jeremy Howard (deep learning researcher)

In the end, the bots will only get more sophisticated. In a few years (some people already say it’s happening) you simply won’t be able to tell whether a post was written by a person or a bot.

So how will people react to this?

As long as it doesn’t f*ck with their dopamine feedback loop, most people will be perfectly fine with this.


