Klaus Schwab’s WEF now wants to CENSOR the Internet with AI

  • Bad actors perpetrating online harms are getting more dangerous and sophisticated, challenging current trust and safety processes.
  • Existing methodologies, including automated detection and manual moderation, are limited in their ability to adapt to complex threats at scale.
  • A new framework incorporating the strengths of humans and machines is required.

As the internet has evolved, so has the dark world of online harms. Trust and safety teams (the teams within online platforms responsible for removing abusive content and enforcing platform policies) face an ever-growing list of abuses, such as child abuse, extremism, disinformation, hate speech and fraud, as well as increasingly advanced actors misusing platforms in novel ways.

The solution, however, is not as simple as hiring another roomful of content moderators or building yet another block list. Without a profound familiarity with different types of abuse, an understanding of hate group verbiage, fluency in terrorist languages and nuanced comprehension of disinformation campaigns, trust and safety teams can only scratch the surface.


A more sophisticated approach is required. By combining innovative technology, off-platform intelligence collection and subject-matter experts who understand how threat actors operate, scaled detection of online abuse can approach near-perfect precision.
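The human-machine framework described above is often implemented as a triage pipeline: an automated classifier handles clear-cut cases at scale, while borderline items are routed to human trust-and-safety reviewers. The sketch below illustrates that idea only; the function names, thresholds and scores are invented for the example, not taken from the WEF article.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "remove", "allow", or "human_review"
    score: float  # model's estimated abuse probability

def triage(abuse_score: float,
           remove_threshold: float = 0.95,
           allow_threshold: float = 0.05) -> Decision:
    """Route one item based on a model's abuse probability (illustrative)."""
    if abuse_score >= remove_threshold:
        return Decision("remove", abuse_score)    # high-confidence auto-removal
    if abuse_score <= allow_threshold:
        return Decision("allow", abuse_score)     # clearly benign, no action
    return Decision("human_review", abuse_score)  # grey zone goes to experts

# Example: three items with hypothetical model scores
for s in (0.99, 0.50, 0.01):
    print(triage(s).action)
```

The thresholds encode the precision/scale trade-off: widening the grey zone sends more items to (slower, costlier) human experts but reduces automated mistakes.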

www.weforum.org/agenda/2022/08/online-abuse-artificial-intelligence-human-input

h/t luisaywhitehat
