Apple threw away years of carefully cultivated privacy reputation for this…

PRIVACY: We built a system like Apple’s to flag child sexual abuse material — and concluded the tech was dangerous.

Knowledgeable observers argued a system like ours was far from feasible. After many false starts, we built a working prototype. But we encountered a glaring problem.

Our system could be easily repurposed for surveillance and censorship. The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.
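
To make the concern concrete, here is a minimal sketch of a matcher of this kind (hypothetical names; real deployments use perceptual rather than exact hashes). The point is that the matching code is entirely agnostic to what the database represents, so repurposing it for censorship requires nothing more than handing it a different set of hashes:

```python
import hashlib


def fingerprint(content: bytes) -> str:
    """Stand-in for a perceptual hash; real systems tolerate small edits."""
    return hashlib.sha256(content).hexdigest()


def flag_if_matched(content: bytes, hash_database: set[str]) -> bool:
    """Report content whose fingerprint appears in the supplied database.

    Nothing constrains what the database contains: hashes of abuse imagery,
    dissident memes, or protest flyers all look identical to this code.
    """
    return fingerprint(content) in hash_database


# Whoever operates the service (or compels it) controls the database.
csam_hashes = {"..."}        # hypothetical: the advertised purpose
dissident_hashes = {"..."}   # hypothetical: the same machinery, repurposed

photo = b"user photo bytes"
flag_if_matched(photo, csam_hashes)       # scanning for abuse material
flag_if_matched(photo, dissident_hashes)  # scanning for political speech
```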

A foreign government could, for example, compel a service to out people sharing disfavored political speech. That’s no hypothetical: WeChat, the popular Chinese messaging app, already uses content matching to identify dissident material. India enacted rules this year that could require pre-screening content critical of government policy. Russia recently fined Google, Facebook and Twitter for not removing pro-democracy protest materials.

We spotted other shortcomings. The content-matching process could have false positives, and malicious users could game the system to subject innocent users to scrutiny.
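
Both problems follow from how perceptual matching works. A rough illustration, assuming a pHash-style scheme in which 64-bit hashes are compared by Hamming distance (the values and threshold below are invented for the example, not taken from any real system): any unrelated image whose hash happens to land within the threshold is a false positive, and an attacker who can craft an image that lands there deliberately can get an innocent recipient flagged.

```python
def hamming(a: int, b: int) -> int:
    """Count of differing bits between two 64-bit perceptual hashes."""
    return bin(a ^ b).count("1")


THRESHOLD = 10  # hashes this close are treated as "the same image"

known_bad = 0x9F3A72C10D448E55   # hypothetical entry in the match database
innocent  = 0x9F3A72C10D448E51   # an unrelated photo may hash nearby by chance

print(hamming(known_bad, innocent) <= THRESHOLD)  # True: a false positive
```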

Background here.

h/t Stephen Green
