Apple threw away years of carefully cultivated privacy reputation for this…

PRIVACY: We built a system like Apple’s to flag child sexual abuse material — and concluded the tech was dangerous.

Knowledgeable observers argued a system like ours was far from feasible. After many false starts, we built a working prototype. But we encountered a glaring problem.

Our system could be easily repurposed for surveillance and censorship. The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.
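The repurposing risk the authors describe follows directly from how such matchers are built. A minimal sketch (all names hypothetical, and using an exact cryptographic hash in place of a real perceptual hash) shows that the matching code is agnostic to what the database contains; swap in a different set of hashes and the same code flags different content, with no visible change to the user:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Stand-in for a perceptual hash; real systems hash image features."""
    return hashlib.sha256(content).hexdigest()

def flag_if_matched(content: bytes, blocked_hashes: set) -> bool:
    """Flags content if its fingerprint appears in whatever database is loaded."""
    return fingerprint(content) in blocked_hashes

# The matcher serves either database equally well; nothing in the code
# restricts it to one category of content.
csam_db = {fingerprint(b"known-abuse-image")}
dissent_db = {fingerprint(b"protest-flyer")}

print(flag_if_matched(b"protest-flyer", csam_db))     # not flagged
print(flag_if_matched(b"protest-flyer", dissent_db))  # flagged
```

The point of the sketch: the "what to block" decision lives entirely in the database, which the service (or whoever compels the service) controls.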

A foreign government could, for example, compel a service to out people sharing disfavored political speech. That’s no hypothetical: WeChat, the popular Chinese messaging app, already uses content matching to identify dissident material. India enacted rules this year that could require pre-screening content critical of government policy. Russia recently fined Google, Facebook and Twitter for not removing pro-democracy protest materials.

We spotted other shortcomings. The content-matching process could have false positives, and malicious users could game the system to subject innocent users to scrutiny.
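The false-positive problem arises because perceptual matchers compare content approximately, not exactly, so visually similar inputs reduce to the same fingerprint. A toy average-hash (hypothetical, illustrative only) shows two entirely different pixel arrays colliding:

```python
def avg_hash(pixels: list) -> int:
    """Toy average-hash: one bit per pixel, set if the pixel exceeds the mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

innocent = [10, 200, 15, 190, 12, 205, 11, 198]   # a harmless image
blocked  = [20, 210, 25, 180, 22, 215, 21, 188]   # an unrelated database entry

# Both inputs reduce to the same bit pattern (Hamming distance 0), so a
# threshold-based match would flag the innocent file.
print(hamming(avg_hash(innocent), avg_hash(blocked)))  # 0
```

The same property is what malicious users can exploit: craft an innocuous-looking file whose fingerprint collides with a database entry, and the system flags whoever receives it.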

Background here.

h/t Stephen Green

