Apple threw away years of carefully cultivated privacy reputation for this…

PRIVACY: We built a system like Apple’s to flag child sexual abuse material — and concluded the tech was dangerous.

Knowledgeable observers argued a system like ours was far from feasible. After many false starts, we built a working prototype. But we encountered a glaring problem.

Our system could be easily repurposed for surveillance and censorship. The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.
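To make that concrete, here is a minimal sketch (not the authors' actual prototype, and using a toy stand-in for a real perceptual hash such as PhotoDNA or Apple's NeuralHash) of why hash-based content matching is content-agnostic: the matcher only checks membership in whatever database it is handed, so swapping databases silently repurposes the entire pipeline.

```python
# Toy illustration of content-agnostic hash matching. `toy_hash` is a
# hypothetical stand-in for a perceptual hash; real systems hash images
# so that near-duplicate copies produce matching values.

def toy_hash(data: bytes) -> int:
    """64-bit FNV-1a hash, standing in for a perceptual hash."""
    h = 14695981039346656037
    for b in data:
        h = ((h ^ b) * 1099511628211) & 0xFFFFFFFFFFFFFFFF
    return h

def flagged(content: bytes, database: set[int]) -> bool:
    """The matcher never knows what the database represents; it only
    answers: is this content's hash on the supplied list?"""
    return toy_hash(content) in database

csam_db = {toy_hash(b"known-abuse-image")}        # the stated purpose
dissent_db = {toy_hash(b"banned-protest-flyer")}  # a quiet repurposing

print(flagged(b"banned-protest-flyer", csam_db))     # False
print(flagged(b"banned-protest-flyer", dissent_db))  # True
```

Nothing in the matching code changes between the two runs; the only difference is the database an operator, or a government leaning on that operator, supplies.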

A foreign government could, for example, compel a service to out people sharing disfavored political speech. That’s no hypothetical: WeChat, the popular Chinese messaging app, already uses content matching to identify dissident material. India enacted rules this year that could require pre-screening content critical of government policy. Russia recently fined Google, Facebook and Twitter for not removing pro-democracy protest materials.

We spotted other shortcomings. The content-matching process could have false positives, and malicious users could game the system to subject innocent users to scrutiny.
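For a sense of where false positives come from: perceptual hashes are compared within a distance threshold so that re-encoded or cropped copies still match. A hedged sketch follows, with made-up 64-bit values rather than the output of any real algorithm.

```python
# Why approximate matching admits false positives: a match is declared
# within a Hamming-distance threshold, so unrelated content that happens
# to land near a database entry is flagged alongside genuine copies.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

THRESHOLD = 8  # tolerate small perturbations (re-compression, crops)

database_hash = 0xDEADBEEFCAFEF00D
resaved_copy  = database_hash ^ 0b111        # 3 bits differ: true match
innocent_file = database_hash ^ 0b11110000   # 4 bits differ: false positive

for h in (resaved_copy, innocent_file):
    print(hamming(database_hash, h) <= THRESHOLD)  # True, True
```

The same tolerance cuts the other way: an attacker who can craft content that lands within the threshold of a database entry can force a match on an innocent recipient, which is the gaming problem the authors describe.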

Background here.

h/t Stephen Green
