Knowledgeable observers argued a system like ours was far from feasible. After many false starts, we built a working prototype. But we encountered a glaring problem.
Our system could be easily repurposed for surveillance and censorship. The design wasn’t restricted to a specific category of content; a service could simply swap in any content-matching database, and the person using that service would be none the wiser.
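The repurposing risk described above can be made concrete with a toy sketch. The client-side scanner only ever sees opaque hashes, so the same unmodified code matches against whatever database the operator supplies; all names and data below are hypothetical, and SHA-256 stands in for the perceptual hashes real systems use:

```python
import hashlib

def content_hash(data: bytes) -> str:
    # Real systems use perceptual hashes; SHA-256 is a stand-in here.
    return hashlib.sha256(data).hexdigest()

def scan(message: bytes, hash_db: set[str]) -> bool:
    """Flag the message if its hash appears in the supplied database."""
    return content_hash(message) in hash_db

# The operator controls the database; the client code never changes,
# and the user cannot inspect what the opaque hashes represent.
csam_db = {content_hash(b"known-harmful-sample")}
dissent_db = {content_hash(b"banned political pamphlet")}

msg = b"banned political pamphlet"
print(scan(msg, csam_db))     # False
print(scan(msg, dissent_db))  # True: same scanner, different database
```

Swapping `csam_db` for `dissent_db` repurposes the system for censorship without any visible change on the device.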
A foreign government could, for example, compel a service to out people sharing disfavored political speech. That’s no hypothetical: WeChat, the popular Chinese messaging app, already uses content matching to identify dissident material. India enacted rules this year that could require pre-screening content critical of government policy. Russia recently fined Google, Facebook and Twitter for not removing pro-democracy protest materials.
We spotted other shortcomings. The content-matching process could have false positives, and malicious users could game the system to subject innocent users to scrutiny.
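The false-positive problem follows from how perceptual matching works: it deliberately tolerates variation, so distinct inputs can share a fingerprint. A purely illustrative example, using a naive average-threshold hash (not any real deployed scheme), shows two different pixel rows colliding:

```python
def average_hash(pixels: list[int]) -> int:
    """Naive perceptual-style hash: one bit per pixel, thresholded at the mean."""
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

# Two different "images" with the same bright/dark pattern relative
# to their own mean produce identical fingerprints.
innocent = [10, 200, 15, 220, 12, 210, 14, 205]
altered  = [90, 255, 80, 250, 85, 240, 70, 230]

print(average_hash(innocent) == average_hash(altered))  # True: a collision
```

An attacker who can predict such collisions can craft innocuous-looking content that matches a flagged fingerprint, subjecting its recipients to scrutiny.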
Apple threw away a privacy reputation it spent years carefully cultivating for this.
Background here.
h/t Stephen Green
- Ellen Brown: The Looming Quadrillion Dollar Derivatives Tsunami
- Janet Yellen Just Poured Lighter Fluid On Every Small Bank In America
- The Great Financial Collapse of 2023. Comparison of Bear Stearns’ collapse in March 2008 and Credit Suisse in March 2023.
- Ron DeSantis unveils legislation to BAN Central Bank Digital Currency in Florida, protecting citizens from a grave threat to civil liberties…
- Never in history have we had all three issues happening at once…
- Clearwater Mayor abruptly resigns… Council members left in stunned silence
- Sperm has been almost entirely replaced by spike proteins
- People are crashing…
- Armstrong: WOKE Culture is Destroying the Economy & our Nation
- 2023: A Year When Everything Is Suddenly Breaking Loose All At Once