Check out this article from the Council on Foreign Relations. It purports to be about the problem of "deep fakes," that is, faked video, pictures, or audio. Supposedly the concern is that someone could use these to unduly influence an election, but in reality this is all pretense leading up to the main part of the article: their desired "solutions."
Ideally, this technology-driven problem could be addressed adequately through technological solutions. But though strong detection algorithms are emerging (including GAN-based methods), they are lagging behind the innovation found in the creation of deep fakes. Even if an effective detection method emerges, it will struggle to have broad impact unless the major content distribution platforms, including traditional and social media, adopt it as a screening or filtering mechanism.
Translation: we need to prepare an algorithm for YouTube and the other CIA-front social media companies to remove any content that threatens our cartel.
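The "screening or filtering mechanism" the article envisions amounts to an upload-time hook that runs every file through a detector before it is published. A minimal sketch of that plumbing, assuming a hypothetical `detect_fake` scoring function and a made-up policy threshold (the stub below just applies a toy rule; a real deployment would call a trained classifier):

```python
# Sketch of the upload-time screening hook the article envisions.
# `detect_fake` stands in for a real deep-fake detector; here it is a
# stub that scores by a toy rule, purely for illustration.

QUARANTINE_THRESHOLD = 0.8  # assumed policy knob, not from the article

def detect_fake(media_bytes: bytes) -> float:
    """Hypothetical detector: returns a 0.0-1.0 'probability of fake'."""
    # Stub logic: a real implementation would run a trained model here.
    return 0.95 if media_bytes.startswith(b"FAKE") else 0.05

def screen_upload(media_bytes: bytes) -> str:
    """Decide whether an upload is published or quarantined for review."""
    score = detect_fake(media_bytes)
    return "quarantined" if score >= QUARANTINE_THRESHOLD else "published"

print(screen_upload(b"FAKE clip"))  # quarantined
print(screen_upload(b"real clip"))  # published
```

Note that whoever sets `QUARANTINE_THRESHOLD` and trains the detector decides what the public gets to see, which is exactly the point of the "translation" above.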
The same is true for potential solutions involving digital provenance: video or audio content can be watermarked at its creation, producing immutable metadata that marks location, time, and place and attests that the material was not tampered with. To have a broad effect, digital provenance solutions would need to be built into all the devices people use to create content, and traditional and social media would need to incorporate those solutions into their screening and filtering systems.
Translation: we need to make it impossible to create audio or video files without a watermark, built into the hardware itself. Also notice the false premise: any digital file can, by its nature, be altered; in other words, there can never be a truly fake-proof watermark.
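To make the point concrete, here is a minimal sketch of what such a provenance watermark amounts to: a keyed tag (here an HMAC) computed over the file bytes plus its metadata. The key name and scheme are illustrative, not from the article:

```python
import hashlib
import hmac
import json

# Hypothetical per-device signing key. For the scheme to work at all,
# some version of this key must ship inside every capture device.
SECRET_KEY = b"device-private-key"

def watermark(media_bytes: bytes, location: str, timestamp: str) -> dict:
    """Attach provenance metadata plus an HMAC tag over media + metadata."""
    meta = {"location": location, "time": timestamp}
    payload = media_bytes + json.dumps(meta, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"meta": meta, "tag": tag}

def verify(media_bytes: bytes, stamp: dict) -> bool:
    """Check that neither the media nor its metadata was modified."""
    payload = media_bytes + json.dumps(stamp["meta"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, stamp["tag"])

video = b"\x00raw video bytes"
stamp = watermark(video, "somewhere", "2019-01-01T00:00:00Z")
print(verify(video, stamp))              # True: untouched file verifies
print(verify(video + b"edit", stamp))    # False: any edit breaks the tag

# The catch: anyone who extracts SECRET_KEY from a device can forge valid
# stamps, and anyone can simply strip the stamp and re-encode the file.
# The watermark is just data, and data can always be altered or discarded.
```

The comments at the bottom are the author's point restated: the scheme detects tampering only for cooperating parties, and says nothing about a file that never carried (or no longer carries) a stamp.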
Another option would be for Congress to intervene with regulatory legislation compelling the use of such technology.
Translation: if the companies that aren't run by the CIA won't play ball, maybe we should see if we can force them to do it.
Legal and regulatory frameworks could play a role in mitigating the problem, but as with most technology-based solutions they will struggle to have broad effect, especially in the case of international relations.
Translation: we don’t know how to stop the Russians and the Chinese from either making deep fakes or hosting real content that we want to suppress by calling it fake.
Existing laws already address some of the most malicious fakes; a number of criminal and tort statutes forbid the intentional distribution of false, harmful information. But these laws have limited reach.
Translation: it’s already a crime to make and distribute harmful fakes. Everything beyond that is pretense.
Social media platforms have long been insulated from liability for distributing harmful content. Section 230 of the Communications Decency Act of 1996 broadly immunizes online service providers in relation to harms caused by user-generated content, with only a few exceptions. Congress could give platforms stronger incentives to self-police by limiting that immunity. It could, for example, make Section 230 immunity contingent on whether a company has made reasonable efforts to identify and remove falsified, harmful content either at the upload stage or upon receiving notification about it after it is posted.
Translation: we can use this pretense about deep fakes to shut down any free-speech-oriented competitors to YouTube and the social media giants.
Deep fakes do not always require a mass audience to achieve a harmful effect. From a national security and international relations perspective, the most harmful deep fakes might not flow through social media channels. Instead, they could be delivered to target audiences as part of a strategy of reputational sabotage. This approach will be particularly appealing for foreign intelligence services hoping to influence decision-making by people without access to cutting-edge detection technology.
But perhaps the most telling statement of all:
The United States should also improve its efforts to combat hostile information operations that target U.S. democracy and social cohesion, whether they feature deep fakes or not.
Translation: the internet is interfering with the brainwashing operations we use to create false social cohesion. We need to shut it down fast.
Also extremely telling is the following statement:
For some organizations and individuals, the best defense against deep fakes would be to establish highly credible alibis regarding where they have been and what they have been doing or saying. In practical terms, politicians and others with reputations to protect could have an increased interest in life-logging services.
If you have any familiarity with how extensive the blackmail operations surrounding politicians are, you will know to read between the lines here. They seem to be hinting at two meanings. The first is the stated concern about foreign governments or other sophisticated actors making deep fakes of our public officials. The second is the possibility of fabricated “alibi” material, prepared in case white hats or foreign powers come into possession of the material the bankers and other invisible powers use to keep their blackmailed slaves in Congress in line.