New Data Shows Police State Facial Recognition Is WRONG Over 90% Of The Time

by John Vibes, Activist Post:

A police department that relies on facial recognition software has admitted that the system has a false positive rate of over 90 percent. In other words, nearly every person the system flags as a suspect is an innocent person who may be stopped and questioned by police, or possibly worse, because this faulty technology wrongly identified them.

According to a report from the Guardian, the South Wales Police scanned a crowd of more than 170,000 people at the 2017 Champions League final soccer match in Cardiff and falsely identified thousands of innocent people. The system flagged 2,470 people as watch-list matches, but 2,297 of them were innocent and only 173 were genuine matches: a 92 percent false positive rate.
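For readers who want to check the arithmetic behind the headline, here is a minimal sketch in Python using the figures above. Note that 2,297 wrong alerts out of 2,470 works out to about 93 percent, consistent with the "over 90 percent" claim (the widely cited figure is 92 percent). The variable names are illustrative, not from any official source.

```python
# Figures reported for the 2017 Champions League final in Cardiff.
total_alerts = 2470   # people flagged by the system as watch-list matches
true_matches = 173    # alerts that turned out to be genuine

false_alerts = total_alerts - true_matches  # 2,297 innocent people flagged

# Share of alerts that were wrong (what the article calls the
# "false positive rate"; since the denominator is alerts rather than
# the whole crowd, statisticians would call it the false discovery rate).
false_alert_share = false_alerts / total_alerts
print(f"{false_alerts} of {total_alerts} alerts were wrong "
      f"({false_alert_share:.1%})")
# -> 2297 of 2470 alerts were wrong (93.0%)
```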

According to data obtained through a Freedom of Information request filed by Wired, these are typical numbers for the facial recognition software used by the South Wales Police: the department's own figures showed false positive rates of 87 percent and 90 percent at other events. Further, it is not clear how many of the correctly identified suspects were wanted for anything more than nonviolent offenses.

The South Wales police have responded to these findings by issuing the following statement:

Of course no facial recognition system is 100 percent accurate under all conditions. Technical issues are normal to all face recognition systems which means false positives will continue to be a common problem for the foreseeable future. However since we introduced the facial recognition technology no individual has been arrested where a false positive alert has led to an intervention and no members of the public have complained.

In relation to the false positives this is where the system incorrectly matches a person against a watch list. The operator considers the initial alert and either disregards it (which happens on the majority of cases) or dispatches an intervention team as the operator feels that the match is correct. When the intervention team is dispatched this involves an officer having an interaction with the potentially matched individual. Officers can quickly establish if the person has been correctly or incorrectly matched by traditional policing methods i.e. normally a dialogue between the officer/s and the individual.

The South Wales Police don't see a problem with targeting misidentified people for stops and questioning, but these findings raise obvious red flags for privacy advocates.

Silkie Carlo, director of the civil liberties group Big Brother Watch, is launching a campaign against facial recognition this month.

“These figures show that not only is real-time facial recognition a threat to civil liberties, it is a dangerously inaccurate policing tool. South Wales’ statistics show that the tech misidentifies innocent members of the public at a terrifying rate, whilst there are only a handful of occasions where it has supported a genuine policing purpose,” Carlo said.

Read More @ ActivistPost.com
