
from the for-now… dept

At the end of last year, the National Institute of Standards and Technology (NIST) released its review of 189 facial recognition algorithms submitted by 99 companies. The results were underwhelming. The tech that law enforcement and security agencies seem to feel is a game changer is just more of the same bias we’ve been subjected to for years without any AI assistance.

Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.

The faces of African American women were falsely identified more often in the kinds of searches used by police investigators where an image is compared to thousands or millions of others in hopes of identifying a suspect.

Who were the winners in NIST’s facial recognition runoff? These guys:

Middle-aged white men generally benefited from the highest accuracy rates.

We have some good news and bad news to report from NIST’s latest facial recognition study [PDF]. And the good news is also kind of bad news. (The bad news contains no good news, though.)

The bad news is that the COVID-19 pandemic is still ongoing. This leads to the good news: face masks — now a necessity and/or requirement in many places — are capable of thwarting facial recognition systems.

Using unmasked images, the most accurate algorithms fail to authenticate a person about 0.3% of the time. Masked images raised even these top algorithms’ failure rate to about 5%, while many otherwise competent algorithms failed between 20% and 50% of the time.

But that’s also bad news. Those failures are mostly false negatives, meaning systems failing to recognize the person they should, which is an unwelcome side effect when the covered face is your own and the locked device is your phone. The tiny bit of good news is that masks generate mostly unusable images for passive systems (like those installed in the UK) that collect photos of everyone who passes by their lenses. The other small bit of good news in this bad news sandwich is this: face masks reduce the risk of bogus arrests/detainments.

While false negatives increased, false positives remained stable or modestly declined.
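
To make the false positive/false negative distinction concrete, here is a minimal sketch of how a 1:1 verification system turns similarity scores into accept/reject decisions. The scores, the 0.6 threshold, and the verify() helper are all hypothetical illustrations, not anything drawn from NIST's testing.

```python
# Illustrative only: a tiny 1:1 face verification decision rule.
# A false negative = a genuine user rejected; a false positive = an impostor accepted.
# All numbers below are made up for demonstration.

def verify(score: float, threshold: float = 0.6) -> bool:
    """Accept the claimed identity if the similarity score clears the threshold."""
    return score >= threshold

# (similarity score, whether the probe really is the claimed person)
trials = [
    (0.92, True),   # genuine user, clear match
    (0.55, True),   # genuine user wearing a mask: score drops below the threshold
    (0.30, False),  # impostor, correctly rejected
    (0.65, False),  # impostor who happens to score high
]

false_negatives = sum(1 for s, genuine in trials if genuine and not verify(s))
false_positives = sum(1 for s, genuine in trials if not genuine and verify(s))

print(f"false negatives (genuine users locked out): {false_negatives}")
print(f"false positives (impostors let through):    {false_positives}")
```

Masks mostly push genuine users' scores down, which is why the observed failures pile up on the false negative side while false positives hold steady.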

NIST also noticed a couple of other quirks in its study. Mask coverage obviously matters: the more of the face that’s covered, the less likely it is the software will draw the correct conclusion. But color also matters. Black masks produced more bad results than blue masks.

Companies producing facial recognition tech (89 algorithms were tested by NIST for this project) aren’t content to wait out the pandemic. Many are already working on algorithms that use fewer features to generate possible matches. This is also bad news. While the tech may be improving, working around masks by limiting the number of data points needed to make a match is just going to generate more false positives and negatives. But companies are already training their AI on face-masked photos, many of which are being harvested from public accounts on social media websites. Dystopia is here to stay. The pandemic has only accelerated its arrival.
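
For a rough idea of what "limiting the number of data points" can look like, here is an illustrative sketch, not any vendor's actual pipeline, that crops a face image down to the band a mask leaves exposed (eyes and brows) before any matching would happen. The crop fractions, image size, and crop_to_eye_region() helper are assumptions made for demonstration.

```python
# Illustrative sketch of the "fewer features" workaround: keep only the
# periocular band of the face so the masked lower half never enters the match.
import numpy as np

def crop_to_eye_region(face: np.ndarray, top: float = 0.15, bottom: float = 0.55) -> np.ndarray:
    """Keep only the horizontal band that typically contains the eyes and brows."""
    height = face.shape[0]
    return face[int(height * top):int(height * bottom), :]

# A stand-in 112x112 grayscale "face" made of random pixels.
fake_face = np.random.rand(112, 112)
eye_band = crop_to_eye_region(fake_face)
print(eye_band.shape)  # roughly the upper-middle 40% of the rows
```

Matching on that smaller band is exactly why error rates climb: there is simply less information left to distinguish one face from another.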

Filed Under: face masks, facial recognition, nist, pandemic

Categories: Technology