By Chris Adams
How do computers recognize people? And what happens when they get it wrong?
In a session with National Press Foundation fellows, two experts from the Future of Privacy Forum, a Washington-based think tank, described the current state of facial recognition technology and where it is headed.
Brenda Leong (bio, Twitter) and Lauren Smith (bio, Twitter) first led fellows through the harms that can arise from automated decision-making. Those include individual harms – ranging from the illegal to the merely unfair – and collective or societal harms.
An individual harm in education, for example, would be denying a student access to a specific school based on an algorithm (which would be illegal) or presenting ads for for-profit schools only to low-income students (which would be unfair). The corresponding societal harm would be ongoing differential access to education.
As for facial recognition technology, Leong detailed how current systems work – where they get their images, for example, and how images are turned into the digital files used for facial recognition comparisons.
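The comparison step Leong described can be illustrated with a toy sketch. This is a hypothetical simplification, not any vendor's actual system: it assumes a face image has already been converted into a fixed-length numeric "template" (real systems use vectors with hundreds of dimensions), and that two templates are compared with cosine similarity against a tuned match threshold.

```python
import math

def cosine_similarity(a, b):
    # Measures how closely two face templates point in the same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(template_a, template_b, threshold=0.9):
    # A match is declared when similarity exceeds the threshold; where that
    # threshold is set trades false matches against false non-matches.
    return cosine_similarity(template_a, template_b) >= threshold

# Toy 4-number templates for illustration only.
enrolled = [0.12, 0.85, 0.40, 0.31]
probe_same = [0.10, 0.88, 0.38, 0.30]   # similar face -> match
probe_diff = [0.90, 0.05, 0.60, 0.10]   # different face -> no match

print(same_person(enrolled, probe_same))  # True
print(same_person(enrolled, probe_diff))  # False
```

Errors happen at exactly this step: a threshold set too loosely declares two different people a match, which is one way the technology "gets it wrong."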
According to the Future of Privacy Forum, facial recognition technology can help in small ways (helping users organize and label photos) and big ones (helping security officials spot known shoplifters). But the technology raises privacy concerns as well, since it involves the collection and use of sensitive biometric data, sometimes without consent.
Leong and Smith provided the fellows with a grid detailing the uses of facial recognition programs at different levels of capability, along with the consent and privacy issues each one presents.