When it comes to photos destined for the web, I’d rather be behind the camera than in front of it. However, on a recent trip to Tokyo I was reminded that photos of me, and specifically of my face, are routinely captured and processed by systems increasingly embedded in modern life.
While waiting at my gate at LAX, I noticed a sign stating that photos of passengers may be taken for facial biometric screening. U.S. Customs and Border Protection (CBP) explains that during the boarding process, the passenger’s picture is matched against the passenger’s passport photo for verification. For U.S. citizens and certain travelers, CBP retains the photo for up to 12 hours; for other travelers, the photo can be retained for up to 14 days. It’s worth noting that this process is not mandatory for U.S. citizens (though if you opt out, you’ll need to verify your identity by some alternative means).
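For the technically curious, this kind of boarding-gate check is typically a 1:1 verification rather than a search of a large database: the live photo and the passport photo are each reduced to an embedding vector, and the two vectors are compared against a similarity threshold. Here is a minimal sketch of that pattern in Python; `get_embedding`, `same_person`, and the 0.6 threshold are hypothetical placeholders for illustration, not a description of CBP’s actual system.

```python
import numpy as np

def get_embedding(face_image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained face-embedding model
    (a network that maps an aligned face crop to a fixed-length vector)."""
    raise NotImplementedError("placeholder for a real embedding model")

def same_person(boarding_photo: np.ndarray, passport_photo: np.ndarray,
                threshold: float = 0.6) -> bool:
    """1:1 verification: do the two photos show the same face?"""
    a = get_embedding(boarding_photo)
    b = get_embedding(passport_photo)
    # Cosine similarity of the two embeddings; the threshold trades off
    # false accepts against false rejects.
    similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return similarity >= threshold
```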
Upon arrival in Tokyo, the immigration agent kindly directed me to look into a camera so he could snap a photo of my travel-weary mug. Japan, like the United States, has been using biometric facial screening for its citizens at some airports since spring of last year. Last fall, it was announced that the program would be expanded further, with demonstration trials of a new NEC facial recognition system scheduled to begin at Terminal 3 of Narita International Airport in April 2019. NEC has also stated that its facial recognition technology will be used at the 2019 Rugby World Cup to identify pre-registered reporters who have been granted access to the event. In an effort to allay privacy concerns, NEC says the facial image captured by its system is used only for identification and is deleted in an appropriate manner after use.
While in Tokyo, I also got the chance to ride in the new Japan Taxi currently being rolled out ahead of the 2020 Olympics. The taxi itself was shiny and new, but the more interesting thing (for a law and tech nerd, at least) was a report on the testing of digital ad tablets inside Japan’s existing taxis that use facial recognition to identify a rider’s gender and personalize the ads displayed. Like NEC’s airport system, these taxi displays discard the captured image after processing, so neither the tablet nor a server records the data.
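That “process and discard” design is easy to picture in code: the frame is classified on-device, only the derived label survives the function call, and the image itself is never written to disk or sent to a server. A hedged sketch of the pattern, with `classify_gender` and the ad catalog as hypothetical stand-ins for whatever the tablets actually run:

```python
from typing import Any

AD_CATALOG = {"female": "ad_f.mp4", "male": "ad_m.mp4"}  # hypothetical inventory
DEFAULT_AD = "ad_generic.mp4"

def classify_gender(frame: Any) -> str:
    """Hypothetical on-device classifier returning a coarse label."""
    raise NotImplementedError("placeholder for an on-device model")

def choose_ad(frame: Any) -> str:
    # Derive only the attribute needed to pick an ad...
    label = classify_gender(frame)
    # ...then drop the frame so no copy of the image is retained or uploaded.
    del frame
    return AD_CATALOG.get(label, DEFAULT_AD)
```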
Upon arriving back at LAX, I decided to use CBP’s Mobile Passport app, which also required me to take a photo of myself for submission to CBP. The app doesn’t explain what is done with the submitted photo, though its developer states on its website that facial photos could be used for facial recognition. And while the app does not say whether a traveler’s facial photo is stored in a CBP database, it does state that it neither shares a traveler’s personal data with third parties nor saves personal data on a virtual system.
Now that I’m back in the United States, all the photos of me used by these systems should, in theory, no longer exist. So why do we hesitate to embrace facial recognition technology? Well, let’s first put aside legitimate concerns about companies (and governments) not always keeping their promises regarding data security and privacy. Let’s also put aside, for now, fears that a system used to identify reporters for access to a sporting event could be repurposed to deny those same reporters (or others) access to other newsworthy events. Instead, let’s focus on whether the conclusions these facial recognition systems draw about identity are even trustworthy. As some of you may be aware, several recent studies have shown that facial recognition technology still has serious flaws when it comes to identifying certain classes of people.
Specifically, recent research has shown that image recognition algorithms suffer from both racial and gender bias. In one study by researchers from the University of Toronto and MIT, commercial facial recognition software used by Orlando police in a pilot program was found to perform better on male faces than on female faces, and better on lighter skin tones than on darker ones. The software had a 31% error rate for darker-skinned women, compared to a nearly 0% error rate for lighter-skinned men. In another example, a study out of the Georgia Institute of Technology found that certain autonomous vehicle systems are better at detecting pedestrians with lighter skin tones: on average, the tested systems were five percent less accurate at detecting people with darker skin tones.
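The methodology behind these findings is worth making concrete: rather than reporting a single overall accuracy number, an audit computes error rates separately for each demographic subgroup, which is how disparities like 31% versus nearly 0% come to light. Here is a minimal sketch of that disaggregation; the toy data is fabricated purely to mirror the shape of the figures reported above, not taken from any study.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Disaggregate a classifier's error rate by demographic subgroup.

    `records` is an iterable of (subgroup, predicted, actual) tuples,
    e.g. subgroup = ("darker", "female"). A single aggregate accuracy
    figure can hide exactly the disparities these audits uncovered.
    """
    totals, errors = defaultdict(int), defaultdict(int)
    for subgroup, predicted, actual in records:
        totals[subgroup] += 1
        if predicted != actual:
            errors[subgroup] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy data shaped like the reported findings (31% vs. ~0% error):
audit = ([(("darker", "female"), "male", "female")] * 31
         + [(("darker", "female"), "female", "female")] * 69
         + [(("lighter", "male"), "male", "male")] * 100)
print(error_rates_by_group(audit))
# -> {('darker', 'female'): 0.31, ('lighter', 'male'): 0.0}
```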
In view of this research, it is important to consider what sort of (additional) latent biases we may be inserting into our everyday systems and how those biases may be affecting decisions, processes and, ultimately, people. While this does not mean we need to opt out of facial recognition technology, it does mean that a lot of work still needs to be done to ensure these software systems are bias-free. Failure to do so not only falls short of our moral obligations, but may also expose organizations that make or use such software to claims of unlawful gender and/or racial discrimination. While no system is perfect, no organization wants its face associated with a civil rights lawsuit either.