Exclusive: TSA to expand its facial recognition program to over 400 airports

By Wilfred Chan

The Transportation Security Administration (TSA) is preparing to expand its controversial facial recognition program to around 430 airports over the next “several years” after finding “extremely promising” results from its pilot program, an agency spokesperson tells Fast Company. The expansion comes amid allegations by rights advocates that the agency is improperly coercing travelers to participate.

According to a TSA assessment of a pilot program that’s now under way at 25 airports, the agency has determined that its facial matching algorithms are 97% effective “across demographics, including dark skin tones,” says TSA press secretary Robert Langston, adding that the agency would not be publicly releasing the final results of the tests, which span two years of data. 

The pilot program is officially voluntary and uses what’s known as 1:1 facial matching technology to verify that a traveler standing at a checkpoint matches the photo on their physical ID. “The comparison is extremely accurate,” Langston says. 

TSA is also running a smaller pilot at two airports of what’s called 1:n facial recognition, which matches a traveler’s face against a government database of images. That pilot is currently limited to “trusted travelers,” such as those enrolled in TSA PreCheck, and allows participants to verify their identities without taking out a physical ID at all. 
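
The distinction between the two modes can be sketched in a few lines of Python. This is an illustrative toy only, with hand-made embedding vectors, cosine similarity, and an arbitrary threshold; it is not a description of TSA’s actual systems or vendors’ algorithms.

```python
# Illustrative sketch: 1:1 verification vs. 1:n identification.
# Faces are represented as toy embedding vectors; real systems derive
# embeddings from images with a trained neural network.
import math

def similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def verify_1_to_1(live_face, id_photo, threshold=0.9):
    """1:1 matching: does the live capture match the photo on the ID?"""
    return similarity(live_face, id_photo) >= threshold

def identify_1_to_n(live_face, gallery, threshold=0.9):
    """1:n matching: which enrolled identity in the gallery, if any,
    does the live capture best match?"""
    best_name, best_score = None, threshold
    for name, template in gallery.items():
        score = similarity(live_face, template)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name
```

The practical difference is what each mode requires: 1:1 verification only compares against the single photo the traveler presents, while 1:n identification searches a pre-enrolled database, which is why the latter pilot needs no physical ID at all.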

TSA doesn’t retain the details of people’s faces—what’s called biometric data—after the comparison is made. “Biometric data is overwritten as soon as the next passenger steps up to the queue,” Langston says. “And then, when the technology is turned off at the end of the day, whatever storage system in there dumps completely. There is no saved image.”

But Langston acknowledges that, until this week, some of travelers’ biometric data was collected and sent to the Department of Homeland Security’s AI research wing to “determine the efficacy of the algorithms” that were used. That data was sent as encrypted code, not as image files, he says.

The agency declined to elaborate on what vendors are providing its facial recognition technology. Federal procurement data shows TSA has awarded tens of millions of dollars over the last three years to a Virginia-based biometrics firm called Dignari for providing digital identity services. A spokesperson for Dignari acknowledged its relationship with TSA, but declined to comment further.

While TSA frames facial recognition as a way to make airports more efficient while improving security, technology-ethics advocates say the agency’s claims shouldn’t be trusted—and that the program needs to be shut down. 

“TSA doing its own testing and not releasing the results publicly calls into question the quality of the testing and veracity of the results,” says Jeramie Scott, a senior counsel and director of the Electronic Privacy Information Center’s surveillance oversight program. “Given there are over 2 million airline passengers a day, a 97% effective rate means there would be over 60,000 people a day the tech doesn’t work on if fully implemented.

“Regardless of the results though, TSA should not be implementing the use of facial recognition. TSA’s claims of protecting privacy and the voluntariness of the program mean very little when the agency can change on a whim how the program is implemented.”
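
Scott’s figure is straightforward arithmetic: if the technology works for 97% of travelers, it fails for the remaining 3%, and 3% of 2 million daily passengers is 60,000.

```python
# Back-of-the-envelope check of the failure estimate quoted above.
daily_passengers = 2_000_000   # "over 2 million airline passengers a day"
claimed_effectiveness = 0.97   # TSA's reported 97% match rate

failures_per_day = daily_passengers * (1 - claimed_effectiveness)
print(round(failures_per_day))  # 60000 people a day the tech would not work on
```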

“It didn’t seem like there was a choice”

TSA has made no secret of its desire to scale up its biometric programs. In a roadmap published in 2018, the agency declared an intention to phase out even the use of physical IDs and rely purely on facial recognition. At an SXSW fireside chat in March, TSA Administrator David Pekoske said that “eventually we will get to the point where we will require biometrics across the board because it is much more effective and much more efficient.”

Advocates say facial recognition erodes civil liberties and exacerbates bias against marginalized people. And researchers allege that TSA’s pilot programs are already violating travelers’ stated rights to opt out. 

According to TSA’s website on its biometrics technology, the agency “notifies passengers using signage at the airport near dedicated test lanes . . . that participation is voluntary.” Travelers “may notify a TSA officer if they do not wish to participate and instead go through the standard ID verification process,” it adds. 

But that clearly isn’t the case, says the Algorithmic Justice League, a nonprofit research group that recently launched an online survey to collect travelers’ accounts of TSA’s pilot biometric checkpoints, the initial results of which were shared exclusively with Fast Company. 

Out of 67 responses collected by the Algorithmic Justice League this week, 60 travelers reported that they saw no signs warning that they would be asked to submit to facial recognition, and 65 travelers said that TSA officers did not ask them for consent. 

“I didn’t know there was an option. It happened so quickly, and it didn’t seem like there was a choice,” one traveler wrote. “I hadn’t flown in 10 years and I was overwhelmed and didn’t realize what was happening.”

Another traveler said that they started to walk away after their traditional ID check was finished, only to be stopped and told to return to the camera. “I was not told I could disagree to it, and it was made to seem like it was a new procedure,” they wrote. 

A traveler who identifies as a Black woman said that “there didn’t seem to be an alternative option,” and feared that opting out could subject her to body frisks “as a punishment” for “being a problem.”

Another traveler said they only found out later through social media that they had the right to opt out. “I feel coerced into participating in something I did not want to,” the person wrote. 

When forwarded these accounts, TSA’s Langston dismissed their allegations. “While TSA cannot respond to a summary report that seems to lack statistical validity with 67 respondents, I can tell you that TSA’s two-year study was based in scientific rigor and that signs are posted at each podium highlighting the voluntary nature of participation and that if anyone expressed reservation at the podium, the officer is there to conduct a manual identification verification process with ease and without delay,” he says. “If somebody has any degree of reticence in the technology, all they have to do is speak up.”

Agents are “absolutely not” incentivized to pressure people to use facial recognition, he adds. “It’s passenger choice. And what we’ve heard anecdotally is that people appreciate the convenience of being able to get through security with the image capture. People love these things.”

Advocates say that’s a trap. Even if facial recognition is speedier, travelers must resist “embracing convenience shackles,” says Joy Buolamwini, the Algorithmic Justice League’s founder. Our facial data is especially valuable because facial recognition “does not require cooperation,” she says. “Once your face is visible and it’s exposed, and you have these biometric surveillance systems being used widely in public, we have the apparatus for a total surveillance state.”

“It’s not too late”

Facial recognition is also harmful when it fails. In February, Black asylum seekers were reportedly blocked from filling out applications through the U.S. government’s new mobile app after its facial recognition systems struggled to pick up their darker skin tones. And in 2020, Robert Williams, a Black man in Detroit, was wrongfully handcuffed in front of his daughters and jailed due to a facial recognition error.

Concerns over such bias were at the heart of a February letter to Pekoske from Senators Edward Markey, Jeff Merkley, Cory Booker, Elizabeth Warren, and Bernie Sanders. The five lawmakers demanded that the agency “immediately halt” the deployment of the technology, citing a 2019 study by the National Institute of Standards and Technology that found Asian and African American people were as much as 100 times more likely than White men to be misidentified by facial recognition technology. “Americans’ civil rights are under threat when the government deploys this technology on a mass scale, without sufficient evidence that the technology is effective on people of color and does not violate Americans’ right to privacy,” they wrote. 

The senators’ letter doesn’t seem to have slowed the agency down. In a response dated May 17 and shared with Fast Company by a Senate staffer, Pekoske defended the biometric programs but declined to include specific information about the systems’ real-world performance, calling it sensitive security information. Langston also disputes the senators’ arguments, pointing to industry reports that newer and more advanced algorithms are much less likely to perpetuate bias. 

But Buolamwini says TSA’s lack of transparency on its pilots is already a red flag. Next week, her group plans to meet with Department of Homeland Security officials to present the traveler accounts and demand answers. “This is something we have to resist now, before it becomes the default,” she says. “It’s not too late.”
