The problem is that governments are increasingly using biometric identification, especially facial recognition, to distribute essential benefits like unemployment assistance.
The failure rate of these technologies is concerning in and of itself, but it also endangers people who may be medically, socially, or financially harmed by these failures. JL
Mia Sato reports in MIT Technology Review:
At first glance, JB, an artist based in Los Angeles, perhaps doesn’t look much like the picture on their driver’s license. For one thing, the ID photo is from a few years ago. Hair that was once long and dark is now buzzed and bleached. And there’s the fact that JB is transgender and has been taking testosterone for over two years, which has led to changing facial features, thicker eyebrows, and acne that wasn’t there before. (They asked to be identified only by their first initials because of privacy concerns.)
JB lost a part-time retail job when the lockdowns hit last March and, like millions of other Americans, attempted to apply for unemployment benefits—never suspecting that their changing appearance would stand in the way. Months after submitting paperwork electronically, and making multiple calls to a hotline that went nowhere, JB was finally invited to use California’s facial recognition system to verify their identity. But even after multiple tries, the system couldn’t match JB’s face and ID photo, shutting them out of the benefits they qualified for. Eventually, JB stopped trying: the process was too frustrating.
Face recognition software started becoming more common years before the pandemic, and its potential flaws are well documented: journalists have revealed how police departments across the US use vast databases of faces in investigations, with questionable accuracy. Companies have stopped or limited use of the technology amid evidence that it doesn’t work as well on people of color. Even so, it continues to spread: other federal agencies plan on expanding its use, while it is also being used everywhere from shopping malls to concert venues. Macy’s was sued last year over its alleged use of face recognition on store customers. But pandemic-related uses of the tech to screen for things like benefits eligibility have critics especially concerned.
Law enforcement and private businesses have used face recognition for years, but use of the technology in distributing government aid has expanded rapidly during the pandemic. States and federal agencies have turned to face recognition as a contactless, automated way of verifying the identity of people applying for unemployment and other public benefits.
Experts and activists worry that failures of this technology could prevent people from getting benefits they desperately need—and that it could be even more dangerous if it works as designed.
Using your face for the mundane
The pandemic accelerated the use of many biometric data collection tools—temperature checks at doors, thermal cameras in schools, face scans at airports. When it comes to benefits such as unemployment, state governments are turning in particular to facial recognition to verify people’s identity before releasing money they are entitled to. The second wave of US stimulus funds, passed in December 2020, required states to verify the identity of people applying for Pandemic Unemployment Assistance—a federal pool of money.
Now 27 state unemployment agencies (California’s among them) are working with ID.me, a company offering face recognition technology, says CEO Blake Hall. The US Department of Labor also provided millions in funding to states to implement fraud prevention measures, which has pumped more dollars into facial recognition. In recent months there have been reports across the country of incidents in which unemployment systems failed to recognize applicants’ face scans, putting people like JB in precarious financial situations. The risk of misidentification is not equally distributed: face recognition has been proved to be less accurate for people of color than for white people, and men are more likely to be accurately identified than women, according to a federal study published in 2019. The findings were explored further in a study last year.
Hall says that in a sample of 700 users, ID.me did not find a correlation between skin tone and the likelihood of failing the one-to-one matching step.
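To make that “one-to-one matching step” concrete: systems of this kind typically compare a numerical embedding of the applicant’s selfie with an embedding of the photo on file and accept the match only if the similarity clears a threshold. The sketch below is a hypothetical illustration of that general idea, not ID.me’s actual pipeline; the function names, the 128-dimensional embeddings, and the 0.6 threshold are assumptions made for demonstration.

    # Minimal sketch of one-to-one face verification, assuming embedding
    # vectors have already been produced by some face-embedding model.
    # Illustrative only: not ID.me's actual system, models, or thresholds.
    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity between two embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def verify(id_photo_embedding: np.ndarray,
               selfie_embedding: np.ndarray,
               threshold: float = 0.6) -> bool:
        # Accept the match only if the similarity clears the threshold.
        # Raising the threshold blocks more fraud attempts but also rejects
        # more legitimate applicants, which is the trade-off at issue here.
        return cosine_similarity(id_photo_embedding, selfie_embedding) >= threshold

    # Toy usage with synthetic 128-dimensional embeddings.
    rng = np.random.default_rng(0)
    id_vec = rng.normal(size=128)                          # license-photo embedding
    selfie_vec = id_vec + rng.normal(scale=0.3, size=128)  # same person, some drift
    print(verify(id_vec, selfie_vec))                      # prints True here

The only point of the sketch is that a single score measured against a threshold decides the outcome, and where that threshold sits determines who gets locked out.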
“What keeps me up at night is that with the pandemic accelerating things, we'll just start to see this pop up everywhere,” says Evan Greer, director of Fight for the Future, a digital rights group. “It will be in stores, and you’ll have the option to pay with your face. It will be normalized on public transit. It will be used in job interviews.”
More and more, it’s being used in what’s presented as the interest of public health. Australia recently expanded a program using facial recognition to enforce covid-19 safety precautions. People who are quarantining are subject to random check-ins, in which they’re required to send a selfie to confirm they are following rules. Location data is also collected, according to Reuters.
When it comes to essentials like emergency benefits to pay for housing and food, the first priority should be making sure everyone is able to access help, Greer says. Preventing fraud is a reasonable objective on the surface, she adds, but the most pressing goal must be to get people the benefits they need.
“Systems have to be built with human rights and with vulnerable people’s needs in mind from the start. Those can’t be afterthoughts,” Greer says. “They can’t be bug fixes after it already goes wrong.”
ID.me’s Hall says his company’s services are preferable to the existing methods of verifying identity and have helped states cut down on “massive” unemployment fraud since implementing face verification checks. He says unemployment claims have around a 91% true pass rate—either on their own or through a video call with an ID.me representative.
“[That] was our goal going in,” he says. “If we could automate away 91% of this, then the states that are just outgunned in terms of resources can use those resources to provide white-glove concierge service to the 9%.”
When users are not able to get through the face recognition process, ID.me emails them to follow up, according to Hall.
“Everything about this company is about helping people get access to things they’re eligible for,” he says.
Tech in the real world
The months that JB survived without income were difficult. The financial worry was enough to cause stress, and other troubles like a broken computer compounded the anxiety. Even their former employer couldn’t or wouldn’t help cut through the red tape.
“It’s very isolating to be like, ‘No one is helping me in any situation,’” JB says.
On the government side, experts say it makes sense that the pandemic brought new technology to the forefront, but cases like JB’s show that technology in itself is not the whole answer. Anne L. Washington, an assistant professor of data policy at New York University, says it’s tempting to consider a new government technology a success when it works most of the time during the research phase, even if it still fails 5% of the time in the real world. She compares the result to a game of musical chairs: in a room of 100 people, five will always be left without a seat.
“The problem is that governments get some kind of technology and it works 95% of the time—they think it’s solved,” she says. Instead, human intervention becomes more important than ever. Says Washington: “They need a system to regularly handle the five people who are standing.”
There’s an additional layer of risk when a private company is involved. The biggest issue that arises in the rollout of a new kind of technology is where the data is kept, Washington says. Without a trusted entity that has the legal duty to protect people’s information, sensitive data could end up in the hands of others. How would we feel, for example, if the federal government had entrusted a private company with our Social Security numbers when they were created?
Widespread and unchecked use of face recognition tools also has the potential to affect already marginalized groups more than others. Transgender people, for example, have described frequent problems with tools like Google Photos, which may question whether pre- and post-transition photos show the same person. It means reckoning with the software over and over.
“[There’s] inaccuracy in technology’s ability to reflect the breadth of actual diversity and edge cases there are in the real world,” says Daly Barnett, a technologist at the Electronic Frontier Foundation. “We can’t rely on them to accurately classify and compute and reflect those beautiful edge cases.”
Worse than failure
Conversations about face recognition typically debate how the technology could fail or discriminate. But Barnett encourages people to think beyond whether the biometric tools work or not, or whether bias shows up in the technology. She pushes back on the idea that we need them at all. Indeed, activists like Greer warn, the tools could be even more dangerous when they work perfectly. Face recognition has already been used to identify, punish, or stifle protesters, though people are fighting back. In Hong Kong, protesters wore masks and goggles to hide their faces from such police surveillance. In the US, federal prosecutors dropped charges against a protester identified using face recognition who had been accused of assaulting police officers.
“I think it’s understandable that we’re focusing on these flaws and biases, because this technology is being used right now,” Greer says. “But when you take a technology … and layer it on top of a society that’s deeply unjust, even if the technology itself is ‘neutral’ or it doesn’t have any kind of bias baked into it, it’s going to have the effect of automating and exacerbating that discrimination.”
Fight for the Future and EFF both support bans on government use of face recognition tools. And Barnett says that even when the technology is used, government agencies shouldn’t rely on a single system as the gatekeeper to access—especially to get people essentials they need to survive.
“It’s not a far stretch to imagine how, even if they are well intentioned now, [these technologies] can be weaponized against people for various purposes later,” she says.
For marginalized people, though, brushes with face recognition are already causing problems. More than a year after JB first applied for unemployment, the details of the arduous process are still seared into their memory. When the original claim expired this spring, it was a relief. They had recently gotten a new job, and things finally felt back on track.
“It was like, I’m finally getting my life back together,” they say. “I’m really glad that I don’t have to deal with that system. And I really hope I don’t have to deal with that system ever again.”