- Amazon says it's putting a one-year moratorium on its Rekognition software.
- The facial recognition technology has come under fire in recent years for failing to correctly identify non-white faces, a flaw that can have dire consequences, like wrongful imprisonment.
- Think tanks focusing on technology say legislation alone can't fix the problem of algorithmic bias.
Days after IBM took the unprecedented step of halting all development, research, and sales of its facial recognition technology over concerns that it unfairly targets people of color, Amazon said it would also put a one-year moratorium on Rekognition, its own widely implemented facial recognition software.
Amazon will stop selling the software to police departments, although it's unclear whether this will affect departments that are already using the tool.
"We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge," Amazon said in a statement on Wednesday. "We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested."
But prominent technology think tanks like the Electronic Frontier Foundation (EFF) and Fight for the Future (FFF) have a bone to pick with Amazon. They say legislators can't simply regulate the technology's problems away.
"The fact that companies are stepping up and admitting how harmful this technology is should be a big red flag that the technology—regulated or not—can pose grave threats to civil liberties," Matthew Guariglia, a policy analyst at EFF, tells Popular Mechanics.
While EFF agrees the government must act on surveillance technology, Guariglia says, legislative intervention alone won't change the underlying issues inherent in government use of facial recognition tools.
"Even if the technology were to be regulated, its use by the government would exacerbate a policing problem in this nation that already disproportionately impacts Black Americans, immigrants, the unhoused, and other vulnerable populations," Guariglia says.
Meanwhile, the digital rights group FFF is taking things a step further. The group tells Popular Mechanics that Amazon's one-year ban is "a public relations stunt."
"[Amazon has] been calling for the Federal government to 'regulate' facial recognition, because they want their corporate lawyers to help write the legislation, to ensure that it’s friendly to their surveillance capitalist business model," says Evan Greer, deputy director for FFF. "It’s likely Congress will impose some limitations on police use of facial recognition soon. Amazon wants to pretend it was their idea all along. They’ll spend the next year 'improving' the accuracy of their facial recognition algorithms, making it even more effective as an Orwellian surveillance tool."
For years, critics of Amazon's Rekognition have pointed out its problematic algorithmic bias, something that plagues facial recognition technology at large. In July 2018, the American Civil Liberties Union (ACLU) conducted a test with the software and found it incorrectly matched 28 members of Congress to mugshots of people who had been arrested for a crime. The ACLU said running the test with the publicly available tool cost just $12.33.
"The false matches were disproportionately of people of color, including six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis (D-Ga.)," the ACLU said. "These results demonstrate why Congress should join the ACLU in calling for a moratorium on law enforcement use of face surveillance."
In a scathing May 2019 report from Georgetown Law's Center on Privacy and Technology (CPT), titled "Garbage In, Garbage Out," the authors detail how police departments across the country are feeding facial recognition software flawed data. When looking for a suspect, some departments feed the algorithm composite sketches, or even pictures of celebrities they think share physical features with the suspect, in place of an actual photo.
Earlier, in January 2019, Ghanaian-American computer scientist Joy Buolamwini published a study that found Amazon's Rekognition tool misclassified women as men 19 percent of the time, and misclassified darker-skinned women as men 31 percent of the time.
Buolamwini responded to the news on Twitter on June 10, 2020: "Thank you @alfredwkng. This is a collective effort by not only researchers, but also civil liberties organizations, activists, employees, and shareholders applying pressure coupled with the tragic death of #GeorgeFloyd and tardy corporate acknowledgment that #BlackLivesMatter."
It's unclear what improvements, if any, Amazon plans on implementing to make the Rekognition tool more accurate. In a May 2018 letter to Amazon, members of the Congressional Black Caucus urged Amazon to hire more lawyers, engineers, and data scientists of color to aid in calibrating Rekognition to account for racial bias.
They wrote at the time:
"It is quite clear that communities of color are more heavily and aggressively policed than white communities. This status quo results in an oversampling of data which, once used as inputs to an analytical framework leveraging artificial intelligence, could negatively impact outcomes in those oversampled communities. We are seriously concerned that wrong decisions will be made due to the skewed data set produced by what we view as unfair and, at times, unconstitutional policing practices."
Over the years, Amazon has maintained that its software doesn't have underlying problems, arguing instead that those using the technology should pay more attention to its confidence threshold: a percentage attached to each result that indicates how certain the system is that it has correctly identified a person.
"[Amazon Web Services] takes its responsibilities seriously. But we believe it is the wrong approach to impose a ban on promising new technologies because they might be used by bad actors for nefarious purposes in the future," Matt Wood, general manager of artificial intelligence at AWS, said at the time. "The world would be a very different place if we had restricted people from buying computers because it was possible to use that computer to do harm."
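To make the confidence-threshold argument concrete, here is a minimal sketch of how such a filter works. The candidate data below is hypothetical, shaped loosely like the similarity-scored match lists that face-comparison APIs return; the names and scores are invented for illustration.

```python
def filter_matches(matches, threshold):
    """Keep only matches whose similarity score meets the threshold."""
    return [m for m in matches if m["Similarity"] >= threshold]

# Hypothetical candidate matches with similarity scores from 0 to 100.
candidates = [
    {"Name": "candidate_a", "Similarity": 99.2},
    {"Name": "candidate_b", "Similarity": 87.5},
    {"Name": "candidate_c", "Similarity": 62.1},
]

# At a lax 60 percent threshold, all three candidates count as
# "matches," including the weak ones most likely to be false positives.
print(len(filter_matches(candidates, 60)))  # 3

# At a strict 99 percent threshold, the low-confidence matches
# are discarded and only the strongest candidate remains.
print(len(filter_matches(candidates, 99)))  # 1
```

The threshold itself is a policy choice left to the user, which is the crux of critics' objection: nothing in the software forces a police department to discard low-confidence matches.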