Members of the Sisters of St. Joseph of Brentwood, N.Y., renew their religious vows during a Mass marking their patronal feast (CNS photo/Gregory A. Shemitz)

The heedless adoption of new technologies has led to fundamental concerns over privacy and power that are becoming more urgent with every new algorithm-based platform or smart device. Rushed to market by outsized companies that exhibit little regard for societal or individual consequences—and insidiously embedded in almost every aspect of our lives—such technologies have a way of becoming impossible to live without, or at least seeming that way. The question is: How can we live with them? In the last year, the rapid development and deployment of facial-recognition software has called special attention to this question, sparking fresh debate over privacy, civil rights, the responsibility tech companies have to make sure that their products are used well, and whether some technologies have an ethical use at all.

Amazon in particular has come under increasing scrutiny for marketing image-recognition technology to law-enforcement and government agencies. Its creepily named Rekognition software “detects objects, scenes, and faces; extracts text; recognizes celebrities; and identifies inappropriate content in images,” as described on the Amazon website. Rekognition’s seemingly more benign features include the ability to index large amounts of content—C-SPAN uses it to identify in real time who is speaking at congressional hearings, for example. It can also track customer demographics and mood in retail stores. But given that one of its primary applications is surveillance, such technology has a high potential for misuse and abuse, especially given the bias that appears to be built into its underlying models.
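
To make the paragraph above concrete: Rekognition is exposed to developers as an ordinary cloud API. What follows is a minimal sketch—not Amazon’s internal code—of how a developer might request the demographic and mood estimates described above using AWS’s boto3 Python SDK. The image file name is hypothetical, and real use requires AWS credentials.

```python
import boto3

# Rekognition is called like any other AWS service.
client = boto3.client("rekognition")

# Hypothetical frame from a retail camera.
with open("storefront.jpg", "rb") as f:
    image_bytes = f.read()

# Attributes=["ALL"] requests the full set of face attributes,
# including estimated age range, gender, and emotions.
response = client.detect_faces(
    Image={"Bytes": image_bytes},
    Attributes=["ALL"],
)

for face in response["FaceDetails"]:
    age = face["AgeRange"]
    gender = face["Gender"]
    # The emotion Rekognition is most confident about.
    mood = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(f"Age {age['Low']}-{age['High']}, "
          f"{gender['Value']} ({gender['Confidence']:.0f}%), "
          f"mood: {mood['Type']}")
```

The ease of the call is part of the point: the same few lines work whether the caller is a retailer, a broadcaster, or a police department.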

That’s gotten the attention of an unlikely group of Amazon shareholders: the Sisters of St. Joseph of Brentwood, New York. The Sisters are a member of the Tri-State Coalition for Responsible Investment, a group of Catholic investors who use their portfolios to work for economic justice along the lines of Catholic social teaching. In January, with the support of Open MIC, a nonprofit working for greater corporate accountability in media and tech companies, the Sisters filed a shareholder proposal calling for a halt to the marketing of Rekognition to government agencies until it can be independently proven that the technology will not enable abuse or civil-rights violations. Sister Patricia Mahoney explained the proposal: “The Sisters of St. Joseph of Brentwood have committed as a congregation to support immigrant communities and promote racial equity…. [W]e filed this proposal because we are concerned that Amazon has pitched facial recognition technology to Immigration and Customs Enforcement (ICE) and piloted its Rekognition with police departments, without fully assessing potential human rights impacts.” The proposal also calls for a shareholder vote on whether the marketing of Rekognition to police departments, ICE, and the FBI should continue.

When the ACLU first revealed in May 2018 that Rekognition had been marketed to law enforcement, Amazon pointed out all of the benefits it could provide: busting human-trafficking rings, reuniting family members, solving cold cases. But the potential for abuse is plain. Technologies like Rekognition could be used to track anyone a government agency identifies as a “person of interest”—including those in vulnerable populations, like undocumented immigrants, or people who might be targeted for political reasons, like Black Lives Matter activists. Individuals could be identified without their knowledge, whether at public protests or at places of worship. In June of last year, Amazon employees circulated an internal letter demanding that Amazon stop selling Rekognition to law enforcement, as well as to ICE and the agencies that partner with it.

There’s also concern over just how accurate Rekognition really is, and whether it’s reliable enough for use in high-stakes situations. The ACLU tested the software and found that it is more accurate in identifying white and male subjects, and less accurate in recognizing darker-skinned and female ones. ACLU investigators scanned the faces of all 535 members of Congress against a public database of mugshots, and Rekognition returned twenty-eight false matches; the false matches were disproportionately high for subjects of color. MIT conducted a study in January that arrived at similar conclusions. Amazon has disputed the methodologies of both studies.
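
Part of the methodological dispute reportedly turned on a single parameter: the ACLU ran its test at Rekognition’s default similarity threshold of 80 percent, while Amazon has said law-enforcement uses should require 99 percent. Here is a minimal sketch of the face-matching call at issue—again boto3, with hypothetical file names and no claim about the ACLU’s exact setup.

```python
import boto3

client = boto3.client("rekognition")

def load_image(path):
    with open(path, "rb") as f:
        return f.read()

# SimilarityThreshold filters out matches below the given confidence.
# The ACLU's test used the default of 80; Amazon recommends 99 for
# law-enforcement use. Raising it trades false matches for misses.
response = client.compare_faces(
    SourceImage={"Bytes": load_image("member_of_congress.jpg")},
    TargetImage={"Bytes": load_image("mugshot.jpg")},
    SimilarityThreshold=99,
)

for match in response["FaceMatches"]:
    print(f"Possible match at {match['Similarity']:.1f}% similarity")
```

Note that the threshold only governs how cautious a match must be to be reported; it does not change whether the underlying error rates fall evenly across race and gender.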

All of these concerns motivated the proposal filed in January by the Sisters of St. Joseph of Brentwood. Amazon attempted to quash a shareholder vote on the issue, on the grounds that the proposal is “an insignificant public policy issue for the Company.” But in early April, the Securities and Exchange Commission ruled that the vote must be held. (Amazon appealed the decision and was again defeated.) The vote will take place at the annual shareholder meeting on May 22.

It’s good news that shareholders will get to discuss the ethical implications of making, marketing, and selling a technology like Rekognition. They may also be able to discuss the problem that likely underlies Rekognition’s misidentification of women and people of color. Much of the technology now in everyday use appears to reflect and reinforce the assumptions and biases of its mainly young, white, wealthy designers—blind spots that enable and perpetuate racism, sexism, and other forms of discrimination. Several examples have made the news: automatic faucet sensors that detect white skin more reliably than black skin; a data-management system that automatically coded the title “Dr.” as male; Google Photos labeling pictures of black people as gorillas. Now imagine an aggressive or poorly trained police department relying on biased technology in a high-pressure situation, or when lives may be in danger. Suddenly the stakes are far higher.

Still, there’s another, larger issue to consider, one that Amazon employees addressed in that June 2018 letter to Amazon CEO Jeff Bezos. Even if this technology worked perfectly—identified every face correctly 100 percent of the time, regardless of race or gender—should Amazon be, as the employees wrote, “in the surveillance business” at all?

It’s a good question. But the hand-wringing seems a bit disingenuous. Amazon engineers and programmers must have known what goes into developing something like Rekognition and how it would be implemented, and must have had at least some inkling of its potential for abuse. They also must know how much power they have relative to almost every person whose face will be scanned on streets and highways, in parks, schools, malls, theaters, and sports arenas. “As ethically concerned Amazonians, we demand a choice in what we build, and a say in how it is used,” they wrote. Let’s hope they get to have that choice. Let’s hope they’ll also honestly face up to the responsibility that comes with holding such power.

It’s important to note that, as of now, law enforcement makes virtually no use of real-time facial-recognition technology like Rekognition. But adoption seems inevitable, especially as surveillance technology continues to creep into our lives. And as the technology improves via “deep learning”—training on ever larger amounts of automatically gathered data—it becomes all the more important to identify and debate the ethical considerations now. We shouldn’t let companies like Amazon, or even a group of well-intentioned employees, take the lead on this front. Look at where our unquestioning trust in tech has landed us today.

Regina Munch is an associate editor at Commonweal.
