Doctoral Thesis

The Polysemia of Recognition: Facial Recognition in Algorithmic Management

Watkins, Elizabeth Anne

Algorithmic management systems organize many different kinds of work across domains, and have increasingly come under academic scrutiny. Under labels including gig work, piecemeal work, and platform labor, these systems have been richly theorized in disciplines including human-computer interaction, sociology, communications, economics, and labor law. When it comes to the relationships between such systems and their workers, current theory frames these interactions on a continuum between organizational control and worker autonomy. This has laid the groundwork for other ways of examining micro-level practices of workers under algorithmic management. As an alternative to the binary of control and autonomy, this dissertation takes its cue from feminist scholars in Science, Technology, and Society (STS) studies. Drawing on frameworks from articulation, repair, and mutual shaping, I examine workers’ interpretations and interactions, asking how new subjectivities around identity and community emerge from these entanglements. To shed empirical light on these processes, this dissertation employs a mixed-methods research design examining the introduction of facial recognition into the sociotechnical systems of algorithmic management. Data include 22 in-person interviews with workers in New York City and Toronto, a survey of 100 workers in the United States who have been subjected to facial recognition, and analysis of over 2,800 comments posted to an online workers’ forum over the course of four years.

Facial recognition, like algorithmic management, suffers from a lack of empirical, on-the-ground insights into how workers communicate, negotiate, and strategize around and through these systems. Interviews with workers reveal that facial recognition evokes polysemia, i.e., a number of distinct yet interrelated interpretations. I find that for some workers, facial recognition means safety and security. To others it means violation of privacy and accusations of fraud.
Some are impressed by the “science-fiction”-like capabilities of the system: “it’s like living in the future.” Others are wary, and science fiction becomes a vehicle to encapsulate their fears: “I’m in the [movie] The Minority Report.” For some the technology is hyper-powerful: “It feels like I’m always being watched,” yet others decry, “it’s an obvious façade.”

Following the interviews, I build a body of research using empirical methods combined with frameworks drawn from STS and organizational theory to illuminate workers’ perceptions of, and strategies for negotiating, their algorithmic managers. I operationalize Julian Orr’s studies of storytelling among Xerox technicians to analyze workers’ information-sharing practices in online forums, in order to better understand how gig workers, devices, forums, and algorithmic management systems engage in mutual shaping processes. Analysis reveals that opposing interpretations of facial recognition persist rather than dissolving into a consensus of “shared understanding.” Rather than pursuing and relying on a shared understanding of their work to maintain relationships, workers under algorithmic management, communicating in online forums about facial recognition, elide consensus. After the forum analysis, I conduct a survey to assess workers’ fairness perceptions of facial recognition targeting and verification. The goal of this research is to establish an empirical foundation for determining whether algorithmic fairness perceptions are subject to theories of bounded rationality and decision-making.

Finally, for the last two articles, I turn back to the forums to analyze workers’ experiences negotiating two other processes with threats or ramifications for safety, privacy, and risk. In one article, I focus on their negotiation of threats from scam attackers, and their use of the forum itself as a “shared repertoire” of knowledge. In the other, I use the forums as evidence to illuminate workers’ experiences and meaning-making around algorithmic risk management under COVID-19.

In the conclusion, I engage in theory-building to examine how algorithmic management and its attendant processes demand that information-sharing mechanisms serve novel ends buttressing legitimacy and authenticity, in what I call “para-organizational” work, a world of work where membership and legitimacy are liminal and uncertain. Ultimately, this body of research illuminates mutual shaping processes in which workers’ practices, identity, and community are entangled with technological artifacts and organizational structures. Algorithmic systems of work and participants’ interpretations of, and interactions with, related structures and devices, may be creating a world where sharing information is a process wielded not as a mechanism of learning, but as one of belonging.



Thesis Advisor: Stark, David C.
Degree: Ph.D., Columbia University
Published: April 19, 2021