Engineering Bias in Facial Recognition: Workshop and Safe Experimental Space
08-15, 15:30–16:30 (US/Eastern), Workshop A / Tobin 219

This workshop will demonstrate the facial recognition visualizer, a tool that extracts live inferences from images of the face, including estimated age, gender, emotional expression, and more. For this session, the software has been expanded with a prompt engineering component that lets participants write prompts making further assessments based on an individual's appearance. During the workshop, participants will be invited to experiment with different prompts. The exercise will show how readily AI and machine learning systems can be made to issue absolute determinations from input data, even when those determinations rest on biased, incomplete assumptions. Given that any video-based surveillance system can be extended to make these kinds of inferences, the workshop aims to give participants firsthand knowledge of how effortlessly biased determinations can be drawn from facial surveillance data - and how risky those determinations actually are.
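To make the pipeline described above concrete, here is a minimal, entirely hypothetical sketch (not the workshop's actual software): a stand-in face-analysis step produces canned attribute estimates, and a participant-written prompt template turns them into a forced binary judgment. All function names and the template are invented for illustration.

```python
# Hypothetical sketch of a prompt-engineering layer on top of
# facial-attribute inference. Nothing here runs a real model.

def infer_attributes(image_id: str) -> dict:
    """Stand-in for a face-analysis model: returns canned attribute
    estimates instead of performing real inference on the image."""
    return {"estimated_age": 34, "expression": "neutral"}

def build_prompt(template: str, attributes: dict) -> str:
    """Fill a participant-written prompt template with the inferred
    attributes, producing the text that would be sent to a model."""
    return template.format(**attributes)

attrs = infer_attributes("frame_001")
prompt = build_prompt(
    "This person appears {estimated_age} years old with a {expression} "
    "expression. Is this person trustworthy? Answer yes or no.",
    attrs,
)
print(prompt)
```

Note how the template itself demands a yes/no answer: whatever the model returns, the prompt has already forced an absolute determination from appearance data alone, which is precisely the dynamic the workshop invites participants to probe.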

Evan Light is an associate professor of policy studies at the University of Toronto’s Faculty of Information. His research focuses on state surveillance, biometrics, and creative approaches to policy. His Deobfuscating State Surveillance project aims to map out state surveillance in Canada.

Craig Fahner is a visiting assistant professor in the Integrated Design and Media program at New York University. His research and creative work question and reimagine the ways in which media shape everyday life. He is a co-investigator on the Mellon Foundation-funded Data Fluencies project, which seeks to establish a critical public culture around data-oriented technologies.
