Mariel García-Montes (MIT),
“Privacy by [Electoral Circumstance]: Safeguards against Computer Vision in Mexico’s National Voter Registry”
This paper analyzes the creation of Mexico’s largest biometrics database in response to the 1988 electoral crisis. To counter narratives of fraud, political parties instituted a voter ID card bearing a photograph and fingerprint. The electoral institute, a public institution autonomous from the government, was tasked with holding the national registry of voters that stores this biometric data. With the rise of computer vision in the 2000s, however, and the attendant need for large databases to use as training data, the registry became a site of contestation between the electoral authority and various public and private actors seeking access; privacy has nonetheless prevailed. Using interviews, archival documents, and white papers, this paper traces the national effort to convince 100 million Mexicans to have their biometrics captured, and the institutional design that prevents electoral data from being used for other purposes, such as a corpus for facial recognition technologies. This case study exemplifies an instance of privacy-preserving design that emerged from political circumstance, adding to the literature on the sociotechnical measures needed to preserve privacy rights amid the rise of computer vision.
Mariel García-Montes is a PhD Candidate in the History, Anthropology, and Science, Technology, and Society program at Massachusetts Institute of Technology.
Rin Huang (University of Washington, Seattle), “The Pervasive Images: Airport Security, Imaging Infrastructure, and Biometric Privacy in China”
Since the advent of modern imaging technologies at Chinese airports, the nation has navigated the fine line between enhancing national security and protecting citizens’ biometric privacy. China’s approach to airport security underwent significant development in the aftermath of the 9/11 attacks, when global security infrastructures were reshaped to prevent similar threats. The introduction of image-based security systems began in the early 2000s with the implementation of body scanners, advanced X-ray technologies, and millimeter-wave machines in major airports across China, including Beijing Capital International Airport. In 2017, the Chinese government rolled out the use of Computed Tomography (CT) scanners for checked baggage, paralleling developments in the United States, while also implementing new facial recognition systems to streamline passenger processing at check-ins, boarding gates, and immigration checkpoints.
This paper aims to trace the development history (1991-2020) of image-based security systems at Chinese airports, focusing on the growing concerns surrounding computational images, biometric privacy, and big data measurement in China. The paper argues that, on the one hand, the government justifies imaging security technology under the banner of national security and counter-terrorism. On the other hand, privacy advocacy groups like the Chinese Consumers Association have voiced concerns over the potential misuse and security risks associated with personal biometric data, especially considering recent data protection laws such as the Personal Information Protection Law (PIPL). Ultimately, the paper positions China’s airport imaging-based security infrastructure within the broader context of the country’s imaging infrastructure landscape, where public and legal controversies regarding national security and biometric privacy—particularly concerning facial recognition technology and computational imaging security technology—are central to discussions of China’s digital surveillance initiatives.
Rin Huang (they/she) is a graduate student in Cinema and Media Studies at the University of Washington, Seattle, and a participant in the Science, Technology, and Society Studies Graduate Certificate program at UW. Their research interests lie at the intersection of science and technology studies, comparative media studies, and infrastructure studies from an East Asian perspective. Their recent projects include archive-based research on the communication and image infrastructures of modern East Asia and their cultural, political, and epistemological impacts. Before coming to the University of Washington, they received a B.A. in English with a minor in Philosophy from Fudan University, with a visiting year at Columbia University.
David Humphrey (Michigan State University),
“Seeing Attention: Line-of-sight tracking and marketing analytics”
This paper examines the shared history of attention research and computer vision in Japan through the lens of line-of-sight tracking and its use within marketing analytics. As elsewhere, the Japanese technology and advertising industries have turned in recent years to computer vision-based sightline tracking to monitor and study audience and consumer attention. Although eyeline-tracking apparatuses have long existed, the devices were historically large and cumbersome, requiring the subject to be seated and their head immobilized. Computer vision-based eyeline tracking, by contrast, promises the ability to study consumers and audiences in their natural environments, ostensibly freed from the constraints of earlier devices. Fujitsu and NEC, for example, supply retailers with analytics platforms that track customers’ visual interaction with in-store offerings in order to analyze and improve product placement. Similarly, the advertising research firm REVISIO promotes tools that let television advertisers track, through set-top devices, viewers’ attention to commercials via line-of-sight detection.
I argue that such examples manifest the persistence of an ocular-centric understanding of attention, throwing into relief the shared history of computer vision and line-of-sight research in Japan. In my analysis, I foreground the shared roots of computer vision and line-of-sight research in Japan in screen-based understandings of vision, as exemplified by overlapping work on the two subjects in the 1960s and 1970s at the NHK Broadcasting Science Research Labs. I contend that this shared history should guide our understanding and critique of present-day applications of computer vision and sightline tracking. As analytic and commercial systems are trained to see attention along highly constrained visual lines, the limits of acceptable attention narrow in turn, locking the attentive subject into, rather than freeing them from, the frames of media ecologies.
David Humphrey is an associate professor of Japanese and global studies at Michigan State University, where he serves as director of the Japanese Studies Program. He is the author of the book The Time of Laughter: Comedy and the Media Cultures of Japan (University of Michigan Press, 2023), and his research on Japanese media and digital studies has appeared in journals including Media, Culture & Society, the International Journal of Communication, and the Journal of Japanese Studies. He is currently working on a book manuscript on the intersection of artificial intelligence and media attention in Japan.
Mihaela Mihailova (San Francisco State University),
“Deepfakes in/as Critical Computational Art Practice”
My paper approaches deepfakes as a form of critical AI practice that enables media artists to think through, engage with, and sometimes subvert emerging applications of computational imaging in moving image production. This project explores the representational strategies, critical interventions, and activist discourses facilitated by deepfakes in the work of global multimedia creators, including Singaporean documentarian Charmaine Poh, German-Iraqi conceptual media artist Nora Al-Badri, and British-Tamil contemporary artist Christopher Kulendran Thomas. It examines how their projects model deepfake media’s capacity to comment on computational imaging culture itself, inspire experimental play with identity and gender stereotypes, facilitate documentary work, and invite political and ideological discourse. Poh’s installation GOOD MORNING YOUNG BODY (2023) employs the deepfake as a filmmaking tool for feminist and queer self-expression and for critical disruption of the enduring biases and oppressive practices perpetuated by digital technologies. Al-Badri’s AI video piece The Post-Truth Museum (2021) questions the ethics of European museum restitution initiatives and reimagines the postcolonial legacy of such cultural institutions. Kulendran Thomas’s film installations Being Human (2019) and The Finesse (2022), created in collaboration with Annika Kuhlmann, present a speculative historical account of the Tamil liberation movement and its artistic legacy in his family’s native Sri Lanka.
At the same time, these projects reveal the ideological and creative limitations and inherent ethical contradictions of deepfake filmmaking, including the co-option of celebrity images and the contested copyright status of deepfake content. My paper unpacks such contradictions and their implications for deepfakes’ place in critical AI practice, paying particular attention to the distinctive challenges that synthetic media poses to the application of existing media studies frameworks. It asks what is at stake—formally and ideologically—in emerging art practices for which deepfakes are not only a means of production but also a self-reflexive mode of critical engagement with social and political processes and with the outcomes of moving image cultures’ algorithmic turn.
Mihaela Mihailova is an Assistant Professor in the School of Cinema at San Francisco State University. She is the editor of Coraline: A Closer Look at Studio LAIKA’s Stop-Motion Witchcraft (Bloomsbury, 2021), winner of the Norman McLaren/Evelyn Lambart Award for Best Edited Collection in Animation. She has published in Journal of Cinema and Media Studies, The Velvet Light Trap, Journal of Japanese and Korean Cinema, Convergence: The International Journal of Research into New Media Technologies, Feminist Media Studies, animation: an interdisciplinary journal, Studies in Russian and Soviet Cinema, [in]Transition, Flow, and Kino Kultura. She has also contributed chapters to Animating Film Theory (with John MacKay), Animated Landscapes: History, Form, and Function, The Animation Studies Reader, and Drawn from Life: Issues and Themes in Animated Documentary Cinema. Her current book project, Synthetic Creativity: Deepfakes in Contemporary Media, was recently awarded an NEH grant.
Hamidreza Nassiri (Independent Scholar),
“Automated Orientalism or Intercultural Dialogue? Rethinking Computational Imaging and Generative AI Systems”
Generative AI systems are rapidly gaining popularity among the public. However, these systems risk perpetuating what I call “automated orientalism,” reproducing stereotypes and power hierarchies on an industrialized scale in everyday contexts. Building on Edward Said’s critique of Western depictions of the “Other,” I first demonstrate how GenAI-driven imagery for non-Western contexts—like Iran—reflects reductive tropes. This problem arises from biases in training data that favor Western viewpoints, obscured by GenAI’s facade of suprahuman neutrality and intelligence. Automated pipelines reinforce these biases and appropriate local cultural heritage, while political constraints block the participation of local actors.
Yet computational imaging, in conjunction with GenAI, can catalyze intercultural dialogue. Drawing on the Silk Roads as a historical model of exchange and building upon scholarship on computational media and intercultural aesthetics (e.g., Laura Marks, Enfoldment and Infinity, 2010), I propose computational methodologies that connect and fuse diverse visual traditions. Tiles from the medieval Silk Roads exemplify early precursors of such “computational” fusion, as precise geometric motifs were algorithmically reproduced and adapted across contexts. Today, tools like Gen Studio (an MIT/MET/Microsoft prototype) and TouchDesigner enable quantified, reproducible analysis and generation of such cross-cultural syntheses. I apply these methods to medieval tiles and also train GenAI models on merged datasets of the tiles’ patterns across different contexts. Drawing on this work, I illustrate how combining GenAI and small-data computational imaging can transcend hierarchies, uncovering and revitalizing centuries-long global exchanges often overshadowed by neocolonial narratives and neoliberal globalization.
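To make the underlying intuition concrete, consider a minimal sketch (my illustration, not a tool from the paper): the skeleton of many Silk Roads tile motifs is an {n/k} star polygon that can be regenerated from a handful of parameters, and it is this parametric describability that makes quantified, reproducible comparison and recombination of motifs across contexts tractable. All names and parameter choices below are hypothetical.

```python
# Minimal sketch: regenerating the {n/k} star-polygon skeleton of a tile motif
# from two parameters (n-fold symmetry, step k). Illustrative only; the paper's
# actual pipeline (Gen Studio, TouchDesigner, GenAI training) is not shown here.
import numpy as np
import matplotlib.pyplot as plt

def star_polygon(n: int = 10, k: int = 3, radius: float = 1.0) -> np.ndarray:
    """Vertices of an {n/k} star polygon, visiting every k-th of n points."""
    order = (np.arange(n + 1) * k) % n          # closed path over all vertices
    angles = 2 * np.pi * order / n
    return np.column_stack([radius * np.cos(angles), radius * np.sin(angles)])

fig, ax = plt.subplots(figsize=(4, 4))
for r in (1.0, 0.6):                            # nested rosettes, a common motif
    pts = star_polygon(n=10, k=3, radius=r)
    ax.plot(pts[:, 0], pts[:, 1], lw=1.5)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```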
However, these opportunities are also tied to existing power structures. US-imposed sanctions, for example, restrict researchers and artists in Iran from accessing these technologies, while American institutions profit from their cultural heritage—often acquired through colonial or illicit means and stripped of local context. These disparities perpetuate global inequities and limit the potential to bridge cultures. By highlighting the dual potentials of GenAI and computational imaging—both as vehicles for “automated orientalism” and as tools for intercultural dialogue—this presentation proposes methods to counter the algorithmic flattening of cultures and to discover and revitalize cultural networks, while addressing structural inequities.
Hamidreza Nassiri is a media scholar, filmmaker, and digital media artist with a PhD in Communication Arts from the University of Wisconsin-Madison. He has taught at institutions such as UW-Madison, NYU, and Fordham University. His publications on the stratification of the Iranian film industry in the digital age and inclusive co-creation in media production pedagogy have appeared in JCMS. Hamidreza has worked on innovative new media projects, including a text-to-image generative AI model trained on Safavid-era Iranian miniatures, showcased at the 2024 MUTEK Forum in Montreal, and the Urban Video Archive, an interactive repository of activist videos from Rio de Janeiro (2013–2023). He is currently developing an intermedia research-creation project that explores cultural exchange along the pre-colonial Silk Roads, drawing on the visual aesthetics and philosophies that emerged from these interactions—especially among Persianate and Islamicate cultural formations—to reimagine (synthetic) digital imagery in the age of AI. https://hamidrezanassiri.com/
Fabian Offert (UC Santa Barbara) and Thao Phan (Australian National University),
“Are Some Things (Still) Unrepresentable?”
“Are some things unrepresentable?” asks a 2011 essay by Alexander Galloway. It responds to a similarly titled, earlier text by the philosopher Jacques Rancière examining the impossibility of representing political violence, with the Shoah as its anchor point. How, asks Rancière, and to what extent, can political violence be represented? What visual modes, asks Galloway, can be used to represent the unrepresentable? In this talk, we examine two contemporary artistic projects that deal with this problem of (visual) representation in the age of artificial intelligence.
Exhibit.ai, the first project, was conceived by the prominent Australian law firm Maurice Blackburn and focuses on the experiences of asylum seekers incarcerated in one of Australia’s infamous “offshore processing” centers. It attempts to bring ‘justice through synthesis’, to mitigate forms of political erasure by generating an artificial record using AI imagery. Calculating Empires: A Genealogy of Power and Technology, 1500-2025, the second project, is a “large-scale research visualization exploring the historical and political dependence of AI on systems of exploitation in the form of a room-sized flow chart.”
On the surface, the two projects could not be more different: the first uses AI image generators to create photorealistic depictions of political violence as a form of nonhuman witnessing; the second uses more-or-less traditional forms of data visualization and information aesthetics to render visible the socio-technical ‘underbelly’ of artificial intelligence. And yet, as we argue, both projects construct a highly questionable representational politics of artificial intelligence, in which a tool that is itself unrepresentable for technical reasons becomes an engine of ethical and political representation. While images today are said to be “operational”, meaning that they no longer function as primarily indexical objects, AI images (arguably the most operational of images) are now asked to do the representational (and profoundly political) work of exposing regimes of power, exploitation, and violence.
Fabian Offert is Assistant Professor for the History and Theory of the Digital Humanities at the University of California, Santa Barbara, with a special interest in the epistemology and aesthetics of artificial intelligence. His forthcoming book, Vector Media (Meson/Minnesota 2025), proposes a new history of multimodal models as media objects. At UCSB, he serves as director of the Center for the Humanities and Machine Learning (HUML) and principal investigator of the international research project “AI Forensics”, funded by the Volkswagen Foundation. Before joining the faculty at UCSB, Fabian served as postdoctoral researcher in the German Research Foundation’s special interest group “The Digital Image”, associated researcher in the Critical Artificial Intelligence Group (KIM) at Karlsruhe University of Arts and Design, and Assistant Curator at ZKM Karlsruhe, Germany. Website: https://zentralwerkstatt.org.
Thao Phan is a feminist science and technology studies (STS) researcher who specializes in the study of gender and race in algorithmic culture. She is a Lecturer in Sociology (STS) at the Research School for Social Sciences at the Australian National University (ANU) in Canberra, Australia. Thao has published on topics including whiteness and the aesthetics of AI, big-data-driven techniques of racial classification, and the commercial capture of AI ethics research. She is the co-editor of the volumes An Anthropogenic Table of Elements (University of Toronto Press) and Economies of Virtue: The Circulation of ‘Ethics’ in AI (Institute of Network Cultures), and her writing appears in journals such as Big Data & Society, Catalyst: Feminism, Theory, Technoscience, Science as Culture, and Cultural Studies.
Owen Leonard (UC Santa Barbara),
“The Global Perception Stack”
Recent discourse on computer vision has produced a wealth of terminology to describe digitally mediated perception: “invisual images” (MacKenzie & Munster), “invisible images” (Paglen), “technical metapictures” (Offert & Bell), “operative images” (Uliasz, Farocki), “networked images” (Dewdney & Sluis), and so on. Such formulations are indebted to both Flusser’s “technical image” and Virilio’s “automation of perception.” My contribution to the CPCI workshop would bring this scholarship on the computationalization of vision into dialogue with research that highlights the planetary distribution of digital infrastructure (e.g. Crawford, Bratton) and especially the role of the Global South (e.g. Starosielski & Bojczuk, Kwet)—bringing into focus a sociotechnical assemblage I call the global perception stack. Zylinska has usefully theorized a global “perception machine,” but a more explicitly infrastructural perspective (or “disposition,” after Parks) can draw attention not only to software and media technology but also to the myriad forms of labor that underlie planetary computation; digging for coltan in North Kivu becomes an image-making practice. Dobson, Kronman, and Malevé have already examined the entanglements between human and machine ways of seeing. But the legacy of Virilio’s “vision machine,” which conceives perception in terms of “delocalised teletopology,” has sometimes led subsequent critics to treat images as ephemeral and immaterial digital phenomena. An orientation toward infrastructure extends existing analyses of algorithms and datasets by examining situated material practices as part of a global perception stack, from mineral extraction to image labeling. Studying the globalized computational image requires inquiry into the sites of its infrastructural substrate—mines, smelters, factories, ports—and the new modes of distributed image-making they afford. As Van der Straeten and Hasenöhrl note, “it is not enough to (finally) expand our areas of reference to include the countries of the Global South”—we have to rethink our questions and our methods as well.
Owen Leonard is a PhD student in English at UC Santa Barbara and a former software engineer, with undergraduate degrees in both literature and computer science. His research examines the global cultural and infrastructural contexts of computation, particularly ML-assisted image and language processing, with special attention to oft-neglected dimensions of software and hardware architecture. Moving beyond broad critiques of computation in terms of ecological impact or labor exploitation, he emphasizes the situated harms and opportunities of AI and related technologies. His ongoing web project Ground to Cloud collects research and journalism on the infrastructures of AI as realized at specific sites, allowing users to visualize the material patchwork of planetary computation. He is also the graduate researcher for the newly established Center for the Humanities and Machine Learning at UCSB, where he develops software and studies the landscape for AI research in the humanities.
Ardalan SadeghiKivi (MIT) and Tobias Putrih (MIT),
“Computational Analysis of Color Semantics in Google Image Search: Exploring Socio-Political Dimensions of Mass Consumption through Regional Imaging Data”
Recent studies leveraging linguistic data to explore embodied cognition have demonstrated that color embodies both the logical and emotional dimensions of semantic domains. Notably, concrete concepts that are translatable across cultures tend to cluster together in statistically significant ways, while their associated color distributions exhibit unique variations in Google Image search results.
To investigate these phenomena, we developed a computational system that automates the extraction and analysis of image data associated with specific concepts or phrases via the Google Image search engine. Building on core arguments from the color debates and the foundational principles of focal colors, our system is designed to be both language-specific and region-specific. By routing API queries through servers in different countries, we provide a nuanced analysis of color behaviors that situates computational imaging within specific cultural and political contexts.
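A minimal sketch of such a pipeline appears below. It is my illustration under stated assumptions, not the authors’ code: it assumes Google’s Custom Search JSON API (with hypothetical API_KEY and CX credentials), and it approximates the paper’s routing of queries through servers in different countries with the API’s gl (country) and lr (language) parameters; the focal colors are then estimated by k-means clustering of pooled pixels.

```python
# Minimal sketch (assumptions flagged above): localized image search plus a
# k-means estimate of each result set's focal colors.
import io
import requests
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

API_KEY = "..."  # hypothetical credentials for Google's Custom Search JSON API
CX = "..."       # hypothetical custom search engine ID

def search_image_urls(query: str, country: str, language: str) -> list[str]:
    """Return up to ten image-result URLs for a query, localized by country/language."""
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": CX, "q": query, "searchType": "image",
                "gl": country, "lr": language, "num": 10},
        timeout=30,
    )
    resp.raise_for_status()
    return [item["link"] for item in resp.json().get("items", [])]

def focal_colors(urls: list[str], k: int = 5) -> np.ndarray:
    """Pool pixels from all images and cluster them into k focal colors (RGB rows)."""
    pixels = []
    for url in urls:
        img = Image.open(io.BytesIO(requests.get(url, timeout=30).content))
        pixels.append(np.asarray(img.convert("RGB").resize((64, 64))).reshape(-1, 3))
    data = np.concatenate(pixels).astype(float)
    return KMeans(n_clusters=k, n_init=10).fit(data).cluster_centers_

# e.g. compare the color signature of one basket-of-goods item across regions:
# iran = focal_colors(search_image_urls("bread", country="ir", language="lang_fa"))
# us   = focal_colors(search_image_urls("bread", country="us", language="lang_en"))
```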
One unique application of this system focused on the global food industry. We selected thirty essential products from the basket of goods defined by the US Bureau of Labor Statistics and analyzed them across four distinct regions—Iran, the United States, England, and Slovenia—each reflecting a distinct cultural, regional, and political landscape. Additionally, we examined the top chain supermarkets and brands in each respective market, enabling us to explore how these regions converge or diverge in their design and packaging practices.
Our findings reveal intricate insights into the politics of color as manifested in these socio-politically diverse geographies. Furthermore, the study raises significant questions about the role of computational imaging in shaping and reflecting transnational consumerist cultures. This research demonstrates the potential of computational imaging to illuminate cultural and political dynamics while expanding the discourse on the intersection of technology and representation in regional and global contexts.
Tobias Putrih is a Slovenian artist and educator whose practice invites viewers to reimagine spaces and objects shaped by the utopian and visionary ideals of architecture and design. He is a lecturer in the Art, Culture and Technology program at MIT.
Ardalan SadeghiKivi is an Iranian artist, writer, and computer programmer who probes the computation embedded in everyday objects and interactions through tracing how visual and informational structures are manifested materially, framed by institutions, and conditioned by ideologies. He is a liaison at MIT.nano and a lecturer at the Comparative Media Studies program at MIT.
Minji Chun (Oxford),
“Algorithmic Oblivion: Computational Erasure and Historical Memory in Jungwoo Lee’s Artwork”
This paper examines the intersections of computational imaging and historical memory through the works of South Korean artist Jungwoo Lee (b. 1981). By interrogating the selective processes of algorithms and their parallels to historical erasure, Lee’s art offers a critical lens on how computational practices reflect and shape postcolonial narratives in East Asia. This paper focuses on three key works: To Be Determined (2024), Because: Surrounded by Three Dimensions (2024), and Die Resistenz (2019).
To Be Determined initially aimed to document the planned removal of the Statue of Peace in Berlin—a monument commemorating Korean women forced into sexual slavery during Japan’s colonial rule (1910–1945). Lee used photogrammetry scans to capture the statue, but the algorithm’s filtering processes led to incomplete or omitted data. By rematerializing these omissions as 3D prints, the work draws attention to the parallels between computational omissions and historical silences. Because: Surrounded by Three Dimensions employs game engine technology to simulate Korea’s seas, introducing unprocessable data that results in distorted movements. This disrupts deterministic geopolitical narratives, challenging fixed understandings of Korea’s historical and geographical identity. Die Resistenz stages a symbolic drone funeral for the unfinished monument of the former South Korean dictator Park Chung-hee. The drone’s crash, caused by its obstacle-detection sensors, underscores the fragility of both technological systems and unresolved historical legacies.
This paper situates Lee’s works within the broader geopolitical implications of computational imaging, particularly in the context of East Asia’s contested postcolonial memory. By reading Lee’s artistic explorations alongside the technologies they employ, it argues that computational imaging is not merely a technical tool but a cultural practice that reflects and reshapes how histories are constructed, remembered, and contested, revealing the inherent selectivity and fragility in both technological systems and historical representation.
Minji Chun is a DPhil candidate in History of Art at the University of Oxford, specializing in Korean contemporary art. Interested in ways of interpreting narratives of unmentioned histories and spaces, she is currently conducting research on socially engaged art in contemporary Korea. She also works as an art critic, curator, and translator based in Seoul and Oxford. Her recent English and Korean art criticism has appeared in Burlington Contemporary, FIELD: A Journal of Socially-Engaged Art Criticism, Hyundai Artlab, ArtAsiaPacific, and Wolganmisool Magazine, among others. Prior to her doctoral studies, Chun worked at the National Museum of Modern and Contemporary Art, Seoul (MMCA), and the Korea Arts Management Service (KAMS).