Global Mediations Lab

Comparative Media Studies, MIT

Mariel García-Montes (MIT),
“Privacy by [Electoral Circumstance]: Safeguards against Computer Vision in Mexico’s National Voter Registry”

This paper analyzes the creation of Mexico’s largest biometrics database in response to an electoral crisis in 1988. To counter narratives of fraud, political parties instituted a voter ID card bearing a photograph and fingerprint. The electoral institute, a public institution with autonomy from the government, was tasked with holding the national registry of voters that stores this biometric data. With the rise of computer vision in the 2000s, however, and the attendant demand for large databases to use as training data, the registry became a site of contestation between the electoral authority and various public and private actors seeking access; so far, privacy has prevailed. Using interviews, archival documents, and white papers, this paper analyzes the national effort to convince 100 million Mexicans to have their biometrics captured, and the institutional design that prevents electoral data from being repurposed, for example as a corpus for facial recognition technologies. This case study exemplifies privacy-preserving design that emerged from political circumstance, adding to the literature on the sociotechnical measures needed to preserve privacy rights amid the rise of computer vision.

Rin Huang (University of Washington, Seattle), “The Pervasive Images: Airport Security, Imaging Infrastructure, and Biometric Privacy in China”

Since the advent of modern imaging technologies at Chinese airports, the nation has navigated the fine line between enhancing national security and protecting citizens’ biometric privacy. China’s approach to airport security underwent significant development in the aftermath of the 9/11 attacks, when global security infrastructures were reshaped to prevent similar threats. The introduction of image-based security systems began in the early 2000s with the implementation of body scanners, advanced X-ray technologies, and millimeter-wave machines in major airports across China, including Beijing Capital International Airport. In 2017, the Chinese government rolled out the use of Computed Tomography (CT) scanners for checked baggage, paralleling developments in the United States, while also implementing new facial recognition systems to streamline passenger processing at check-ins, boarding gates, and immigration checkpoints.

This paper aims to trace the development history (1991-2020) of image-based security systems at Chinese airports, focusing on the growing concerns surrounding computational images, biometric privacy, and big data measurement in China. The paper argues that, on the one hand, the government justifies imaging security technology under the banner of national security and counter-terrorism. On the other hand, privacy advocacy groups like the Chinese Consumers Association have voiced concerns over the potential misuse and security risks associated with personal biometric data, especially considering recent data protection laws such as the Personal Information Protection Law (PIPL). Ultimately, the paper positions China’s airport imaging-based security infrastructure within the broader context of the country’s imaging infrastructure landscape, where public and legal controversies regarding national security and biometric privacy—particularly concerning facial recognition technology and computational imaging security technology—are central to discussions of China’s digital surveillance initiatives.

David Humphrey (Michigan State University),
“Seeing Attention: Line-of-sight tracking and marketing analytics”

This paper examines the shared history of attention research and computer vision in Japan through the lens of line-of-sight tracking and its use within marketing analytics. As elsewhere, the Japanese technology and advertising industries have turned in recent years to computer vision-based sightline tracking to monitor and study audience and consumer attention. Although eyeline-tracking apparatuses have long existed, the devices were historically large and cumbersome, requiring the subject to be seated and their head immobilized. Computer vision-based eyeline tracking, on the other hand, promises the ability to study consumers and audiences in their natural environment, ostensibly freed from the constraints of earlier devices. Fujitsu and NEC, for example, supply retailers with analytics platforms that track customers’ visual interaction with in-store offerings to analyze and improve product placement. Similarly, the advertising research firm REVISIO promotes tools for television advertisers to track, through set-top devices, viewers’ attention to commercials via line-of-sight detection.

I argue that such examples manifest the persistence of an ocular-centric understanding of attention, throwing into relief the shared history of computer vision and line-of-sight research in Japan. In my analysis, I foreground computer vision and line-of-sight research’s shared roots in Japan in screen-based understandings of vision, as exemplified by overlapping work on the two subjects in the 1960s and 1970s at the NHK Broadcasting Science Research Labs. I contend that this shared history should guide our understanding and critique of present-day applications of computer vision and sightline tracking. As analytic and commercial systems become trained to see attention along highly constrained visual lines, so do the limits of acceptable attention, locking the attentive subject into, rather than freeing them from, the frames of media ecologies.

Mihaela Mihailova (San Francisco State University),
“Deepfakes in/as Critical Computational Art Practice”

My paper approaches deepfakes as a form of critical AI practice that enables media artists to think through, engage with, and sometimes subvert emerging applications of computational imaging in moving image production. This project explores the representational strategies, critical interventions, and activist discourses facilitated by deepfakes in the work of global multimedia creators, including Singaporean documentarian Charmaine Poh, German-Iraqi conceptual media artist Nora Al-Badri, and British-Tamil contemporary artist Christopher Kulendran Thomas. It examines how their projects model deepfake media’s capacity to comment on computational imaging culture itself, inspire experimental play with identity and gender stereotypes, facilitate documentary work, and invite political and ideological discourse. Poh’s installation GOOD MORNING YOUNG BODY (2023) employs the deepfake as a filmmaking tool for feminist and queer self-expression and for critical disruption of the enduring biases and oppressive practices perpetuated by digital technologies. Al-Badri’s AI video piece The Post-Truth Museum (2021) questions the ethics of European museum restitution initiatives and reimagines the postcolonial legacy of such cultural institutions. Kulendran Thomas’s film installations Being Human (2019) and The Finesse (2022), created in collaboration with Annika Kuhlmann, present a speculative historical account of the Tamil liberation movement and its artistic legacy in his family’s native Sri Lanka.

At the same time, these projects reveal the ideological and creative limitations and inherent ethical contradictions of deepfake filmmaking, including the co-option of celebrity images and the contested copyright status of deepfake content. My paper unpacks such contradictions and their implications for deepfakes’ place in critical AI practice, paying particular attention to the distinctive challenges that synthetic media poses to the application of existing media studies frameworks. It asks what is at stake—formally and ideologically—in emerging art practices for which deepfakes are not only a means of production but also a self-reflexive mode of critical engagement with social and political processes and with the outcomes of moving image cultures’ algorithmic turn.

Hamidreza Nassiri (Independent Scholar),
“Automated Orientalism or Intercultural Dialogue? Rethinking Computational Imaging and Generative AI Systems”

Generative AI systems are rapidly gaining popularity among the public. However, these systems risk perpetuating what I call “automated orientalism,” reproducing stereotypes and power hierarchies on an industrialized scale in everyday contexts. Building on Edward Said’s critique of Western depictions of the “Other,” I first demonstrate how GenAI-driven imagery for non-Western contexts—like Iran—reflects reductive tropes. This problem arises from biases in training data that favor Western viewpoints, obscured by GenAI’s facade of suprahuman neutrality and intelligence. Automated pipelines reinforce these biases while political constraints block the participation of local actors and appropriate their cultural heritage.

Yet computational imaging, in conjunction with GenAI, can catalyze intercultural dialogue. Drawing on the Silk Roads as a historical model of exchange and building upon scholarship on computational media and intercultural aesthetics (e.g., Laura Marks, Enfoldment and Infinity, 2010), I propose computational methodologies that connect and fuse diverse visual traditions. Tiles from the medieval Silk Roads exemplify early precursors of such “computational” fusion, as precise geometric motifs were algorithmically reproduced and adapted across contexts. Today, tools like Gen Studio (an MIT/MET/Microsoft prototype) and TouchDesigner enable quantified, reproducible analysis and generation of such cross-cultural syntheses. I apply these methods to medieval tiles and also train GenAI models on merged datasets of the tiles’ patterns across different contexts. Drawing on this work, I illustrate how combining GenAI and small-data computational imaging can transcend hierarchies, uncovering and revitalizing centuries-long global exchanges often overshadowed by neocolonial narratives and neoliberal globalization.

However, these opportunities are also tied to existing power structures. US-imposed sanctions, for example, restrict researchers and artists in Iran from accessing these technologies, while American institutions profit from their cultural heritage—often acquired through colonial or illicit means and stripped of local context. These disparities perpetuate global inequities and limit the potential to bridge cultures. By highlighting the dual potentials of GenAI and computational imaging—both as vehicles for “automated orientalism” and as tools for intercultural dialogue—this presentation proposes methods to counter the algorithmic flattening of cultures and to discover and revitalize cultural networks, while addressing structural inequities.

Fabian Offert (UC Santa Barbara) and Thao Phan (Australian National University),
“Are Some Things (Still) Unrepresentable?”

“Are some things unrepresentable?” asks a 2011 essay by Alexander Galloway. It responds to a similarly titled, earlier text by the philosopher Jacques Rancière examining the impossibility of representing political violence, with the Shoah as its anchor point. How, asks Rancière, and to what extent, can political violence be represented? What visual modes, asks Galloway, can be used to represent the unrepresentable? In this talk, we examine two contemporary artistic projects that grapple with this problem of (visual) representation in the age of artificial intelligence.

Exhibit.ai, the first project, was conceived by the prominent Australian law firm Maurice Blackburn and focuses on the experiences of asylum seekers incarcerated in one of Australia’s infamous “offshore processing” centers. It attempts to bring ‘justice through synthesis’, to mitigate forms of political erasure by generating an artificial record using AI imagery. Calculating Empires: A Genealogy of Power and Technology, 1500-2025, the second project, is a “large-scale research visualization exploring the historical and political dependence of AI on systems of exploitation in the form of a room-sized flow chart.”

On the surface, the two projects could not be more different: the first uses AI image generators to create photorealistic depictions of political violence as a form of nonhuman witnessing, while the second uses more-or-less traditional forms of data visualization and information aesthetics to render visible the socio-technical ‘underbelly’ of artificial intelligence. And yet, as we argue, both projects construct a highly questionable representational politics of artificial intelligence, in which a tool that is itself unrepresentable for technical reasons becomes an engine of ethical and political representation. While images are today said to be “operational”, meaning that they no longer function as primarily indexical objects, AI images (arguably the most operational of images) are now asked to do the representational (and profoundly political) work of exposing regimes of power, exploitation, and violence.

Owen Leonard (UC Santa Barbara),
“The Global Perception Stack”

Recent discourse on computer vision has produced a wealth of terminology to describe digitally mediated perception: “invisual images” (Mackenzie & Munster), “invisible images” (Paglen), “technical metapictures” (Offert & Bell), “operative images” (Uliasz, Farocki), “networked images” (Dewdney & Sluis), and so on. Such formulations are indebted both to Flusser’s “technical image” and Virilio’s “automation of perception.” My contribution to the CPCI workshop would bring this scholarship on the computationalization of vision into dialogue with research that highlights the planetary distribution of digital infrastructure (e.g. Crawford, Bratton) and especially the role of the Global South (e.g. Starosielski & Bojczuk, Kwet), bringing into focus a sociotechnical assemblage I call the global perception stack. Zylinska has usefully theorized a global “perception machine,” but a more explicitly infrastructural perspective (or “disposition”, after Parks) can draw attention not only to software and media technology but also to the myriad forms of labor that underlie planetary computation; digging for coltan in North Kivu becomes an image-making practice. Dobson, Kronman, and Malevé have already examined the entanglements between human and machine ways of seeing. But the legacy of Virilio’s “vision machine,” which conceives perception in terms of “delocalised teletopology,” has sometimes led subsequent critics to treat images as ephemeral and immaterial digital phenomena. An orientation towards infrastructure extends existing analyses of algorithms and datasets by examining situated material practices as part of a global perception stack, from mineral extraction to image labelling. Studying the globalized computational image requires inquiry into the sites of its infrastructural substrate—mines, smelters, factories, ports—and the new modes of distributed image-making they afford. As Van der Straeten and Hasenöhrl note, “it is not enough to (finally) expand our areas of reference to include the countries of the Global South”—we have to rethink our questions and our methods as well.

Ardalan SadeghiKivi (MIT) and Tobias Putrih (MIT),
“Computational Analysis of Color Semantics in Google Image Search: Exploring Socio-Political Dimensions of Mass Consumption through Regional Imaging Data”

Recent studies leveraging linguistic data to explore embodied cognition have demonstrated that color embodies both the logical and emotional dimensions of semantic domains. Notably, concrete concepts that are translatable across cultures tend to cluster together in statistically significant ways, while their associated color distributions exhibit unique variations in Google Image search results.

To investigate these phenomena, we developed a computational system to automate the extraction and analysis of image data associated with specific concepts or phrases using the Google Image search engine. Building on core arguments from Color Debates and the foundational principles of Focal Colors, our system is designed to be both language-specific and region-specific. By employing APIs from servers in different countries, we provide a nuanced analysis of color behaviors that situate computational imaging within specific cultural and political contexts.

One unique application of this system focused on the global food industry. We extracted thirty essential products that constitute a basket of goods, provided by the US Bureau of Labor Statistics, and analyzed them in four distinct regions—Iran, the United States, England, and Slovenia—each reflecting diverse cultural, regional, and political landscapes. Additionally, we examined the top chain supermarkets and brands in their respective markets, enabling us to explore how these regions converge or diverge in their design and packaging practices.

Our findings reveal intricate insights into the politics of color as manifested in these socio-politically diverse geographies. Furthermore, the study raises significant questions about the role of computational imaging in shaping and reflecting transnational consumerist cultures. This research demonstrates the potential of computational imaging to illuminate cultural and political dynamics while expanding the discourse on the intersection of technology and representation in regional and global contexts.

Minji Chun (Oxford),
“Algorithmic Oblivion: Computational Erasure and Historical Memory in Jungwoo Lee’s Artwork”

This paper examines the intersections of computational imaging and historical memory through the works of South Korean artist Jungwoo Lee (b. 1981). By interrogating the selective processes of algorithms and their parallels to historical erasure, Lee’s art offers a critical lens on how computational practices reflect and shape postcolonial narratives in East Asia. This paper focuses on three key works: To Be Determined (2024), Because: Surrounded by Three Dimensions (2024), and Die Resistenz (2019).

To Be Determined initially aimed to document the planned removal of the Statue of Peace in Berlin—a monument commemorating Korean women forced into sexual slavery during Japan’s colonial rule (1910–1945). Lee used photogrammetry scans to capture the statue, but the algorithm’s filtering processes led to incomplete or omitted data. By rematerializing these omissions as 3D prints, the work draws attention to the parallels between computational omissions and historical silences. Because: Surrounded by Three Dimensions employs game engine technology to simulate Korea’s seas, introducing unprocessable data that results in distorted movements. This disrupts deterministic geopolitical narratives, challenging fixed understandings of Korea’s historical and geographical identity. Die Resistenz stages a symbolic drone funeral for the unfinished monument of the former South Korean dictator Park Chung-hee. The drone’s crash, caused by its obstacle-detection sensors, underscores the fragility of both technological systems and unresolved historical legacies.

This paper situates Lee’s works within the broader geopolitical implications of computational imaging, particularly in the context of East Asia’s contested postcolonial memory. By reading Lee’s artistic explorations alongside these technologies, it argues that computational imaging is not merely a technical tool but a cultural practice that reflects and reshapes how histories are constructed, remembered, and contested, revealing the inherent selectivity and fragility in both technological systems and historical representation.
