Things I Did in 2024
Writing, Talks & Miscellanea
A collection of writing, art and sounds I've worked on this year.
Writing
I've written quite a few pieces for Tech Policy Press this year.
The Original Sin of Generative AI
Should images of child abuse circulated online be commercialized? If we agree it should not, then why do we allow vast copies of the internet to be incorporated into AI systems without interventions or oversight?
The first covers the presence of child abuse material in the training data for Stable Diffusion and a slew of other open models. David Thiel at the Stanford Internet Observatory published his report in December; this piece was published on January 2.
LAION, the organization that created the dataset, took it down in December 2023, as it sat in a legal gray area (it is illegal to possess or distribute this material in the US, and the dataset contained links to images, which some might argue was distribution). It then worked with various child protection groups and agencies to remove this content and create a new dataset, RE-LAION.
That release came with a bit of whinging about how LAION wasn't told far enough in advance about Thiel's discovery. This is irksome because they elected to scrape 5 billion images without looking at what they got before distributing the dataset (knowing it would be used to train image models). Responding by pointing fingers at the people who took the time to do what the dataset's creators should have done themselves is annoying. It is worth noting that Abeba Birhane and others had pointed out these issues in this exact same dataset in October 2021.
Context, Consent and Control
Anti-AI anxiety often presents itself through inexpressible discomfort. It rises up from the body of the AI skeptic as an instinctual recoiling, a physical cringe. But without articulating details of that anxiety, there’s no way to translate the cringe into remedies.
In June, I participated in a series of events linked to AI and policies to protect the images we share online. Though it has its place, I am wary of the copyright argument for protecting images. Instead, I am concerned with images and artworks in the broader context of data protection. I am troubled by a fundamental paradox: before social media, publication was a way to verify the creation of an image, artwork, or text. Because of the privatized nature of social media networks, we have ostensibly given up a right – the right to determine how and when copies are made – to those networks.
In the 21st century, social media has reshaped that exchange: give up your rights, or don't participate in the communication network. This makes as much sense as surrendering your rights to a poem to the phone company because you read it to someone on the other end of the line.
This is legal, but it is not ideal. I would argue that certain rights – such as the right to an expectation of privacy – ought to be respected with regard to what data these companies can re-sell, or use for purposes that aren't aligned with the sharer's intent.
"Context, Consent, and Control" was an attempt to translate into policy language the experience of "cringe" that many people have told me about the way their images are being used as training data. I acknowledge that "cringey" can't be translated into policy frameworks, but tried to articulate what "cringe" means to people. I break it down to three C's – people feel frustrated that images are being taken out of context, that they are being taken without consent, and that they feel like they have no control over personal sharing with friends and family.
Respecting these three C's, I argue, would create conditions where people are more content with the state of the industry, and encourage people to share and connect online in the best spirit of the Web.
Challenging the Myths of Generative AI
The digital world exists in our imagination, shaped by the people selling services. This has given rise to a mythology of technology that aims for simple, graspable explanations at the expense of accuracy.
"Challenging the Myths of Generative AI" was probably my most popular bit of writing since How to Read an AI Image in 2022. The idea of it is simple enough: engaging in public dialogue around AI and its role in education, government, philanthropy, the arts, academia, tech policy, tech regulation, and even tech development, I realized that people had been trapped in unhelpful and inaccurate frames for making sense of them. Even among people who were working critically with AI systems, this language permeated conversations and tactics.
On a visit to the University of Cambridge this year, Daniel Stone and I talked a lot about how these frames operate. Daniel is interested in the language we use to frame AI policy; I am interested in the frames we use to describe AI itself. You can hear more about that in a great discussion between Daniel and Alix Dunn at Computer Says Maybe.
"Challenging the Myths of Generative AI" was my second nod to Roland Barthes to go viral, who knew that was the secret formula? In it, I try to examine not just the myths but the functions they serve – what it is that makes them convenient enough for the industry to believe themselves, or allow to linger.
The Ghost Stays in the Picture
From March through May I was a Research Fellow at the Flickr Foundation, and spent April in London chatting with archivists and researchers and the folks at Flickr, who are exploring what I would describe as decentralized archival practices.
This series was an exploration of how images in archives "haunt" generated imagery. Part one explores the conceptual shift of archives into datasets, and what we lose in that translation. In part two, I looked at a dataset of 99.2 million photos released by Flickr in June 2014 and the path it took from images, to archive, to dataset – a dataset that, unexpectedly, became one of two go-to references for calibration and testing of image recognition and then image synthesis.
In part three, I discuss the presence of a gaze and how it becomes embedded into AI-generated images when models are trained on datasets where that gaze has been preserved. Specifically, I address the gaze of the photography documenting the US colonization of the Philippines, and how it permeates AI-generated "stereoview" images.
Conversations with Maverick Machines
This is a piece on Gordon Pask, and what Pask's work in cybernetics around conversation and interaction can tell us about our relationships with today's generative AI systems. Written for the Adela Festival's 2023 Edition Digital Dish series, curated by Maks Valenčič in collaboration with Razpotja magazine.
The Age of Noise
A lecture-performance, of sorts, delivered as the opening talk for the Australian Centre for the Moving Image's FACT conference. In it, I trace the trajectory of information from signal to noise, arguing that this shift marks the end of the information age and demands a different way of relating to the world.
This talk is also the subject of a recent podcast conversation with Alix Dunn at Computer Says Maybe, and made the year-end wrap-up of the podcast's highlights.
Artworks
Notes from the Algorithmic Sublime
Expanded remarks from a seminar given at Frank Shephard's Algorithmic Sublime class at the New School for Social Research, asking whether an AI-generated image could ever create an experience of awe.
Tied to this is my uninvited response to a questionnaire for artists on AI and generative aesthetics, describing my adversarial relationship to the tools I use and the limits I think they impose (unless you break them), linked below.
SWIM
SWIM is an artwork that combines archival footage of a swimmer with AI-generated noise (the result of a glitched system). The film is slowed down and we watch it "diffuse" in a visualization of the training process for image and video production models, re-imagining this training process as a contemplation on cultural memory and loss.
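For the curious, here is a minimal sketch of the forward "noising" process the film visualizes. This is an illustration of the general diffusion technique, not the code behind SWIM; the function name and parameters are my own assumptions.

```python
import numpy as np

def forward_diffuse(frame, steps=50, beta=0.02):
    """Yield progressively noisier versions of an image, loosely
    mirroring the forward (noising) pass used to train diffusion models."""
    x = frame.astype(np.float64) / 255.0  # work in [0, 1]
    for _ in range(steps):
        noise = np.random.normal(0.0, 1.0, size=x.shape)
        # Blend in a little Gaussian noise each step, so the picture
        # dissolves gradually instead of disappearing all at once.
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise
        yield (np.clip(x, 0.0, 1.0) * 255).astype(np.uint8)

# Example: dissolve a random stand-in for a video still.
frames = list(forward_diffuse(np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)))
```

Run over every frame of a clip and played slowly, the output resembles the dissolve the film lingers on.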
SWIM premiered at the Australian Centre for the Moving Image, Melbourne; has been shown at the Unsound Music Festival, Krakow; was nominated for Best Artificial Intelligence Film at the Cannes World Film Festival; and was selected for the Art After Dark series at Bunjil Place, Melbourne, Australia, in 2024.
Chance Operations (For George Brecht)
A lecture contrasting AI art with the generative art and prompts in the work of George Brecht, an artist who worked in the 1960s alongside Fluxus folks and John Cage. The contrast aims to understand the role of chance in Brecht's work and the role of chance in determining what kind of image you get from a diffusion model. The presentation included an artwork which generated an image by rolling a virtual set of dice to determine the color of each pixel, contrasting it with the role that probability and statistics play in crafting an image with an AI model; a sketch of that dice mechanic follows.
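This is a minimal sketch of the idea, not the artwork's actual code; the three-rolls-per-pixel mapping and all names are my own assumptions.

```python
import random
from PIL import Image

def dice_image(width=64, height=64):
    """Chance operation: one six-sided die roll per channel decides
    each pixel's color, so every pixel is the outcome of pure chance
    rather than learned statistics."""
    img = Image.new("RGB", (width, height))
    for y in range(height):
        for x in range(width):
            # Map a roll of 1-6 onto six evenly spaced levels in 0-255.
            rgb = tuple((random.randint(1, 6) - 1) * 51 for _ in range(3))
            img.putpixel((x, y), rgb)
    return img

dice_image().save("chance_operation.png")
```

Unlike a diffusion model, nothing here is weighted by training data: every outcome is equally likely, which is precisely the contrast the lecture draws.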
Because of You (With Avijit Ghosh)
Because of You is a film about Henrietta Lacks, but also about the abstractions of our lives and bodies into data through systems of surveillance and research.
Because of You won the "Technical Community on Pattern Analysis and Machine Intelligence" award from the IEEE at the Computer Vision and Pattern Recognition conference (CVPR). The title is a mouthful, but the award was part of a massive collection of AI-based artworks curated by Luba Elliot from 368 entries. The film was also presented at the Clapham International Film Festival event on AI and cinema at the Turing Institute in London, and was named "Best Film Reflecting Ethical and Legal Issues in the Use of Technology" at the 2024 CineTech Futures Festival.
Moth Glitch
A film that tries to reconcile Stan Brakhage's exploration of the materiality of film with an exploration of the materiality of training data and generated video – "a deliberately wrong question." Moth Glitch combines AI-generated moth animations with images of noise created by glitched AI systems.
Moth Glitch was presented at the Light Matter Film Festival, New York; and was projected against the Vero Beach Museum of Art for its Art After Dark series.
Music & Music Videos
Sounds Like Music (Seminar)
Sounds Like Music is an attempt to explore the question of a "multi-modal media theory," an approach to understanding AI-generated content from diffusion models in interrelated ways. Specifically, this seminar at RMIT delved (!) into understanding generated music (and generated media more broadly) as a media artifact designed to be plausible rather than expressive.
Ars Electronica
I wrote this as The Organizing Committee; it was performed by Vocaloid (a Japanese voice synthesizer from 2018), and the video was created for a talk and a "live" performance of The Organizing Committee at the Museum of Guelph for Arts Everywhere. It's a song about the complicity of the arts in technology, a complicity I often feel quite closely. So I made a set of deepfakes that merged my body with the faces of tech CEOs in order to sing about it.
I always sort of wonder if it isn't mean to call out Mark Amerika. I was mad at him for calling me a brainwashed parrot that was only capable of regurgitating my media-studies training "for profit." If you don't know who he is, that's OK. I kind of regret it, but I also don't. This is as much a reminder to myself as anything:
"Art can sell systems of power, art can support technocracy. Beautiful pictures of control – art, alone, changes nothing; we, alone, can change nothing."
500,000 JPGs
I know people can be wary of using AI-generated music or video, even when it is used to critique AI. Much of my work in this space is based on a practical reality: I want to understand these systems and how they work in meaningful ways, so I may as well embed a critique into the things I make while learning them. This video, 500,000 JPGs, was an exploration of a then-new video generation model and then-new music generation models.
The lyrics were inspired by a Reddit post to an AI forum in which a poster mentioned having 500,000 JPGs but didn't know what to do with them. It's a kind of ironic nod to AI slop, made with the tools of slop production.
I've released some experiments in glitching AI music systems as The Absentees; of course, I am also deeply conflicted about whether this work should exist.
On Slop
Finally, to end with something very recent: a set of publications called "Slop Infrastructures," which looks at the phenomenon of overwhelming AI content through an infrastructural, systems-based critique.
Thank you for your readership and engagement and support in 2024! I have a lot of exciting things in the works for 2025 (and even 2026!) that I am looking forward to sharing. Already on the books: some in-person events in Brooklyn, Barcelona, Amsterdam, Melbourne, and Baltimore.
I'll likely take some time off writing for the first few weeks of 2025, so happy new year, and I'll see you then!