A Personalized Ecology of AI

Abstract clusters of texture like mold on a red and white carpet.

I'm three events into this tour talking about AI, and there's a question about the environment every time. What's the environmental impact of AI? In London, the question was a bit more pointed: How do I justify using AI as an artist, knowing the technology's environmental impact?

The work I present is critical, as all my work is: I use AI in order to understand it and to make informed critiques, and that work takes a critical position toward the tools and the myths embedded in them. Those critiques are never sparing. But of course, I know that I am complicit in using tools that contribute to environmental harms. I want to be mindful of that complicity, because part of my practice is acknowledging our complicity with technology.

When you're asked, "What are you doing about the environmental impact of your work?" the question is leading. There is no good answer, because for many people no use of AI can ever be fully offset. Many factors also influence an individual user's environmental impact, and these issues register differently at the personal scale than at the national or global scale.

That's understandable. It's fine to hold people accountable for their use of AI. But I also have to assert that, as an individual, I am making a value judgment that my work is a valid use of the resources allotted to it, and that the questions I raise and the critical conversations the work allows me to have justify the cost.

But to make that assessment, I can't just dismiss these questions. In this post, I will outline my environmental footprint as an artist who works with AI: to frame what, precisely, that personal footprint is, and to identify what I can do, and am doing, to offset it. It is by no means scientific or exhaustive, and I am famously incapable of keeping numbers straight in my head, so I am open to corrections, as well as to other assessments of the environmental impacts of AI.

I am not here to make a case about whether it's good or bad; I'm just trying to figure out what the numbers say. If you can help with that, reach out!

A Speculative Baseline

The environmental footprint of individual AI use is challenging to assess. We don't know exactly how much energy and water go into training AI models, or into individual generations from those models. But we can draw on some estimates.

The most reliable and well-cited source I've seen is from Dr. Sasha Luccioni at Hugging Face. Her team's widely cited measurements put image generation at roughly 2.9 kWh of electricity per 1,000 images – about 2.9 Wh per image – and that's the baseline behind the comparisons that follow.

For comparison, we can look at 100GB of backed-up data stored in Google Drive (or something like it). One cloud user's energy consumption can be between 60 kWh and 1,600 kWh in a year. In the best case, leaving a bunch of music and photos on cloud storage uses about as much energy in a year as creating 20,000 images with Midjourney.

It's also worth comparing to other, more traditional, forms of artistic practice. A ceramicist firing an electric kiln for 5 hours could use 27 kWh – the energy of about 9,310 image generations.
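
To make these equivalences easy to check, here's the arithmetic as a back-of-envelope Python sketch. The per-image energy figure is my own assumption, back-derived from the comparisons above; treat it as illustrative, not authoritative.

```python
# Back-of-envelope energy equivalences. The per-image figure is an assumption,
# back-derived from the comparisons above (27 kWh / 9,310 images = ~2.9 Wh).
KWH_PER_IMAGE = 0.0029

cloud_best_case_kwh = 60        # best-case annual energy for ~100GB in the cloud
print(cloud_best_case_kwh / KWH_PER_IMAGE)   # ~20,700 images, i.e. "about 20,000"

kiln_firing_kwh = 27            # one 5-hour electric kiln firing
print(kiln_firing_kwh / KWH_PER_IMAGE)       # ~9,310 images per firing
```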

CO2 Emissions

There is also the issue of CO2 emissions. The Hugging Face report frames it this way: generating 1,000 images from a high-powered AI model is equivalent to driving 4.1 miles in a car, which works out to about 21.6 feet of driving per image.

According to the US Federal Highway Administration (the nation in which I live), the average male aged 35-54 (the demographic I belong to) drives 51 miles a day, or about 360 miles a week. I work from home, but drive approximately 36 miles to work, only twice a week. The roughly 288-mile gap between the two is equivalent to about 72,000 images per week; if I generated that many, I would catch up to the typical American driver in my age range.
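
A sketch of the same driving math, under the assumptions above (the report's 4.1 miles per 1,000 images, and 36 miles counted as my full trip to work):

```python
# CO2 as driving equivalents, assuming the report's 4.1 miles per 1,000 images.
MILES_PER_1000_IMAGES = 4.1
print(MILES_PER_1000_IMAGES * 5280 / 1000)  # ~21.6 feet of driving per image

average_weekly_miles = 360      # FHWA: ~51 miles/day for my demographic
my_weekly_miles = 36 * 2        # two 36-mile trips to work each week
gap_miles = average_weekly_miles - my_weekly_miles   # 288 miles I don't drive
print(gap_miles / MILES_PER_1000_IMAGES * 1000)      # ~70,000; ~72,000 if 4.1 is rounded to 4
```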

Of course, I don't see this as an allowance, and I generate nowhere near 72,000 images per week, or even 9,310. If I am making a piece, I might generate about 200 images. In terms of electricity and CO2 emissions, for example, Moth Glitch would be equivalent to about 0.6 kWh – roughly 30 minutes of vacuuming my house – or driving a bit under a mile in a car.
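
The per-piece math, using the same assumed baselines:

```python
# One finished piece, Moth Glitch: roughly 200 generations.
images = 200
print(images * 0.0029)        # ~0.58 kWh, about 30 minutes of vacuuming at ~1.2 kW
print(images * 4.1 / 1000)    # ~0.8 miles of driving
```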

Water Use

Estimates vary for water. An AI image production model can use between 1.8 and 12 liters of water per kWh. Let's assume the worst case: about 35 liters of water for 1,000 images, or roughly a third of a large bottle of water for every ten images. That per-image rate is actually comparable to (in fact slightly lower than) traditional photography, where a gallon of water could develop about two rolls of film in a darkroom. That's 72 images for 3.8 liters, or about 0.05 liters of water per image.
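
Here's that water arithmetic, assuming the worst-case 12 liters per kWh and the ~2.9 kWh-per-1,000-images baseline from above:

```python
# Water per image, taking the worst case of the 1.8-12 L/kWh range.
LITERS_PER_KWH = 12
kwh_per_1000_images = 2.9
liters_per_1000 = LITERS_PER_KWH * kwh_per_1000_images
print(liters_per_1000)           # ~35 liters per 1,000 images
print(liters_per_1000 / 1000)    # ~0.035 liters per image

# Darkroom comparison: one gallon (3.8 L) develops ~2 rolls of film, 72 frames.
print(3.8 / 72)                  # ~0.053 liters per film image
```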

Regarding personal accountability, I look to equivalent areas where I could reduce my water footprint. One case in point: I don't eat red meat. The water cost of a single pound of beef is about 2,000 gallons, or 7,571 liters. One quarter-pound hamburger costs about 1,893 liters – the water footprint of generating 54,000 images. Comparatively, we could eat soy-based burgers (and I do), which cost a paltry 256 gallons (about 1,000 liters) of water per pound of food.

Cynically, by eating a veggie burger instead of a beef quarter-pounder just once, I am creating an allowance for generating about 46,771 images. Based on per-capita burger consumption (the amount of beef divided by the US population), the typical American eats about 2.4 burgers a week.

Let's assume I ate 3 burgers a week (I make the veggie version at home for lunch almost daily). That would be 140,313 AI images a week before my art making had an impact on my water consumption.
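
The burger math in one place; small differences from the figures above come down to rounding:

```python
# Diet-based water offsets, using the figures above.
GALLONS_TO_LITERS = 3.785
LITERS_PER_IMAGE = 0.035

beef_liters = 2000 * GALLONS_TO_LITERS          # ~7,570 L per pound of beef
soy_liters = 256 * GALLONS_TO_LITERS            # ~969 L per pound of soy burger
saved_per_burger = (beef_liters - soy_liters) / 4   # quarter-pound swap: ~1,650 L

print(saved_per_burger / LITERS_PER_IMAGE)      # ~47,000 images per swapped burger
print(3 * saved_per_burger / LITERS_PER_IMAGE)  # ~141,000 images/week at 3 burgers
```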

How many do I make?

In sum, this breaks down as follows. Comparing my personal stats to the average American consumer, this is the number of images I would have to create per week before my AI use cancels out the savings from other decisions I've made in my life:

  • roughly 72,000 images per week, based on my personal CO2 emissions;
  • 140,313 images per week based on being a vegetarian vis-à-vis water consumption.

How many AI images do I make per week? It varies from project to project. For a short video like Dance Like, I generated 200 video clips and about 20 music clips before selecting the ones I would use. Video is a tough thing to find estimates for, because companies are not revealing the energy use of video models.

Let's do a brutal worst-case assessment. AI video generates 24 frames per second, each smaller than a Midjourney image. I run Sora's video frames at 480p (854×480) rather than 1080p (1920×1080) – about a fifth of the pixels, though let's conservatively call it half. We also aren't looking at the generation of every frame: significantly, these models create portions of the video, then interpolate between those frames, then re-render the interpolated frames in more detail. That's still quite intensive, but not quite as intensive as generating a new image 24 times per second.

Across the roughly 200 clips I generated for this 2-minute, 33-second video, that comes to about 216,000 frames. We can divide that by two because the frames are half as big: 108,000 images. Then we could probably cut that to a quarter – but I'm going to be conservative, and say 50% – because we are interpolating, not generating from scratch. That leaves 54,000 images, which (assuming 1,000 images is 4.1 miles) is equal to the following; the arithmetic is sketched in code after the list:

  • Driving 221.4 miles in a car – about 60 percent of the average American driver's weekly mileage.
  • It's about 1,900 liters (roughly 500 gallons) of water, about the same as eating a hamburger.
  • It's 162 kWh of electricity, or 6 firings of a kiln.
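
Here is the whole worst-case chain in one sketch. The clip count and per-clip length are my assumptions, chosen to reproduce the ~216,000-frame figure above:

```python
# Worst-case chain for the 2:33 video. Assumes ~200 candidate clips at 24 fps,
# averaging ~45 seconds each -- chosen to reproduce the ~216,000-frame figure.
frames = 200 * 45 * 24      # 216,000 generated frames
frames /= 2                 # 480p frames, conservatively counted as half a 1080p image
frames *= 0.5               # interpolation, conservatively counted at half generation cost
print(frames)                       # 54,000 image-equivalents

print(frames * 4.1 / 1000)          # ~221 miles of driving
print(frames * 0.035)               # ~1,890 liters (~500 gallons) of water
print(frames * 0.003)               # ~162 kWh (per-image energy rounded up to 3 Wh), ~6 firings
```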

As an individual, creating a single short AI-generated video brings me to about par with the "offsets" that I've created through my everyday lifestyle decisions and privileges such as diet, and not driving to work daily. (This does not account for audio generation, which I can't find good numbers for).

What's the point of this?

There is always more work to be done on reducing consumption, and I'm not saying I get a pass. I am not constantly generating music, video, or images – in an average week, I generate none at all. I don't use ChatGPT on a daily basis, either. When I use generative AI, it is primarily for research: to learn how these systems work, and to shape more responsible policy and social literacy. The art I make is a result of that research process.

That said, people use energy, and they use it for all kinds of pursuits. There are many good, strong cases against AI, and negligence about its environmental effects is one of them: let's shatter the illusion that these systems are somehow neutral. But there are also individual decisions that we each have to make about the use of any technology, or energy, or resource – especially technology created in the pursuit of something like art-making. In a purely utilitarian world, we wouldn't expend energy on art-making at all; that would be a dismal place to live.

The history of art is full of chemicals and emissions: photographic chemistry, the petroleum involved in pressing vinyl records, the energy consumed by a kiln. In every case, energy assessments and mindful use are essential.

But there is no denying that collective energy use around AI is bad news, and likely to scale as we integrate less efficient generative systems into the surface of everything, from smart fridges to smart cars and emails and search results.

There are ways that I engage in more mindful uses of models: slowing footage down rather than generating longer sequences, for example, cuts the impact of some of my projects in half. Much of my work uses interpolated images, rather than AI-generated video, which means that the environmental impact of something like SWIM is less than something purely generated in Sora, like Dance Like. I also use archival footage when I can – footage not generated by AI at all.
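
As a concrete example of the slowdown trick: halving playback speed means each generated second covers two seconds of screen time, so a finished minute needs half the generated frames.

```python
# Halving playback speed: each generated second covers two seconds of screen
# time, so a finished minute needs half the generated frames.
final_seconds = 60
playback_speed = 0.5                                 # footage slowed to half speed
generated_seconds = final_seconds * playback_speed   # 30 s generated instead of 60
print(generated_seconds * 24)                        # 720 frames vs. 1,440 at full speed
```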

To that end, we can look at my 33-minute film, Human Movie. It is not exclusively AI-generated video. Much of it is found footage or repurposed footage. The AI-generated video is created at a lower resolution; it's also usually slowed down, allowing the clips to run longer (and stopping me from generating extensions). I like noisy images, and use them as part of my visual vocabulary, so high resolution output is unnecessary. I have recycled and remixed generated clips from other projects, allowing me to make a film with video that is shown elsewhere, in other forms, rather than generating new video for every project. I will recolor these videos in Premiere and glitch the resulting files in different ways, allowing the same clips to be reused multiple times even within the same film.

In the end, I'm confident that my use of AI does more good – including the good of informing audiences about the need for more thoughtful use – than if I didn't use it at all. That's a personal decision, but it's one that I consider and navigate thoughtfully. I'm mostly outside of the AI filmmaking and art-making communities, but I don't think it's a contradiction to critique the politics of the tools you use. The more AI-using artists who get on board with this critique, the more likely we can build the pressure to force these systems to change.


Noisy Human Tour

Film photograph of a bearded man in black glasses with his hand on his head, but his fingers have little plastic hands, as if he's from an AI generated image.

Rome: Human Movie Screening & Talk

May 12, Bibliotheca Hertziana, 8:30pm

Eryk Salvaggio’s latest film, Human Movie: Six Meditations on a Compression Algorithm, has already won awards in AI and non-AI film festivals, and been presented at film screenings around the world.

Created from a blend of glitched AI-generated video, archival, and found footage, Human Movie is not about machines at all; rather, it seeks to assert a humanist counterfactual to comparisons between human thought and the limited capacities of generative AI. The film takes these metaphors at face value, then slowly peels back their superficial nature to examine the nuance, and the appeal, behind likening humans to today's computer systems.

There will be a performance at the Bibliotheca Hertziana in Rome at 8:30pm, followed by a Q&A, capping off the Machine Bildwissenschaft conference on generative AI, which runs from 11:00 to 19:15.


Zurich: Artist Talk

Eryk Salvaggio: Embodied Generativity: Critical AI, Art & the Body
May 13, 6:30pm, Zürcher Hochschule der Künste (ZHdK)

The interface of generative AI presents a passive mode of media production, modeled after the submission of a ticket to a tech stack, or a request for a chatbot helpdesk. What other modes of interaction might artists engage with to get at these tools beyond the keyboard? In a presentation of selected works, the artist, theorist and AI researcher Eryk Salvaggio explores positions, gestures and attitudes toward AI that reflect its brittle comprehension of creative logic and the world at large. This talk will present works and propose novel workflows that challenge ideas of "generativity" by moving it from the visual senses to the body, examining video, performance, dance, and puppetry.

In cooperation with the DIZH-Bridge Professorship for Digital Cultures and Arts with the Department Fine Arts (DFA) at ZHdK. The event will be held in English. Venue: Raum 6.K04 / Zürcher Hochschule der Künste, Pfingstweidstrasse 96, 8031 Zürich.


Schwäbisch Gmünd, Germany: Artist Talk

Reshape Forum for Artificial Intelligence in Art and Design
Feel the Noise: Notes from an Adversarial AI Artist

May 15, Talk 4:45–5:45 PM, Hochschule für Gestaltung Schwäbisch Gmünd (University of Applied Sciences), Room H2.10 (Aula), floor 2

Eryk Salvaggio's artworks use AI to work against AI. Favoring an adversarial, rather than "collaborative," position creates ways of "removing the imagination" of what AI is, in order to see more clearly what it actually is and does. Beyond simple opposition, this practice creates opportunities for more nuanced claims about what defines the human when it is removed from the isolating context of rationality, efficiency, and productivity.