Generative and Adversarial: Art and the Prospects of AI
Uninvited Responses to a Questionnaire
The latest issue of October features a collection of artists responding to a questionnaire created by Michelle Kuo and Pamela M. Lee. I decided to answer the questions independently, before reading the other responses.
If AI algorithms can analyze massive datasets and identify patterns, how has this capability influenced the generation of new artistic concepts and ideas?
Analyzing datasets and identifying patterns is a residue of the digital mediation of our lives in the 21st century. We live under surveillance, rely on online communication with tenuous privacy rights, and our lives are endlessly traced into data. I work with generated images, videos, and sound. Each is an artifact of the network. Media is the medium!
Contemporary diffusion models are defined by more than pattern finding: they apply those patterns to noise. The noise in the system gives the output of generative AI its variety. This substitution of noise for what the humanities might call inspiration or creativity is telling. It limits the possibilities inherent in noise to known patterns.
Making images through the balance of noise and constraint, steered through data prediction and analysis, is worth contrasting with chance.
George Brecht’s essay on Chance Imagery predates his first meeting with John Cage. So “new” in this space might be harder than we think. Brecht wanted to get humans out of the way of discovering new visual modes. Today’s generative AI is designed for the opposite. It’s a horizon of chance (random pixels, the true chance art element of the machine) but it’s filtered through known references of visual or sonic culture. It restores precisely what Brecht wanted to remove: reference, and through reference, order.
The noisy JPEG is debris: the end of the training process. You strip information out of these digital images and analyze how that noise spreads. Then the model learns how to repair new frames of static, following those patterns back to something referencing the key image without reproducing it.
Noise is also the seed of the generation process. You start with an image of noise, and then the model takes what it has learned from the dataset and walks it in reverse. We generate “new” images from a composite image full of that digital debris. Still, these images are isolated from any precise context aside from how they have been labeled — and how they are recognized by an image recognition system called CLIP. All kinds of meaning are lost along the way.
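For readers who want the mechanics, here is a minimal toy sketch of that forward-and-reverse logic in PyTorch. The linear schedule and all the names here are illustrative assumptions, not any production model:

```python
# A toy sketch of diffusion training logic: blend an image toward static
# at increasing strengths (forward process), so a model can learn to
# predict that static and subtract it back out (reverse process).
# The linear schedule and all names are illustrative assumptions.
import torch

def forward_noise(image: torch.Tensor, t: int, steps: int = 1000):
    """Blend an image toward pure Gaussian static at timestep t."""
    alpha = 1.0 - t / steps          # toy linear schedule: how much image survives
    noise = torch.randn_like(image)  # the static the model will learn to remove
    noisy = alpha ** 0.5 * image + (1 - alpha) ** 0.5 * noise
    return noisy, noise              # training: predict `noise` given `noisy` and t

# Generation runs this in reverse: start from pure static and repeatedly
# subtract the model's predicted noise, walking back toward an "image."
seed = torch.randn(3, 64, 64)        # the noise seed: no image, only static
```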
If you look at a noisy JPEG, or noise at all, you find an absence of discrete category. Everything is fuzzed, blurring into everything else — generated video uses blurred abstractions as its “noise” seed. There is possibility and potential in the blur.
In Gen AI, this potential is filtered by reference, not expanded. We steer the noise by comparison to one model of visual culture, derived from one set of images; we use an image recognition system to ensure similarity; and then we say we have made something “new.” Would it be more appropriate to treat this “new” image as an artifact of that process — certainly not a “photograph,” not even an image, but one hypothetical image, one proposal for an image, or a video, or a sound?
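To make the “image recognition system” concrete: CLIP scores how closely an image matches a text label, and generation pipelines use that score to pull outputs toward their prompts. A minimal sketch using the openly released CLIP weights via Hugging Face transformers; the image file name is a hypothetical placeholder:

```python
# Scoring image-text similarity with CLIP, the recognition system used
# to steer generated images toward their labels.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("generated.png")  # hypothetical generated output
labels = ["a portrait photograph", "abstract noise"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))  # higher score = "more similar"
```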
Cécile Malaspina writes about a tension between “information entropy” and its status as “potential information.” She’s not writing about AI but about noise in other contexts. In her book, she frames this noise as containing certain “potentials.” For example, a child may have the potential to become an excellent piano player. But at each stage of success, the freedom of choice is stripped out of potential: the potential becomes a constraint as it is realized. Soon the child may sense they have no choice over whether to pursue something else. As things are defined, they can strip away choice. What we want, I think, as people, is the freedom to explore possible and undefined paths and to make sense of them in our own way. This is not the definition of agency but the conditions through which agency is exercised. With many paths come many choices, noisy as they may be.
As much as people push to see the resulting AI images as the opposite — as explorations of unseen “potential” — they actually tighten the walls on agency. They flatten paths of discovery and choice, replacing them with paths of preference and reference.
Can we make art like that? Sure, but it depends on our relationship to the system. That is why I see my relationship to (against!) the system as the true seed of what I make with it.
How are artists collaborating with, changing, torquing, or critiquing AI systems?
I explore the way noise moves into and out of this system, with many of my techniques relying on an adversarial approach — prompt injection — to confuse the system into producing noise that can’t be stripped away. The machine can’t recognize noise, so it can’t define noise and can’t limit it.
What we see is an artifact not of deference to representation as it is modeled by training data and the archive. Rather, it’s an artifact stamped with the failure of the system to reconcile noise within the rather flimsy back-and-forth we call AI.
Using noise this way is about a few things.
First, it’s about my position in relation to these systems: literally, ensuring that their interface does not limit me but that I can still find new paths of possibility with the system that transcend its stated intention, value systems, and all that.
Second, it strips away the mythology of these systems in the popular imagination. I tell stories with this noise, such as using images of Henrietta Lacks to reveal how the AI’s Henrietta Lacks looks nothing at all like Henrietta Lacks. And the AI-generated narration, trained on the authentic voice of a human narrator, sounds nothing like the actual human narrator, stripping away Dr. Ghosh’s accent in a sea of North American training data.
I hope that our imagination of these systems collapses as their inner workings are revealed. The story is told in that distance.
Third, it’s about the AI image as “noise” in a social context. AI images are an absence of information dressed up as a signal. They are a generic signal composed by averaging specific signals. So the glut of AI content is also a kind of noise, and of course, what we do with that glut also poses a choice. Scale is everything.
Finally, there’s how the data arrives at a text-to-image model in the first place. Much of this is well covered. There are consent issues around data. Artwork is being taken without permission. Images of people’s kids, and of child abuse, hide in these datasets. I don’t want to make pictures with that material.
Kevin Baker once remarked that we are already “imagining” too much about AI. Utopian and dystopian scenarios all teeter on unrealistic trajectories and capacities. We’re asking if we should redefine the horse in the age of the automobile.
In a sense, AI is hopelessly human. Human data, human ideology, human economics. Much of it is ugly. The foundation of generative AI is tied to eugenics. Francis Galton literally invented composite photography and inferential statistics to serve an impoverished theory about the relationship between bodies and behaviors. These are the basis of diffusion models. But the third invention for which Galton is known is eugenics.
So it’s helpful if artists can see Generative AI as an insistence on the stasis of models made by datasets and help reveal the danger of that logic. Models that assign weights are themselves heavy. And this logic is embedded into many algorithmic systems: reduce to data, abstract to a model, and predict a world. But it is a reduced, diminished world that rises from the past. If we confine our vision of the future to what fits into prior observed patterns, then we’ve given up.
To say humans think this way is misanthropic. Some humans think this way, when they are exhausted or afraid. That’s when people rely on their mental models to do all the work. Gen AI is a model of the mind of a frightened, anxious person, afraid of possibility and working to confine it to known references rather than exploring. It’s not limited to that, of course. But that’s how we’re making it, and how many use it.
This is philosophical but increasingly concrete: the more we build these models out into “agents,” capable of running digital errands or making increasingly complex decisions, as we see OpenAI and Anthropic working towards, the more this logic rubs up against the fabric of our lived experience. The more tightly we cling to old models in new contexts, the more chaos we unleash onto the world.
Compared to, say, the history of photography and chance, or art and systems, does artists' use of generative AI today represent a difference in degree or kind?
There is chance in it, but if you rely on it for pictorial representation, it’s always constrained by probabilities. So it is not aligned with the goals of chance art at all. AI does not always expand possibility if it is consistently relied upon for solutions. In many ways, automation reduces the breadth of true possibility for a person, or a society.
So it’s important to recognize what scale of the system we are looking at.
The noise in the system is the motion of particles in the universe—static, randomness, freedom, excitement, heat. The model applies constraints. That vibration becomes fixed, frozen. It’s not a pure chance operation. It makes sense of things in one way. That is its distinction from “human.” Humans, even one human, can think in all kinds of ways.
In contrast to the buzzy activity of a billion warm atoms, the version of human we identify with an AI system is built on just one way of thinking. It goes back to Marvin Minsky’s frames, a term that came from a psychological framework. Originally, frames represented a way of sorting the world to minimize attention to details and maximize attention to novel signals.
However, in the case of social relations or personal struggle, these frames construct a stubborn shorthand we might hope to transcend rather than settle into. A psychologically healthy individual can analyze the frames through which they tell themselves the story of their world. They separate from it, and acknowledge the story. Minsky saw frames as the key to AI, but his frames were consistent and reliable. By contrast, the artists who emphasized chance did so because of the freedom from these frames such randomness provided.
Looking at it from one frame, Gen AI can expand an archive into endless varieties, as Lev Manovich or Sam Altman might claim. We might envision an artist working in a different era, or from another country. This strikes me as nonsense. The machine cannot simulate any artist’s life experience, nor visualize what an artist might have done had they been born in Korea instead of New York. At best, it can rearrange some stereotypes: oh, here’s Monet, here’s Pop Art, a kangaroo for the Australians. It tells us nothing new about things or people. But of course, it expands the archive: it fills up potential thoughts with empty signals.
When we allow Generative AI to reduce the number or type of decisions we can make, it does not expand the number of states available to us — it constrains them all to one state, the state of generative media production. This limits its possibilities to the fixed boundaries of the image archive it is trained on. It is combinatorial, which is easily mistaken for infinite, but run enough of these prompts, and you see redundancy, not novelty. The novelty comes once, through error, and then appears over and over again. Novelty is rarely found through the filtration of possibilities steered by predictability.
As the Vasulkas put it, about their pioneering video art of the ’70s and ’80s, the goal was to “get rid of the supremacy of the human eye, the inherited modes of perception, and … reach an alternative (let’s say “noncamera” or “non-human”) point of view.” You don’t get that from using these systems as intended. You might get it by glitching them, as I have done here.
How have artists probed the creative possibilities of generative AI? How have they condemned the biases, ecological impact, and military-industrial origins of AI?
Working with noise has been one way. Caroline Sinders, Steph Maj Swanson, and I have a paper coming out in the journal Critical AI outlining strategies for creative misuse, borrowing Jon Ippolito’s term. That includes things like systemic interventions, such as noise prompting or negative prompting: the intentional misdirection of the system.
This can mean a few things. First, there’s noise, which positions the artist as an adversary rather than a collaborator. Second, there’s reappropriating AI outputs in order to expose those biases and gaps. Carme Puche Moré has a film, My Word, showing the impossibility of generating an image that looks like her based on the descriptions she gives it. The film’s point is the gap between the model’s idea of these words and her idea of herself, a deliberate evocation of algorithmic irony. Find the distance between the world and its algorithmic representation and show it to us. Recognize how power moves into the system and shapes what we see. Treat it like an artifact.
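For a sense of the mechanics, negative prompting is exposed directly in open-source pipelines: a second prompt tells the model what to steer away from, and misdirecting that channel is one available lever. A minimal sketch using the Hugging Face diffusers library; the model choice and prompts are illustrative assumptions, not a recipe from the paper:

```python
# Negative prompting in a standard text-to-image pipeline: the
# negative_prompt steers generation *away* from a description.
# Negating coherence itself is one adversarial lever an artist
# can pull. Model choice and prompts here are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="noise",  # ask for the thing the system cannot categorize
    negative_prompt="sharp, coherent, recognizable subject",
).images[0]
image.save("artifact.png")
```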
Because I work in policy spaces, I am often making sense of the people who hold power in AI. They believe in many myths about their models and what will come from them. Many tell stories that conveniently erase the presence and mechanics of power within AI infrastructures. But artists can invent new myths and new metaphors, things that make the flow of power more transparent. (AIxDesign is exploring this idea of redefining GAI scales as well.) The adversarial approach to generative AI exists in tension with it, and friction can produce something compelling. But it’s just one way.
What is no longer possible?
It’s all still possible. I think something is becoming less defensible, though, which is the separation of AI’s resulting aesthetics from politics. There is an aesthetic of default AI: it’s all averages. You can get away from that visually, but no matter how they’re made, AI images reliant on datasets are defined by an averaging of that dataset.
Even noise images are constrained in some ways and at some scales, but it’s not because of training data, it’s because of the limitations of the machine. Limits are natural: we don’t want to freeze the universe into utter predictability, nor boil it into perpetual disorienting change.
People can still use these tools, but they should be mindful. There is a kind of political residue. Being critical of your medium is not a contradiction; it’s an engagement that is bound to make the work more interesting. I am also inspired by practices of institutional critique in the art world: artists working with technology need to be more alert to their complicity in technopolitics. Many are!
What is human or machine, creativity or computation, in the first place?
There is a tendency, in a lot of conversations in America, to zoom out to the aggregate scale, name it a thing, and then treat the thing as an isolated object. All forest, no trees. So we respond to models, but not to the 5 billion points of data. We respond to “the human,” as if there could ever be one definition, but not to the 8 billion people. How we define things depends on our sense of their scale at any given moment. Shall we look at AI through the scale of size? The scale of its training data? Its social impacts? Our private emotional response?
Should we look at AI through the scale of an infinite horizon, placing the models on a spectrum of possibility measured not by 4-8 images but by hundreds of them? There they are quite limited compared to the just-above-average human imagination. And “human” — what do we mean there? One (hypothetically average) person, one person with a specific way of seeing, or each of 8 billion people, or the shared capacities that emerge when 8 billion people are together on one planet? A person (or 8 billion of them) on a good or bad day? Have we all had coffee yet?
What about time? Doesn’t a model merit a distinction simply because of the speed with which it imitates us? What if the scale of time linked to human ideas and perception was inherent to a definition of creativity?
Or we might consider the emotional scales, also profoundly connected to humankind and our neighbors but not to machines. The diffusion of cultural memory, the transformation of the archive into training data on a personal scale, tells a different story than it does on an abstract scale. We may be relieved to see the horrors of history ground up into noise, to move on from trauma, and to create something new. But if you’re looking at noise as someone whose trauma has never been recognized, it might feel like an extension of abuse.
People who want to say AI is human often want AI to be human without the mess. The mess is human! We can’t dismiss emotions as part of the human experience. Gen AI boosters want to pretend humans can exist in scales of time so fast or slow that we cannot comprehend them. The battle to define the “correct” version of human that an AI represents is a rejection of what is, at its heart, human: the plurality of interpretations and meanings of the world that we strive to learn and sometimes reconcile. That’s politics, and many want to get rid of that, too.
So, what is all this interest in making AI “human”? Why the need? When discussing the “human,” we usually mean “people like us, and the people who ought to think like me.” The “human” has already excluded Black bodies and women’s bodies, and it has been used as a tool of colonization and oppression by people of all forms and shapes. We defined the human as a set of universals.
When we redefine the category of human to reflect the ways machines function (“think”) through comparisons to the so-called “human mind,” I get anxious. People say: oh, this is racism against robots, ableism against machinic neurodiversity. You’re not accommodating the intelligence of the machinic process! But that’s not even true. I’m deeply entangled in these logics, and much of my work — such as a deliberate machine-mushroom project — is about this tension.
The simple fact is that while AI might be a “non-human intelligence,” it is a distinctly human extension of specific forms of human thinking. A model of thought. Have we really exhausted the reach of empathy for the other intelligences we coexist with? Do we even understand a dog yet? AI is perhaps the least interesting, and least urgent, non-human intelligence we might strive to understand.
Whose minds are these like? In comparing and contrasting “AI and humans,” I fear that we create some vague mono-human oriented exclusively toward logic, sequences, and orderly categories, and ultimately use that against ourselves. Why compare ourselves to a machine? Or a horse? Or a river? Why not let these things be whatever they are? Defining and comparing the category of human is a dangerous business.
AI models are said to “reason” now, but the word doesn’t mean what it means for people or animals. So what does it mean? What’s more precise? Why be afraid of that precision? Why does the AI have to be like us to be validated? Which “us” is it meant to be like? Because it seems a lot like an “us” that embraces clear categories and labels, precise calculations, and stable ground from which to make predictions. Not all humans do, or have, those things.
What matters, I think, is preserving a sense of scale and how it transforms what we see, and to embrace the transformation of that vision, rather than finding one scale through which to filter everything. To that end, even the “AI is a human” metaphor has its uses. But it’s crowded everything else out. Let’s balance that metaphor with others and take them lightly.
Things shift in relation to how we choose to observe them. That flexibility is beyond what these computers can accommodate. I would challenge us all to cultivate it.
Things I Am Doing This Month
Fantastic Futures Conference, Canberra: October 18
I’ll be attending the Fantastic Futures conference in person in Canberra, Australia, on October 18! I’ll be in conversation with Kartini Ludwig, Director and Founder of Kopi Su, a digital design and innovation studio in Sydney, and Megan Loader, NFSA Chief Curator. I’ll follow up with some time in Melbourne and a potential speaking event, with details to be confirmed. Fantastic Futures is sold out, but check it out anyway.
Exhibition: Poetics of Prompting, Eindhoven!
Poetics of Prompting brings together 21 artists and designers, curated by The Hmm collective, to explore the languages of the prompt from different perspectives and to experiment with AI in multiple ways.
The exhibition features work by Morehshin Allahyari, Shumon Basar & Y7, Ren Loren Britton, Sarah Ciston, Mariana Fernández Mora, Radical Data, Yacht, Kira Xonorika, Kyle McDonald & Lauren McCarthy, Metahaven, Simone C Niquille, Sebastian Pardo & Riel Roch-Decter, Katarina Petrovic, Eryk Salvaggio, Sebastian Schmieg, Sasha Stiles, Paul Trillo, Richard Vijgen, Alan Warburton, and The Hmm & AIxDesign.
Film Festival: Uppsala! Oct. 26
ALGORITHMIC GROTESQUE: Unravelling AI is a program for the Uppsala Short Film Festival focused on “filmmakers with a critical eye toward our algorithmic society.” A great collection curated by Steph Maj Swanson, aka Supercomposite, with films by Eryk Salvaggio, Marion Balac, Ada Ada Ada, Ines Sieulle, Conner O’Malley & Dan Streit, and Ryan Worsley & Negativland.