Slop Infrastructures 1 & 2
"Maybe the Human Part of Human Connection is Overstated."
“Maybe the human part of human connection is overstated.”
– Anish Acharya, Partner, Andreessen Horowitz (WaPo)
In December 2023, Nicolas Lund of the Audubon Society wrote about the increase in AI-generated imagery of nonexistent bird species posted to Facebook's birding communities. This marked the start of a rising tide of what came to be called "AI slop." Much of it centered on Facebook, leading to the "Shrimp Jesus" phenomenon of jarring AI content posted to lure engagement, and to AI-generated images of child victims circulated during actual natural disasters.
Loosely speaking, AI slop is "low quality media, including writing and images, made using generative AI technology" used as "filler content that prioritizes speed and quantity over substance and quality." How the term is applied, and to what, varies wildly.
2024 was the year of AI slop. Kate Knibbs at Wired reported that 54% of LinkedIn posts were written by LLMs. 404 Media reported that Google not only boosts AI images in image search but also boosts websites with AI-generated text over human-made content. AI slop is so well suited to social media algorithms that human effort has become frustratingly difficult to recognize.
The term "slop" was brought to prominence by Simon Willison, though the phenomenon has been around for some time. Initially, I was annoyed by the term, which has become a thought-terminating cliché among anti-AI activists. Someone will respond to an AI image, declare it "slop," and that's the end of it. It's like replying to an email shilling discount pills to inform the sender that what they have written is spam.
Unfortunately, AI slop is worth considering more thoughtfully. In this piece, I will use AI slop to examine the current fusion of social media and AI-generated imagery in a year where diffused media systems clearly influenced national politics.
Contrary to its usual frame, I don't see AI slop as a story about disinformation, but about the reality that information of any kind, in sufficient quantity, becomes noise. It is, I argue, a symptom of information exhaustion and of an increased human dependency on algorithmic filters to sort the world on our behalf.
PART ONE
AI IS IDEOLOGICAL
We ought to start with definitions. AI is many things. One of them is a set of beliefs about the world, which exists independently of the technology itself but structures the engineering of computer products claiming to be "AI." When I say AI is an ideological project, I mean that it is a way of imagining the world that becomes a shorthand for explaining the world.
In describing "AI ideology," I am not so interested in sorting through what is and is not "artificial intelligence." I am not sure it matters. In a recent essay, Ali Alkhatib writes:
I think we should shed the idea that AI is a technological artifact with political features and recognize it as a political artifact through and through. AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power.
I wonder if the ideology of AI even requires a connection to the amorphous definitions of AI as a technology. After all, generative AI is everywhere and nowhere, mostly a promise about LLMs and diffusion models. But these models have some overlap with the logic of so-called "good" machine learning systems – systems typically used in highly controlled environments, with unambiguously consistent rules, such as chemistry labs.
For the sake of this analysis, AI as an ideology is less about what the tech does and more about where the tech is doing it. I have less concern about an AI system that focuses on weather patterns. We need to be much more critical – down to abolition, if necessary – of using AI of any kind to manage social and cultural systems. This includes the social and cultural impacts of displaced labor. The stronger the interference, the more resistance becomes necessary.
Writing and creative labor are poor fits for automated predictive systems, as are justice, medical, and education systems. However, we might also find that people refer to basic parole and credit decisions as "AI," even though credit-scoring algorithms pre-date machine learning systems. They share the ideology of AI: a belief in statistical analysis and prediction as a source of authority. Automation, in this ideology, is always an improvement, and the optimization of a system's ability to assert authority is the shiny utopian promise of automation technologies.
Predictive policing, for example, extrapolates past behavior patterns, transforming analysis into prediction and reinforcing pre-existing biases, all while being framed as unbiased and neutral. But the world is neither a lab nor a coded simulation with reliable limits. One aspect of AI as an ideology, then, is the scientific sterilization of variety and unpredictability in the name of reliable, predictable behaviors. AI, for this reason, offers little and detracts much from the vibrancy of socio-cultural systems.
AI, when introduced into socio-cultural systems, cannot be trusted as if it were deployed into a stable, engineered environment.
To be clear, the ideology of AI isn't universal even among those who work with (or in) AI, but it structures the project's goals and defines the options for reaching those goals at the highest levels of power. Those who work with or in AI may even be challenging this ideology through their work; indeed, any hope of developing technology well has come from recognizing and resisting this ideology.
Recent growth in the consumer category of AI products has expanded this faith-based AI initiative to anyone with eight bucks and an X account. The ideological belief in "AI" does not require a person to be an engineer – or to understand AI at all. It may be more likely among managers and AI users than those who work on coding and designing these systems on a technical level.
Though almost literally "autocratic," this AI-generated worldview is neither left- nor right-wing, but a fusion of cyberlibertarianism and technocratic neoliberalism. Today it is a project with enough entry points to form its own agenda, priorities, myths and abstractions, rituals of inclusion and exclusion, and, crucially, an invitation to enjoy its fantasy.
People participate in AI ideology by evangelizing its products, circulating its outputs, and bolstering its advancements in order to identify with the group. AI as an ideology shapes a worldview, and these explanations then produce its priorities through shared myths and common abstractions, some of which have reignited interest in a suite of bankrupt historical ideologies, such as TESCREALism.
AI is a massive project requiring infrastructural investment, energy resources, and human talent. If the purpose of a system is what it does, then the purpose of AI is... slop.
PART TWO
SLOP INFRASTRUCTURE AESTHETICS
Much of our digital life now occurs within slop infrastructure. Slop merits deeper engagement because it is a by-product of the same information system it is now clogging up.
So it might be worth asking: Who does AI slop help?
AI slop is a symptom that emerges from AI infrastructure, and that infrastructure is a result of AI ideology. Slop is a problem that can persist through strategic negligence: there is no need to prioritize a problem if that problem is helpful to the people tasked with repair.
A recent essay on the infrastructure of critique highlights the importance of infrastructural frames:
In the mid- and late ’90s, anthropologists began turning their attention to infrastructures through the framework of inversion. “Inversion,” as originally coined by Geoffrey Bowker, describes a reorientation of background and foreground, so as to locate and diagram the political and ideological work undertaken by various systems of circulation.
Let's try. What might the image below tell us about the utility of AI slop in a broader ideological frame? We start by looking at the image and the context of its circulation, and then ask why algorithmic media rewards that circulation.
What do we see here? It's an emotionally charged image of a girl whose face is unsettling – "heartbreaking," with a wet puppy to boot – suggesting extreme vulnerability. It is distorted in a way that invites me to respond to the girl as if the visual distortion were physically painful, triggering an extended, almost embodied sympathetic response.
The image here is paired with the words "Our government has failed us again," a vague bit of rage bait that is meaningless in isolation. What has the government done here, or not done? There is no person in the image for the government to have failed. Of course, we are meant to associate this image with the victims of Hurricane Helene, which had just wreaked havoc on the American South.
But the image of vulnerability is all that matters: this girl is terrified because the government has failed us again. It pairs an emotional translation of an imagined scenario with the perceived truth of an image; that image is then used to point blame at a target, "the ever-failing government."
Vague and low on information, but highly effective at eliciting the emotion and anger that lead people to share it: AI slop is extremely viral, and so is outrage.
How Does AI Slop Mean?
Technically, AI slop is easy to define. Its social meaning is less so.
Is it bullshit? A paper by Alessandro Trevisan, Harry Giddens, Sarah Dillon, and Alan F. Blackwell poses the question, defining bullshit according to philosopher Harry Frankfurt:
A form of linguistic communication characterised by ‘a lack of connection to a concern with truth' – […] indifference to how things really are.
The authors write (emphasis mine):
There is not in fact an equivalence between a question under discussion in human inquiry, and a question prompt to an LLM-based chatbot. A question under discussion in human inquiry is part of mankind’s ‘cooperative project of incremental accumulation of true information with the aim of discovering how things are, or what the actual world is like’ (Stokke & Fallis 2017: 279). A question put to an LLM-based chatbot might seem to be doing the same thing, but in technical terms it is actually just a question about sequence prediction. ... the production of bullshit by LLM-based chatbots ... reverses the goal of inquiry, [which is] the discovery of what the world is like.
AI slop breaks down inquiry and investigation into the world as it is, replacing the critical landscape with text and image fragments that affirm the world as it is imagined. In essence, it circumvents any desire to understand the world because it offers us the immediate satisfaction of having a feeling about the world.
M. Beatrice Fazi puts it this way for text:
Large language models lack what linguistics calls a referent. Reference here is the relationship between a linguistic expression and what, in the world, that expression is supposed to represent.
The Aestheticization of Data
Like all AI images, AI slop is an aestheticization of data – or, perhaps, an aestheticization of the consensus, reflected in the data, of what images are meant to look like. Borrowing from Kant, Charles Blattberg writes that aestheticization requires a kind of disinterest in what we look at:
There is disinterested imagining, as when we fantasize, letting our imaginations “run free,” unrestricted by fact; disinterested presenting, as when we put on an entertaining show or spectacle; and disinterested playing, as when we participate in games that are fun.
AI slop meets all of these criteria. It reflects what Blattberg critiques about the aestheticization of politics: to become disinterested is to become passive. As AI slop aestheticizes our visual and political (and visual-political) culture, it too seems positioned to instill a kind of passivity.
The consumer of the AI image is disinterested in the image. There is no sense that the viewer can make an image – in fact, quite the opposite: AI slop can instill a sense that the viewer shouldn't bother to make a drawing (the AI can do it instantly), but also that the viewer shouldn't bother to investigate images. Because of the scale of AI slop and its powerful infiltration of once-"social" networks, it suggests there is no need to create culture because there is already far too much of it.
For artists, specifically commercial artists such as illustrators, the hatred for AI slop is deep – often bordering on fanatical, and for good reason. AI slop's disruptive presence on social media makes it seem impossible for artists to express themselves, drowned out in a sea of mediocre images built on their uncompensated labor through the unlicensed incorporation of their work into training datasets.
Manipulating the world of data creates, and requires, distance from those the data claims to represent. The uncritical belief in these abstractions of data as analogues of the real world supports a fantasy that data can explain, and even enact, far more than it realistically can.
AI slop is a passively consumed, empty signal, a symptom of "the age of noise," in which there is so much "truth" from so many positions that evaluating reality feels hopeless. AI slop is utterly disinterested, and so are our interactions with it. Media can be weaponized effectively when people are resigned and passive toward regulation and restriction. So goes the slop.
In parts 3 and 4, linked below, we look at the weaponization of AI slop in targeting young women, activists, and Taylor Swift.
Things I Did This Week
A Podcast!
I'm the guest this week on Alix Dunn's excellent Computer Says Maybe podcast. Recorded about three days after the election, the conversation covers the politics and myths of AI and touches on the Age of Noise. (I wrote "Resist the Coarse Filter" after our chat.)
A Playbook!
If you're an artist annoyed at how AI is always represented in the media as godlike robots typing at desks, and you want to create more critical visions, the AIxDesign community & Better Images of AI have a fix: a free downloadable guide for using commons-licensed archival images responsibly.
It's also a great resource for students thinking critically about AI and their art without using AI in their art. I'm a contributor, and it's a lot of fun.
I recently migrated away from Substack. The new archive for this newsletter can be found here.
If you're looking to migrate from X, or to join a new conversation space, I highly recommend Bluesky. If you sign up through this link, you can immediately follow a hand-assembled list of 150 experts thinking critically about AI. Hope to see you there!