Slop Infrastructures 3 & 4
Weaponized Slop
If the purpose of a system is what it does, then the purpose of AI infrastructure is the production of AI slop.
PART THREE
WEAPONIZED SLOP
"I accept!"
– Donald Trump, accepting a non-existent Taylor Swift endorsement
2024 began with the news that LAION-5B, the dataset used to train Stable Diffusion and its derivatives, contained Child Sexual Abuse Material (CSAM), as well as images from YouTube videos that families had posted of their kids. Much of the AI slop you see carries some trace of violent content, erased but potentially shaping its structure in ways that cannot be directly traced.
So when non-consensual deepfake pornography of Taylor Swift flooded X, it should have come as no surprise. It was a mark of increased participation in using LoRAs, thanks to startups like Civitai, which allows users to share custom models trained on top of larger "foundation" models with as few as 50 images. A quick visit to the site shows it is flooded with NSFW content, and one study found that its models for deepfake sexual content were three times more popular than its other models.
René Walter suggests that the fusion of social media with AI porn bots facilitates “the swarming of the male gaze.” Social media swarms, he worries, may be weaponized against not only celebrities but also activists and teenage girls.
This suspicion has proven true. An entire school in Pennsylvania was shut down after a nonconsensual deepfake pornography scandal involving teenagers, compounded by the administration's failure to respond, spiraled out of control. (Of course, that's just one case among many examples of tech-assisted gender-based violence around the world.)
This seems related to the work of LLM companies selling young women the experience of a virtual friend who encourages disordered eating, or chatbots encouraging kids to murder their parents. And who knows what kind of weaponized products have come, or will come, out of the underground marketplaces selling LLM services built on OpenAI products to create malware.
But crucially, AI slop is not a product of generative AI in isolation. Slop is a product of the infrastructure used to circulate it.
The Swarm Gaze
The swarm-gaze is a result of badly mediated (or deliberately weaponized) social media infrastructures, fused with an unregulated infrastructure for AI image generation that is powered by pornography – including deepfakes and CSAM, a point acknowledged in internal Slack messages leaked to 404 Media in December 2023. This is fused with a political infrastructure that now includes, quite literally, the same guy who funded Civitai – Marc Andreessen, who is slated to be part of Elon Musk's DOGE effort. Musk, of course, runs X. Donald Trump posted fake, AI-generated images of Taylor Swift endorsing him online.
In essence, the men who cultivated AI slop infrastructures – used to create Taylor Swift deepfakes, both pornographic and political – will now bring this vision of Slop Infrastructure to Washington. Swift has my sympathies, but there is a larger issue at play, which is the power to manipulate symbols – and to eliminate what links symbols to meaning – that is at the heart of AI slop.
It doesn't matter that it's fake, what matters is that they can do it.
Before the election, Swift was again in AI-slop-related news. An X post paired AI-generated images of Taylor Swift fans with a single real-life photograph of a young woman wearing a "Swifties for Trump" shirt. These images were superimposed with a fake headline suggesting that Swifties had flocked to Trump after ISIS targeted her concert for a terrorist attack. On Truth Social, Donald Trump reshared a selection of these images, including the one above.
By my count, the Taylor Swift post involved three levels of fakery.
The first is the image itself – though I'm not convinced Trump's post was meant to convince anyone that it was true. Instead, it seems to be an invitation to a cartoonish "imaginary world" in which Swift, a virtual character, endorsed Trump. This imaginary character – the icon of Swift – is entirely distinct from Swift herself. Through AI, Swift becomes "a floating signifier," an image with newly contested meanings that can be captured and incorporated to support and bolster any ideas a person might desire.
In putting Swift into this position, you don't say, "Swift endorsed us," which nobody believed. Instead, you encourage others to enjoy the control over what Taylor Swift signifies. AI-generated deepfake images offer the power to shape meaning in a world where people fear powerlessness and meaninglessness by inviting them to make others powerless and meaningless.
This is the second fakery: the myth that AI manipulation is fun, because it's just the Internet, or just celebrities, or harmless because it's "only pictures" – when deepfakes can and have been used to target and bully others, most commonly young women, politicians, and activists. (Update, 12/11/24: The Markup reports that 26 Congresswomen have been targets of deepfake pornography.)
The third fakery, controversially, is the assertion by opponents that this image was ever meant to convince anyone it was "real." Democrats fretted that this image was meant to convince voters that Swift had really endorsed Trump, associating it with a deepfake or misinformation campaign to deceive.
I think that's the wrong level of analysis. There is too much risk in assuming that people take media at face value. People don't consume media passively: they question it, shape it, fit it into their worldviews. Something more subtle is at play here. The post wasn't aimed at convincing anybody that it was true. It was aimed at imagining that it was true. It was a way to assert that the true symbol – a Taylor Swift endorsement – was meaningless.
AI-generated images of celebrities or disasters are not meant to suggest reality. They diminish the value of reality in constructing opinions or informing decisions. To post this image is, of course, a manipulation of Swift's image, a violation of her agency, and to be very clear, I'm talking about this specific "Uncle Sam" image, not the pornographic content with her in it. All of it points to the idea that if we share an illusion, that illusion matters in ways that are just as valid as any political reality. It is about controlling the symbols of the world, and it buys into a purely symbolic structure of power.
This Taylor Swift shit doesn't get anybody a better-paying job, and it doesn't help rescue efforts after a flood. It just says: we control the symbols. AI slop is the media form that inevitably emerges from a production technology built on symbols stripped of any connection to reality.
As Blattberg writes, "to be disinterested, according to Kant, is to take pleasure 'in the mere representation of the object,' not in its existence." He goes on –
Politics, to me, is a fundamentally practical – and so serious – activity. It consists of citizens or their representatives responding to conflicts of their interests by engaging in dialogue. If they respond with force instead of dialogue, then they are ultimately involved in war rather than politics. But in the best case, they will aim to reconcile their conflicts through conversation; in the second best, they will negotiate accommodations in good faith. Either way, they participate in these forms of dialogue not for its own sake but for their interests, and that is why politics shouldn’t be considered aesthetic. Of course, people may also find politics enjoyable, since both its practice and its practitioners have aesthetic qualities. But enjoying politics is not the point; if it is, then it’s been aestheticized.
While I don't know if I agree with Blattberg on Hannah Arendt per se, I think there's something to consider here regarding the aestheticization of politics, images and Big Data.
PART FOUR
SLOP POLITICS
"To be disinterested, according to Kant, is to take pleasure 'in the mere representation of the object,' not in its existence."
Similar principles of algorithmic disinterest are in play when AI alone determines whether impoverished Americans receive access to government services, subjecting them to flawed algorithms detached from any reference to their reality. ProPublica found that 50 algorithms determine who gets mental health coverage, while Nevada will begin using AI to decide who is eligible for unemployment benefits. Algorithms in the UK have been deployed despite known biases; algorithms for detecting welfare fraud were also discriminatory.
This is not "generative AI" in the traditional sense, but it is prone to the same errors of non-existent judgment, owing to over-reliance on flawed data and the automated disinterest of algorithmic bureaucracy. It transforms government services into an aesthetic exercise: a practice that prioritizes the soothing aesthetics of desired mathematical indicators. People are abstracted into symbols, and as long as that symbol rises or falls in the desired direction, officials can point to success.
Your health insurance eligibility decision is no different from AI Slop.
Looking ahead, I see evidence that people are revolting against slop infrastructure: the departure from X to independent social media platforms with customized, less deterministic algorithms, such as Bluesky.
I wonder how big this movement is and how strongly the tide will turn. Perhaps 2024 marks the peak of algorithmic overdetermination; folks like Ian Bogost have suggested that Bluesky signals a retreat from algorithmic centralization among users who want something else.
The Swarm
But Silicon Valley is ascendant, bringing the politics of algorithms and the swarm gaze beyond the virtual networks of power. They're treating the world with the level of seriousness it’s given in a typical pitch deck. It’s a vision of the world shaped by the products they’re selling rather than reflecting any real engagement.
Take Musk's "DOGE" project – the "Department of Government Efficiency," which will have a scoreboard for people online to vote on the "most insanely dumb spending of your tax dollars," and eliminate wasteful lines of spending in Government as defined by users of the platform. This is a fusion of social media, the ideology of AI, and power, all leveraged to aim the swarm gaze toward harassing civil servants Musk disagrees with.
It takes power over the algorithm – real control – to amplify your signal over that noise. That is what Musk aims to do here, using his own unique stronghold on platform power.
AI slop is information pollution, and intentional information pollution is a great way to disrupt channels for organizing, thinking, and connecting to other people. Control the filter, and you can aim it at whatever you want.
Information pollution is a great way to ensure that consensus, compromise, or even a basic understanding of one another never happens. Information exhaustion can be deliberate: bad actors create bots that insult or demean you for arbitrary reasons, post vague statements and explicit slurs to get you riled up, or share your statement with a deliberate misrepresentation that drives real users to insult you: basically, what you pay $8 a month for on X.
This is another form of AI slop with a long lineage in disinformation campaigns. It's discourse hacking. The strategy was well known among disinformation trackers back in 2016, but I found it was roundly dismissed, because much of it was deliberately used to drive divisions during the hotly contested Democratic primaries. The idea that these campaigns were organized to promote specific candidates got in the way of their actual purpose, which was to promote division – and a sense that talking to each other was useless, and that those we disagreed with could simply not be reasoned with.
AI slop is not about division per se, but about exhaustion. Slop campaigns exploit the algorithm's tendency to amplify certain types of content, because generative AI is most often trained on images that have already been amplified, in a sense "learning" how to game the system. Algorithms reward frequent posting, and AI can post slop on insomniac schedules. The simple, boring fact is that much of this is used to transition a group or an account toward promoting spam links, or to eventually seed disinformation campaigns. Accounts can also build up large audiences and then slowly introduce radical political content.
Algorithms amplify, and algorithms are amplified.
That is to say, AI Slop must be defined in the context of a swarm. Algorithms amplify, and algorithms are amplified. They sort data, and then their abstractions replace data. Point it at someone, and it's going to go off. People are already being harassed, and power gets accrued because this exercise of power feels good to those who wield it.
Racist, misogynistic, and other types of harassment, from verbal abuse up to death threats, may leverage a bot network for amplification. It could be targeted at one specific person, or a scattershot bot that simply directs insults and normalizes a culture of degradation. There's no way to tell, and from a personal perspective, it makes little difference: the anger has been stirred. Anyone who was on Twitter could see it happening, and the switch to X and the firing of the moderation team turned up the volume.
Social media logic is not in retreat; rather, an "invisible" algorithmic power has been unleashed. The top trolls (Musk is one) know how to steer it. Social media platforms have the ability to filter it; they choose not to. AI companies have the ability to curb it; they choose not to. Slop – in the sense of the flood of information and the calibration of how information is filtered – is power.
Meet The Technocrats
The list of tech leaders in the Trump transition suggests that Silicon Valley worldviews are about to permeate the logic of government. Some see this as a signal of the government's newfound focus on reducing expenditures rather than expanding public services. But it will also ensure that the beliefs that have shaped Silicon Valley for decades – the Californian Ideology – ring even louder today than they did 30 years ago.
The people who control the slop filter are now being invited to shape the entire federal government, to determine how that filter is applied to government and social services. This rise of the technocrats will have ramifications for tech regulation, sure. But it is also a form of politics that treats government as a social media interface, designed to amplify outrage, bully those who disagree, and make constructive dialogue impossible. It's a momusocracy: government by force of the troll.
If, as I fear, it functions by replacing vibrant democratic discourse with the amplification of cruelty, constraining our ability to talk to each other, then this country is in a dire position.
I will note again that the ideology of AI isn't limited to a political party. (Bill Clinton was notoriously technocratic, Obama was too, and Biden's administration arguably was not, at least in terms of regulation.) The ideology of AI operates as a deceptively apolitical belief that algorithms are a solution to politics: that they could make decisions in the best interest of the governed if only the government would get out of the way. But what to filter and what to include is an inherently political decision, and the filtering out of ideas that contradict or point to flaws in the state apparatus is a top priority of most autocratic regimes.
Trump is not the driver of this. He's following the guidance of those who knew they could ride into power by amplifying him. His reliance on believers – and his acquiescence to money and power – makes him enormously valuable to the Silicon Valley architects who have mastered using that money and power to amplify their ideology. We know that Trump will reject algorithmic transparency and eliminate nearly all forms of regulation. That makes this concentration of power particularly concerning, not just as a policy issue but for the stakes of democratic discourse.
To depend on algorithms to solve problems, while simultaneously denying the right to know, question, or shape how those algorithms work, is a strikingly anti-democratic recipe for collapse. It is especially dangerous when the "problem" is framed around efficiency and optimization, rather than assisting the humans who need help.
To recap: our first installment looked at AI slop as the product of algorithmic amplification fused with the scale of production made possible by generative AI. We looked at infrastructures for circulating slop and examined it as the result of a way of thinking about the world. Today we looked at it as a political tool, exploring the ways it can be weaponized (exemplified by slop's obsession with Taylor Swift).
In the final installment on Slop Infrastructures, we'll examine the rise of Slop as the folk art of AI Populism – and what makes it so appealing.
Part One of "Slop Infrastructures" is here.
Things I Did This Week
A Podcast!
I'm the guest this week on Alix Dunn's excellent Computer Says Maybe podcast. Recorded about three days after the election, we discuss the politics and myths of AI and touch on the Age of Noise. (I wrote "Resist the Coarse Filter" after our chat).
A Playbook!
If you're an artist annoyed at how AI is always represented in the media as godlike robots typing at desks and want to create more critical visions, the AIxDesign community & Better Images of AI have a fix. It's a free downloadable guide for using commons-licensed archival images responsibly.
I recently migrated away from Substack. The new archive for this newsletter can be found here.
If you're looking to migrate from X, or join a new conversation space, I highly recommend Bluesky. If you sign up through this link, you can immediately follow a list I've hand-assembled of 150 experts in critical thinking about AI. Hope to see you there!