Slop Infrastructures 5 & 6

An AI-generated Moo Deng Croissant shared on the Instagram page of a cafe that doesn't exist but was nonetheless named the top cafe on restaurant review sites in Austin. Via Grubstreet.

AI Populism and Slop as Symptom

đź’ˇ
This week was an experiment in multi-part publishing. Slop Infrastructures was shared in three parts; this is the third. You can read parts one and two here, and parts three and four here.

PART FIVE

SPARKLE WITHOUT LIFTING A FINGER

"Now, I must ask – who among you has brought their sense of wonder? Show of hands, please!"
- Willy McDuff, the lead in an AI-generated script for the maligned 'Willy's Chocolate Experience' in Glasgow.

In Glasgow in late February, fliers advertised "Willy's Chocolate Experience," an immersive knock-off of Willy Wonka's Chocolate Factory promising "stunning and intricately designed settings inspired by Roald Dahl's timeless tale" and "an array of delectable treats scattered throughout the experience." A DALL-E-generated graphic promised "Encherining Entertainment" and a "Pasadise of Sweet Teats."

After shelling out $45 for tickets, attendees discovered a near-empty warehouse with scattered candy-themed lollipop decorations from a party store. Actors were present, delivering lines from a script generated by ChatGPT. The script included instructions telling the audience how they were meant to respond.

The Willy Experience was AI Slop manifested into life. One character, The Unknown, was "an evil chocolate maker who lives in the walls" and sought to steal a Gobstopper – "a sweet so powerful, it can make any room sparkle without lifting a finger."

A clown stands in the center of a collage of candy, holding a giant lollipop surrounded by sweets. A banner promises "Encherining Entertainment" (misspelled) and there is a list offering nonsense words.
An image from marketing materials for "Willy's Chocolate Experience."

Of course, Willy's Experience was a scam. Beyond fleecing attendees, it forced three actors into the uncomfortable position of accommodating machine-generated nonsense with no breaks (and then never paid them for it); they were left to manage the angry crowd's demands for refunds. The promises made by AI had crumbled, with unpaid human labor left to clean up the mess.

Willy's Chocolate Experience may be a forewarning of what our generative AI future looks like. It is, to bring Kant back, disinterested, making sense only as a mangling of human-legible cultural references: "To be disinterested is to take pleasure 'in the mere representation of the object,' not in its existence." AI is not only disinterested, it is wholly incapable of interest.

Willy's was the work of an event planner ("House of Illuminati") who was also clearly disinterested in the proceedings. The goal was to produce the signifier that pointed to an event – the fliers, the script – hoping that the event would somehow emerge from the missing details.

Willy's Chocolate Experience emerged by predicting one word after the next in a sequence of words most likely to represent an experience. Then people manifested and experienced that prediction.

Willy’s Chocolate Experience was the future.

Pure Imagination

The appeal of AI is easy to understand. You need look no further than Sam Altman, the Willy McDuff of Generative AI, and turn off any critical capacity. You'll find his description of that future to be quite different — simultaneously comforting and secure: a world beyond our current crises, a world managed with the cool, confident assurance of a Silicon Valley Venture Capitalist.

Altman's information-age paradise is truly a world of wonders. Problems are solved not by people but by a rational, dispassionate machine: fair, all-knowing, and ultimately disinterested, trained on all of the world's data to become the central font of wisdom.

As Altman describes it in "The Intelligence Age," the world of leisure is just the beginning:

Our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need. We can imagine similar ideas for better healthcare, the ability to create any kind of software someone can imagine, and much more. With these new abilities, we can have shared prosperity to a degree that seems unimaginable today; in the future, everyone’s lives can be better than anyone’s life is now. 

Altman launched GPT-3 in 2020, a year of enormous shifts in how we worked and imagined presence in our communities. With users newly accustomed to talking to each other through screens, GPT-3 fit its text-expansion system into the shape of a chat window. The rest was history.

But Altman has managed OpenAI alongside another venture, Tools for Humanity, since 2019. Tools for Humanity was founded in the waning days of Silicon Valley's second crypto boom.

It offers a product called World ID, which uses blockchain technology to provide "proof of human" – by scanning your eyeballs into a proprietary device, as described on the company's website, and allowing Altman's company to verify that whatever you share is not AI.

"A custom device called the Orb is used to verify you are a unique human.  ... Simply put, World ID grants individuals an anonymous way to prove they are human online in a world where intelligence is no longer a discriminator between people and AI."

This is all to say that if the information ecosystem is falling apart because we cannot discern humans from the chatbots flooding our screens, Sam Altman is happy to scan your eyeball into an Orb.

The rest of us are left to trudge through the slop.

The AI Experience

Today's information field is more diverse than it was in 2016. It is more platform-individualized (but ideologically centralized) than ever. AI is bound to fuel that: it is a technology built on the excess information we are lost in, with the intent of producing more of it.

There is a reason for generative AI's preoccupation with media: social media content is the most readily available source of training data. As a result, highly rated images from websites such as Reddit, Facebook, and DeviantArt are prioritized for training. Once uploaded to the social media system by users like you and me, these images are recognized not only for their content but also for elements such as color and brightness.

Trained on viral content, the model produces content that checks all the boxes for amplification. AI slop is thus a reflection of what our social media filters see, reversed to create optimized versions. When these results are fed back to the algorithm, it recognizes them as likely to spur engagement and boosts them into more feeds (generating yet more engagement).

Owing once again to strategic negligence, AI slop isn't a priority for AI and social media companies because it isn't hurting them at all. In fact, Facebook's Content Monetization Program pays for highly engaging Facebook posts. That almost certainly means that Facebook is paying AI Slop creators, who flood feeds with this content to rack up engagement and get paid whether you love it or hate it – so long as you type that opinion into the text box.

AI slop is the aesthetic manifestation of algorithmically mediated culture. Its images are stylized by more than a decade of optimization algorithms learning what moves people to engage. With so much data, it's now possible to simply point at "images" through loosely structured clusters of pixels, and have the machine duplicate the role of the content we have been responding to since the dawn of social media metrics. It's an infrastructure scaffolded on the cynical belief that that's enough to make engagement thrive.

Depressingly, it's turned out to be true.

AI Generated Image of an Asian woman in a blue jumpsuit running in fear while holding hands with a shark that has no hands, while carrying the torso of a bearded man.

PART SIX

THE FOLK ART OF AI POPULISM


If you are an artist who has tried to share work on Facebook or Instagram in the past 15 years, the rules are simple, and dull: post as often as you can, and make content that aligns with specific stylistic criteria. Artists who have engaged with Instagram on these terms have burned out, because the rules are a complete misunderstanding of how art is made.

AI art, by virtue of its optimization to the invisible algorithmic selection criteria, and through its vast scale of production compared to hand-made art, skips the line.

This has given rise not only to scammers and misinformation, but to a more earnest strand of AI Slop: the work of untrained artists using a machine in a very straightforward way, benefiting from the baked-in algorithmic manipulation of social media that rewards them with attention for sharing it. For some who grew up in the algorithmic era, the Slop "aesthetic" must seem very natural—a kind of aesthetic familiarity or even nostalgia, similar, perhaps, to the feelings associated with 8-bit video games among some children of the 80s and 90s.

AI-generated images can be read as folk art for AI populism. The tool becomes a promise of power: manipulating visual culture with "pure imagination." But these makers typically focus on the images within the interface, and so they either don't see, or embrace, the consistent ideological context in which that work is made.

Stripping symbols of their relationship to reality to reorder them freely is at the heart of generative AI. It reflects a normalization of algorithmic scale, paired with media that is increasingly distributed as scattered fragments rather than unified wholes. People don't know movies but have seen the memes. AI Slop might just be the aesthetic reflection of the world that most teenage kids with smartphones know best. My generation seems to have a similar fascination with the aesthetics of the technology we grew up with, like the sounds of dial-up modems and Windows 95 scroll bars.

I keep returning to aesthetic detachment to understand the pleasure of manipulating symbols with AI. Art requires playfulness, and playfulness involves a lack of investment in the thing played with. We would not play absent-mindedly with a sacred object if we understood it was sacred. But history is filled with the sacred artifacts of a culture played with by those indifferent or hostile to that culture. Playing with those images inspires resistance.

On the other side of the AI image are its critics. A critical resistance to AI does not allow pictures to be viewed with "disinterest," in the same sense that some feel when looking at the images painted by, say, serial killers: the context of the image's production is simply too objectionable to be treated as a solely aesthetic experience. The distance is the problem.

The AI artist sees images full of whimsical serendipities and surreal gestures, while AI critics see only evidence of system failures. To the AI critic, referentiality is not an informed nod to a broader cultural context but evidence of the model's theft of intellectual property.


The image's mere existence is read as a claim of privileged distance from the political contestation of AI: issues such as scraping the artistic expression of others, or the pictures of exploited children in the training dataset.

So while one group defines and relates to the image at the level of the medium – its infrastructure, politics, sources, and impacts – the other group relates to it at the level of content – images, play, and reward. Of course, there is a spectrum between the two – and I've written plenty before about where I come down on all of this.

But I will propose simply that the frame of AI is heavier than any art it might contain. AI slop, in sum, denies the viewer the possibility of proper detachment because of the total detachment of its creator — the system and the person who wields it.

As a result, any aesthetic appreciation of these outputs is limited to those who are generally unconcerned with AI's political position in the world. One needn't be fully invested in AI ideology to be unconcerned. But for most critics, to participate in the production of AI slop is to participate in AI populism, and to perpetuate it through the ritual of making and circulating its imagery.

The Soul of Slop

Another pleasure of an aesthetic experience – at least for Kant – is the absence of a clearly defined category with which to connect and identify that experience. This means the experience briefly extends beyond our capacity to intellectualize it. If we accept that (to be clear, we've come a very long way from Kant), then we see one of the challenges of making AI art that transcends "slop."

The imagery produced by diffusion models is predominantly a reference to a category, because the model is built upon, and accessed by, naming categories: the prompt. The categories are, therefore, highly intellectual and nameable. They are also restructured references to, primarily, social media content, with the model making aesthetic choices based on what it has learned from the feedback loop of viral posts.

The result is often critiqued as soulless. AI-generated text and images suffer from the absence of the weight of the real. The AI slop of AI images and the AI slop of algorithmic decision-making have this in common: they can only point at data. They never base decisions on reality. Nonetheless, the decisions are rolled out into reality as if they did. An algorithmic decision about covering a cancer treatment and an algorithmic decision about the next pixel to appear are bound by the same absence of weight: both, of course, are defined as soulless. The emptiness is identical because the logic is identical.

As with any tool, creative and productive uses exist, and artists who know these tools can reframe their relationship to them. For this, an active rejection of the conditions that produce AI Slop may actually be helpful. There are a number of thoughtfully engaged and even critical users of generative AI who do significantly more than type words into windows. We don't need to pretend that this creative capacity absolves the technology of its many problems outlined above, but nor do we have to pretend that humans cannot find creative and novel uses for a technology.

Humans are creative, and we can be creative with anything we get our hands on. The question, really, is what our hands ought to touch.

Nonetheless, there is a specific strand of AI artist – the maker of "folk AI art" – whose output swarms most social media users as a nuisance. Part of being a member of the folk-AI-art community is adopting, or at least being open to, some form of AI ideology.

For example, I often see the argument that generative AI models mirror human creativity. This is defended by reducing the work of human art to a process: assembling piles of training data and reproducing it with slight variations. To believe that, you would also have to believe that culture is merely the endless recirculation and combination of logic, beliefs, and ideas that have previously existed: "Everything is a remix."

If everything is up for grabs, everything is transgressive, and nothing matters much at all. The entire landscape of our visual culture can become subject to a detached, aesthetic disinterest. Everything can be reduced to data to be manipulated. Once you believe that, you can easily come to believe wholesale in the ideological project of AI.

Truth (Social)

The information age has come to an end, and with it comes the end of any possible "objective," "neutral" definition of "truth."

The information age is over, and there is no point to nostalgia about its flawed systems. Past information regimes were centralized, whereas now they are endlessly diffused and prismatic. The information systems of the past have been undone by a reckoning that "truth" does not come from a single voice in a television studio. "Truth" is experienced individually, but we parse that experience through relationships with others.

Yesterday's media and communication systems could not support the full bandwidth of this collective experience. Unfortunately, AI marks an attempt to give us diffused and prismatic access to a range of subjective truths while moderating that access toward a center of its own definition.

Slop Aesthetics are a visualization of that algorithmically mediated center. It appears wildly diverse – sharks running with women! Taylor Swift's head is a hamburger! It's so random! – and yet it is not random but stochastic: infinite variety constrained by a narrow and consistent set of rules. This limits our ability to discover or invent new forms of culture. If everything were a remix, we would never have invented the sampler.

Images of AI Slop reflect an aesthetic of algorithmic power: the amplified feedback loop of social media and AI, of optimization and prediction. We have not trained our eyes to see this in the image yet, because we are so unaccustomed to the scale of what generative AI does – both within and beyond the images.

"Universalized" truths have always centered the powerful. We need systems that enable relationships and facilitate the ongoing project of defining and redefining the world in all of its unpredictable, uncontrollable noise. AI is not designed for this.

Gilles Deleuze warned that, in the transition to a control society, we must resist nostalgia for the old forms of power and not surrender to those who see it first: "There is no need to fear or hope, but only to look for new weapons."

YOU DID IT

IT'S OVER

Was 5,500 words not enough?

This is part of an ongoing collection of works addressing ideologies of AI, a collection I am curating solely by the shared sense of utter exhaustion I felt after writing each of them.


Things I Did This Week

A Podcast!

I'm the guest this week on Alix Dunn's excellent Computer Says Maybe podcast. Recorded about three days after the election, we discuss the politics and myths of AI and touch on the Age of Noise. (I wrote "Resist the Coarse Filter" after our chat).


A Playbook!

If you're an artist annoyed at how AI is always represented in the media as godlike robots typing at desks and want to create more critical visions, the AIxDesign community & Better Images of AI have a fix: a free downloadable guide for using commons-licensed archival images responsibly.

It's also a great resource for students thinking critically about AI and their art without using AI in their art. I'm a contributor, and it's a lot of fun.


I recently migrated away from Substack. The new archive for this newsletter can be found here.

If you're looking to migrate from X, or join a new conversation space, I highly recommend Bluesky. If you sign up through this link, you can immediately follow a list I've hand-assembled of 150 experts in critical thinking about AI. Hope to see you there!