Some Things I Did in 2025
In this post I'm highlighting some of my writing, artworks and talks from 2025. If you've been following, I hope you find something worth revisiting. If not, I hope you find something worth checking out.
And if you find something of interest, please share it! One thing that's happened this year: after moving from Substack to my own host, new signups have come to a standstill. So if you want to let folks know where to sign up, here's a big sign-up button.
Tracking the AI Coup
In 2025 I started a Tech Policy Press fellowship, which gave me the time and resources to focus on the political and social ramifications of generative AI. The pieces that came out of that tracked a few threads, but none earned as much attention as my piece on the AI Coup.
I argued that the false promise of AI was being used as an excuse to justify dismissing and automating government workers, replacing them with machines that could not do the work. The point, I proposed, was not "doing the work" at all, but that even the failure of AI systems would further embed technical operators into governance, i.e., replacing the policy wonks with prompt engineers.
Along the way, the apparatus could be used to build a surveillance state, integrating government datasets into a single point of reference in order to generate "fishing expeditions," with false positives being a feature, not a bug. False positives gave the government leverage: it could fix the problems if it got something out of it, or else allow the problems to fester, a weaponization of administrative error.
The coup was partly built on a $100 billion financial investment masquerading as a political victory for Trump, who was able to take credit as soon as he took office for a project that was already underway and had no government support. What this infrastructural investment really created was a hype cycle, selling AI as a solution to all kinds of problems – initially as a means of growing investments, but eventually as a means of becoming so integrated into government and the economy that the AI industry could not be allowed to fail.
I was invited to address the topic at the Centre Pompidou in Paris this summer for an event called "Democracy and the AI Question," and I shared a draft of my prepared remarks.
I got to discuss all of this with Rebecca Williams, Emily Tavoulareas and Matthew Kirschenbaum in a podcast with Tech Policy Press.
AI Myths
At Tech Policy Press, I was also able to write more on the concept of AI myths, following up on a popular and still relevant piece I wrote in 2024, "Challenging the Myths of Generative AI." This year I unpacked the productivity myth and the Black Box myth.
I also spent a lot of time thinking about Artificial General Intelligence this year, thanks to my small contribution to a collective paper, "Stop Treating AGI as the North-Star Goal of AI Research," summarized for Tech Policy Press here (there's a podcast, too, in which lead author Borhane Blili-Hamelin, co-author Margaret Mitchell and I discuss the paper with Justin Hendrix).
Just this week, I have a fresh piece of writing ("The Domesday Generation") on the political position AGI represents, included in a new print volume, "Vectoral Agents: Power in the Age of Planetary Computation," out now from the Institute for Network Cultures in Amsterdam. You can order it or read it as a PDF here.
Art & Other Works
Signal to Noise
On top of all this, I opened an exhibition with Joel Stern and Emily Siddons at the National Communications Museum in Melbourne, which ran from April 12 to September 11. Here's a walkthrough and a conversation with the curators.
Human Movie
For the opening of "Signal to Noise" I created a 35-minute lecture-performance-film called Human Movie, which has gone on to win two awards and to be shown in a number of incredible places, including the Jeu de Paume in Paris as part of the "World Through AI" exhibition. There's an excellent review in Found Footage Magazine by Michael Betancourt:
Noise is the central image of Eryk Salvaggio’s Human Movie: Six Meditations on a Compression Algorithm. Its sources are varied but familiar: instructional films, found footage from commercials, the detritus and AI-generated slop forced into a dialogue with the noise that infuses every shot. Brief moments of sharp clarity only make noise more apparent as the force lurking just under a veneer of recognition and familiarity, with the video clearly broken by intertitles into six sections which structure the noise that never diminishes. In Human Movie, this noise is the emblem of independence, the freedom that the video suggests is essentially human and which all these AI processes seek to contain and manage by claiming the digital machine is a mirror of the human mind.
You can still arrange for a screening or performance of the film, by the way. (It's not currently online, but likely will be in 2026.)
Noisy Joints
Noisy Joints is a lovely mess of a research week compiled into a zine. The five-day experimental workshop was hosted by the Mercury Store in Brooklyn, NY, and designed by collaborators Camila Galaz and Isi Litke, with lead artists Emma Wiseman and Eryk Salvaggio. The "Critical AI Puppet Workshop" focused on interrogating AI critically through the lens of puppetry, recognizing the utility of both the metaphors and the embodied experiences that emerge from puppetry and puppeteering in a larger conversation about AI, physicality, and the human imagination.
Isi Litke's ongoing puppetry and automation class at the Brooklyn Institute for Social Research is up if you are looking to take the class in person or online, though it seems to sell out fast.
Another outcome of this workshop was a collection of short videos of hands, which were recorded and then used to spawn AI-generated replicas of themselves. The two hands are superimposed upon each other across a middle layer of digital noise: the human body touching its uncanny twin through the noising/denoising process inherent to how diffusion models are trained. I'm hoping to do something more with this soon.

Camila Galaz was also able to host a second workshop in Melbourne, which I had to miss due to a family crisis, and which we discussed with Emma Wiseman in a blog post.
Noisy Joints is something I'd love to keep working on, and maybe there'll be more of it in 2026 as well.
Sound
I ended my five-year-long critical-AI psychedelic-noise-pop project, The Organizing Committee, with a final release called "Keeping Secrets from the Numbers." You can find it on Bandcamp (and most streaming services that aren't Spotify) or read more about the project below.

This Newsletter
You can of course just look at the archive page if you want to see what else I've written this year, but I wanted to bring some writing forward.
On Large Language Models: I spent a good amount of time thinking through Large Language Models and Meaning this year, starting with re-reading Roland Barthes' "The Death of the Author" and noting that the LLM inherits the myth of an author while depending on the idea that the author is dead, an inherent contradiction. I also started a PhD this year, which I took as an invitation to reconnect with some of the technical underpinnings of LLMs, and to a phrase I think holds true: "a dog can 'go to church' but a dog cannot be Catholic. An LLM can have a conversation but cannot participate in the conversation." There was more on epistemic boundary-setting, and a critique of my own normative assumptions about language-as-thought, as someone who cannot do math and sees music as color. There's more to say, but I am writing it up for a paper.
On Being Human: After the death of David Lynch, I looked at what creativity is for some artists, rather than what creativity is as defined by art-making AI systems. After my father died suddenly in June, I began an ongoing series of "Human ______" posts, which expand on the ideas of Human Movie as short texts. Human Literacy was one of my most-read pieces this year, a short text to students on how to make sense of the world with and without AI.
This was followed up by Human Conversation, which tried to distinguish the nature and pleasures of conversation with people from the highly efficient and productivized vision of what we do with chatbots.
Social-Cultural Impacts of AI: Finally, two very recent pieces: the first on ChatGPT as a pandemic technology that keeps writing social isolation into its operations, and the second on interpassivity at the heart of popular forms of making AI art, music, and writing.
Thanks for reading! If any of this resonates, please share this post and encourage folks to sign up. Link is here again.

