A Fork in the Road
AI is an excuse that allows those with power to operate at a distance from those whom their power touches.
Type the word “Resign” into the body of this reply email. Hit “Send”.
So ended an automated email sent to all 2.3 million government employees at the end of January, part of the Trump Administration's effort to reduce the cost of government services by ending them altogether. If any of us wondered how government might maintain even the illusion of functionality afterward, a clue came between the lines of two other notable announcements.
The first was that OpenAI was launching ChatGPT Gov, a version of ChatGPT "designed to streamline government agencies' access to OpenAI's frontier models." ChatGPT Gov is hosted by government agencies, which can customize it to their own security requirements. "Additionally," they note, "we believe this infrastructure will expedite internal authorization of OpenAI's tools for the handling of non-public sensitive data."
This came just two weeks after OpenAI removed a reference to "politically unbiased" output from a policy document. Politically unbiased AI is impossible; the real problem for the Altman gang is that their models have been attacked as "woke" and untruthful. Elon Musk, for one, has accused these models of reflecting "woke, nihilistic" ideals.
Speaking of Elon Musk: he announced that Optimus, Tesla's humanoid robot, will be sold in 2026 for $20,000, alongside the oft-promised fully autonomous vehicles. If this seems unlikely, Musk notes, that is because "human intuition is linear" while "what we're seeing is exponential," which sort of makes sense if you're stoned. Optimus, he suggests, will be a $10 trillion business for Tesla in the coming years, eventually arriving at "100 million being manufactured per year" – a pace at which the robots would eventually outnumber human beings, which I assume he thinks is desirable.
If you recall, Optimus was initially launched by bringing out a dancer in a body suit who pretended to be a robot. In October, Optimus was still being trained to navigate "non-flat terrain," and all of the footage in a promotional video was sped up – at least 2x, and often up to 10x, at which point the robot moved at the speed of a cautious human.
AI is an excuse that allows those with power to operate at a distance from those whom their power touches.
That is to say, none of this is ready to launch. But technological capacity is not the point. The point is to justify decisions about personnel by treating the workforce as if it were replaceable, and to use the technology as a means of pushing employees harder and demanding cheaper labor. It is a strategy rooted in delusion, executed through confusion and fear.
AI is an Excuse
To that end, Musk has now seized access to the Treasury Department's payment system, which gives him the direct ability to stop payments that violate Trump's executive order on "woke," circumventing Congressional oversight. Essentially, this is a vision of government as an algorithmic process, a set of codes that can be deleted or consolidated – and one designed to eradicate any semblance of addressing social issues in science.
Already, the National Science Foundation has frozen all payments to researchers receiving grants, and a freeze has been placed on all scientific research published by the CDC – because diversity, equity and inclusion are apparently such an incredibly urgent threat to the US that all scientific research must be stopped immediately.
It's about to get worse. From the Washington Post:
The sensitive systems, run by the Bureau of the Fiscal Service, control the flow of more than $6 trillion annually. Tens of millions of people across the country rely on the systems. They are responsible for paying Social Security and Medicare benefits, salaries for federal personnel, payments to government contractors and grant recipients, and tax refunds, among tens of thousands of other functions.
Typically, only a small group of career employees control the payment systems, and former officials have said it is extremely unusual for anyone connected to political appointees to access them.
This is all deeply connected to the idea of AI as an excuse, one that allows those with power to operate at a distance from those whom their power touches. As Musk rambles about the "inflection point" on the horizon in which "the future will be different from the past," OpenAI is pitching generative AI to a workforce under siege. Both carry the logic of computation toward what I consider the truly "nihilistic" philosophy of Musk's ilk: that we can automate human decision making, cementing obedience to authority even when that authority is deployed immorally or, as of today, unconstitutionally.
How Humanity Disappears
The most substantial threat of Artificial Intelligence has nothing to do with any conclusions it might independently arrive at and execute. AI cannot do this. Rather, the "doomsday trap" of artificial intelligence is the dehumanization that this technology makes possible. AI is not only a technology; it is also a new excuse to try bad ideas again – a spectacle designed as a pretext to resist empathy and create emotional distance from consequences.
AI can facilitate work that feeling people would otherwise reject.
The ideology of AI is foremost one of rendering people, and labor (and thinking, and creativity), into objects to be tallied and sorted. By creating distance between ourselves and the decisions we make about the unique value of a human being, or the unique circumstances that constrain them, we move closer to indifference to human suffering and need. When we make such decisions at a distance, our conscience preserves a claim to innocence, spared the guilt induced by the sight of consequences.
When tapped to provide critical services, generative AI isn't used despite its inability to empathize or respond to unique individual circumstances. It is used because of that inability: to facilitate work that feeling people would otherwise reject. It ensures that any dehumanizing policy will be implemented uniformly, without a human being able to intervene on behalf of decency. If we wanted it otherwise, it would not be placed into these roles.
An Automated Autocrat
I don't worry that AI will destroy humanity so much as I fear it will destroy the humanity within us.
It's clear to me that Musk and DOGE will embrace a shift in how AI outputs are used. They will move from prescriptive – recommending a course of action based on an analysis of data, such as flagging an insurance claim for review – to autocratic: a singular source of authority, where the text the model generates becomes directly actionable as a decision, without human intervention, such as rejecting an insurance claim outright.
Generative AI is not even remotely appropriate for the first case. Using it for the second case is only rational if you want those failures embedded into the system.
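To make the distinction concrete, here is a minimal sketch in Python. Everything in it is hypothetical – the function names, the "suspicious" label, the dollar threshold – but it shows how small the difference is in the code, and how large it is in the world:

```python
# A deliberately simple, hypothetical claims pipeline. None of these
# functions correspond to any real system; they exist only to show
# where a human sits in the loop.

def model_assessment(claim: dict) -> str:
    # Stand-in for a model's output: a generated label, nothing more.
    return "suspicious" if claim["amount"] > 10_000 else "routine"

def prescriptive(claim: dict) -> str:
    # Prescriptive: the model's text is a recommendation.
    # A person still decides, and can weigh circumstances the model can't.
    if model_assessment(claim) == "suspicious":
        return "queued for human review"
    return "approved"

def autocratic(claim: dict) -> str:
    # Autocratic: the model's text *is* the decision.
    # No one is positioned to intervene on behalf of decency.
    if model_assessment(claim) == "suspicious":
        return "denied"
    return "approved"

claim = {"claimant": "Jane Doe", "amount": 12_000}
print(prescriptive(claim))  # queued for human review
print(autocratic(claim))    # denied
```

The two functions are nearly identical; the only thing that changes is what the generated text is permitted to do.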
Text emerging from a blinking cursor can never justify human cruelty.
When AI is placed in autocratic roles, it generates commands, or orders – and it also generates the pretext through which people obey those orders. It creates an illusion of distance, as if those who choose to use it can pass off their agency and accountability for the inhumane enforcement of unjust rules. Like a trio of executioners with just one real bullet, it is easy to comfort oneself with the idea that any one executioner might be blameless. Today, the bullet is a blinking cursor.
But text emerging from a blinking cursor is never an excuse for human cruelty, nor does it disperse accountability. It's merely a pretext designed to preserve a fragile conscience.
Reaching for Digital Distance
Distance has been built into the history of computation from the outset. In 1936, Alan Turing proposed a Universal Machine based on his observations of the human "computers" engaged in solving arithmetic problems. In this work, Turing assumed a certain mechanized, rote action of human thought that could be automated, if only one could reduce it to a scanning mechanism stepping through a sequence of symbols, one at a time.
Later, McCulloch and Pitts would apply the Turing Machine's logic to modeling the human brain, viewing the neuron as a mathematical receptor that reaches a counting threshold before firing. This model of the neuron went on to become the basis of neural networks, and decades later, this path led to today's artificial intelligence. But there is a central confusion here: machines built on a model of the mind – a mathematical approximation of thought – became mistaken for the mind itself.
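That original model is strikingly simple. Here is a rough sketch in Python of a McCulloch-Pitts-style threshold neuron – the inputs and thresholds are illustrative, not drawn from their paper:

```python
def threshold_neuron(inputs: list[int], threshold: int) -> int:
    # A McCulloch-Pitts-style neuron: count the active binary inputs
    # and fire (output 1) only if the count reaches the threshold.
    return 1 if sum(inputs) >= threshold else 0

# With two inputs and a threshold of 2, the neuron acts like an AND gate:
assert threshold_neuron([1, 1], threshold=2) == 1
assert threshold_neuron([1, 0], threshold=2) == 0
```

A counting machine, in other words – useful as an approximation, but a long way from a mind.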
ChatGPT is not a decision-making technology, it is a decision-removing technology. It creates text to fill the space where a decision is needed.
All of this is anchored in the belief that algorithms can sort the world in ways akin to human decision-making. But what each of these technological advances made possible was not merely complex mathematical tabulation. All technology creates a system of effects, rippling into the social world. Often, what technology makes possible is distance.
ChatGPT and other Large Language Models are not decision-making technologies; they are decision-removing technologies. They generate text, but most powerfully, they generate pretext: text to fill the space where a decision is needed.
The true power of artificial intelligence is not that it automates decision making with any greater discernment than a human case worker. It is that the powerful can cultivate an illusion of distance from the system. Government workers will be replaced by automated systems, and the workers who remain will be beholden to those systems. Researchers have warned, over and over again, that these systems are unsuitable for the task.
If we are about to diminish government services to interactions with ChatGPT, and treat American labor as if it is on the verge of being replaced by robots that will never exist, then we have become hopelessly disoriented about what a nation, or a government, is meant to be.
The Case for Diversity in AI
The most crucial work in AI right now fits into the category of Diversity, Equity, Inclusion and Accessibility. Machinic bias is well documented: wherever these machines are deployed, social biases enter into the technical operation of the system at the most basic level. Facial recognition systems misrecognize Black faces, and the faces of women of color most of all.
Recognizing and correcting issues of computational bias (and the systemic bias that the computers reflect) can solve real, documented problems, wherein AI systems more frequently block women and people of color from obtaining home loans, jobs, and education. Why wouldn't we want to understand and solve those problems? Even if we sought a "color-blind" meritocracy, we currently have a set of machines that are nowhere near "color-blind" or "merit-based."
Research into Diversity, Equity, Inclusion and Accessibility in AI systems has been ordered to a halt this month by the Trump administration. By banning this research while elevating the role of AI-driven services and the fantasists behind them, it feels like we are abandoning the work of "a government committed to serving every person with equal dignity and respect" in favor of deference to a machine that has been proven to do the opposite.
Efforts to recruit, train and hire a diverse workforce benefit everybody. Right now, in the AI field, 62% of computer science PhDs are white and 83% of tenure-track AI faculty are male. In a world where we know that race and gender have no impact on one's intelligence, this seems like an unlikely outcome. Something beyond pure chance is in play.
Much of that is the recruitment pipeline, and inviting participation from more groups is ultimately the goal of diversity programs. Ask Trump and Musk, and they'll tell you these programs hand out diplomas to random students, rather than trying to find ways to get talented people to apply.
DEIA-adjacent research does not diminish dignity or respect for anyone. It is literally how we fix the problem.
In AI, DEIA-adjacent research does not diminish dignity or respect for anyone. It is literally how we fix the problem. Diversity is a good thing. Equity is a good thing. Inclusion is a good thing. Accessibility is a good thing. The methods and systems we design to reach these goals can be contested, and in a democracy, they should be. The processes we put in place to reach these goals matter.
But these are standard, democratic values. They are not things to be afraid of or to demonize. In working with AI, these ideas must be front and center at all times. The tendency for data to drift into generalizations – and then to operationalize those generalizations – is incredibly disruptive to real lives. We need to see things from as many angles as possible, not fewer.
It is self-evident that generative AI is not ready to do work that demands social accountability of any kind, especially unsupervised, autocratic work. Deploying it anyway is not an effort to solve a problem. It's an effort to create digital distance from the consequences of these systems, and from the accountability of those who deploy them with full knowledge that they are reaching beyond the capacity of text prediction.
But Silicon Valley thinkers have never been realistic about their technologies. Now, with $6 trillion in government funds at their fingertips, we're all about to discover the limits of their delusions.
Phantom Power Podcast
Really happy with this discussion of AI, art, music and noise on Mack Hagood's Phantom Power podcast on sonic culture. The video is above, and you can find the audio version wherever you get your podcasts!
AIxDesign Fest!
Excited for the upcoming AIxDesign Festival: On Slow AI, happening in real life this May in Amsterdam! I'll be speaking at the event. Right now they're also raising funds to support a livestream – if you want to help support it, you can score some swag, and the money will go toward your ticket!
Fun Bluesky Things
If you're on Bluesky, I've got some things that may be interesting for you.
- A starter pack of Critical AI thinkers from all kinds of perspectives, which I've promoted here for a while. But there's an expanded pack, with even more Critical AI folks, which is well worth a look.
- A similar starter pack for Artists working in Technology.
- A custom feed that shows you good, no-hype tech journalism. Pin this, and you'll have a tab on your Bluesky account that gives you access to tech journalists - minus the product launches and video game news.
- Clicking on any of those links will ask you to set up an account if you haven't already.
A Note to Subscribers
I recently cleared out a large number of accounts that subscribed to this email list but never read it, or had bouncing emails. It was a bit of a blow to the old ego. If you're a fan of the newsletter, please recommend it to someone who might dig it!
Here's a link to the archive, where people can subscribe. You can also sign up below!