Anatomy of an AI Coup

Not with a bang.
My latest for Tech Policy Press argues that the reported actions of Elon Musk and DOGE are not about efficiency at all, but about centralizing power in a small group of technical elites. A sample:
"While discussing an AI coup may seem conspiratorial or paranoid, it's banal. In contrast to Musk and his acolytes' ongoing claims of "existential risk," which envision AI taking over the world through brute force, an AI coup rises from collective decisions about how much power we hand to machines. It is political offloading, shifting the messy work of winning political debates to the false authority of machine analytics. It's a way of displacing the collective decision-making at the core of representative politics."
Things I've Been Up to This Week
Academic Pre-Pub:
"Stop Treating AGI as the North-Star Goal of AI Research."
How can we ensure that AI research goals serve scientific, engineering, and societal needs? What constitutes good science in AI research? Who gets to shape AI research goals? What makes a research goal legitimate or worthwhile? In this position paper, we argue that a widespread emphasis on AGI threatens to undermine the ability of researchers to provide well-motivated answers to these questions.
I'm part of the team behind this academic preprint, a collaboration among a supergroup of ethical and critical AI researchers. In it, we argue that setting our sights on "general intelligence," a term that remains vague and undefined, is a fundamentally flawed strategy. We explain six reasons why, and then propose an alternative.
It's an academic paper, but I wrote a very informal summary from my own perspective in a thread over at Bluesky.
Shout out to Borhane Blili-Hamelin, Christopher Graziul, Leif Hancox-Li, Hananel Hazan, El-Mahdi El-Mhamdi, Avijit Ghosh, Katherine Heller, Jacob Metcalf, Fabricio Murai, Andrew Smart, Todd Snider, Mariame Tighanimine, Talia Ringer, Margaret Mitchell and Shiri Dori-Hacohen.
Phantom Power Podcast
Really happy with this discussion of AI, art, music and noise on Mack Hagood's Phantom Power podcast on sonic culture. Video is above, but you can find the audio version wherever you get your podcasts!
AIxDesign Fest!


Excited for the upcoming AIxDesign Festival: On Slow AI, which will happen in real life in May in Amsterdam, where I'll be a speaker. Right now they're also raising funds to support a livestream of the event; if you want to help support it, you can score some swag, and the money will go toward your ticket!
Helpful Bluesky Things
If you're on Bluesky, I've got some things that may be interesting for you.
- A starter pack of Critical AI thinkers from all kinds of perspectives, which I've promoted here for a while. But there's an expanded pack, with even more Critical AI folks, which is well worth a look.
- A similar starter pack for Artists working in Technology.
- A custom feed that shows you good, no-hype tech journalism. Pin this, and you'll have a tab on your Bluesky account that gives you access to tech journalists, minus the product launches and video game news.
- Clicking on any of those links will ask you to set up an account if you haven't already.