Is the Media Studies Cabal in the Room With Us Right Now?

I read Benjamin Bratton's book, The Stack, in 2020 as a grad student in ANU's Applied Cybernetics program. I give it credit for directing my attention to the interaction between layers of digital and physical infrastructures. Trained as a social scientist in media studies, my mind must have been filling in the blanks of Bratton's work: what were the politics and the economics of this Stack?

Bratton is a philosopher. I am not. I'm interested in both the theory and the social impacts of technology. To me, any philosophy of technology that is severed from the material impacts of that technology is interesting but limited in utility: a thought exercise. Like all thought exercises, it can be useful or can distract from reality, depending on how skillfully we wield it. A great deal of suffering arises when we act on our solitary imaginations of the world rather than working toward clarity through dialogue. As such, it's the role of those who care about a topic to articulate the contours of the space and to listen to those who see it differently. AI is one such space. Debate, at the moment, is pretty vigorous.

In a recent piece, "Is European AI a Lost Cause? Not Necessarily," it's clear that 2025 Bratton is annoyed by intellectuals (he singles out media studies) who critique the politics, economics, and ideological assumptions that underpin the recent AI boom. Such critique, he warns, is a distraction that risks imperiling European AI sovereignty by entrenching the AI industry within a regulatory superstate that hinders its development.

This imperative to build, rather than critique, was also at the heart of a 2020 essay by venture capitalist Marc Andreessen, "It's Time to Build." Andreessen argued that we don't build stuff anymore because we are bogged down in bureaucracy and malaise, pointing to the traumatic systemic collapse of the COVID-19 response and the lack of technological infrastructure in place to anticipate and prevent it. He wrote:

"Part of the problem is clearly foresight, a failure of imagination. But the other part of the problem is what we didn't do in advance, and what we're failing to do now. And that is a failure of action, and specifically our widespread inability to build."

Bratton charges that this dynamic is still in play in European AI: Europe, he suggests, is still not building AI, but is instead content only to regulate it, forcing AI into ever-narrowing pathways with no room left for innovation. Any attempts to build these multiple layers of technical infrastructure on European terms, he says, are therefore "backfiring in real time." As such, when technical infrastructure is built in Europe, it relies overwhelmingly on US and Chinese companies.

All of this is bog-standard anti-regulatory critique that Silicon Valley has been endorsing for years: "regulation prevents innovation." But Bratton then makes a deeply weird pivot in terms of assigning blame for Europe's regulatory environment.

The Media Studies Cabal

The target of Bratton's critique of AI regulation is not the overreach of tech companies or their deeply unpopular CEOs. NVIDIA has more money and political power than nearly any other company on Earth, yet it isn't even mentioned in his examination of the conditions that gave rise to the EU's regulatory position.

For Bratton, what stands in the way of the mass acceleration of the AI industry in Europe is intellectuals and artists. He focuses on what he describes as a "critique industry," a kind of media studies cabal that shifts attention away from wealth production in Europe. The intellectuals and artists of this critique industry have seized the public's imagination with their scrutiny of AI, which permeates universities and the arts. Such scrutiny, he believes, stands in the way of progress.

"The precautionary delay was successfully narrated by a Critique Industry that monopolized both academia and public discourse. Oxygen and resources were monopolized by endless stakeholder working groups, debates about omnibus legislation and symposia about resistance — all incentivizing European talent to flee and American and Chinese platforms to fill the gaps."

You may think you are seeing an endless sea of interviews and social media posts about Sam Altman, Elon Musk, and Mark Zuckerberg. But Bratton must be tuned to a different channel, one committed to nonstop praise of, and media attention to, Kate Crawford, Emily Bender, and Abeba Birhane.

Bratton's "critique industry" paragraph boldly denies us any citations, but as a media studies professional who is embedded in this "Critique Industry" I find it incredible to hear that a handful of experimental short films, "symposia about resistance," or even Crawford & Joler's installations at the Venice Biennale, have been so wildly successful as to drive European computer programmers to abandon their homes and move to China and California.

Who Flees to American Tech?

Lucky for us, the dreaded social sciences have ways to test the claim. It's clear to anyone that a large share of the people building the US tech stack are immigrants to the US. The numbers aren't perfect, but the American Immigration Council reports that 26% of computer and math workers in the US were born abroad. The largest groups building the US tech stack, however, arrive from India and China. The next-largest contributors to tech immigration are Mexico, Vietnam, South Korea, and Canada; the first European country to appear on the list is Russia. Every other European country is clustered in with "other."

Looking at Silicon Valley alone, the combined contribution of migrants from the UK, Germany, and Ukraine still amounts to just about 2% of tech workers. Within Europe itself, there is indeed an exodus: some 3.4% of trained tech industry professionals leave for the USA. But the reason has less to do with the publication of Matteo Pasquinelli's "The Eye of the Master" and more to do with simple math: you will likely be paid a much higher salary, and taxed 7%-10% less, if you work in America instead of Europe. This dynamic extends not only to tech but to nearly every STEM field.

So it's unclear what, exactly, Bratton is blaming critical AI people for, but I think he overstates their role in the cultural diffusion of anti-tech sentiment. Yes, anti-AI sentiment is strong enough that even I am consistently harassed by anti-AI people online. But most of this online discourse is steered not by those targeted by Bratton; it comes from a deeply anxious response by illustrators and writers whose work was used to train AI models and who now perceive that their careers are at risk. European academic discourse did not create that dynamic: labor insecurity and the AI industry did.

Even so, is there any evidence that such anti-AI sentiment is inspiring programmers to reject the tech industry? Notably, AI is not all generative AI, and any hostility to diffusion models and LLMs doesn't seem to be slowing anything down. About 3.5 million Europeans work in the tech industry, a seven-fold increase (7x, not 7%) since Bratton published The Stack in 2015. There is a wealth of talent to choose from, and hiring is only on the rise.

Yes, tech companies may relocate to the States due to regulatory hurdles and other factors that make the States more attractive for scaling, such as linguistic and regulatory uniformity. Of course, an utterly free-market environment is an ideal place for tech companies to operate. But the free-market climate of the United States also creates and allows for the very things that Bratton dismisses as anxious hand-wringing by intellectuals and artists:

"Heckling from the front is a commentariat fixated on the social ills of AI, social media, data centers, and big technology in general."

How dare we heckle! Someone ought to stop us. Lucky for Bratton, JD Vance and Elon Musk are hellbent on dismantling the monopoly on power held by those convening "symposia about resistance."

I don't say all this to suggest media scholars have no impact on AI discourse: some do, and it is important that robust and skeptical debate occur about the systems we build. But Bratton's focus on those who critique AI, and his comparatively low engagement with those who build and deploy it, tell me he is less concerned with properly assessing the levers of power than with maintaining access to them.

Drama at the Piazza

In a bit of l'esprit de l'escalier, Bratton gets to the true heart of the issue by summarizing the positions of his three fellow panelists at this summer's Venice Architecture Biennale: Evgeny Morozov, Kate Crawford, and architect Marina Otero Verzier.

"How to build the Eurostack? Their answers are: hold out for the eventual return of an idealized state socialism, declare that AI is racist statistical sorcery, "resist," stop the construction of data centers and, of course, "communism."

This uncharitable understanding makes sense when you examine the informal intellectual cloister in which Bratton is currently immersed. Since 2022, Bratton has been enmeshed in a relatively removed world funded by the Berggruen Institute: its Venice location hosted his contribution to the Biennale, it funds his Antikythera programme, and it pays for the magazine in which his text was published. He has also become increasingly angry online and unwilling to engage in good-faith argument with those he disagrees with.

This monoculture clearly has some advantages: he gets to dismiss those within traditional academia as "conforming to orthodoxy," which is his description of the collective process of building knowledge on top of previous knowledge.

I have no idea if Bratton is cultivating a "bad boy of AI orthodoxy" image as a kind of online brand or if this is just his personality. In any case, such bad faith suggests he is isolated from the world he's attacking, and therefore reduced to hurling sarcasm at oversimplified phantoms. As a result, his positions are increasingly difficult to reconcile with an expanding body of documentation (not theories) about the ways that AI interacts with society and the environment.

Why Critique AI?

When AI risks are real, they rightfully hinder expansion or push us toward different techniques or arrangements of tech stacks. We determine which risks are real, and how to navigate those trade-offs, through conversation: "the discourse," with critics largely pushing for greater degrees of social care.

This back-and-forth between builders and critics makes neither side happy, but it ultimately leads to compromises: some good, some bad, and, I would argue, usually skewed toward "getting things built." But critics are under no obligation to drop the intensity of their critiques, because social harms demand loud voices.

For Bratton, though, it is European anti-nuclear activism that serves as a precedent, so we can engage with that. Bratton suggests that activists stood in the way of nuclear power, paving the way for the continued use of coal and the resulting environmental and health impacts. What he fails to mention is that Europe has 180 nuclear power plants, with eight more underway. The dreaded "discourse" did not stop the development of nuclear power; it instead led to greater caution in how it was regulated, controlled, and built. After Fukushima, stress testing improved and older plants were shut down at a faster pace. I don't think that was really all that bad.

For Bratton, this serves as a warning. The way to build AI is therefore either to stop discussing those risks at academic conferences, to cut considerations of risk out of policy deliberations because they make things too difficult, or both.

I can't tell if he knows he is contributing to the same discourse he wants us all to stop engaging with. After all, his text is also an attempt to persuade us that his priorities ought to take precedence over others'. But for him, the priority is building, and the obstacle is other priorities. All this talk about the environmental impacts of data centers or automated racialized surveillance is dismissed as the boring groupthink of the academic set. But specifically, which regulations does Bratton argue we keep, and which should we cut? Bad news: it's a trap. If he answers, he has entered into the dreaded "discourse."

To remain aloof, he focuses considerable ire on those who point out issues rather than those who create flawed systems. He not only misunderstands the point of critical AI discourse but also overstates its influence in the ecosystem of technology policy and tech development. He is punching down, but something makes him believe he is punching up.

Uncritical Antihumanism

In the essay, he lumps together several divergent views on AI into a pile of definitions stripped of context, presenting it as evidence that critiques of the AI industry are merely "verbalized nightmares of a 20th-century Humanism gasping for air." Bratton has frequently critiqued "critical humanism" as a real obstacle to building new, bigger tech companies to manage the automation of more of our planet, while gesturing only loosely at acknowledgements of capitalist excess.

I agree with him on this much: many critical humanists examine AI and find parallels to the discourse and logic that motivated the technologically aided atrocities of the 20th century. Technology, on its own, does not create such atrocities. But we also see dangerous echoes of the rhetoric of power and totalitarianism from 20th-century political systems in our present day, and so any conditions for a reunion of bad politics and bad tech are certainly worth asking questions about, even if some of those questions appear anxious.

Much of the current academic resistance to AI is based on fears stemming from past trajectories of power and technology. In attempting to make sense of an amorphous and constantly shifting term ("AI"), critical AI scholars aim to describe it as we see it and then defend that position. This is how knowledge is built. We don't just read The Stack and agree with it. We continue to question what the technology does, the claims made about it, and the histories embedded within it.

Among these concerns are concentrations of power and the abstraction of populations that come with scale, which Bratton is correct in attributing to the humanist response to "20th-century" horrors such as, but not at all limited to, the Holocaust. Critical Theory is, in part, a means of rejecting the rise of the conditions that led to the Holocaust, and as such, yes, many of us are a bit tetchy. You'll have to forgive us if we're still hanging on to that one.

On the surface, Bratton's accelerationist bullying stems from a pragmatic urge to get things done. But get what done? In his dismissal of Joler & Crawford's "Calculating Empires," Bratton reveals he is literally incapable of conjuring a critical impulse: "one discerns that it simply draws arrows from your phone to a copper mine and from a data center to the police." That tells me a great deal about Bratton's inability to connect the dots between his beloved tech stack and its history, and to draw parallels to the present. Instead, he wants his future, now: to pretend that the soil is fertile and that nothing sinister lurks beneath it.

"Just as classical computing is different from neural network-based computation," Bratton asserts, "the socio-technical systems to be built are distinct as well." This is a bold assertion despite his claiming otherwise. The adoption of new technological forms shifts, but does not erase, the interest in techno-social purposes to which it might be directed. There are contexts from which this technology has risen, a beneficiary of existing political structures and previously accumulated wealth. Whether modeled on neurons or electrical pulses or carved out of wood, technology reflects the desires of those capable of building it.

There is No Ahistoric Tech

Good, rigorous debates in critical AI actually bring us closer to dismantling the 20th-century residue that Bratton finds so troubling. The concerns of the "Critique Industry" are rooted not in the 20th century but in having learned from its mistakes.

Bratton can dismiss critics of AI as "nOt UnDeRsTaNdInG tHe TeChNoLoGy" even though many of those he lumps together are literally trained experts on the subject. But he does not seem to understand what critique is, or what its practical limits are. You don't need to know what a neural net is to understand how power might abuse it.

If building guardrails slows us down, and if the democratic deliberation of trade-offs slows us down, then so be it. We go slow. History created AI systems, and pretending we can build the world anew by ignoring that history is the most outdated idea of them all.