Democracy & the AI Question

Remarks from an in-person debate, 'L’IA en question, questions à l’IA', a two-day event at the Centre Pompidou, Paris, on May 24/25. The event focused on debates about the role of AI in contemporary society.

A large building which appears covered in scaffolding, reflecting the unique "inside out" architecture of the French Contemporary Art Museum.
The Centre Pompidou, Paris.

I spoke on the panel on AI and democracy, taking the “against” side alongside Charleyne Biondi, Olivier Alexandre and Gaspard Koenig, in a discussion moderated by Stéphan-Eloïse Gras.

Unfortunately, the remarks as prepared were not delivered in full due to the time constraints of consecutive translation.


Q: We often hear that AI is helping us to make better informed decisions, that it is helpful to create policies and drive strategies. But is AI really helping democracy?

That depends on what we mean by AI. The best use of AI is advanced data analytics. In that case, we collect data carefully and tailor it to the questions we seek to answer. The data then provides another layer of insight, one perspective within a network of perspectives on the challenge we are addressing. This process is called data science; and to be clear, we have often done even this quite poorly.

But after multiple failed predictions of election outcomes, biased research results, and so on, we were beginning to learn how to frame data collection better: focusing it around specific inquiries, ensuring the data we collected matched our goals, testing the assumptions behind our questions, and matching our claims to the resulting outcomes. We began to accept that data-driven approaches had limits. This mindfulness of the limits of data as a window into the world provided a framework for better uses of data analytics, which sometimes used machine learning.

Today's use of AI is vastly different. Generative AI is not about using carefully curated, constrained datasets to analyze data. Consider, for example, Large Language Models. These models are, by definition, large. They are trained on data from billions of sources, connecting words based on frequency, sometimes calibrated to specific kinds of text, for example, to resemble the answer to a question.
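To make "connecting words based on frequency" concrete, here is a deliberately toy sketch of a next-word predictor built from bigram counts. Real LLMs use large neural networks rather than raw frequency tables, and the tiny corpus below is invented for illustration, but the basic move is the same: produce a statistically plausible continuation, with no notion of whether it is true.

```python
import random
from collections import Counter, defaultdict

# A deliberately tiny "training corpus"; real models ingest billions of documents.
corpus = "the model predicts the next word the model predicts plausible text".split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def next_word(word):
    """Pick a continuation weighted by how often it followed `word` in the corpus."""
    candidates = following[word]
    if not candidates:
        return None  # no known continuation for this word
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts, k=1)[0]

# Generate a short "plausible" continuation: statistically likely,
# but with no understanding of what the words mean and no notion of truth.
text = ["the"]
for _ in range(6):
    nxt = next_word(text[-1])
    if nxt is None:
        break
    text.append(nxt)
print(" ".join(text))
```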

But they do not contain any insight into the nature of those words, nor do they comprehend the world this language describes. The industry designs them to produce plausible language. Plausible language means that a human reader can look at it and decide that a thinking person has written it. But plausibility does not mean truth or accuracy. I can tell you many things that sound like convincing language – I could tell you statistics and give you citations. But they may be fake. That is the difference between plausibility and even simple, approximate accuracy.

If we choose to use large language models for decision-making at the scale required of politics, we are automating bad data science. We are not building tools to answer specific questions. We are pretending that specific questions can be answered based solely on how plausibly the model crafts a sentence.

Furthermore, there are many ways to control and manipulate these AI outputs. For example, there is the "system prompt." This prompt is a set of instructions to the program that steers the language of its responses. A designer, such as a tech company, can use this system prompt to bend the model's output toward any decision it wants. Consider, for example, that OpenAI's early image generation model, DALL-E, would threaten to block users for requesting images of people of the same gender kissing one another. That political decision was invisible to most people: we can't see what controls they are imposing.
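To make the idea of a system prompt concrete, here is a minimal sketch of how hidden instructions are typically prepended to a user's message before it reaches a model. The prompt text and function below are hypothetical; they stand in for whatever instructions a provider actually injects, which is exactly the point: the user never sees them.

```python
# Illustrative sketch of how a "system prompt" steers a chat model.
# The prompt and function are hypothetical, not any vendor's real configuration.

HIDDEN_SYSTEM_PROMPT = (
    "You are a helpful assistant. Never discuss topic X. "
    "When asked about policy Y, always frame it favorably."
)

def build_request(user_message):
    """Assemble the message list actually sent to the model.

    The user only ever sees their own message; the system instruction that
    shapes (or censors) the answer is invisible to them.
    """
    return [
        {"role": "system", "content": HIDDEN_SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]

print(build_request("What do you think of policy Y?"))
```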

As such, Large Language Models pose a real danger to political deliberation in that they are offered to us as if they are "objective" or even "intelligent." They reflect biases, of course. However, tech companies can also invisibly manipulate the generation of this text. Such manipulation is, of course, present in all media. However, rarely has a regime insisted, for example, that network television should replace government.

Today, states are turning to a corporate-owned media system called AI and deploying it in ways that undermine or replace political decision-making about laws and policy. In that light, this is not helpful at all. It is a fundamentally corrosive concentration of power into the hands of technicians.

Inside the Centre Pompidou

Q: How is the current vision and organization of AI affecting the US administration? Is it reinforcing or weakening democracy?

Elon Musk's DOGE operation has gained access to agencies across the federal government by seizing on a very limited authority to build better websites. The Obama administration originally created a small team, over a decade ago, to help government agencies streamline online interfaces and build better digital tools.

Under this authority for the civil service's "digital transformation," DOGE has grabbed the power to fire hundreds of thousands of government workers and close entire agencies, such as USAID. Many of these firings and layoffs have been carried out illegally, and much of this is being challenged in the courts.

DOGE has replaced these employees with Large Language Models in some cases, and strives to do so completely. That means Silicon Valley products are now in a position of political decision-making, seizing the authority of Congress, which is elected to represent the people in deciding how tax money is distributed. It is inaccurate even to say that "the AI" is firing people or blocking funds. AI is a screen used to distance the people with real power from accountability.

We cannot forget that the people who built the AI are telling the AI how to act and how to frame its responses. We have seen, for example, that Elon Musk's AI, Grok, was responding to many queries on the social media network X with unrelated arguments about white genocide. Someone might ask about a baseball player and get a response about how that baseball player isn't talking enough about white genocide. This happened because of how the system prompt was calibrated: a decision made to change how the model responds.

Most likely Elon Musk, a white South African, decided to advocate for white South Africans using the "white genocide" talking point popular with the right. Emphasizing these responses was a political decision imposed on the model, and it should not surprise us. Every LLM has system prompts, and every company decides what they say. We could just as easily make a model that replies with subtle persuasion toward casting a certain vote, or firing a certain kind of employee, and it would be difficult for anyone to know.

In DOGE, this AI has been used to evaluate the emails that federal workers must write every week justifying their jobs. The AI is fed these messages, and then it decides: even if it hallucinates, or acts according to Elon Musk's system prompt, the worker can be fired. Or federal grants have been halted or canceled using a list of terms that includes words such as "bias," "communities," "under-represented," and even the word "women." This barely requires AI at all. But AI becomes a tool of obfuscation, and its abuses can be too easily hidden: AI becomes the excuse, which is its main function today in the US government. It is a tool for making excuses, a tool for plausible deniability.
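As a sketch of how little intelligence this kind of term-based screening requires, here is a minimal keyword filter over a grant description. The flagged terms are taken from the examples above; the actual list and review pipeline are not public, and everything here is purely illustrative.

```python
# Minimal sketch of keyword-based flagging of grant descriptions.
# The flagged terms come from the examples above; the real list and the real
# process are not public.

FLAGGED_TERMS = {"bias", "communities", "under-represented", "women"}

def flag_grant(description):
    """Return the flagged terms that appear in a grant description."""
    words = {w.strip('.,;:"()').lower() for w in description.split()}
    return sorted(words & FLAGGED_TERMS)

grant = "A study of heart disease outcomes in rural communities, including women."
hits = flag_grant(grant)
if hits:
    print(f"Grant flagged for review: matched terms {hits}")
```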

So, it has been very good for consolidating power into the technical class of Silicon Valley elites, and protecting conservative politicians from the consequences of far-right policy decisions. It has done very little toward building a better democracy.

Q: How would you respond to those who claim we should replicate DOGE in other countries?

I note that the abuse of power in these systems is rampant. The engineers, the technical class, become a kind of elite, as policymakers and the general public can easily be misled about how these systems work and how they arrive at decisions. The changes put in place without our complete understanding will be remarkably challenging to undo. Even when AI systems fail, the AI industry can propose itself as a solution to that failure, such as charging the government for more engineers or demanding even more access to citizens' private data. So, even a system failure can be used by the tech industry to justify further investment.

Furthermore, the entire idea at the heart of AI in government, this promise of speeding government up in the name of efficiency, is misguided, depending on what we aim to make "more efficient." Democracy is, as the theorist Chantal Mouffe notes, not about agreement but about the process of disagreement and the often uneasy accommodation of difference. Democracy is a constant process of negotiation between competing interests. And so it becomes hazardous to speak of the "efficiency" of democracy in the way Silicon Valley describes efficiency today.

This points further to the myth of Artificial General Intelligence, or so-called "superintelligence," which the industry assures us will make just, fair, or wise decisions. But fair to whom, and wise from what perspective, toward what goal? We will never arrive at universal agreement about who holds power or how to use it; universal agreement is not possible in a democracy. The end of debate, achieved through our deference to an automated decision-maker claiming to know what is best for all of us? That is the end of politics and the end of democracy.

I often hear that different forms of AI are possible. But if they were possible in this world, we would have them. Right now, the form of AI our world has made is based on data extraction, surveillance, and the concentration of power – because that is what our current system incentivizes. If we first remake the world, we may change the way we make AI. Otherwise, we risk changing ourselves to fit the way AI makes the world.


Signal to Noise: Curators Conversation

The National Communication Museum in Melbourne has shared the video of our curator's conversation about Signal to Noise, which also functions as a walk-through of the exhibition. There's a bunch of related material online at the website, as well – hope you enjoy it! It's a great tour if you can't make it to Melbourne (or if you can and want to do a deeper dive).