The Decision Not to Decide
Interfaces, and design of all kinds, exercise decision-making. Decision-making is a form of power. What, then, is generative AI for design?

This is an expanded text based on remarks previously delivered to the Re Shape AI Forum, held at the Hochschule für Gestaltung in Schwäbisch Gmünd, Germany, in May 2025, and on post-conference reflections.
What, to a designer, is AI?
Like any conversation about AI, the question requires some translation: as always, we need to ask what artificial intelligence means here. We also need to ask what we mean by design. There are many forms of design, from systems design to product design to print design, even the ways architects design buildings. The dictionary may be helpful: a link between these practices is design as a verb, "to conceive and plan out in the mind."
The role of design, therefore, almost always starts with the organization and presentation of information. Maps, charts, and graphic design might be the end product, or a step on the way to something beyond them. A good building tells visitors how to move through it. Good software helps users make sense of how to use it. In all cases, at this most abstract level, design means to organize and structure concepts.
AI, then, has to be understood by designers as a tool for removing various degrees of decision-making from the conceptualization of specific tasks and applications. When we hear that AI "saves time," what that means is that it gives designers the option not to think about parts of the process. It may generate a plausible text description to fill in a page, mock-ups of storyboards, or snippets of code for a specific function.
When we hear that AI "saves time," it means designers have the option not to think about parts of the process.
In animation and design schools, I hear about students using Gen AI to create storyboards. Drawing a storyboard by hand forces decisions that require thinking through things such as which line to draw next – and how that line connects to the character or the world they're building.
Of course, this is an ideal. Students, in particular, never really do this, at least not at the beginning, and let's be frank: neither do busy design professionals. Students focus on the abstract stuff and the high-level movements from frame to frame. Part of teaching is to develop a level of intentionality, to ask: "But why did this happen?" or "Tell me about the decision you made here."
When we think about everything as designed, we practice thinking about information hierarchies. That includes everything from fiction and essays to architecture to posters for a lost cat. Design is not only about outlines, but about the orchestration of details.
Generative AI takes these conversations away: "Oh, I don't know; the AI just put that there," and so on. It's as if the decision wasn't significant enough to them and could be made later. This is sometimes true! However, making those decisions in the planning phase helps ensure they get made at all. The earlier details are considered, the more the total design can make sense of and integrate them. An intention articulated before production is a decision made deliberately, rather than one forced into being by whatever the tool produces.
I think students often don't realize that there is a decision to be made at all. This is what worries me the most about the use of AI in this thing I am calling design, which is, I argue, the organization of structures of meaning in ways that convey that meaning externally.
If the work of design is conveying meaning, then details allow a greater density of information to be expressed simultaneously, more efficiently, and even unconsciously to the audience. A designer who fails to pack meaning into tight spaces leaves the signal empty. In that case, arbitrary information creeps in, diluting the signal of that intention.
Decisions Are a Form of Power
Designers who have been doing this for a while understand the exercise of power involved in these decisions. Decisions themselves are a form of power. Designers exercise power over the user's access to information through different mechanisms called affordances. These affordances exert control over access to possibility by limiting what users imagine to be possible, in software and in the social imagination alike.
Limiting the imagination might seem a strange reversal of how people typically think of design. Consider graphic design. The designer needs to limit the imagination of what the reader encountering the graphic might assume its message to be. So, they prioritize the most essential information in ways that restrict the user's frames of reference. Put a big picture of a cat on there, and the user is less likely to imagine this is a poster about the solar system. Put the word "missing" in a large font, and you create a direct understanding of the situation: the poster is not just telling people about your awesome cat, but requesting their help in finding it.
Designers exercise power over the user's access to information through different mechanisms.
The user – the reader of the poster – sees those two things and immediately knows that a cat is lost, so they intuitively respond. They'll watch for the cat. The social context of the missing cat poster offers some degree of assistance to the designer. The designer has provided enough information and reduced the reader's imagination so that the user can intuit, based on a social understanding of cat poster logic, that this is the message.
Such principles apply to all kinds of design thinking in various careers. Policy 'design', for example, is constantly at war with the imagination. When voters fail to imagine the world of possible policy, they may resign themselves to cynicism. The imagination can also be tapped to activate interest in or fear of policy decisions.
Consider the phrase 'defund the police' and its role in conveying a particular kind of world. Voters and resistant politicians often filled in the consequences of that world by conjuring images of anarchy. What if it just meant managing a city's homeless population without putting it on the list of things police have to do? Moving jobs unrelated to public security away from police and toward a civic community task force paints a different, more precise image of the policy imagination at work here.
In a more banal way, graphic software interfaces also constrain the imagination of what is possible. For example, code is flexible. If you can write code, you can write different functions for precise tasks. But most people don't want to look at, or learn how to write, the lines of code needed to get specific, highly customizable tasks done. So, designers create interfaces that simplify access to those possibilities: buttons that automate the activation of specific code.
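To make that trade concrete, here is a minimal sketch. The sharpen function and its parameters are hypothetical stand-ins for that flexible code; the point is only that the button's designer has already made every parameter decision, leaving the user just the decision to click.

```python
import tkinter as tk

def sharpen(amount: float = 0.5, radius: int = 2, threshold: int = 0) -> str:
    """Stand-in for a flexible operation: in code, every parameter is a decision."""
    return f"sharpen(amount={amount}, radius={radius}, threshold={threshold})"

root = tk.Tk()
result = tk.Label(root, text="(nothing yet)")
result.pack()

# The button forecloses the parameter space: the interface designer has
# already chosen amount, radius, and threshold on the user's behalf.
tk.Button(
    root,
    text="Sharpen",
    command=lambda: result.config(text=sharpen()),
).pack()

root.mainloop()
```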
So, interfaces and design of all kinds exercise a certain kind of decision-making, and decision-making is a form of power. Enter generative AI, and let's go back to our question: What, to a designer, is generative AI?
Automating Decisions Gives Power Away
To reiterate: AI is a tool for removing degrees of decision-making in conceptualizing and implementing specific tasks and applications. Removing decisions "saves time," giving designers the option not to think about parts of the process. In turn, AI fills in for these decisions with statistically plausible details drawn from similar vectorial representations in the AI model. It's filled in with the average of other people's decisions.
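As a toy illustration of that averaging (a deliberately simplified sketch, not how any particular model works), consider filling in a blank detail with the mean of its nearest neighbors in a vector space:

```python
import numpy as np

# Toy illustration, not any specific model: each row encodes a past design
# "decision" as a vector; a blank detail is filled in with the average of
# its nearest neighbors, i.e. with other people's decisions.
library = np.array([
    [0.9, 0.1, 0.3],   # decision A
    [0.8, 0.2, 0.4],   # decision B
    [0.1, 0.9, 0.7],   # decision C
])
query = np.array([0.85, 0.15, 0.0])  # a design with one detail left blank

distances = np.linalg.norm(library - query, axis=1)
nearest = library[np.argsort(distances)[:2]]  # the most "plausible" precedents
filled_in = nearest.mean(axis=0)              # the averaged, generic detail
print(filled_in)  # -> [0.85 0.15 0.35]: plausible, but specific to no one
```

The filled-in value is plausible precisely because it is an average: it resembles the precedents while being specific to none of them.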
Many assert that averaging works: what it replaces was probably ignorable already. The photo on a missing cat poster might show the cat sitting on a porch, and nobody pays attention to the porch, so there is no harm in replacing it with an average porch, or removing it altogether. But the porch does convey information and context in subtle ways: the cat's size, where it hangs out, its coloration. Is the cat brown like the wood, or darker? You may not need this in your cat poster, but that is a decision.
When the stakes are higher (though perhaps no stakes are higher than one's missing cat), the decisions about the details become even more crucial. Human attention is an ever-scarcer resource. Information environments are flooded with information that people either ignore outright or grant only glancing attention. If you have a limited window of opportunity to convey information through noise, every aspect of design ought to be considered against the message you intend to send and how efficiently it is transmitted.
Carefully navigated simplification is at the heart of good design. AI itself is anchored in a human desire to simplify the process of communicating ideas and information. AI is rooted in metaphors of reduction, generating simplistic, rather than simple, formats for information delivery. AI relies on activating the socially constructed comprehension of high-level information: understanding what a building looks like, as opposed to what a specific building looks like; understanding that a cat poster has a cat photo, but neglecting to pay attention to the specific cat.
When AI is used as a tool to simplify complex (or even specific) imaginaries, the designer hands over enormous power and opportunity. This can alienate us from grappling with the loss of whatever AI replaces. That loss is often described in terms of labor, and fairly so! I am all for people keeping jobs. But a lost job is more than a dismissible number on a jobs chart. The job represents a series of human decisions that will no longer be decided, but assumed.
What do we lose when we hand power over design decisions to algorithmic conjecture?
What We Trade Decisions For
By automating these decisions, we replace them with references to generalized, previously established, socially constructed imaginaries.
- Generalized, in that the AI relies on plausibility rather than specificity: does this look like the background of an architectural schematic? Does this look like a piece of functional code? Does this look like a cat poster?
- Previously Established, because the model was trained on a library of images, code snippets, or writing that already exists, and it extends the patterns found in that dataset rather than uncovering new ideas.
- Socially constructed, in that the designer using AI often relies on genre tropes clearly understood by 'most' people, which is why the AI produced them in the first place.
Now, arguably, this might have a place. It may not matter, to the people looking at an architectural schematic, whether the trees in the courtyard are a native species or whether the silhouettes have a reasonable number of arms. The generated details create a reasonable facsimile, sparing a person from researching what kind of tree might grow best in this courtyard. The issue is that somebody, somewhere, has to decide not to decide. The decision not to decide, paradoxically, requires enormous expertise. The cultivation of that expertise comes through the experience of making decisions.
By automating these decisions, we replace them with references to generalized, previously established, socially constructed imaginaries.
The decision not to decide requires the skill to imagine what is not in front of you, and to recognize whether the people you design for will care about a detail, or whether their imaginations will drift because it is absent or rendered only in the abstract. Without this experience, the imagination of what AI can do closes off how we might create work without AI. We can automate a decision only when we know we don't need to consider it.

Plausible Perspectives
At a recent conference, I watched a presentation by an AI lead at a major tech company. He was using a pipeline of generative AI to write texts about speculative design, following a standardized futurist playbook.
He created an RSS feed about technology and design to identify what futurists call "signals": what happened today, and what future might it lead to? The first part of his AI pipeline skimmed and summarized that day's news feed.
The designer had linked that to a chatbot for which he had created a series of personas. Each persona was an invented person, described through details such as career status, line of work, gender, nationality, and race. The designer created a system in which the text was generated from the "perspective" of those personas. He suggested this was a way to build diversity into the speculative design imagination. By asking the model to produce text from combinations of specificities, he argued, he made more realistic representations of marginalized people and points of view (the designer himself came from a region of the world under-represented in AI and design).
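Based only on what the presenter described, a pipeline like this might look roughly as follows. This is a minimal sketch: the headlines, persona fields, and prompt wording are my own invented placeholders, not his.

```python
# Step 1: gather the day's "signals." A real pipeline might parse an RSS
# feed (e.g., with feedparser) and summarize it with an LLM; stubbed here.
headlines = [
    "City pilots drone-based package delivery",
    "New regulation targets biometric data brokers",
]
signals = "\n".join(f"- {h}" for h in headlines)

# Step 2: invented personas, reduced to a handful of demographic fields.
personas = [
    {"name": "Amara", "career": "urban planner", "nationality": "Kenyan"},
    {"name": "Jun", "career": "nurse", "nationality": "South Korean"},
]

# Step 3: ask a model to speculate "from the perspective of" each persona.
def build_prompt(persona: dict, signals: str) -> str:
    return (
        f"You are {persona['name']}, a {persona['nationality']} "
        f"{persona['career']}. Given today's signals:\n{signals}\n"
        "Write a short speculative design scenario from your perspective."
    )

for persona in personas:
    prompt = build_prompt(persona, signals)
    # response = some_llm.generate(prompt)  # hypothetical LLM call: the
    # "perspective" is a statistical composite, not a person's experience
    print(prompt)
```

Note where the human decisions sit: every field in those persona dictionaries is a stand-in the model will extrapolate from, which is precisely the substitution at issue.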
I found the entire enterprise to be earnest but philosophically misguided. The designer had removed human decision-making from the pipeline, deferring instead to the Large Language Model's statistical representation of decisions that people with various experiences might make. To reiterate: by automating these decisions, he had replaced them with references to generalized, previously established, socially constructed imaginaries, centering the resulting hypotheses on an automated imagination of often-marginalized people.
The process was fascinating and troubling. The resulting speculative design proposals, ostensibly at the core of this project, were about as uninteresting as you might expect. The view counts on the posts suggested nobody was paying much attention to them. But this idea of automated representation – that people's identities can be modeled and predicted in ways that displace their actual experiences – saturates the tech industry's AI promises. It is there in discussions of AI for governance, for art, for design.
This idea of automated representation – that people's identities can be modeled and predicted in ways that displace their actual experiences – saturates the tech industry's AI promises.
Low intentionality has low stakes in low-stakes design, and people have made bad posters forever. However, low intentionality is a side effect of laziness, of a lack of attention to or interest in specific details. Details, though, are at the core of design as a practice. Low intentionality poses enormous risks in scaled software and applications, and especially in policy and systems design.
Recently, a health report from the Make America Healthy Again commission, led in the US by Robert F. Kennedy Jr., was found to contain fabricated citations in a pattern indicating the use of a Large Language Model. This is telling: it was policy in which evidence was not really a factor. The use of AI to write the report signaled, paradoxically, high intentionality in the gesture of publishing a report, but low intentionality in considering the details of scientific evidence.
There's a false sense among the AI booster community that AI can generate entire products without the process of thought that goes into them. Those products can be images, code, or text. But images, code, and text are pieces of larger processes, and they reflect the deliberation and attention to detail that ensure those larger processes function as intended.
So, what, to a designer, is AI? I propose that it is a tool that can be implemented cautiously, with an awareness that it is substituting for decision-making. At best, AI can fill in details that don't matter, things that knowledgeable people can discern are ignorable.
The more of your product design pipeline you draw from AI, the more ignorable the result will be. The more you remove elements of decision-making, the less intentional the work will be. The more dependent a system is upon these decisions, the more important those decisions are. In handing away power over them, we need to consider their weight, not least of which is the decision not to decide.
This Week in Tech Policy Press
Musk, AI, and the Weaponization of 'Administrative Error'

With Musk and Trump in a post-DOGE meltdown, it's tempting to think this marks some end for DOGE's AI takeover of the federal government. But as I wrote this week (before the social media wars), Elon Musk (and "AI") are best understood as accountability shields. With his departure, the AI takeover continues, but implementation shifts to Palantir, founded by Peter Thiel, friend of Musk and JD Vance.
The Trump-Musk breakdown adds to destabilization as the heart of Trumpism: creating conditions in which "administrative" or "programming" or "formatting" errors, or "rogue employees" like Musk or any number of fired administrators, are always there to blame for any crisis. These crises, and the distancing Trump builds into them, ultimately give Trump a unique form of political leverage.