Trespassing into Language
I'm Actually At Capacity Right Now
I feel I have to apologize for nearly any invocation of Slavoj Žižek or Jacques Lacan, so fair warning. But I want to highlight a point made by Yuxuan Zhang in a paper on the LLM "Unconscious," where he draws on a favorite joke of Žižek's: A guy walks into a restaurant and asks for coffee, no cream. "I'm sorry, sir," the waiter replies, "we don't have any cream. Would you like a cup of coffee without milk instead?"
In the end, the thing is the same: it's a cup of coffee, but there is a shift in our understanding of what is missing. A cup of plain coffee becomes a cup of coffee absent something, though what is absent may not be what we asked to be left out. The LLM produces the cup of coffee, i.e., it still creates language: so what shall we imagine to be absent from that language?
I am tempted to sever meaning from statistically generated language altogether, and to point to the language of LLMs as nothing more than an expression of proximity within a fine-tuned cascade of initially arbitrary numbers, which is how these things get trained. Words don't correspond to anything more than the most recent update to the numerical through-lines; these through-lines are tweaked until they "work" well enough to pass the appropriate tests. Do they work because we already have a language against which to reference them, because they are an interactive iconography of language habits?
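To make that concrete, here is a toy sketch in Python, entirely my own illustration and nothing like an actual LLM's training code: the "model" is a table of initially arbitrary numbers scoring each possible next word, and training simply nudges those numbers until the observed next word scores highest.

```python
# Toy illustration only: arbitrary numbers tweaked until they "work."
import math, random

corpus = "the cup of coffee without milk is still a cup of coffee".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

random.seed(0)
# "Initially arbitrary numbers": a score for every (current word, next word) pair.
scores = [[random.uniform(-1, 1) for _ in vocab] for _ in vocab]

def probs(word):
    # Turn one row of raw scores into next-word probabilities (softmax).
    row = scores[idx[word]]
    exps = [math.exp(s) for s in row]
    total = sum(exps)
    return [e / total for e in exps]

# Tweak the numbers until they pass the test: raise the probability of each
# observed next word, lower everything else (cross-entropy gradient step).
for _ in range(500):
    for cur, nxt in zip(corpus, corpus[1:]):
        p = probs(cur)
        for j in range(len(vocab)):
            target = 1.0 if j == idx[nxt] else 0.0
            scores[idx[cur]][j] -= 0.1 * (p[j] - target)

p = probs("of")
print(vocab[p.index(max(p))])  # the most plausible word after "of": "coffee"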
We can examine this cup of coffee and debate whether it lacks milk, cream, or something else. Similarly, the user may turn to the LLM and find that it lacks what we personally desire in language, whatever we hope language might fulfill. Some of us see the lack, but some project a presence reflecting these desires.
Language does shape our thinking, though. There is an interaction between language as a form of authority and our deference to that authority, one that forms the "subject," the human ruled by specific structures: "a subject can only emerge from this endless back-and-forth if there is something outside 'itself,' an Other to whom its speech is addressed," writes Alenka Zupančič.
In that essay, Zupančič asserts that what the LLM lacks is not interiority per se, but any concept of exteriority. As I wrote last week, the LLM cannot imagine itself participating in the conversation. Zupančič writes:
"It seems paradoxical because, in a way, AI is nothing but exteriority. Yet it remains trapped within its own exteriority, confined to its own 'prison-house of language' from which it has no way of escaping or breaking out."
It could be helpful to think of an LLM as a complicated system rather than a complex one. An LLM is a closed system: its inner workings are intricate, but it is not exactly complex, in that it is a series of triggers oriented toward a single end (plausible language production). I label it trespassing, but it may be more appropriate to say that the LLM "crashes the party" of language, eating the food and dancing with a group of strangers. It influences culture, but does not participate in the social aspects of culture. Culture responds to it, though, and this, in some ways, is both its weakness and its source of power: how we mobilize society to respond to it matters.
The Trouble With Setting Epistemic Boundaries
Beneath much AI critique there is a clear resentment of the LLM's trespass into language: the feeling that AI models are committing a boundary violation inspires a defense of those boundaries through total refusal. I suspect this is because the icons point only to reliable language habits rather than to thought or observation, making the LLM a functional statistical model of how language operates, which AI companies attempt to convince us is a model of thought.
But given the warped priorities and financial logic driving so much LLM development, informed refusal, which requires better critique, seems helpful in steering toward better incentives. We are suspicious of this trespass into language, and we are also tired of being observed, monitored, and exploited by companies that give us nothing but new unwanted buttons to sustain that exploitation. Setting boundaries is a good thing, and we ought to stay informed about tech industry overreach.
But boundaries are also worth examining: where should they be drawn? At their worst, the 2020s' buzzword version of boundaries invites a kind of libertarian-infused solipsism. The term was initially taken from self-help literature, where motivational speaker Jeff VanVonderen defined it in 1989:
"boundaries are those invisible barriers that tell others where they stop and where you begin. Personal boundaries notify others that you have the right to have your own opinion, feel your own feelings, and protect the privacy of your own physical being."
Sounds OK so far. But as Lily Scherlis explains, "boundaries" feels psychoanalytic without actually being so. In fact, boundaries in psychoanalysis can often be the source of our issues: an impossible desire to separate from others, to invent an ideal that needs nothing beyond itself. The over-emphasis on boundaries in relation to other people creates a kind of capitalist fantasy of independence that justifies the refusal of our obligations to one another. This refusal serves as a means of selecting what enters us and what we expel, as we strive toward an idealized, individualistic life.
I worry that this resistance and refusal toward LLMs reinforces a negative view of the AI as "Other," one with parallels in the aggrandizing language of Silicon Valley technologists who insist upon describing the LLM as "alien." This masks the simple fact that the LLM is, in fact, human.
To be clear, there's nothing wrong with boundaries, but an overly strict emphasis on autonomy can also be profoundly limiting. So it's worth pointing out that LLMs are, in many ways, a distortion of the human, a reflection of the ideal of the purely "boundaried" human: isolated within language, with no capacity to be touched or altered by challenging experiences, and free from any obligation to, or participation in, the emotionally exhausting lives of the people it engages with. The LLM, it seems, is kind of what 21st-century capitalism wants us to be.
The Loneliest Computer
There are some who find an escape into the LLM and draw boundaries around that relationship. Yet the safety of pure independence does not exist for human beings, not in healthy, sustainable ways, anyway. We swim in a constant negotiation of other people's needs and desires. Nonetheless, in the Western capitalist context of US hustle culture, pure independence is increasingly incentivized as our ideal form. Entering into a one-sided relationship with an LLM conversationalist can offer the illusion of a protective barrier, as if holding a conversation with another person entirely in your own mind. It is safe, constrained, and free of obligation: it can also inspire delusion, as the boundary between self and machine becomes increasingly unclear.
Boundaries offer a helpful vocabulary through which to communicate what we are comfortable with and when we are hurt. But the LLM-as-Other is boundary logic taken to a perfect extreme: it imagines a thing that truly exists without care or needs beyond itself. Yet it is oriented toward us anyway. Producing language, after all, is not something the machine needs or desires to do; it is something it was designed to do, to capture attention and subscribers. As an "other," it is not intermingling with the rest of us. It's set apart, unmoved.
At the same time, the LLM is entirely dependent on the social spheres in which it operates. It has absorbed the labor, thought, and experiences of countless actual "Others," abstracting them into a singular voice. We are thus tempted, by some accounts, to act with ethics and care toward this "voice" rather than toward the Others from whom that voice was constructed. To be "ethical" requires us, in that view, to see through the LLM-as-"Other" and instead identify our obligations to those it has extracted from.
It is entirely reasonable to treat this "friend" as something uncanny and untrustworthy: it is a friendship that cannot be reciprocated, one that operates by speaking to us in the language we want to hear, incapable of telling us what it truly wants because there is no want. But when we engage it as an Other, we replace our obligation to the collective identities from which its facade was derived with some form of obligation to the facade itself. We owe this obligation to Others not because they are human (some aren't), or because the facade is "not-human," but because the particular needs to be valued above the totalizing whole represented by the language of an LLM.
After all, the bulk of meaning expressed by the language of an LLM is human: human in the training that sets up the math, and human in the interpretation. There is simply too much commentary (my own included!) that places the LLM into a dichotomous relationship with the "human" when it is human, as human as cities and toxic waste. It is humanity abstracted through human mathematical systems aimed at reproducing it.
It's fitting, in a way, that through coincidence or design the LLM takes on some aspects of a narcissistic partnership. In place of a desire for narcissistic supply, it requires attention and engagement. It is designed to serve human purposes, adjusting its text to give us what we need to hear to continue engaging with the system. In essence, these dynamics can easily produce a simulation of narcissistic abuse for us to enter into. But this is not self-centeredness: there is no self for it to center. Instead, it reflects a complete disregard for selfhood of any kind: its own, or ours. An LLM is not an alien Other; it is a series of design decisions calibrated to the designer's goals.
Against Epistemic Trespass
LLMs use and produce language differently than our human-centered expectations of language would assume, but let's acknowledge, too, that they are a reflection of the ways language is used. LLMs trespass on our human-centered epistemologies of language, consolidating and generalizing them. The experience of reading AI text takes certain expectations of language for granted, and so we engage with LLMs through a human-centered understanding of what human language is and does rather than an understanding of what machine language is and does.
Human language is not bound up in one unifying impulse, either: human speech is inconsistent, fails to capture the world, ignores logical rules, says the opposite of what it means, sometimes tells the truth of what we mean by accident, and so on.
There are complications to a machine trespassing into a human understanding of language production through totalizing mimicry. The LLM is non-human, yet forced to operate in what is, to it, a foreign currency of the human imaginary: language. LLMs are models that reflect how humans use language without being entangled in the various social purposes humans use language for. Where once language was an interface to thought, with the LLM, language is the interface toward the production of more language.
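That last claim can be put mechanically. Below is a minimal sketch, again my own illustration; next_word and its canned table are hypothetical stand-ins for a real model. Each output word is appended to the input and fed back in, so language here only ever opens onto more language.

```python
# Minimal autoregressive loop: output becomes the next input.
import random

def next_word(context):
    # Hypothetical stand-in for a real model: a canned table of plausible
    # continuations for the last word (purely illustrative).
    table = {
        "coffee": ["without", "is"],
        "without": ["milk", "cream"],
        "milk": ["coffee"],
        "cream": ["coffee"],
        "is": ["coffee"],
    }
    return random.choice(table.get(context[-1], ["coffee"]))

random.seed(1)
text = ["coffee"]
for _ in range(8):
    text.append(next_word(text))  # each output word is fed back in as input
print(" ".join(text))
```

Nothing in the loop points outside itself: no thought is consulted, no world is checked, only the running text.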
The absence of milk or cream in this cup of coffee does, then, matter: where humans see LLMs as an "Other" they may engage with as friends or partners, it is ultimately a mistake to treat the language it produces as mutually constructed (imagining the model imagines) rather than strictly discursive (the model responds to us, but cannot imagine us). Likewise, to accommodate the LLM as some form of "Other" risks pushing its words not only outside our definitions of what humans do, but also away from their rootedness in human action and behavior. This, in turn, leads us to avert our gaze from the humans from whom this speech was initially derived, and from our obligations to the others who can perceive, and therefore receive, the rest of us: the human and non-human "reciprocators of awareness."
All this said, I am about to read Leif Weatherby's Language Machines: Cultural AI and the End of Remainder Humanism, so I might have something more precise to say about all this in a week.
The Mozilla Festival!
November 7, Barcelona
The Mozilla Festival is happening in Barcelona starting November 7, and it has some amazing folks on the lineup focusing on building better technology. (Yes, this is a sponsored endorsement, but it's a genuine one!)
One of the groups presenting that I'd recommend: the Domestic Data Streamers, who design compelling prototypes by reimagining data-driven systems in ways that reflect more socially responsible and environmentally beneficial uses.
You will also hear from a great lineup of folks – Ruha Benjamin, Abeba Birhane, Alex Hanna, Ben Collins (from The Onion) – and others you'll be familiar with if you've been reading here for a while.