Fear of AI is Profitable

Welcome to Butopia

Midjourney: Image Without Data

The newsletter continues, but I have been quite busy these past weeks, and have neglected to update the AI Images class syllabus. If you’re following along, the most recent class links and descriptions are below. If this is the first you’ve heard of the AI Images class, you can find the full syllabus so far over on the website.

A lot of people were talking about the Future of Life Institute’s Open Letter asking for a moratorium on AI research. (Full disclosure: I worked with FLI for about a month before resigning). The letter calls for “AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

On the surface this might sound reasonable, if a bit naive. The letter was written by FLI, an organization that was created with funding from Elon Musk, and it proposes a moratorium on competitors to GPT-4, which was developed by OpenAI, an organization that was created with funding from Elon Musk. Among the list of signatories: Elon Musk. And it’s more than Musk. Leading experts in the development of large language models (LLMs) are on board with building the exact same systems they claim to want to stop.

It’s not a contradiction. It’s more like a cabal.

In 2017 I wrote about claims of an “existential risk from AI” as a profitable marketing strategy, pulling the following lines from Kate Darling, a researcher at the MIT Media Lab and the Berkman Center at Harvard. Machine learning had just broken a few barriers — playing video games successfully — and in Silicon Valley, this was a kind of test run for the GPT hype cycle, including fears of sentient AI dominating humankind through, I don’t know, seizing our Game Boys:

“We’ve made some pretty cool breakthroughs in AI and now people feel these fictional threats are becoming a reality.”

Darling brought a very pointed critique of Silicon Valley’s AI hype, suggesting that much of the fear on the horizon about AI and killer robots is coming from “a certain type of person,” specifically, the wealthy investors occupying “an insane” amount of influence in funding and policy for AI. There’s a lot of money to be made in pushing the narrative that AI is an existential risk requiring immense resources, she said. That’s cause for skepticism, and it emphasizes the importance of sober assessments of AI’s actual capabilities, which requires understanding what successes in AI and machine learning actually mean. There was a big splash when machines learned how to play classic Atari video games, but the AI was only memorizing successful patterns — button mashing, then throwing out the patterns that didn’t work. Notably, she said, even Pac-Man was too difficult a problem to “solve.”
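
To make that concrete, here is a deliberately crude sketch of what “memorizing successful patterns” can look like. The toy game and its scoring rule are my own invention for illustration; the actual Atari systems used reinforcement learning over screen pixels, but the keep-what-works, discard-the-rest loop is the relevant point.

```python
import random

# A crude illustration of "button mashing": try random action sequences,
# keep the one that scores best, discard the rest. The game and its
# scoring rule are invented for this sketch; real Atari agents used deep
# reinforcement learning, but the trial-and-error loop is the point.

ACTIONS = ["left", "right", "fire", "noop"]

def score(sequence):
    # Hypothetical "game": a point for every "fire" that follows a "right".
    return sum(1 for prev, curr in zip(sequence, sequence[1:])
               if prev == "right" and curr == "fire")

best_sequence, best_score = None, -1
for _ in range(5000):                                   # mash buttons, a lot
    candidate = [random.choice(ACTIONS) for _ in range(20)]
    s = score(candidate)
    if s > best_score:                                  # memorize what worked...
        best_sequence, best_score = candidate, s        # ...drop everything else

print(best_score, best_sequence)
```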

Rather than focusing on the agenda of self-actualizing robots, she said, we should be focusing on the agendas of humans who design them.

Part of it is that powerful AI systems are developed by powerful tech companies. Restricting development of these tools to specific companies — ones that claim to be trustworthy, to have the resources to keep the technology safe, and so on — means cutting out competitors. Another part of it is a strategy of distraction: if you talk about these things as someday becoming powerful enough to destroy humankind, and push to constrain those risks as a core national security and privacy concern, well, you get to do all kinds of other weird stuff in the background.

We saw a bit of how the lobbying power of fear works last week when a Democratic senator, Chris Murphy of Connecticut, posted a widely mocked assessment of what GPT-4 can do, writing:

“ChatGPT taught itself to do advanced chemistry. It wasn't built into the model. Nobody programmed it to learn complicated chemistry. It decided to teach itself, then made its knowledge available to anyone who asked. Something is coming. We aren't ready.”

The idea that ChatGPT “decided” anything independently is wrong; so is the idea that chemistry wasn’t built into the model. Nobody programmed it to learn chemistry: it absorbed chemistry through an analysis of chemistry texts, learning which words tend to appear next to which other words. It should surprise nobody that it can generate “answers” to chemistry questions, and that it also consistently generates incorrect ones. It possesses no internal model of the chemical world: it clusters words together, and we all hope that the words in those clusters sit together because they are correct.
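
As a rough illustration of that mechanism, here is a toy next-word model. The tiny “corpus” and the code are invented for this sketch and have nothing to do with how GPT-4 is actually built; the point is only that stringing together frequently adjacent words can produce fluent, confident text with no understanding of chemistry behind it.

```python
import random
from collections import defaultdict

# A toy next-word model, illustrating the mechanism described above: it only
# records which words follow which other words in its training text. The
# two-sentence "corpus" is invented for this sketch and bears no resemblance
# to GPT-4's training data or architecture.

corpus = ("sodium reacts violently with water . "
          "sodium reacts slowly with alcohol .").split()

following = defaultdict(list)
for prev, curr in zip(corpus, corpus[1:]):
    following[prev].append(curr)               # record observed word pairs

def generate(start, length=6):
    word, output = start, [start]
    for _ in range(length):
        if word not in following:
            break
        word = random.choice(following[word])  # pick a word seen after this one
        output.append(word)
    return " ".join(output)

print(generate("sodium"))
# Might print "sodium reacts violently with alcohol ." : fluent-sounding,
# possibly wrong, and with no model of chemistry behind any of it.
```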

It’s easy to pick on people for not understanding AI, and I think doing so is unfair. But what is telling about this tweet from a sitting US Senator is what it reveals about who he’s getting his information from. The “we aren’t ready for what’s coming” lobby, and the “pause large language models” lobby, are advocating from a fear of sentience: the belief that large language models will eventually make so many linguistic associations that real knowledge of self emerges.

Stable Diffusion: Image Without Data

This is a weird theory that many take for granted — I am just a humble artist, but to me, this is extraordinarily speculative. I don’t know of any other sentient being that came to its sentience by learning language first.

There seems to be confusion about whether the algorithms that drive these large language models are descriptive or enacting. In other words: we understand the math that makes a flower bloom, but that is a far cry from saying that math is what makes the flower bloom. We can describe the principles of the universe, but so far, nobody has been able to convince me that math brings the universe into being.

Language, on the other hand, is tricky. Expression, not feeling, is constrained by the language we have: we are, frequently, overwhelmed by the indescribable. In those moments, we babble, sob, moan, squeal, or simply say: “I can’t describe it,” “I have no words,” etc. This is true of moments of awe, sublime beauty, horror, but also, let’s remember, things like galling personal behavior.

The idea that a system could learn to model human consciousness so perfectly that it becomes conscious itself is a massive, unproven speculative exercise. That’s important to remember, because hype, like conspiracy theories, assumes one “what-if?” to be true and then extrapolates new “truths” from that false ground.

Sentience is just one piece of the apocalyptic AI fear. Other concerns abound on a spectrum of reasonable to unreasonable. But it seems important to note that some are calling for moratoriums because they fear sentient, conscious AI rising up and usurping us, based on a very sketchy philosophical pondering that things can be described into existence if we get enough of the details right (or fast enough GPUs to speed up neural networks into a superconsciousness).

It seems important to me that if sitting US Senators are going to listen to somebody, there are more pressing, far less speculative issues to hear about. The backlash to the Future of Life Institute’s letter is largely because the letter leaned into the dog-whistles of the “sentient AGI” crowd rather than the pressing, immediate, and well-documented impacts of AI on real people in the real world, happening in real time.

That’s concerning. When policymakers, public intellectuals and media figures spend time discussing future, potential AI risk, it is at best a speculative exercise. Mitigating immediate harms, on the other hand, requires sustained political attention and work.

Midjourney: Image Without Data

My bias comes from human rights workshops aimed at exploring the harms of AI right now, and from watching how often those convos were derailed by “one day, AI will be able to XYZ” — it’s easier, and more fun, to talk about. It’s abstract, and you don’t have to have the awkward conversations about race or disability that are so often absent from these AGI conversations.

You can open a beer and ponder what the AI will do someday and whether we should give it citizenship or whatever: because “by then we’ll have solved all of those problems” (using AGI, of course).

Somehow, this “utopia, but” is considered a serious space of inquiry, when we know nothing about how these hypothetical systems will work or what they will actually do. Therefore, any action we can take on them that is not strictly rooted in present-day assessment of their harms is purely abstract.

“Butopia” is a crass but maybe helpful way to think about this: a speculative framework where all problems are solved but one, and that “but” becomes an argument for prioritizing whatever the speaker imagines the single exception to be. In Butopian thinking, AGI sentience solves all other problems — therefore we must focus on the problems associated with AGI sentience.

I’ll be generous and say I don’t think anyone rationally believes this. It’s merely the set of priorities revealed when engaging in this kind of derailing thought exercise. There is a lot of hostility to thinking about the challenging concepts that get embedded into our technologies: the risks of automated policing, enforced human categorization, the colonialism inherent in data extraction and deployment, automated racism and misogyny, along with environmental concerns, job loss, and the disruption to artists and the way we think.

Plenty of folks don’t want to spend money, or an afternoon (or, I suspect, a six-month moratorium on AI research) talking about that stuff. Instead, they want to hypothesize a future where problems don’t exist, and then invent new problems to focus on. But any problems of the future will be determined by how we implement responses in the short term.

Midjourney: Image Without Data

AI is already entangled with the world. It mediates the way we perceive things, shapes our interactions with each other, and exerts its own pressure on the complexity of natural, social, and other systems. We don’t need to wait for sentience or full consciousness to advocate for policies that guide these relationships.

The immediate responses necessary in the short term pose a danger to many of the companies that work on this stuff and the profits they’re chasing. Preoccupying politicians with fears of sentience leaking into our toasters like toxic waste is an excellent distraction from meaningful policy and regulatory frameworks that might constrain those profits; blocking smaller organizations from developing competition is a great way of protecting them.

A six-month moratorium on AI development could be a well-intentioned proposal, but when it’s framed in language such as “powerful digital minds … beyond our control” or questions like “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” we are rooted purely in speculative, abstract philosophical inquiry at the expense of meaningful regulatory policy.

“AI safety” in that sense is oriented not toward protecting people from the consequences of current tools and their deployment, but toward assuring that future AI is “contained” (i.e., that the sentience doesn’t leak someday). That orientation requires the conviction that sentience is even possible. It focuses on the programming decisions that would keep future AI in line with “human values,” rather than examining the effects of real, existing AI on people’s lives right now. How the two are somehow separated is through the same kind of wild tricks that allow the people building large language models to sign letters suggesting we stop building them.

Any conversation about the future that ignores the present is fantasy. The future is not a place we show up at through teleportation: it emerges from decisions. It is not, and cannot be, isolated from the present.


HERE COMES THE ART CLASS


Critical Topics: AI Images
Class 16: Your AI is a Human

We talk about datasets and how they're assembled, and how they are "seen" (or not seen). We also want to acknowledge that human labor sits behind even the most fundamental technologies we describe as "automated," including the datasets we're looking at. That runs from the workers hidden away behind interfaces and content moderation systems (thanks to Sarah T. Roberts for the title and the reading assignment this week) to the role of automation and the labor it replaces, to the humans behind the art we treat as data, to the question of where the human fits into AI creativity at the copyright office.


Critical Topics: AI Images
Class 17: Show Me How to GAN

An informal technical walk-through of RunwayML and how to train GANs once you have a dataset. No theory, just pure walk-through, showing where to find the right tools, how to prep your dataset, and what you get from it when you do. Worth watching if you have a dataset you want to play with, but not a lot of fun otherwise. :)


Critical Topics: AI Images
Class 18: Artist Talk with Dr. Eleanor Dare

Dr Eleanor Dare is an academic and critical technologist who works with game engines and virtual spaces. Eleanor now works at Cambridge University, Faculty of Education, as well as UCL, Institute of Education. Eleanor has a PhD and MSc in Arts and Computational Technologies from the Department of Computing, Goldsmiths. Eleanor has exhibited work at many galleries and festivals around the world. Eleanor was formerly Reader in Digital Media and Head of Programme for MA Digital Direction at the Royal College of Art.


Things I’m Doing This Week

You can join our first <Story&Code> artist talk and workshop with Erik Peters, part of our first SubLab + AIxDesign residency for storytellers and creative technologists. The workshop is filling up fast, but if you sign up you’ll also be sent a link to the recording (or wait and I will share it here!)

Erik Peters (he/they) is an interdisciplinary artist and designer engaging with the worldbuilding potentialities seeded in the act of storytelling, uncovering how speculative fiction can germinate new universes of being. Their research-based and collaborative practice is situated in an interdependent web of ecologies and technologies, human and non-human beings.


As always feel free to share and circulate these posts (with credit, please!) if you like them. I rely on word of mouth to get the word out as I have zero funding. If you want to support me, though, you can always upgrade your subscription from free to paid, but everything posted here is available to everyone! Find me on Twitter or Instagram or Mastodon.