Things I Read in 2024

A Partial List of Writing on Generative AI

There has been some great critical writing about Generative AI this year. This is a long post with many quotations that tries to give a snapshot of what people – or at least I – were thinking about in 2024.

I've broken it up with subheads to make it easier to skim, and in most cases have tried to summarize and quote the pieces being linked. It's a lengthy post, so I encourage you to scroll through, find a heading that matters to you, and check out that section.

It's also bound to be incomplete, and I am sure I missed a lot of important lenses and writing about AI. Part of the loss this year comes from deleting my X account, along with any writing I had shared and bookmarked there. If I missed you, it's not a snub.

💡
Many of these came from a Bluesky thread of recommendations; go check out the whole list – it's an incredible resource.

Stuff I Wrote This Year

If you'll forgive me, I will get my own stuff out of the way first. That's the privilege of writing a newsletter.

If I can talk shop briefly: next year is the fifth year of Cybernetic Forests, and the first year hosted on my own site, rather than Substack. While this has been good for many reasons, it's also reduced the flow of new readers.

Cybernetic Forests is not anywhere near the scope or professionalism of a site like Tech Won't Save Us, nor does it have the reach of professional journalists; the weird fusion of avant-garde digital art cultures and immersed tech criticism is certainly niche.

So, if you dig this newsletter, and you dig any of the pieces below, please consider sharing it with folks, as I am relying heavily on word of mouth to grow.

This year I was delighted by the circulation and response to three pieces:

  • The Age of Noise, a lecture and video essay delivered at the Australian Centre for the Moving Image in Melbourne and for the Unsound Festival in Krakow. A philosophical statement of sorts, this argues that the information age has culminated in an era where information is simply too prevalent, and we are now turning to filters to sort it out, shifting our relationship to information. This, I argue, marks "the age of noise" of which generative AI is a leading symptom.
  • Challenging the Myths of Generative AI, for Tech Policy Press, was a summary of some shared myths that pull people's thinking about generative AI into unproductive directions, even amongst those who aim to bring a critical lens to the conversation.
  • A Critique of Pure LLM Reason, an issue of this newsletter, looked at ChatGPT-4o and what was being touted as a "reasoning model," and why it was, in fact, still a stochastic parrot – and also, why so many tech leaders are obsessed with disproving the "parrot" frame.

I was also very proud of what seemed to be a strong response to last week's multi-part installment on Slop Infrastructures.

So with that said, if you'd like to subscribe to this newsletter: you can! If you're feeling especially motivated, would you consider upgrading to a paid subscription? Because writing this stuff takes a lot of time.

Below the sign-up box, we'll dig into some of the writing that's helped me make sense of Gen AI this year.

AI and Anti-Humanism

Addressing the confusion of AI with human thought.

From May, Shannon Vallor writes about a confrontation with "Superhuman AI" evangelist Yoshua Bengio:

I had thought it was a fairly obvious — even trivial — observation that human intelligence cannot be reduced to these tasks, which can be executed by tools that even Bengio admits are as mindless, as insensible to the world of living and feeling, as your toaster. But he seemed to be insisting that human intelligence could be reduced to these operations — that we ourselves are no more than task optimization machines. I realized then, with shock, that our disagreement was not about the capabilities of machine learning models at all. It was about the capabilities of human beings, and what descriptions of those capabilities we can and should license.

This points to what Timnit Gebru and Émile P. Torres have identified as a suite of historical beliefs rooted in eugenics that have shaped multiple streams of AI development, which they've given the acronym "TESCREAL" –

While tracing the origins of the AGI race through analyses of primary sources, we found eugenic ideals to be central to this work: such ideals are often explicitly stated, and in some cases first-wave eugenicists are specifically referenced, as we discuss throughout this paper. In describing this influence on leading figures and organizations in the AGI race, we found ourselves constantly referencing seven ideologies: transhumanism, Extropianism, singularitarianism, cosmism, Rationalism, Effective Altruism, and longtermism. ...

The argument is that building a machine akin to a "God" to improve upon the productive capacities of humankind – through augmentation or replacement with a bunch of computer programs – is intrinsically dehumanizing.

Iris van Rooij, Olivia Guest, et al. argue that AI has drifted from a science of cognitive models to a tech tool for automating "agency," and it's a maladaptive fit. I think about this passage often:

This means that any factual AI systems created in the short-run are at best decoys. When we think these systems capture something deep about ourselves and our thinking, we induce distorted and impoverished images of ourselves and our cognition. In other words, AI in current practice is deteriorating our theoretical understanding of cognition rather than advancing and enhancing it. 
💡
Reclaiming AI as a Theoretical Tool for Cognitive Science. Iris van Rooij, Olivia Guest, et al.

From a profile of Emily Bender, an author of the "stochastic parrots" paper, by Elizabeth Weil in NY Magazine:

“This is one of the moves that turn up ridiculously frequently. People saying, ‘Well, people are just stochastic parrots,’” she said. “People want to believe so badly that these language models are actually intelligent that they’re willing to take themselves as a point of reference and devalue that to match what the language model can do.”
💡
You Are Not a Parrot. Elizabeth Weil.

In "Friend or Faux?", a great recent piece from Josh Dzieza in the Verge, we get a nice concrete description of how AI chatbots work while asking what, exactly, our relationships with a program is "with." Explaining what a chatbot is, they turn to a 2023 paper on the topic from Murray Shanahan:

To use Shanahan’s example, when you ask a person, “What country is to the south of Rwanda?” and they answer “Burundi,” they are communicating a fact they believe to be true about the external world. When you pose the question to a language model, what you are really asking is, “Given the statistical distribution of words in the vast public corpus of text, what are the words most likely to follow the sequence ‘what country is to the south of Rwanda?’” Even if the system responds with the word “Burundi,” this is a different sort of assertion with a different relationship to reality than the human’s answer, and to say the AI “knows” or “believes” Burundi to be south of Rwanda is a category mistake that will lead to errors and confusion.
💡
Friend or Faux? Josh Dzieza.
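
If it helps to see Shanahan's framing concretely, here is a toy, hypothetical sketch of my own – nothing like a production LLM, which learns a neural approximation of this distribution over vast amounts of text – in which the "answer" is simply whichever token most often follows the prompt in a corpus, with no model of geography behind it.

```python
# A toy next-token predictor over an invented corpus. The point it illustrates:
# the output is a statistical continuation, not a belief about the world.
from collections import Counter

toy_corpus = [
    "what country is to the south of rwanda ? burundi .",
    "the country to the south of rwanda is burundi .",
    "what country is to the north of rwanda ? uganda .",
]

def next_token_distribution(prompt, corpus):
    """Estimate P(next token | prompt) by counting continuations in the corpus."""
    prompt_tokens = prompt.split()
    n = len(prompt_tokens)
    counts = Counter()
    for line in corpus:
        tokens = line.split()
        for i in range(len(tokens) - n):
            if tokens[i:i + n] == prompt_tokens:
                counts[tokens[i + n]] += 1
    total = sum(counts.values())
    return {token: c / total for token, c in counts.items()} if total else {}

print(next_token_distribution("what country is to the south of rwanda ?", toy_corpus))
# {'burundi': 1.0} – the most likely continuation, not a "known fact"
```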

Maya Indira Ganesh's fictional dossier, The Trentino Group, turns to the escalating crisis in the humanities. Could this fuel an embrace of generative AI for doing the under-recognized "human" work that is being drained from the labor force as universities turn away from the humanities to focus on more tech-specific expertise?

Humans will be required, especially at first, to “tune” the delivery of AI-generated humanities material and verify both the content’s accuracy and its reception. A student could receive a tailor-made education by education bots that identify personal learning styles and adapting content to suit it. This tailoring would require scores of well-trained translators, performers, voice actors, illustrators, designers, and filmmakers to transform the sum of human knowledge to the next generation(s). Of course, eventually there would be an increasingly narrow set of humans with humanities backgrounds required for this as their knowledge becomes codified and integrated into AI. ... there would need to be some arrangements made for what they refer to as, somewhat dramatically, “The Last Literature Professor.”
💡
The Trentino Group. Maya Indira Ganesh.

But AI's anti-humanism can be more than theoretical. It is also anti-human in the many cases in which AI is deployed to make decisions about people's lives in ways that are utterly disconnected from their reality.

In a nice complementary reading to my Hypothetical Images piece from 2023, Henry Farrell writes in "The Map is Eating the Territory" –

To direct these politics, we need to know more about the underlying political economy. So here is my best stab at one aspect of what has been happening over the last couple of decades. Over this period, we have been seeing the rise of new technologies of summarization - technologies that make it cheap and easy to summarize information (or things that can readily be turned into information). As these technologies get better, the summaries can increasingly substitute for the things they purportedly represent.

To be clear, that's not a good thing.

Political Economy & Algorithmic Sabotage

How do we address AI as a political force?

Such conditions make the arguments of Ali Alkhatib's "Destroy AI" persuasive:

I’m no longer interested in encouraging the design of more human-centered versions of these murderous technologies, or to inform the more humane administration of complex algorithmic systems that separate families, bomb schools and neighborhoods, that force people out of their homes or onto the streets, or that deny medical care at the moment people need it most. These systems exist to facilitate violence, and HCI researchers who have committed their careers to curl back that violence at the margins have considerably more of something in them than I have. I hope it’s patience and determination, and not self-interested greed.

We also saw the anonymous Algorithmic Sabotage Research Group (a response, I think, to our own ARRG!, the Algorithmic Resistance Research Group, which came about in 2023), which published the Manifesto on "Algorithmic Sabotage."

💡
Destroy AI. Ali Alkhatib.

(Speaking of ARRG!, yes, we are still working on the zine, look for it in 2025.)

It is my sense this year that AI resistance has many forms, and can be evaluated on a few levels that merit distinction.

At the top are the ideas that motivate AI as an ideological project. This involves the quantification and classification of the entire world into discrete, manipulable symbols. This ideological project involves stripping apart connectedness in order to re-examine and reproduce desired outcomes. The desired outcomes are centralized, and the algorithms themselves are meant to be autonomous, so as to create a layer of abstraction between the decisions about how they work, the impact of those decisions, and any form of accountability for them.

Then there are the tools of AI, which for obvious reasons don't "possess" an ideology but may nonetheless go on reinforcing certain beliefs and logics about the world, because those beliefs were built into the decisions the tools are designed to automate. Legacy infrastructures will then have to grapple with these fundamentals, meaning some AI systems are broken from the beginning (i.e., building a "clean" LLM on top of an LLM like ChatGPT or Llama will always include scraped data). They also lend a kind of "naturalness" to the current order of the world, which is far from ideal.

Next is the technology itself, which is often built by a wide range of people who are less concerned with the ideological project and more concerned with the production of a specific technology. Well-intended or oblivious, these builders are subject to an entropy of design that tends to pull their tools toward that ideological core. In the development phase, resisting the drift of projects into such ideological territory requires vigilance and critical literacy. This might seem relatively hopeless in corporate tech environments but has room in Public Interest Technology and other forms of counter-ideological technology projects.

“Large Language Models Reflect the Ideologies of their Creators” rejects the premise of “neutral” and “unbiased” LLMs outright and instead assesses the ideological assumptions of various models as expressions of the intent of the people who build them.

That is also where regulation can be powerful. While regulation can often be seen as a mitigating counter-weight, incapable of stopping the overall harms, there is also a need for someone's fingers in the holes of the leaking dam. There are layers of strategy that go into curbing harm. Abolishing the ideology of AI is a long-term project. Preventing people from going to jail for being misrecognized by a faulty tech product is an immediate need. Policy sits somewhere between them.

Education and public literacy have both short- and long-term effects. If students entering the workforce know about the risks, then they may be able to change things.

From Deconstructing the AI Myth: Fallacies and Harms of Algorithmification, by Dagmar Monett and Bogdan Grigorescu:

The narratives we need about AI should focus on factual information that demystifies AI. ... AI literacy based only on its functionality (what it is and how it works) is not enough. ... We need to be skeptical, to question why, for whom, for which purposes, who is profiting from our data and why, which values and rights are eroded, and why they must be at the center of a well-functioning society, among many other questions.
💡
Deconstructing the AI Myth: Fallacies and Harms of Algorithmification. Dagmar Monett and Bogdan Grigorescu.

Artificial Imagination

How does Generative AI shape human creativity?

Shane Denson's paper, Artificial Imagination, suggests that:

"... automaticity has always been a part of the imagination, but now our visual stereotyping of the world is problematically shared with artificial agents."

I think this pokes at what I mean when I say that the ideological entanglement of AI is baked in.

Digressing (but not disagreeing) from Denson's paper – as I have written before, we often hear that AI does things just like a human does, but the issue, for me, is that it does things just like lazy humans do when they are distracted, frightened, or bored. These systems can't re-evaluate or change course through observation of their impact. They're like a human, sure: but it's a human immune to observation, transformation, or thinking differently, trapped in a dead-end production cycle and incapable of empathy. Perhaps that is not the kind of human we want making important decisions about our lives?

I also thoroughly enjoyed Joanna Zylinska's paper, Diffused Seeing: The Epistemological Challenge of Generative AI, which engaged with a paper of my own to point out what exactly we (I) mean when we (I) say that AI-generated images are "wrong" –

"In the era of alternative facts, when photographic images can be put to any usewhile deepfakes have become embedded in both marketing and propaganda, the issue of the accuracy of representation should certainly be taken with seriousness and care. Yet it is the contention of this article that the ‘not good enough’ argument with regard to generative models’ modus operandiis itself inadequate because it ends up narrowing the conceptual scope through which generative AI can be engaged, tested –and contested."
💡
Artificial Imagination, Shane Denson.

Meanwhile, Oliver Hauser shows us that Generative AI enhances individual creativity but reduces the collective diversity of novel content – in a paper appropriately titled, "Generative AI enhances individual creativity but reduces the collective diversity of novel content." (There's a pop-press summary here).

The idea is that AI might enable a lot of people to create things they wouldn't have thought of on their own – but the models are generating ideas similar to the ones they generate for every other group of users. A similar finding came in a paper that many claimed said the opposite – the paper asking whether LLMs could generate novel research ideas. As I wrote back then, the definition of "novelty" was incredibly diminished:

The LLM created 4000 ideas — after very heavy wrangling by the study's authors to focus on the structure of the research questions. The results were then evaluated by experts and narrowed down. This means that the ideas generated by the LLM were already pre-selected for the criteria under which they competed. Of those 4000 ideas from the LLM, only 200 showed enough variety from one another to be considered useful. Those 200 then competed for novelty against themselves — so it’s good at generating novel ideas if you strip away the 3800 that were not!

Likewise, another study this year suggested that generative AI doubles the risk of design fixation when used to brainstorm – people start focusing on the ideas suggested by AI, rather than developing ideas of their own. It's a challenge to the idea that AI somehow amplifies creativity, though it's fair to ask if that fixation can be unlearned. For example, "participants frequently created prompts by using keywords copied from the design brief." But the risk of imagination capture is real.

In scientific research, another paper this year suggested that:

proposed AI solutions can also exploit our cognitive limitations, making us vulnerable to illusions of understanding in which we believe we understand more about the world than we actually do. Such illusions obscure the scientific community’s ability to see the formation of scientific monocultures, in which some types of methods, questions and viewpoints come to dominate alternative approaches, making science less innovative and more vulnerable to errors. The proliferation of AI tools in science risks introducing a phase of scientific enquiry in which we produce more but understand less.

The pattern at play here is that we see our relationship with AI as one-to-one, but it is in fact many-to-one. For any individual, the tool may help produce ideas that feel novel, but collectively, the same "novel" ideas are being produced for every user. The consequence is a kind of algorithmic herding, disguised as "AI-assisted brainstorming" or "novelty." An individual sees the impact on them, but does not see the collective impact of thousands of users being steered toward the central cluster of text that a single AI system can generate.
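
A quick, made-up simulation of that many-to-one dynamic – my own toy illustration, not the methodology of the paper cited above: if a model's suggestions pull every writer toward the same central output, each writer drifts from their own baseline, which can feel like novelty, while the spread of ideas across the whole population collapses.

```python
# Toy numbers only: each user's unaided idea sits somewhere on a 1-D "idea space";
# AI assistance blends it toward the model's favored region.
import numpy as np

rng = np.random.default_rng(0)
n_users = 1000

own_ideas = rng.normal(loc=0.0, scale=1.0, size=n_users)  # unaided ideas, widely spread
model_mode = 2.0                                          # the model's central suggestion
pull = 0.8                                                # how strongly suggestions steer users

assisted_ideas = (1 - pull) * own_ideas + pull * model_mode

print(f"collective diversity (std), unaided : {own_ideas.std():.2f}")       # ≈ 1.00
print(f"collective diversity (std), assisted: {assisted_ideas.std():.2f}")  # ≈ 0.20
```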

💡
Generative AI enhances individual creativity but reduces the collective diversity of novel content, Oliver Hauser et al.
💡
The Effects of Generative AI on Design Fixation and Divergent Thinking, Samangi Wadinambiarachchi, Ryan M. Kelly, Saumya Pareek, Qiushi Zhou, Eduardo Velloso.

AI and the Absence of Imagination

Can you make art with AI?

On the flip side, Ted Chiang's essay "Why AI Isn't Going to Make Art" was well-loved, but I had some issues. It confused a lack of creativity in the machine with an inability to be creative with the machine – a conservative view that links the value of art to the effort required to produce it. Gen AI will never make art without humans because AI can never be "without humans." It is always the product of human design or data.

People can find creative uses for these machines because people are creative. It may not mean making images with prompts. Misuse is a form of creativity too. When people say "nobody can ever be creative with AI," I hear them suggest that humans aren't creative enough to repurpose systems. What they really mean, I suspect, is that the foundations of AI are so seriously toxic that they are immune to the kind of playfulness that is required to generate art in any way, shape, or form.

If that is what they mean, I think they ought to say so. Linking our acceptance of AI to whether or not it is "creative" is misguided. Creative empowerment doesn't absolve the tech of its many problems. Linking aesthetic beauty, or aesthetic novelty, to the morality of a technology (or its politics) is a dangerous idea.

Likewise, linking the value of art to effort is problematic. It dismisses a lot of meaningful conversations about contemporary art, particularly contemporary digital art, but also ready-mades, found objects, chance operations, generative music (1960s!) etc.

The point is: humans are diverse, and art is diverse, and that is a strength of human art against the art produced by picture-averaging calculators. A strong fleshing out of this can be found in Aaron Hertzmann's “That’s Not Art”: Art Worlds Define Art Differently, from very late 2023:

So often, people seem to treat the words “art” and “artist” as if they each mean just one thing, like when friends and colleagues have asked me “What do artists think of AI?” It should be clear from the essay so far that there’s no simple answer to this question. But it’s not just that they have different definitions of art. Within these established art communities, not only do they define “art” differently, they rarely acknowledge or recognize that other definitions exist.

That said, the idea of a diffusion model being inherently creative is obviously nonsense: it is a probability engine that can only follow a specific series of steps in a prescribed sequence. It is, at best, a metaphorical and accommodating use of "creativity," as in "machine creativity," which is focused on the illusion of creativity in systems – more appropriately called "generativity."
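
For readers who want that "prescribed sequence" made literal, here is a deliberately crude sketch of a reverse-diffusion sampling loop – a caricature, not any real system's code: the schedule of steps is fixed in advance, and the only variation comes from the random seed and whatever patterns the learned noise predictor absorbed from its training data.

```python
# A caricature of diffusion sampling: a fixed loop of denoising steps.
import numpy as np

def sample(predict_noise, shape=(8, 8), steps=50, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(size=shape)                # begin with pure noise
    for t in reversed(range(1, steps + 1)):   # the same schedule, every single run
        x = x - predict_noise(x, t) / t       # remove the model's estimate of the noise
    return x

# stand-in for a trained network; a real predictor encodes patterns learned from data
image = sample(lambda x, t: 0.1 * x)
```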

But it was shocking to remember that this piece, Generative AI Has a Visual Plagiarism Problem, from Gary Marcus and Reid Southen, was published in 2024 – and that before this, there was still debate about whether AI image models had a capacity for "creativity" when it was clear that they were scraping unlicensed intellectual property and reproducing it in new arrangements. Marcus and Southen list dozens of examples of IP infractions in image generators:

Following up on the New York Times lawsuit, our results suggest that generative AI systems may regularly produce plagiaristic outputs, both written and visual, without transparency or compensation, in ways that put undue burdens on users and content creators. We believe that the potential for litigation may be vast, and that the foundations of the entire enterprise may be built on ethically shaky ground.

I've had two years of bad-faith readings of this idea – arguments that diffusion models don't just remix training data – and that's true. But they memorize patterns and replicate those patterns, affixed to randomized constellations of noise, creating the illusion of creative decisions.

Finally, AI products are designed – a point that seems weirdly side-stepped in so many conversations, as if the technology had materialized without human decisions. But the humans involved in deciding how these tools work and what they do are also embedding beliefs into those designs.

💡
Generative AI Has a Visual Plagiarism Problem, Gary Marcus and Reid Southen, and a follow-up on "accidental plagiarism" from Katie Conrad.

Thinking Like a Socio-Technical Researcher

What are the core questions we need to ask about technology?

From Ranjit Singh at Data & Society comes one of my favorite pieces of the year; it belongs on any syllabus.

First, there is no such thing as a purely technical system. 
Second, every technical system is designed with a particular perspective and a vision to transform society — but this transformation does not happen equally for everyone. So, paying attention to the differences in people’s experiences with technical systems — and where they lead —  is crucial for sociotechnical research. 
Third, every technical system represents a set of choices — choices that we make when we build it, and choices we make when we use it. Sociotechnical research is focused on the nature of these choices, why we make them, and at whose expense.
And finally, fourth: The relationship between technology and society is a two way street; they mutually shape each other.

AI & the Environment

What do we know about AI's environmental costs?

Critical Data Center Studies – Dustin Edwards, Zane Griffin Talley Cooper & Mel Hogan – suggests a framework for looking at the data center as more than just tech infrastructure. It's worth a read, particularly when looking at the intersection of AI, the data center, and politics / media:

Big Tech enters into ecological configurations – often, and increasingly, playing the role of ‘custodian and manager of natural resources’ (632). Big Tech’s model of constructing big data ecologies means that data companies continue to grow their data streams through extractivist practices, all while investing in renewable energy sources, engaging in environmental restoration projects, and projecting images of environmental sustainability. In effect, data companies ‘are becoming environmentalists by their own definition, and grabbing at land, water, and power infrastructures to make their case as the industry best suited to manage natural resources, on which Big Tech is dependent (and on which the rest of the economy is also increasingly dependent)’ (Hogan, 2018: 632). 

Elsewhere, the paper notes that local activism shapes the planning of data centers, wherein community resistance can often create frictions that repel the data center. In negotiating these tensions, the data giants make relatively abstract or meaningless promises, often tied to climate or water consumption.

In another piece, Why We Don’t Know AI's True Water Footprint by Miranda Gabbot for Tech Policy Press, we see that water use is increasingly challenging to track from the outside:

Just last year, in the midst of what UN experts have labeled a worldwide water crisis, only 41% of data center operators reported on any water usage metric at all. The data center industry is notoriously private, with key players like Google publicly lobbying for this information to remain a trade secret. These facilities are perhaps the ultimate physical metaphor for the algorithmic “black box” and have become something of a fetish object for digital humanities researchers. Entering one generally requires a passport or other government identification, and they are often obscured on Google Maps, leading to gonzo attempts to map their presence in cities.

Miranda also includes this metaphor that belongs in textbooks on AI writing:

AI is being rolled out en masse, regardless of whether it is the most suitable tool for a job. From a resource usage standpoint, using ChatGPT to generate a 100-word email is like unblocking your toilet with a stick of dynamite. Yes, it will work, but there were routes you could have tried first that would do the job more reliably—and without the damage to your immediate environment.

AI infrastructure is also the material that goes into the data center – and how it gets there. Ana Valdivia's paper, The supply chain capitalism of AI: a call to (re)think algorithmic harms and resistance through environmental lens, notes:

In the case of AI, these supply chains entail several industries such as mining, electronics or data centres. In the case of mining and electronics, the media ecologist scholar Taffel claims that: ‘the flows of matter and energy which transform ores, earths and fossil fuels into assemblages of digital microelectronics involves a multiplicity of materials’ (p. 161). ... To train sophisticated algorithms such as large linguistic models (LLMs) data centres need to be equipped with Graphic Processing Units (GPUs). GPUs are mainly made of silicon and copper, together with tantalum, aluminium, or tungsten which are extracted from different territories across the globe ... Electronics such as GPUs have an average lifespan of 5 years, so data centres discard electronics generating a huge volume of e-waste – which could be considered the last stage of this supply chain. Some materials are recuperated and reintegrated into supply chains. However, a large volume of electronics end up incinerated or dumped in e-waste landfills, often across the Global South (Gabrys, 2018; Taffel, 2019).

It was heartening to see the attention on the environmental impacts of AI. Another highly recommended resource that came in through my Bluesky thread was this podcast series on Data Vampires (Part 1, Part 2, more there) from Paris Marx at Tech Won't Save Us.

💡
Critical Data Center Studies, Dustin Edwards, Zane Griffin Talley Cooper & Mel Hogan.
💡
Tech Won't Save Us: Data Vampires, Podcast Part 1, Part 2, Paris Marx.

Decline of the Scaling Myth?

Is the age of Big Data over?

In my piece on the myths of generative AI this year, I questioned the need for all of this data in the first place. The scaling myth is the belief that more data makes better AI:

The function of the scaling myth is multi-fold. For one, it frames data rights in oppositional terms. Companies that want to dominate the AI market claim they need more data. Policies that protect data interfere with their ability to scale these systems. The scaling myth resists such policies, often by pointing to geopolitical rivals – most often China. The scaling myth is also used to justify massive investments in data centers to investors. More scaling means more data centers, which draw on resources such as energy, land, and water. Notably, the larger the datasets become, the more difficult it will be to assess the content of those datasets and the ways they influence the output, complicating regulation and transparency efforts.

Gaël Varoquaux, Alexandra Sasha Luccioni, and Meredith Whittaker tackle the scaling-efficiency-to-hype ratio in a paper, Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI, arguing that hype has led to the overvaluing of scale in AI research:

The bigger-is-better norm is also self-reinforcing, shaping the AI research field by informing what kinds of research is incentivized, which questions are asked (or remain unasked), as well as the relationship between industrial and academic actors. This is important because science, both in AI and in other disciplines, does not happen in a vacuum – it is built upon relationships and a shared pool of knowledge, in which prior work is studied, incorporated, and extended.

Meanwhile, from the Data Provenance Initiative came news that 14,000 of the sites most used for LLM training were quickly being locked up – more than 27% had restricted their content through measures like robots.txt that block its discovery, or had simply been taken offline.

Many in the AI industry still believe the scaling myth, though as we learned from reporting by Rachel Metz and others for Bloomberg, some cracks are starting to show – and credit to Gary Marcus for beating the drum against endless exponential returns on data scaling laws for years.

Speaking of endless growth, there was some sobering math from Daron Acemoglu's paper (summarized here) from May:

What about the share of tasks that will be affected by AI and related technologies? Using numbers from recent studies, I estimate this to be around 4.6%, implying that AI will increase TFP by only 0.66% over ten years, or by 0.06% annually. Of course, since AI will also drive an investment boom, the increase in GDP growth could be a little larger, perhaps in the 1-1.5% range.
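
The arithmetic behind those figures is worth making explicit. Using only the numbers quoted above, and treating the paper's task-based logic as roughly "TFP gain ≈ share of affected tasks × average cost savings per task" (a simplification of the model, not its full derivation), the implied savings rate and the annualized figure fall out directly:

```python
# Back-of-the-envelope check using only the figures quoted above.
task_share = 0.046       # share of tasks affected by AI (~4.6%)
tfp_gain_10yr = 0.0066   # total TFP increase over ten years (~0.66%)

implied_savings = tfp_gain_10yr / task_share   # average cost savings per affected task
annual_gain = tfp_gain_10yr / 10               # annualized TFP contribution

print(f"implied average cost savings: {implied_savings:.1%}")  # ≈ 14.3%
print(f"annualized TFP gain: {annual_gain:.3%}")               # ≈ 0.066%, i.e. ~0.06% a year
```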

Part of the scaling myth is also the idea that with great scale come new and unpredictable properties of the model. Anna Rogers writes some compelling counter-arguments to claims for certain "emergent properties" –

This is not to say that it is absolutely impossible that LLMs could work well out of their training distribution. Some degree of generalization is happening, and the best-case scenario is that it is due to interpolation of patterns that were observed in training data individually, but not together. But at what point we would say that the result is something qualitatively new, what kind of similarity to training data matters, and how we could identify it - these are all still-unresolved research questions.
💡
Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI, Gaël Varoquaux, Alexandra Sasha Luccioni, Meredith Whittaker.

Honorable Mentions

This post is extremely long and I am tired, but there is just so much great critical writing about Gen AI this year that I would be remiss not to mention more of it. Here are a handful of resources that didn't fit neatly into the sections above.

💡
Really clear to me in 2024: the importance of 404media.co and Tech Policy Press! I encourage you to subscribe and/or donate if you can.

If you are reading this in your inbox, it means I have successfully migrated away from Substack. The new archive for this newsletter can be found here.

If you're looking to escape from X, I recommend Bluesky. It's now at a level of polish that rivals Twitter's glory days. If you sign up through this link, you can immediately follow a list I've hand-assembled of 150 experts in critical thinking about AI.