What Was ChatGPT?
A Chatbot Optimized for Social Distance
Three years after the launch of ChatGPT, we can finally speak in hindsight about what it was and how it came to be. Its meteoric rise shocked the world, gathering more users in less time than any product launch in history. But that moment was unlike any other for reasons beyond the technology: 2022 was among the deadliest years of a global pandemic most people associate with 2020. It was also a period when social media use had peaked, with users drifting away from algorithms that amplified divisive content as engagement bait.
To understand that meteoric rise and the shift in the tech industry, we should examine the context surrounding it. The collective experience of the COVID-19 pandemic has largely been erased or minimized from narratives of our political, economic and technological situation. In many ways, collective attention, and fear, have shifted from a conversation about the embodied concerns of a contagious, murderous disease to a collective fascination and horror with the disembodied abstraction of “artificial intelligence.”
Might the absence of social information in our lives, and the rise of a hostile, deeply politicized environment on social media, have contributed to the collective desire for something as simple as a chat?
Chatting Through the Window
Generative AI entered the public imagination with the 2019 release of GPT 2; OpenAI's limited release to researchers over “misinformation concerns” foreshadowed the hype that would accompany every model after it.
By the time GPT 3 was introduced in 2020, the press responded to the model with waves of fear and enthusiasm. The Verge called it “auto-complete,” but also suggested it was “the first step” toward creating artificial general intelligence, or AGI. The Facebook and SpaceX investor Delian Asparouhov called it “a race car for the mind,” comparing it to “10,000 PhDs that are willing to converse with you,” but also noted that it was, again, fundamentally auto-complete: “a context-based generative AI.”
ChatGPT shifted the user’s relationship to text, moving the prompt from a 'piece of writing for the model to finish' to a 'question calling for an answer'.
The 2022 release of GPT 3.5 was not a revolution in language models, offering only incremental scaling improvements over the then two-year-old GPT 3. But when ChatGPT launched on November 30, 2022, it offered a key interface tweak to GPT 3.5. By pairing GPT 3.5 with its unreleased 'Superassistant' chat interface and training it on dialogue, OpenAI transformed a tool built for auto-completion into one that appeared to answer questions. The prompt no longer read as text to be extended, but as a message awaiting a reply, though the underlying process hadn't changed at all. ChatGPT shifted the user’s relationship to text, moving the prompt from a 'piece of writing for the model to finish' to a 'question calling for an answer'.
OpenAI claims ChatGPT captured 100 million users in two months, making it one of the fastest software adoption stories in history. With it came a kind of breathlessness: In The New York Times, Kevin Roose referred to it as “a highly capable linguistic superbrain,” while The Guardian predicted “[p]rofessors, programmers and journalists could all be out of a job in just a few years.”
The hype cycle had begun.
In two short months, ChatGPT had realigned the entire orientation of the tech industry.
In response to ChatGPT, Alphabet would fold Google Brain into DeepMind and pivot to applied research, a move designed “to ensure the bold and responsible development of general AI,” Alphabet CEO Sundar Pichai announced in a 2023 blog post. Google Brain's Geoffrey Hinton would quit two weeks later to warn the world of the risks of superintelligence.
At Meta, the “general intelligence” fever supplanted the much-ridiculed “metaverse” strategy that had led the company to rename itself from its social media product (Facebook) just 18 months earlier. Days before ChatGPT launched, the company was forced to take down its Galactica model demo after 72 hours of generating fake scientific papers. As Alphabet announced its AI plans, Meta CEO Mark Zuckerberg would fire 10,000 employees to make room for a pivot to AI in what he called “our year of efficiency.”
In two short months, ChatGPT had realigned the entire orientation of the tech industry.
Imagining a Mind
A Large Language Model crammed into a chat interface was more than a successful tech launch; in Silicon Valley, it confirmed the mysticism surrounding the AI project. The interface suggests a conversation, and many imagined someone else on the other end of the line. Months before ChatGPT was launched, media reported that Blake Lemoine, a Google employee, was convinced by his conversations with Google’s internal chatbot, LaMDA, that it was capable of sentient thought. He was soon fired.
At OpenAI, Ilya Sutskever (then OpenAI’s chief scientist) was having a similar reaction to the still-secretive GPT 4. Karen Hao, in Empire of AI, writes that before the launch of ChatGPT, Sutskever and Geoffrey Hinton discussed the imminent arrival of artificial general intelligence based on GPT 4’s performance. (Hinton has said he believes chatbots are capable of subjective experiences).
“We now have machines that can mindlessly generate words,” linguist Emily Bender told the Washington Post at the time, “but we haven’t learned how to stop imagining a mind behind them.”
According to Hao’s reporting on OpenAI, nobody there anticipated that ChatGPT would become the success it was. The company's focus was on scaling up models to meet its standards of “general intelligence.” But ChatGPT’s sweep of the world suggests that these models did not need to be intelligent to find a user base; they needed to simulate a social experience. Language generated in the absence of a mind is like Diet Coke: a temporary satiation, a substitute for actual nutrition. But Diet Coke sells, and ChatGPT reached 100 million users in under two months. As of November 2025, that number sits at 800 million users per week.
The context of those early months, the starting point of this optimization loop we've been trapped in ever since, can tell us a lot.
So Much Information That’s Missing
In November of 2022, the world was emerging from an ad hoc social experiment. Patchwork social isolation was still in effect, masking was common, and we'd had a false start on a return to normalcy the summer before, only to be met with a deadly Omicron wave. 2022 saw some of the deadliest days of the COVID-19 pandemic.
There is a deep reluctance to acknowledge the radical difference between the world that went into lockdown and the world that came out of it. Tech companies became accidental infrastructure: Zoom was school, DoorDash was a grocery store, Animal Crossing was the bar and Netflix was the cinema. None of them remotely compensated for the sudden deconstruction of the social world that sustains the ongoing story we call our lives.
There is a deep reluctance to acknowledge the radical difference between the world that went into lockdown and the world that came out of it.
An oral history of the COVID-19 pandemic from 2023 in The New York Times reminds us of what the time felt like:
“A clinical psychologist near Union Square, reflecting on the transition to remote therapy, says: ‘I miss seeing the shadows that my patients cast onto the floor of my office. ...And I miss kind of having some sense of where they were by the smells that come in the door.’ He goes on, ‘I just feel like there’s so much information that’s missing.’ A contact tracer explains, ‘I was honestly surprised with how many people are just happy to get to talk on the phone’ — even to someone calling to alert them that they might have a deadly disease.”
COVID-19 was a mass traumatic event that increased symptoms of post-traumatic stress disorder across the world. In Spain, a survey found that “a quarter of the participants have reported symptoms of depression (27.5%), anxiety (26.9%) and stress (26.5%), and as the time spent in lockdown has progressed, psychological symptoms have risen.” Nearly a third of US adults showed evidence of “elevated depression” in 2022, an increase over the previous year, especially concentrated in the least wealthy.
It's possible that the chatbot is one of the pandemic's lasting transformations of our social life. The push to frame Large Language Models as intelligent may simply blind us to how most users really experience them: as social.
Chatbots as Social Media Substitute
Increased social media and forum posting is understood as a compensatory behavior among those who feel lonely or depressed. Other studies have confirmed that talking to chatbots makes us feel better, albeit temporarily, about negative emotions: they can provide “virtual social interactions” and simulate “a level of empathetic responsiveness” which, through a lower risk of rejection and judgment, can feel safer than taking risks with real people.
A recent survey showed that 9% of LLM users reported using them for “casual conversation and companionship,” while 25% said their chatbot “cheers them up” and 22% said the models seem to express empathy. A Washington Post review of leaked ChatGPT chats found that “10 percent of the chats appear to show people talking to the chatbot about their emotions,” though the data analyzed was limited to publicly shared conversations. If these statistics held up, however, they would mean 72 million people use ChatGPT for social interaction and 200 million for emotional comfort (“cheering up”).
This matters because the design of Large Language Models operates as a feedback loop with the user base. Every word a commercial LLM selects is influenced by a calibration process in which responses are tuned to human feedback. Because of the scale of use cases and the interconnected nature of these models, tuning responses to fit one conversational style ("helpful assistant") has ripple effects throughout the model.
But what are we tuning toward? Models become optimized to perform better at what people already use them for. The interactions users desire and engage in are reinforced, causing users to engage more, which creates still more interactions to optimize against.
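As a toy sketch of that dynamic (purely illustrative, not OpenAI's actual pipeline, and with the two "styles" and their reward rates invented for the example), consider a model choosing between an informational and a social response style, up-weighting whichever one users reward:

```python
import random

# Toy illustration of the feedback loop described above. Responses users
# reward get up-weighted, so the model produces more of them, which in
# turn draws more of the same feedback. The styles and reward rates here
# are hypothetical assumptions, not measured values.

def tune(rounds=2000, seed=0):
    rng = random.Random(seed)
    weights = {"informational": 1.0, "social": 1.0}      # equal starting odds
    reward_rate = {"informational": 0.3, "social": 0.7}  # assumed user preferences
    for _ in range(rounds):
        styles = list(weights)
        # The model samples a style in proportion to its current weight.
        style = rng.choices(styles, weights=[weights[s] for s in styles])[0]
        # Positive user feedback nudges the model further toward that style.
        if rng.random() < reward_rate[style]:
            weights[style] *= 1.01
    return weights

weights = tune()
# Over many rounds, the more-rewarded "social" style comes to dominate.
```

The point of the sketch is the compounding: the preferred style is both rewarded more often and sampled more often as its weight grows, so a modest difference in what users reward becomes a large difference in what the model produces.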
ChatGPT may have changed the world,
but the world had already changed.
What if ChatGPT had arrived just six years earlier, into a different social context? Would it have been optimized for different use cases? We shouldn't overstate the pandemic as the sole factor in how AI has come to be defined, but we shouldn't ignore it, either. ChatGPT may have changed the world, but the world had already changed. Companies took what we needed most at that historic moment and optimized machines to give it to us, kicking off a feedback loop that continues to define the technology.
It seems people want to talk to someone, and yet, to be alone. A recent ad for Amazon's Alexa reveals how these machines are optimized, and sold, for social distance.
ChatGPT emerged at a time when loneliness
felt essential to survival.
What Was ChatGPT?
What was ChatGPT? Perhaps it was a side effect of the coronavirus: a technology that emerged against an ongoing denial of collective trauma, adapted to a historic moment in ways that persist beyond it. ChatGPT came into a world where proximity to others was correlated with the risk of death, as online connection was besieged by hostile political polarization, when everything was unmoored in ways no language could capture. It emerged at a time when loneliness felt essential to survival.
What captured our imaginations during that time, in response to the failure of language to describe our experiences, was ChatGPT. With no inner life, ChatGPT could relentlessly pave over the failure of words to describe our own anxieties. Its bursts of conversational text matched the limits of anxious attention spans. It shifted topics when we got bored, and we owed it no apologies. It agreed with any position we took. In our lapses of executive function, it could create a plausible bare-minimum checklist to get us through the day.
With the rising volume of slop, or buttons to conjure it, painted over every digital surface, I'm reminded of my own pandemic coping strategy: blasting music to drown out the silence. ChatGPT is a stereo turned up to the max of language: full of distortion, but clarity is not the point. The point is to create a little space where we don't need to think.