What Does It Mean to ‘use’ Generative AI?

Are the pro- and anti-AI camps talking about the same thing when they say “useful”?

The social media discourse seems to constantly come back to the question of whether Large Language Models are “useful (good)” or “useless (bad).” The responses are a bizarre back-and-forth in which one group says they use it often and another insists that nobody uses it at all.

Notably, the only question here is utility, rather than anything else: it’s not an assessment of Gen AI’s myriad problems in the context of its claimed benefits. This post is not about that. It’s about the discourse of “useful” — what does that word, useful, mean to those who find Gen AI useful, and what makes its usefulness so hard to fathom for others?

If two groups cannot agree on whether a tool is useful, they are perhaps defining totally different purposes for that tool. Maybe “use” is not best understood through the question “what did you use it for?” With AI, that question yields the standard banal answers — coding, writing a summary, etc. — which would then be debated with statistics or technical details about how unsuited LLMs are to those tasks.

If we instead defined “use” as one task that we know Large Language Models do — create statistically likely arrangements of text — we might ask, “to whom is that useful?”

From there, I have to wonder if the value of AI for the “users” camp is largely dependent on how they experience knowledge transmission. In other words: what is information — words or images — for, for you? What does it do for you when you encounter text or an image?

If you perceive information as something that generates internal thought, you don’t care that GenAI is “just statistics.” But if you filter information by the veracity of its source, you see nothing of value in it.

In Silicon Valley I’d always run into people who were open to ideas, but constantly recalculating them according to whatever fixation they had at the moment. It didn’t matter whether it was a lecture by Stephen Hawking or a YouTube paraglider transmitting the message; what mattered was how they, as the receivers of these signals, assessed the message: how it moved their existing ideas around.

Ideas severed from any reference to reality are often useless, but they can also feed a pro-hype mentality, where “building” can just as easily mean amplifying a shared illusion.

So that’s one group of users: the internally oriented thinkers for whom any source of information can be weighed by what it sparks.

Another group of users is more direct: these are the people who use it to create a spreadsheet, write a summary of a meeting, or draft an email; maybe they use it to answer questions about things. Maybe they don’t fully understand the risks of inaccuracies, or maybe they just don’t care. For this group, producing content correctly is less of a priority than producing it quickly. Perhaps a cost/benefit analysis creates that instinct, or perhaps they just don’t care about the output all that much: it’s idle, passing inquiry, like settling a casual bet at a bar.

On the other end of this spectrum are the refusers, external-facing folks who want to know what the evidence is for any information they encounter, because a connection to reality was always the point of information in the first place. In this state of mind, knowledge is a means for understanding something akin to the true state of things. Ideas come from conclusions filtered through veracity, through trust in the authority of the information’s source. Statistical word assemblages are not useful to that end, because there is no source, no authority, just statistics. Therefore, AI is and always will be useless for the task of information retrieval, and the idle-thought-remapping “use” is not even worth entertaining.

People can move between these states of mind, but some people hang out in one or the other quite intensely. This explains the pointed tension between the pro- and anti-AI camps: the veracity finders (who care primarily about information’s relationship to ground truth) are going to be driven bonkers by people for whom ground truth is just one more ingredient in the production of ideas.

Likewise, when people show me text they generated, I am profoundly uninterested. I don’t care what statistics generate about a topic I am interested in. Others seem to be fascinated by this stuff. There are scores of people who despise AI art, but equally, there are communities of people paying money to create it and spending time discussing and sharing it. I would rather hear about a dream someone had than be forced to read a ChatGPT-generated dialogue somebody found interesting, because I understand what ChatGPT is producing, and why.

I think an AI-generated text, and the LLM that produces it, is useful for people who want signals, stimulation, and space to build associations and projections. I think it is less useful for people who see text and images as vehicles for conveying evidence about the state of the world. I think each of us occupies each of these states of mind, at least a little bit, at different times, depending on the task.

From one group, or one state of mind, we might see a clear “use” for AI, whereas from the other, we may be stunned to imagine anyone using it at all. But this is to imagine the other person’s “use” in a particular way.

I’ll reiterate that none of this comments on whether such use is justifiable, and it hopefully doesn’t read as a judgement call on either assessment. I’m just writing this on a Saturday night because I haven’t written anything else for Sunday yet, and thought it was a worthy question.


Noisy Human Tour 2025

I'm doing events in 10 cities on three continents over the next three months, so I'm calling it a tour! The Noisy Human Tour 2025 kicks off in Baltimore and ends in Melbourne. Click the button to check out the itinerary that has been confirmed so far, including the kickoff event, Monday in Baltimore!