Human Literacy
Something I Can Tell Students Now That I Am Not Teaching
You and I probably both keep hearing that students should be working toward AI literacy. That you should know what to type into prompt windows, because it will save you time. That it will get you jobs in the economy of tomorrow, where I guess typing into a window to save time will be a valuable skill.
What do you type into the boxes? That’s AI literacy. There's more to it, of course: how to make sense of what comes out of the box. But how about this one: Why do you type into the boxes? That’s human literacy. You can't have AI literacy without it, but we’ve set much of that aside over the last few decades. Nobody really asks why we are asking you to cut and paste clusters of words between windows, where the sentences will be elongated by a machine for you to paste somewhere else.
You might be copying and pasting things between windows one day and get stuck on this question of why. What is the point of this work? Why is it rewarded and incentivized? You may start looking at corporate decision making and find something that makes you cynical. You may think that this skill set, AI literacy, isn’t helping you with that cynicism.
There is a basic answer, which is that if you don't know how writing or image-making or coding works, you won't be able to understand what the AI is doing and where you might be able to make something of your own out of whatever it gives you. That's AI literacy and human literacy working together.
But human literacy is more than what happens in the workplace of the future. Other things might happen in the future, too. Your father could die; he might fall off a ladder in a garage while trying to rescue a bird. You might be in another city in another country on another continent at that exact moment and watch as a van rolls down a street with a man falling out of the driver’s side door onto the ground, only to be immediately surrounded by people trying to help — including a nun, lending a strange air of a Renaissance painting to the whole ordeal.
That man will be OK, and soon after you might get a text that your father has died at that exact moment, and look back at this strange scene and think to yourself: this feels connected somehow, creating a weird sense that it holds a message. You will have no idea how to explain that connection. You might ascribe it to something mystical or sacred, or dismiss it as coincidence. But it may leave you feeling more connected to your father at the time that he passed, even as that makes no logical sense at all.
You may not think of such an experience as poetry, or think of poetry as existing in experiences beyond language, because in high school we're all taught that poems are a form: an arrangement of words into certain structures. We rarely acknowledge that the poetic is different from a poem. The poetic can arise from a collision of contexts that creates a more resonant, yet unexplainable link between the events and the emotions they draw out of you. Words can build that up for us. But they don't create it.
This drawing out of emotion, and the mystery that surrounds the experience, is a human literacy displaced from your curriculum. We cut it from budgets to ensure you get an education in how to take a bunch of words, put them into boxes and make more words. We tell you that these words “mean something,” and so you might have come to think of meaning as a thing that arises from words whenever and wherever they appear on a screen. That’s not wrong, but there’s a problem with this word, “meaning”: sometimes what things mean is meaningless.
If the elongated text made by AI, or the images of forests or people, or the music it writes does not hold much emotional meaning for you, you might eventually learn to set expectations of emotional responses aside. After all, emotion isn’t all that helpful to passing a class or getting a job.
Human literacy is quite helpful, though, because living a life consciously — with real connection to interpreting and creating the poetic for whatever it is that life sets in front of us — is a far more important skill for life satisfaction than slotting words correctly for a chatbot.
Closing the Door
Many people claim that the chatbot can write a poem, and this, coming from Microsoft, is something you might believe. That a company would invest so much money and water into writing poetry ought to surprise us. But it is also true that on the day your father dies or after a terrible breakup that feels like losing a limb, the machine might say something to you that is very helpful.
Likewise, one of the best pieces of advice I ever got came from a broken door. I had fallen in love and it was terrible: a misaligned love that shouldn’t have gotten as far as it did. It was a palpable thing, an ache in my bones, a kind of soggy misery that weighed down my gait like rain-drenched shoes. It was a relationship whose ending was in constant and incomplete negotiation: “what if we tried this?” You may be bummed out to hear that such relationships can still exist even in your thirties.
In the midst of that relationship I came across a note taped to the door of a convenience store in San Francisco’s Inner Sunset neighborhood, in broken English:
“Please, closing the door — but slowly.”
It stopped me in my tracks in the way that a chatbot might stop you someday, if it says something that strikes you as profound. This need not come from any capacity for intelligence behind those words. Instead, the power of meaning lies in how we read, deeply influenced by the experience we are having when words come to our attention. That is Human Literacy: to know why the things that have meaning to you have meaning at all.
You want meaning, even as we are trying to strip it out of things and make them analyzable, because you are alive and will inevitably have a unique experience of the world. If you don't learn what meaning to make of that, it doesn't mean your life is meaningless. It just makes it harder to know what the meaning is. Ideally the meaning of your life should happen with you, not to you.
Teaching Human
Human literacy is challenging to teach because it is not abstract or rule-based. It’s not abstract because it is defined by particulars: one thing happens and then another, and the two unrelated events change each other by introducing metaphors and unlocking new associations. It's not rule-based because the world is unpredictable, and the things that collide will never create new meanings in the exact same way. The worst that can happen is you miss them, and miss the chance to grow the meaning in your life.
AI literacy is more about understanding the abstractions of language. AI can provide a fairly accurate summary or rough outline of the particulars, but can’t retain them for very long. The industry behind AI collects a lot of words about specific things, but strips the specifics out. It renders meaning from vague forms — the words and their order, rather than the words and what they meant to say. It's created by the same people who insist that a poem is a structure, rather than the experience that the structure creates in the back of your spine, if you're open to it.
Without human literacy, you might assume that words and their order are all that matter. Perhaps you think that the words that I, as a person, am choosing right now are the sole source of understanding between us. But the words I choose are more than the simple, predictable patterns of the sentences that get these ideas into your head. A lesson of human literacy is that if you look close enough, you'll find in my words a whole set of references that come subtly through my choices. My language and the words I choose, the placement of a comma, invisibly shape your imagination of who I am.
Reading as a human means looking for the person that emerges from their selection of words. I am writing to tell you something about myself and my thoughts. For those of you who do not study writing, or have not yet gone deeply into the craft of it, you should know that every word in every sentence is a result of a specific set of considerations even if the author is only half-aware of those considerations.
I say "set of considerations" because it means something distinct from "decisions," which suggests the algorithmic decision-making of an LLM, weighted by statistics. That phrase (“a set of considerations”) is a consideration of how I want to convey meaning. You can see that phrase, and know the other options, and ask why I have chosen that one, and your question may lead you to something more precise about what I mean.
The LLM's word choice can be analyzed too, and many do this. But it tells you nothing at all about the language model's motivations or life, because it is neither motivated nor alive. The words are chosen by probability and chance and a bit of human fine-tuning. There is no person behind the language produced by a chatbot. It’s a dilution of billions of people, like adding water to sugar until the sweetness dissolves. Writing, and lots of art and other things humans do, can give you a bit of a taste of the someone behind it. With AI, that sugar is rinsed out of the mug.
But look: of course you can choose to find a story given to you by an AI system compelling and there is no shame in that. I found one on a door. I will not lie and tell you that I sit and choose every word through careful deliberation at all times. I am also not the most accomplished of writers. My point is that translating your language through AI is a lost opportunity to cultivate the sweetness within you. With your own words, connecting to the words of others, we can use stories for what they are for, which is to link ourselves with the stories of the people around us.
All of us aim to explain our lives and the world we live in. We can tell stories to ourselves or to each other. Telling a story to ourselves, and back to a machine, offers some protection. We don’t have to be enmeshed with stories we don’t like, or have our sense of pride threatened. We can be at the center of the story. That’s quite nice, but it is also a lie. Centering a single voice is just not what a story (or the world) is designed to do. The reality is that your voice does not really resonate in the world until someone hears it, and the more you distort your voice, the less we really hear from you.
AI can hijack and insert itself into our collective story of the world and make it an individual one, a smaller box for us to wander in, speaking out loud to ourselves about how much we understand it all. AI will confirm this understanding. When the world breaks away from your story, you will feel isolated, and so you may go deeper into the web of words the machine is weaving with you. Away from the people sitting at their own keyboards doing the same thing. We may end up deeply entombed in isolating worlds.
Human Literacy might help with this, partly because it will show us that this isolation is nothing new. Much of the world is organized to distort your voice. Isolation and anger are a natural result of so many of our systems. Media has always emphasized a story that only a few people really understood. We talk about AI as a novel world builder or terrifying destroyer, but the reality is that the "world" is all just words and imagination. How do we imagine the world? How do others imagine the world? This is the question at the heart of human literacy.
Artificial vs Imaginary
Because meaning is made, it might be tempting to think that the world is as meaningless as anything else we call artificial. We can easily find ourselves looking at the things that make feelings, or give our lives and the lives of others dignity, and decide that these are just stories. Made up and fake. But let's be real here. The idea that stories are made up and fake, and so there is nothing at all that truly matters, is a story too. Human Literacy helps us understand this.
To walk around telling everyone their stories are meaningless is to miss the point of a story. A story is a way of saying: “this is how I have made sense of things.” If you call that meaningless, you may think you are very smart. But actually, you just haven’t made sense of things, in your story, through the stories of anyone else. So you have a story that leaves you very lonely, and the people around you unwilling to discuss the lives they make around their story.
You might find, in your AI literacy, that you prefer the stories of the chatbot to the stories of people. That is OK; these systems are designed to make you prefer their stories to the stories of the people in your life. And the people in your life are telling twisted stories too: presenting what they think ought to be heard, rewarded for finding the story that the social media algorithm wants to show them is the right one to tell.
Finding the story of yourself is hard work. There is a lot pulling you toward someone else’s stories, because if you get enough people to believe a story you get a lot of things out of it. Power and money, sure, but also confirmation that the story must be true. But no story is true, because nobody sees everything. So when the AI literacy people say "don't trust AI, it hallucinates!" it's worth asking what that means. What is the AI supposed to tell us that is true? Who is the AI's perspective meant to be coming from, exactly? Where is truth supposed to be located?
I should say again, though, that human literacy is a part of AI literacy. Can you use AI to tell your own story? You can, but the human literacy has to be there. You are better off knowing what the AI cannot do for you if you want to understand how to get something out of it. You could invest inward, into knowing what your voice is for and how you want to use it, before you allow it to be contorted to the language of the universal poetry machine.
It's worth marking the difference between the world of our imagination and the artificial world. The artificial world can trigger the imagination, give us stories we get lost in, give us systems we have to adapt and contort our way through. The artificial, by definition, is unnatural. Imagination, on the other hand, is absolutely natural and organic; it rises up from within us. In both cases, the risk is not in the artificial sweetener or in daydreaming about coffee. The risk is in mistaking the artificial and the imaginary for the reality of the relationships we're in, and the world that grows from those relationships; in treating them as the sole source of truth, or the only way things might be done.
No Money in Knowing
You might be told that AI literacy, as defined by TikTok stories of productivity and efficiency hacks, is rewarded quite directly with money and power. The problem with human literacy is that it doesn’t give you something you can count up and compare to last quarter's financial report. It comforts you as a background hum, one far too easy to ignore. It soaks into you and works its magic. If you find others on the same hum, you can share notes. You can share what made you feel OK about the things that were hard, find your own way to celebrate and appreciate the things that are good. You will feel more connected to things, happier with your struggles and choices, learn to learn even from the harshest of mistakes and random catastrophes.
It is weird then to get angry that others are not feeling good in the same way that you do, or not finding comfort in the things that comfort you. It is like insisting that enjoying the same food is essential to being friends with someone who is allergic to chocolate. Not every human shares the same hum. So the right thing to do there is expand your hum, not lock it in: "What's this guy humming about?" Maybe it makes your own hum grow. If it doesn't, that's fine too. It's not your hum.
Ultimately the richness of a society is strengthened for us all, individually, when we share a deeper commitment to this human literacy. We live better lives when the people around us have empathy. Of course, people can exploit that empathy. It happens all the time. But human literacy doesn’t mean you have to be a sucker. It makes you better, actually, at identifying the politicians and the CEOs and the money they’re slipping into their pockets with their words.
AI generated text may offer up some half-hearted defense of human dignity, informed by thousands of corporate value statements. AI literacy can tell you how to generate bullet-point summaries of human rights statements and make a hodgepodge of why we might preserve “human uniqueness.”
But human uniqueness is not purely collective. It's individual, too. We are human together, but you are also human alone. You exist in the constantly shifting borders between these stories: the story of the sense you make, and the sense you were born into, and the things others see that you cannot. The sense made by your parents and your community can become your sense. You might embrace it all or reject it all or choose your parts. Sometimes it hurts. Sometimes you long for it anyway. But it all gets assembled and reassembled within you. You make the sense you make. Nobody else.
What can AI do for that? As with so much of the world: probably something, but definitely not everything. Stay critical of whatever it tells you, and learn to tell the difference between the words we use for knowing and the loose uncertainty of actually knowing anything at all.
In the end AI is just a sampling of stories and pictures, stripped of the people who wrote them, presented to you as a new story at the center of all things. But AI isn't at the center of anything. It has no greater claim to truth than any one person does. It's a rough sketch of a voice made from a chorus of sketched-out voices. Don't let it drown yours out.