Nonobody's Mathematical Bio-Pianolas

When 1940s Surrealism meets today's Brain-Computer Interfaces

Instructions to a Waltz (1)

I’ve been working through Jasia Reichardt’s 1971 primer, “Cybernetics, Art, and Ideas,” this summer, and I keep thinking about a specific piece. Stefan Themerson’s Nonobody’s Mathematical Bio-Pianolas was written in 1940 and published in 1945. It’s framed as (but likely isn’t) a children’s story: a work of surrealist fiction about the differences between biological and mechanical intelligence.

Themerson was a physicist and friend of Dadaist Kurt Schwitters; he and his wife, Franciszka, were mostly known for their experimental film work after the war.

“Nonobody” is interesting not only for its writing but for its inclusion in Reichardt’s book, which is unexplained.

The story is a fable about a fantastical bug world and a super-intelligent biological machine.

The machine answers questions, but only after being fed massive amounts of data. Nonobody (the bug) throws books into the machine so it can answer a series of mathematical word puzzles, and the results come back not as text but as music that must be interpreted by whoever poses the question. For example, here it responds to a nonsensical question about the value of a Santa-Re (a form of currency):

“Instead of sounding politely in one single tone whose frequency would represent a definite number, the bio-pianola began to thunder and roar through an extraordinary fugue consisting of a thousand thousand voices, each of them representing one Santa-Re share’s value as a function of various circumstances.”

Excited by this, and a bit frightened, the bug adds more information, which amplifies the music until Nonobody is both dancing and removing pieces of the machine in order to reduce the din.

At this stage, another insect (Trumpet) comes in and declares his surprise to find Nonobody “amusing himself” at the pianola. Nonobody is upset: he’s not playing around, he’s calculating. But Trumpet hears the music as a variety of specific dances, which he performs in turn.

Brace yourself, as the story then turns to the issue of bug genitals. Every insect in this story has been castrated, but not Trumpet. It is clear that the issue arising from the bio-pianola is how to interpret its responses. Trumpet, having some capacity for sexuality and being fully in touch with his body, hears the data as a dance. To Trumpet, the acoustic data can be read as a waltz, a tango, Mozart’s Don Giovanni.

Nonobody — himself castrated — is troubled. When Trumpet leaves, Nonobody begins writing out queries for the bio-pianola. One of them — and then we’ll take a break from the plot summary — is this:

“If it proves to be correct that the dance, being a superstructure of sound curves, may give a picture of the problems put into a calculating bio-pianola, can one regard a fox-trot, for instance, as a solution of a mathematical problem, if the kinetic motor of a fox-trot is not associative substance but the genitals? If so, wouldn’t the development of genitals be necessary to the development of science? If so, wouldn’t there be a danger of one of the arts (namely, choreography) becoming a higher form of knowledge? (Corollary: why exactly should it be dangerous?).”
Instructions to a Waltz (2)

The Bio-Pianola of Our Age

I’m struck by this passage as I keep reading about “mind reading” by AI, with headlines like “AI re-creates what people see by reading their brain scans” and “AI makes non-invasive mind-reading possible by turning thoughts into text.” We also have another round of claims about bio-electrical activity being used to “detect” homosexuality, a particularly dangerous claim in a world where many people are still sentenced to death on the basis of such an accusation.

The Bio-pianola story comes to mind when I read these headlines because they emphasize the disconnect between the data produced by the body — the signals that ripple through our brains, which can be picked up by BCI headsets — and the data that is interpreted by the machine. There is a mistaken sense in these press reports that the data flow is continuous and direct. In fact, information entering machines is discontinuous, translated, and calibrated to interpretations. All of these steps degrade the original signal to varying degrees and, in some cases, can replace it altogether, assigning a falsely aligned set of values to unrelated activity.

It’s this disconnect between data and bodies that’s at the heart of Themerson’s story. Trumpet hears the sonic output of the bio-pianola as music, while Nonobody hears it as information: facts. Both interpretations instigate action, but one of those actions is playful and the other is scientific. The disconnect between the two is derived from a disconnect between the data and how it is represented and interpreted.

With BCI work — “telepathy” and “mind reading” — we measure a kind of bio-electrical activity within the brain that can be detected by electronic sensors. Certain clusters of the brain are activated, the sensors detect this activation, and they transmit it to a machine. BCI algorithms are designed to learn patterns: what lights up when you hear a certain waltz. When you hear the waltz, we see the data, and eventually, the system registers that you are hearing a waltz.

Imagine a waltz that has never been heard by anyone else, and the machine cannot pick up and transmit that waltz to strangers. The same goes for images, or text. What’s happening in these experiments is not “mind reading.” Instead, it’s setting up conditions where brain activity responds to a specific input. Brain activity is measured in response to that input. The system compares that activity to activity associated with previous viewings of the same input: an image, or a waltz.
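To make that concrete, here is a minimal sketch, in Python, of the pattern-matching setup described above. It is a toy illustration under my own assumptions, not any lab’s actual pipeline: each hypothetical “recording” stands in for a set of sensor readings captured while a known stimulus (a waltz, a tango) was playing, and the matcher can only ever answer with a label it was trained on.

import numpy as np
from collections import defaultdict

def train_templates(recordings, labels):
    # Average the sensor readings recorded for each known stimulus.
    grouped = defaultdict(list)
    for reading, label in zip(recordings, labels):
        grouped[label].append(reading)
    return {label: np.mean(group, axis=0) for label, group in grouped.items()}

def match(templates, new_reading):
    # Return whichever previously seen stimulus has the closest template.
    # A waltz the system has never watched you hear has no template at all.
    return min(templates, key=lambda label: np.linalg.norm(templates[label] - new_reading))

# Hypothetical training data: 64 sensor channels per recording.
rng = np.random.default_rng(0)
recordings = [rng.normal(size=64) for _ in range(20)]
labels = ["waltz"] * 10 + ["tango"] * 10
templates = train_templates(recordings, labels)

print(match(templates, rng.normal(size=64)))  # always "waltz" or "tango", never anything new

The guess is always drawn from the training labels; nothing here reads a mind, it matches a new signature against a catalogue of old ones.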

To put it simply, the brain’s activity is picked up by a sensor, but we can make the system do anything we want with that signal. It’s simply data about mental activity picked up by electrodes. If the pattern is consistent, you can tell a machine to re-interpret that pattern. That’s what’s happening when “images” are created by these mind-reading tools. They aren’t showing us the mental image inside the brain. They’re registering that one person’s brain “lit up” the map when they saw a cat picture. When the brain lights up in a similar way — in that one person’s brain — the system sends “cat” to the image generation tool, which then does exactly what image generation tools do when we type “cat” into the prompt window: it creates pictures of a cat based on a dataset.

“Importantly, the stable diffusion algorithm doesn’t receive a text prompt directly from the test data—it can only infer that an object is present if the brain pattern matches one seen in the training data. This limits the objects it can re-create to those present in the photos used during training.” - Science

You could, just as easily and with a bit of artsy ambition, “translate” that data into a music-generating algorithm that would create a waltz based on that brain activity. You could feed the same inputs into any processing engine and create wholly inconsistent outputs: a word, an image, or a waltz.
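Continuing that toy sketch (still entirely hypothetical; the two generator functions below are stand-ins for a text-to-image model and a music model, not real APIs), nothing about the matched pattern dictates which form the output takes.

def generate_image(label):
    # Stand-in for prompting a text-to-image model with the matched label.
    return f"[picture of a {label}, assembled from an image model's training data]"

def generate_waltz(label):
    # Stand-in for a music model: the same matched label, an entirely different output.
    return f"[a waltz titled '{label}', assembled from a music model's training data]"

# Whatever label the pattern matcher settled on for a given burst of brain activity.
matched_label = "cat"
print(generate_image(matched_label))   # the "mind-reading" image demo
print(generate_waltz(matched_label))   # the same signal, rendered as music instead

The choice of output engine is a design decision made downstream of the signal, not something the signal itself contains.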

These systems really don’t know what you are thinking or doing. They only show, simplistically, where your brain is pumping and where it is not: a series of on/off signals akin to whether or not water is present in the leaf of a plant. BCI-AI tools can “write text from your thoughts” only in the sense that they can sense brain activity that might correspond with a previously measured experience, according to the interpretations that researchers have opted to include. That activity is then translated into information that aligns with pre-existing entries within the dataset.

The system recognizes patterns from the things we have previously seen, the music we’ve previously heard, or the words we’ve previously read, so long as the machine has already seen us see them. It takes these subtle signals, matches them to the existing data, and then makes a guess.

This is more than pedantry about the flow of information through a sensor. Digital information is constantly broken up into on/off cycles — and those cycles are themselves translated between distinct sensing mechanisms — before being transmitted and reconstructed on the other end of the system. It’s crucial to understand the gaps. Otherwise, we risk mistaking the reconstructed signal for the source signal.
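As a toy illustration of those gaps, with made-up numbers rather than real sensor data, here is a continuous signal reduced to on/off readings and then reconstructed from them; the reconstruction is an interpretation of the readings, not a recovery of the original.

import numpy as np

t = np.linspace(0, 1, 200)
source = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 11 * t)  # the source signal

on_off = (source > 0).astype(float)   # what the sensing step keeps: a train of on/off cycles
reconstruction = 2 * on_off - 1       # one possible reconstruction from those cycles alone

gap = np.mean((source - reconstruction) ** 2)
print(f"mean squared gap between source and reconstruction: {gap:.3f}")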

The scientists themselves are honest about this in their papers. The press, and most of those who boost and share these stories online, seem to constantly mistake these signals for a linear flow of lossless information, rather than a series of systems that translate one signal into a number of other forms.

Instructions to a Waltz (3)

Nonobody’s Quantitative Measures

In the story, Nonobody gets a result and begins to wonder whether machines can be used to decide who gets to influence society. Certain groups of people, Nonobody concludes, are pernicious and dangerous, and those in power “have up to now been hindered by the lack of a clear standard against which to measure the objects of their investigations.” So he proposes a tool to make this simpler.

Nonobody uses the machine to determine who should control society. The tool is a series of measurements marking the shape of the nose. There are then pages of diagrams in the story, set alongside mathematical proofs. Therein lies the force of the excerpt: a short story written by a Jewish refugee from Poland wherein an insect learns from a machine how to control a population by the shape of their noses.

It is a strangely prescient warning for the current era of physiognomy, skull measurements, and false correlations empowered by our latest automated statistical analysis systems.

In a 1968 postscript to the story, we get this:

“Mystics apart, to grasp a truth, we must have a set of tools to grasp it with. We call these tools ‘notions.’ Sets of notions are neither fixed once for all, nor do they ever seem to be complete. Especially, what we call ‘basic notions.’ They are invariably the basic notions of the time. In our case, of our time. How can we know that those we have in hand are sufficient to grasp a truth? If it were so, if our difficulty concerned only the art of manipulating them, then … the ultra-intelligent machine would help us to grasp a truth. [Footnote] The question will arise: who is going to decide which notions are basic and to be given to the machine to impregnate it? In practice, where it is a question not of truths but of goals, and a priori arguments, and interests, it is not mathematicians who decide, but club-men.”

Thanks for reading! If you like what I write here, please do share it with others!

Where to find me:

  • These days I am trying to move away from Twitter. If you’re on BlueSky you can find me there (eryk.bsky.social), and I am also on Mastodon.
  • Cybernetic Forests also has a dedicated Instagram page (@cyberneticforests) where I post visual experiments along with newsletter updates.
  • Did you know I have a website? :)