Human-Machine Improv

On Collaborating with Machines (And Why Improv Comedy Isn't Always Funny)

“Insertions” by Daniel W Van Arsdale, 1973. The piece was created with information entered via punchcards and a “DatagraphiX 4460 Computer Output Microfilm Recorder.” From The People’s Computer Company magazine, 1973.

“There was an explosion of form into… something, something like a kaleidoscope, like a human kaleidoscope.” — Andre Gregory, describing the spontaneous eruption of movement during an improvisational exercise in My Dinner with Andre

I used to do improvisational theatre in the basements of the London School of Economics. One of the first lessons they teach you there is not to be funny. This helps with building an awareness of your role in a group dynamic, because you’re working to understand and sustain a shared space. You’re balancing the complexity of many actors with the tension that arises from doing nothing.

If you create a character with your actions and words, then that character emerges in response to that shared space. If it’s funny, then it’s funny. It usually isn’t.

More recently I attended a pre-conference on “AI, Music, and Improvisation,” organized at the NRW-Forum in Düsseldorf, which defined improvisation as “a collective, emergent organizational process.” It made me wonder: How would we do improv comedy with an AI? Could that lens teach us about doing other things with an AI?

Zoom out enough and improvisation looks the same, whether you’re doing a saxophone solo or saying “yes, and” in a bad Brooklyn accent. Dr. Hyun Kang Kim put it this way: “From ecological systems to human and non-human actors, improvisation is open. Coherence is formed through participation, unplanned but created in the act of improvisation itself, as opposed to a system of fixed rules. It is neither order nor disorder, but an organizing method for fluid processes.”

Here’s an example. In improvisational comedy, the scene emerges in response to whatever situation is created between human actors. The actors adjust and accommodate one another’s space. That’s why we “yes, and” rather than point at the unreality in the space. There’s no dominance in improv, or at least, it’s bad form to dominate an improvised scene.

In experimental music, the art of improvising with a machine shifts a bit. A saxophonist and drummer might jam in response to one another. A machine listens differently, and we respond to the machine differently. We are still co-creating that space with the machine, Dr. Kim noted; “it just happens to be within a technical environment.”

A ringing telephone demands attention; a silent vibration requests it. Rounded corners encourage you to grasp an object; sharp corners discourage it. Machines may not have a consciousness of the space they share with a human. Instead, we improvise with their affordances.

Dr. Jenny Davis has described affordances as factors built into the design of things that “request, demand, encourage, discourage, refuse, and allow social action.” Autonomous machines can sense and respond according to these verbs (and possibly more).

Working with these machines in an improvisational way means organizing the complexity of possible actions through the prism of these affordances. In an improv scene, we have literally endless space to create. We can “yes, and” our way to a New York deli or the moon, or a deli on the moon, or, hell, the moon orbiting a deli. The prism of possibility gets narrowed through interaction with another person.

If we were to organize our interactions with machines in this way, we might begin to think differently about how we relate to machines and how machines relate to us. (De Kai, at the AI music conference, said we were trapped in “thing-think” in a way that obscures relating to things — intelligence is all about subjectivity, and so artificial intelligence could perhaps be considered “artificial subjectivity”).

Someone asked Lauren Sarah Hayes if humans default to a subordinate relationship with machines because we adapt to their affordances while the machines don’t have the capacity to adapt to us. Hayes responded that this relationship is ideally reciprocal: “the performance is led by affordances, but it’s still a shared, co-created space. Improvisation is not about control, but about navigating those complexities.”

(Just as improv comedy isn’t always funny, AI-improv music isn’t immediately comprehensible, either, because it’s transformed through those affordances. Lauren Sarah Hayes’ music sounds like jazz created with a robot-alien because that’s what it is. But like all experimental music, if you give it time and listen for things rather than to them, it begins to make a different and fascinating kind of sense).

Hayes is speaking as an artist, a person with a deep relationship to the tools of her craft. But what about the humdrum technologies for the workaday user?

Dr. Georgina Born’s closing talk brought to light the relationship between externalities and affordances. Externalities in economics, she said, “refer to otherwise hidden or occluded by-products or effects of economic and market processes that are not included in the way economic processes are framed and formulated.”

These externalities are present in the systems we improvise with, which, in an age of automation, means all of them. The extraction of data and lithium, the energy burned to fuel a Colab notebook, the workers exploited to dig the raw materials of iPhones out of caves.

Born pointed out that what’s missing in our improvisations are these externalities: how these systems and technologies are built. The humans don’t know it and the machines won’t tell them. The “yes, and” stops there. That’s where the domination and control creep in: from the machine, in subtle ways, yes. But it’s vastly overshadowed by the economic machines that build machines and erase all tracks.

How do we acknowledge these externalities in the “scene” we co-create in our improvisations with machines? Would they let us?

More and more, we are co-creating spaces with machines in ways that might resemble the complexities of social interaction and spontaneity. The human and machine communicate through each other’s limitations, but the systems that make these interactions possible are hidden. A role for the improvisational AI artist, moving forward, Born notes, is to make the externalities of these systems visible.


Things I’m Doing This Week

I launched a record label!

On the heels of The Organizing Committee, a CD of music co-written with an AI, Notype has also given me an outlet for releasing decidedly experimental AI- and GAN-produced music. This week we launch with two releases, one from Merzmensch and one from yours truly, Ada Zigzag.

Merzmensch’s “Latent Voices” is a curated selection of 24 one-minute AI-produced pop songs, fused with influences drawn from experimental music such as Fluxus, Dada and musique concrète. All of the sounds on the record were produced entirely by an artificial intelligence model. They were never written or performed. But they are still weirdly familiar, as the AI can only build its dreams around the data it has. Here, that’s a history of jazz, pop, choral and field recordings reassembled into this sonic collage. The low resolution gives the sounds a patina of mysterious AM radio broadcasts from distant stations, never to be heard again. Funny, surreal, and weirdly addictive. Merzmensch has a project dedicated to demystifying AI creativity over at the Merzanine.

Ada Zigzag is a weird kind of “solo project” by the GANs and text-generators I used for the Organizing Committee. It is primarily a combination of spoken text and computer-generated music, but the input is more direct than it has been for my other work — even the name of the project was produced by the AI. Quieter, surreal and a bit melancholy.

Both releases are free to download.


Things I’m Reading This Week


The Dangerous Ideas of ‘Longtermism’

+ “Why Longtermism is the world’s most dangerous secular credo”

Phil Torres

Torres has an axe to grind against the Longtermism and Existential Risk communities, with particular ire reserved for Nick Bostrom. Bostrom rose to fame by proposing that we all live in a simulation (not in a metaphorical sense, but a real one). Torres spells out Longtermism as a dangerous belief system, starting with Bostrom, that moves from the idea that all life is a simulation, to the idea that simulated life is just as valid as human life, to the idea that the most important cause for humankind is to preserve educated, technologically advanced nations in order to preserve the future theoretical possibility of billions upon billions of sentient AI life forms.

“Longtermism should not be confused with ‘long-term thinking.’ It goes way beyond the observation that our society is dangerously myopic, and that we should care about future generations no less than present ones. At the heart of this worldview, as delineated by Bostrom, is the idea that what matters most is for ‘Earth-originating intelligent life’ to fulfill its potential in the cosmos. What exactly is ‘our potential’? As I have noted elsewhere, it involves subjugating nature, maximizing economic productivity, replacing humanity with a superior ‘posthuman’ species, colonizing the universe, and ultimately creating an unfathomably huge population of conscious beings living what Bostrom describes as ‘rich and happy lives’ inside high-resolution computer simulations.”

It’s all a snappy conversation starter for pub chats or stoned philosophy undergrads. But Bostrom has access to billionaires and politicians who agree with, or at least fund, these priorities. Torres points to an article in which Bostrom uses these priorities to justify totalitarian digital surveillance, just in case a terrorist is creating a humanity-destroying weapon:

“Under Bostrom's vision of mass surveillance, humans would be monitored at all times via artificial intelligence, which would send information to "freedom centers" that work to save us from doom. To make this possible, he said, all humans would have to wear necklaces, or "freedom tags," with multi-directional cameras.”

If you want an article by another author who is (only slightly) more generous to Longtermism, you can find one here.


GoldenNFT

The Peng Collective

“Every day, people are making their way to the EU. Those who have no money are arrested at the external borders and detained in camps; those who are rich can easily buy their residence permit with a Golden Visa.”

If you have enough money, you can buy residency in a handful of European countries under “Golden Visa” laws intended to encourage investment. In Greece, that’s €250,000. So, The Peng Collective are selling NFTs — digital records of ownership of digital-only .gifs, donated by digital artists — to raise funds for refugees to buy into that residency scheme. Essentially, it’s trading gifs for passports. It’s a brilliant bit of revelation, contrasting the overheated NFT market with the disparity in treatment between those who can afford a Golden Visa and those who cannot.


Thanks all! You may note that we have moved from Sundays to Mondays. We’ll see how this experiment goes.