an artistic research framework oriented towards the generation of an evolving taxonomic collection of hybrid bacterial-AI organisms. With a subtle echo to the endosymbiotic theory, we propose a symbolic formulation of a style transfer machine learning environment as a host, in which to merge bacterial/archaea time-lapse microscopy footage along with multidimensional cellular automata models as endosymbionts, all under the orchestration of an autonomous generative non-adversarial network architecture. We aim, as a result, to encounter novel algorithmically-driven aesthetic representations, tagged with a unique morphotype and genotype-like encoding, and articulated around a speculative narrative encompassing unconventional origins of life on earth and elsewhere.
interspecifics
Why do I blog this? This is typically the kind of revamping of artificial life systems that emerges these days. As they say in the documentation, the authors “seek to activate a self-generative system, Artificial Intelligence, and algorithmic approximation for generating virtual organism models based on morphology and rule modeling”… which is the kind of combo we have these days to generate new kinds of entities. Not exactly monsters, not strictly living beings either.
Found at the flea market, this Pokédex replica was made back in 1998 by Tiger Electronics and Hasbro. According to Bulbagarden, it included information about Generation I Pokémon (“except for Mew, who was not yet revealed to the public at the time the toy was released”). Using a number pad as well as an alphabetical keyboard, users could search Pokémon by name or page, compile a list of their favorite or captured creatures, or sort them by height, weight, strength, or type. There’s also a clock and a calculator, and users could even add a password to protect their lists. Rumor has it that Nintendo was not really happy about this device, arguing that it could hurt Game Boy sales.
Why do I blog this? Beyond the cutesy plastic aesthetic typical of the 1990s, this device is interesting as it can be seen as an electronic/new media equivalent to bestiaries. The logic of compilation, with drawings and short texts, reminds me of the material one could find in medieval compendia (without the moralizing that was common in them). I also find interesting the way players would interact with it alongside their game console, as a sort of notebook in which to compile the monsters they found… which makes it behave like a sort of naturalist’s notebook. To some extent the Pokédex replicates the transition in intellectual tradition from the Middle Ages to Modernity. Using red plastic this time.
The first paper emerging from this research project was published today. It’s called Mapping Our Digital Menagerie: A Monster Manual for the Megadungeon and belongs to a special issue of “magazèn” (a digital humanities journal) that explores how this spatial metaphor, inspired by role-playing games, can help grasp the complexity of contemporary digital ecosystems.
My piece revisits the monster manual format of role-playing games to describe the set of digital entities and creatures inhabiting the complex media ecosystems that are accessed through computers, smartphones and gaming devices. Thanks to Paolo Berti, Gabriele de Seta and Stefania de Vincentis for including it in this issue, and thanks to Justin Pickard for proofreading and constructive criticism!
Looking into ghost-like entities one could find on digital machinery, I can’t help thinking about Masamune Shirow’s work. My first encounter with his work was around 1990, reading about Appleseed on a Minitel server (3615 AKELA). Two years later, I found Ghost in the Shell (the manga) and Intron Depot (an art book) in a shop in Lyon and voilà, that was my first foray into this kind of Japanese cyberpunk, sort of a visual follow-up to the Sterling/Gibson novels I used to read around the same time.
While the title suggests a spectral presence in machines, it’s actually an homage to Arthur Koestler’s book The Ghost in the Machine… which is in turn a reference to Oxford philosopher Gilbert Ryle’s anti-Cartesian view that the human mind is not an independent non-material entity, temporarily inhabiting and governing the body. This topic is addressed by Mirt Komel in a paper about the philosophy of the Ghost in the Shell franchise:
In the cyberpunk world we are immerging in the very word “ghost” denotes an individual’s consciousness that differentiates a human from a robot. Even if someone replaces his own biological body with a fully cyborgized prosthetic one, including a cyberbrain as the locus of the ghost, one can still be considered human as long as one retain one’s own ghost. Ghost-dubbing, that is, duplicating a ghost is nearly impossible, and even if successful the copy is always an inferior version of the original (cf. Shirow, 1997). One of the implications of such a conception of “ghost” addresses the question of human’s consciousness’ originality in contrast to its bodily banality, which can be biologically or artificially reproduced. The implied philosophical question is, despite its futuristic imagery, actually a very old one and commonly known as the “paradox of Theseus’ ship”, as most notably recorded by Plutarch in his biography of Theseus (cf. Plutarch, 1914: 1–88). The paradox as such was addressed in different manners by various philosophers preceding or succeeding Plutarch, from Heraclitus and Plato to Hobbes and Locke. Regardless of its many variants the question remains always the same: does a thing remain the same if we change one by one all of its parts? Or to articulate it in a cyberpunk manner: does a human remain the same if we change all his body parts into prosthetics?
Komel, M. (2016). The ghost outside its shell: Revisiting the philosophy of Ghost in the Shell. 53, 920–928.
Found in J.C.R. Licklider and Robert W. Taylor’s 1968 paper “The Computer as a Communication Device”, this entity called “OLIVER” corresponds to what we’d nowadays call a computer assistant:
A very important part of each man’s interaction with his on-line community will be mediated by his OLIVER. The acronym OLIVER honors Oliver Selfridge, originator of the concept. An OLIVER is, or will be when there is one, an ‘on-line interactive vicarious expediter and responder,’ a complex of computer programs and data that resides within the network and acts on behalf of its principal, taking care of many minor matters that do not require his personal attention and buffering him from the demanding world. ‘You are describing a secretary,’ you will say. But no! secretaries will have OLIVERS. At your command, your OLIVER will take notes (or refrain from taking notes) on what you do, what you read, what you buy and where you buy it. It will know who your friends are, your mere acquaintances. It will know your value structure, who is prestigious in your eyes, for whom you will do what with what priority, and who can have access to which of your personal files. It will know your organization’s rules pertaining to proprietary information and the government’s rules relating to security classification.
An internet celebrity of 2022, Loab is a character that artist and writer Steph Maj Swanson claimed to have run across using a text-to-image AI generator. Given the look of that person, some folks almost immediately wondered about the possibility of haunted presences in the latent space of AI models:
is this AI model truly haunted, or is Loab just a random confluence of images that happens to come up in various strange technical circumstances? Surely it must be the latter unless you believe spirits can inhabit data structures, but it’s more than a simple creepy image — it’s an indication that what passes for a brain in an AI is deeper and creepier than we might otherwise have imagined. Loab was discovered — encountered? summoned? — by a musician and artist who goes by Supercomposite on Twitter (this article originally used her name but she said she preferred to use her handle for personal reasons, so it has been substituted throughout). She explained the Loab phenomenon in a thread that achieved a large amount of attention for a random creepy AI thing, something there is no shortage of on the platform, suggesting it struck a chord (minor key, no doubt).
The interesting thing with the Loab case is that it’s supposed to be caused by “negative prompting”. As explained in the TechCrunch article:
“If you prompt the AI for an image of ‘a face,’ you’ll end up somewhere in the middle of the region that has all the images of faces and get an image of a kind of unremarkable average face,” she said. With a more specific prompt, you’ll find yourself among the frowning faces, or faces in profile, and so on. “But with a negatively weighted prompt, you do the opposite: You run as far away from that concept as possible.”
But what’s the opposite of “face”? Is it the feet? Is it the back of the head? Something faceless, like a pencil? While we can argue it amongst ourselves, in a machine learning model it was decided during the process of training, meaning however visual and linguistic concepts got encoded into its memory, they can be navigated consistently — even if they may be somewhat arbitrary. (…) Over and over she submitted this negative prompt, and over and over the model produced this woman, with bloody, cut or unhealthily red cheeks and a haunting, otherworldly look. Somehow, this woman — whom Supercomposite named “Loab” for the text that appears in the top-right image there — reliably is the AI model’s best guess for the most distant possible concept from a logo featuring nonsense words. (…) Negative prompts don’t always produce horrors, let alone so reliably. Anyone who has played with these image models will tell you it can actually be quite difficult to get consistent results for even very straightforward prompts. Put in one for “a robot standing in a field” four or 40 times and you may get as many different takes on the concept, some hardly recognizable as robots or fields. But Loab appears consistently with this specific negative prompt, to the point where it feels like an incantation out of an old urban legend.”
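The arithmetic behind that “running away” can be sketched in a few lines. This is only a toy illustration of the guidance-style weighting used in text-to-image models: the three-dimensional “embeddings” and every name below are invented for the example, and have nothing to do with Supercomposite’s actual setup.

```python
# Toy sketch (not a real diffusion model): how a negatively weighted prompt
# pushes generation *away* from a concept in embedding space.

def guide(unconditional, conditional, weight):
    """Guidance-style update: start from the unconditional prediction and
    move toward (weight > 0) or away from (weight < 0) the prompt's concept."""
    return [u + weight * (c - u) for u, c in zip(unconditional, conditional)]

# Pretend 3-d "embeddings": an average image, and the prompt's concept.
uncond = [0.0, 0.0, 0.0]
cond = [1.0, 2.0, -1.0]

towards = guide(uncond, cond, weight=1.0)   # ordinary positive prompt
away = guide(uncond, cond, weight=-1.0)     # negative weight: "the opposite"

print(towards)  # [1.0, 2.0, -1.0]
print(away)     # [-1.0, -2.0, 1.0]
```

Negating the weight simply flips the direction of travel, which is why the model lands in whatever region of its learned space is “furthest” from the prompt — wherever the training process happened to put that.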
Why do I blog this? First off because it’s another kind of entity to be added to the mirabilia list. Besides this, it’s also because I’m less interested in whether there’s something (someone) haunting the latent space of these models than in the way these kinds of stories emerge and circulate. It’s not exactly a creepypasta but it’s close. It might be an AI-generated cryptid (cryptids being animals “discovered” by cryptozoologists who believe they exist even though their existence is disputed or unsubstantiated by scientific research).
In a recent email exchange André mentioned this intriguing entity named “128 language Ouroboros quine”. Created by Yusuke Endoh, it is “a Ruby program that generates a Rust program that generates a Scala program that generates …(through 128 languages in total)… a REXX program that generates the original Ruby code again.”
Organized in alphabetical order, the chain of programs forms a “quine relay”, a quine being a program that prints its own source code to the screen. The “ouroboros” metaphor (a circular symbol that depicts a snake/dragon devouring its own tail and that is used especially to represent the eternal cycle of destruction and rebirth) is also interesting as it captures the circular character of the performance.
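For readers who have never met one, here is what a minimal quine looks like in a single language (Endoh’s relay applies the same trick 128 times across languages; this Python sketch only conveys the principle):

```python
# A minimal quine: running this program prints exactly its own source code.
# The trick: store a template of the program in a string, then print the
# template with the string's own repr() substituted into itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The `%r` slot re-inserts the string’s quoted form into itself, and `%%` escapes the literal `%`, so the output reproduces both lines verbatim.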
Why do I blog this? Even though this “Ouroboros quine” may not exactly qualify as a digital entity per se, I find it curious enough to consider it as a candidate for the menagerie; perhaps closer to code poetry and esolang performance.
Reading Stefan Helmreich’s ethnography of the “Artificial Life” community from the 1990s, I ran across this interesting paragraph that discusses the role of mimesis in digital entities.
Because the simulated physics and creatures were programmed together, most behaviors looked realistic and purposeful. But because Sims occasionally made errors in modeling physics, sometimes behaviors came off completely wrong, as when some creatures bounced out of the world because of his mistakes in modeling gravity. In a brilliant dash of showmanship, Sims showed videotapes of malfunctioning creatures, explaining that creatures were “exploiting” bugs in the program and were “making fun of [his] physics.” Sims’s ventriloquism delighted the audience and added a sense that his creatures were not only mimicking familiar behaviors but were also mimicking behaviors associated with the playfulness of some life-forms, a playfulness perhaps most readily compared with that of mammalian babies.
(Helmreich, 1998, p. 134)
He then discusses the role of mimesis, pointing to Michael Taussig’s book about this notion, and its importance in robotic/artificial life:
Taussig (1993) has argued that mimesis, the ability to copy behaviors, is a faculty often seen as a hallmark of the primitive-as words like aping and parroting attest. And when things considered primitive copycat more advanced behaviors-when dogs dance, birds sing, or apes sign-we think of them as cute. But things are only cute when they have relatively little power. When robots mimic behaviors that threaten humans, they are not cute. The cuteness of Artificial Life creatures is produced by and produces a sense that they are primitive entities, a sense that they are capable of miming-perhaps even of parodying or burlesquing-advanced behavior, a sign taken to demonstrate not that they are not alive but only that they are simpler forms of life. The laughter at Artificial Life is the spark of life for these simulated creatures. Is it live, or is it mimesis?
(Helmreich, 1998, p. 134)
Why do I blog this? Well, I’ve always been fascinated by Karl Sims’ creatures, but something was intriguing for me in the way they felt playful and odd at the same time (Helmreich again: “his creatures were not only mimicking familiar behaviors but were also mimicking behaviors associated with the playfulness of some life-forms, a playfulness perhaps most readily compared with that of mammalian babies.”) His analysis here is relevant, in the sense that it highlights a certain degree of ambivalence.
Fenwick McKelvey’s book about daemons stayed on my desk for ages. It’s been part of a tsundoku about tales, myths and folklore, next to Jacques Le Goff, the catalog of an exhibit about ghosts in the digital age, AD&D’s Monstrous Compendium, as well as Louis Dumont’s opus about the Tarasque. I spent a few days perusing McKelvey’s text, discovering more about networked daemons, different from the ones I’m used to on my laptop computer. The whole piece was fascinating and of particular interest for the Machine Mirabilia project, both in terms of intellectual framing and factual elements about demons/daemons.
The book starts off with this idea that “daemons animate the routers, switches, and gateways of the internet’s infrastructure, as well as our personal computers and other interfaces. These computers need daemons to connect to the global internet, and they are met online by a growing pandaemonium of intermediaries that specialize in better ways to handle packets.” Or, as explained later in the introduction, “internet daemons, in my definition, are the software programs that control the data flows in the internet’s infrastructures (…) vital to understanding the internet’s backbone. Daemons function as the background for the material, symbolic, cultural, or communicative processes happening online” (p.7)
Focusing on “the internet daemons responsible for data flows”, McKelvey investigates to what extent these daemons “control the internet”, favoring certain kinds of choices and optimizations… and eventually affecting how we communicate and participate in contemporary culture. For the author, these entities named with a supernatural connotation “offer a way to embrace the internet as a volatile, living mixture and to think about infrastructure without overstating the “fixed stability of materiality.” Daemons belong to the distributed agency that enables internet communication, the millions of different programs running from client to server that enable a packet to be transmitted.”
While the whole book is fascinating, the part that caught my attention is the first chapter, which describes how demons become associated with computers. Or, said differently, how “the demon made a leap from being an imaginary figure to being a real program running in an operating system.” McKelvey discusses at length the different steps of such circulation.
Firstly, he recalls Maxwell’s thought experiment:
“In the nineteenth century, Maxwell, a seminal figure in physics, engineering, and control theory, conjured a demon into the sciences. In his book on thermodynamics, Theory of Heat, published in 1871, he paused to consider a potential refutation of its second law, which states that, generally speaking, entropy increases over time. Maybe the law could be broken, Maxwell speculated, “if we conceive a being whose faculties are so sharpened that he can follow every molecule in its course, such a being, whose attributes are still as essentially finite as our own, would be able to do what is at present impossible to us.” In Maxwell’s thought experiment, this being acted as a gatekeeper between two chambers containing molecules of gas, opening and closing a door to selectively control the transmission of molecules between chambers. By doing so, the demon isolated hot molecules in one chamber and cold molecules in the other, raising the temperature in the first chamber and lowering it in the second. This redistribution of energy toward an extreme ordered state violated the second law of thermodynamics, which predicted that the two chambers would revert back to a random distribution of molecules (or what was later called “heat death”).”
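The thought experiment can be replayed as a toy simulation. This is a crude illustration, not physics: real molecules would need positions and collisions, and (as Wiener objected) the demon’s measurements would carry an energy cost that this sketch conveniently ignores. All names and numbers here are invented.

```python
import random

# Toy Maxwell's demon: watch "molecules" at the door between two chambers
# and only let fast ones into the hot chamber, slow ones into the cold one,
# sorting an ordered temperature difference out of a random distribution.
random.seed(0)
molecules = [random.uniform(0.0, 2.0) for _ in range(1000)]  # random speeds
threshold = sum(molecules) / len(molecules)                  # demon's cutoff

hot_chamber = [v for v in molecules if v > threshold]    # door opened
cold_chamber = [v for v in molecules if v <= threshold]  # door kept shut

def mean(xs):
    return sum(xs) / len(xs)

# One chamber heats up, the other cools down, relative to the mixed gas.
print(mean(hot_chamber) > mean(molecules) > mean(cold_chamber))  # True
```

The point of the sketch is only the gatekeeping logic: selective transmission of individual molecules is enough to drive the system away from the uniform state the second law predicts.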
A second important step in the circulation is in “provoking reflections on the nature of communication”:
Information became a theoretical concept out of the refutation of the daemon. As Wiener explained, for Maxwell’s demon “to act, it must receive information from approaching particles concerning their velocity and point of impact on the wall”. Information about the molecules allowed the demon to control their transmission in a closed system, creating a self-regulating system. In Maxwell’s thought experiment, the demon appears to be able to acquire information about the molecules’ movement without any cost. How could a demon gain this information? Wiener argued that “information must be carried by some physical process, say some form of radiation.” The demon could not operate because “there is an inevitable hidden entropy cost in the acquisition of information needed to run the device.” The energy required to transfer information between molecule and demon would eventually, according to Wiener, cause the demon to malfunction.
(…)
Wiener wrote, “there is no reason to suppose that Maxwell demons do not in fact exist.” If demons might be found naturally, could they also be built artificially? In other words, being open to the existence of Maxwell’s demon allowed for the possibility of building a real machine designed for generalized control and information processing. Shannon, while he imagined computers playing chess, also suggested that a thinking machine could “handle routing of telephone calls based on the individual circumstances rather than by fixed patterns.” Thus, Maxwell’s demon made the transition from inspiring the idea of information to providing conceptual fuel for imagining the infrastructures of early computing.”
The third step is closer to us: Maxwell’s demon inspired programmers as they built control mechanisms for their new digital operating systems:
“Time-sharing developed as a more cost-effective way to achieve the online interaction of real-time computing. Time-sharing computers offered a cheaper solution by creating systems that shared one big and expensive machine among multiple users. (…) programmers at the center [at the Massachusetts Institute of Technology (MIT)] developed the CTSS operating system on their own. CTSS worked to create a communication network out of this shared infrastructure. The technical work of CTSS attempted to overcome the communication bottleneck imposed by the system’s central processor. (…) How did CTSS manage the demands of its multiple users? (…) CTSS relied on the Supervisor program, which managed the overall data flows in the operating system. It remained active at all times, (…) The Supervisor greatly resembles Maxwell’s demon, and it exemplifies the kind of program through which the metaphor is actualized in computing. Where one manages the flows of molecules, the other handles jobs. One works in a closed system, the other in an operating system. Moreover, these similarities are not accidental. Researchers at the project began to refer to programs as demons or daemons in a direct allusion to Maxwell.”
Et voilà, that’s how we got daemons (“The change in spelling from “demon” to “daemon” was intended to avoid some of its older, religious connotations”)… which have lived in computers ever since, designating the programs running in the background and keeping a system in working order.
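The family resemblance survives in any modern daemon-like program. As a minimal sketch (a hypothetical toy, nothing to do with CTSS itself), a Python daemon thread can play the Supervisor’s role of sitting invisibly in the background and handling a flow of jobs submitted by the rest of the program:

```python
import queue
import threading

# A small nod to the Supervisor: a daemon thread that waits in the
# background and services "jobs" from the rest of the program, much like
# the programs MIT's researchers named after Maxwell's demon.
jobs = queue.Queue()
results = []

def supervisor():
    while True:
        job = jobs.get()         # wait at the door for the next job
        results.append(job * 2)  # "handle" it (a stand-in computation)
        jobs.task_done()

# daemon=True: the thread dies with the main program, staying out of sight.
threading.Thread(target=supervisor, daemon=True).start()

for n in (1, 2, 3):
    jobs.put(n)
jobs.join()  # block until the daemon has handled every submitted job

print(results)  # [2, 4, 6]
```

The `daemon=True` flag is the direct descendant of the naming convention: the thread never appears in the program’s foreground logic and is silently discarded when the main program exits.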
Why do I blog this? This kind of circulation is both intriguing and insightful. Tracing such a genealogy highlights how certain connotations are embedded in computer systems. The next step here would be to look at earlier instances of the term “demon”, beyond its etymology, and investigate how they somehow shaped Maxwell’s idea of a supervisor.