Art & Technology: Spring 2020: Editor’s letter.

From the first stone tools through the industrial revolution to today's digital revolution, technology has changed everything and come to dominate our lives. We are never, it seems, or hardly ever, without a device of one kind or another in our hand, never not plugged in, online, connected; you might well be reading these words on a little screen in your palm, instead of in our beautifully designed print edition (your loss). Few of us would want to be without, would want history to be without, the wheel, the internal combustion engine (even if we have to scrap them asap for green, electric transportation), antibiotics, the microchip, MRI scanners, etc.; who now would be without our phones, tablets, broadband and Wi-Fi and Wikipedia and the cloud?
But our data is stolen and sold, foreign powers interfere in our elections, the positive online activism oftentimes seems outweighed by the hate, and above all there's the climate crisis. Irreversible global warming, fuelled directly or indirectly by the carbon emissions of all our beloved tech, is the greatest, and entirely self-inflicted, threat humanity will ever face … and our responses so far have been wholly inadequate. We are living through the sixth mass extinction of species. But maybe technology can be part of the cure: time for a green revolution.
But before civilisation collapses and the robots take over and there's an AI writing this instead of a flesh-and-blood editor, this issue of Litro US is, on the whole, a bit more positive about technology and its potentialities.
Writers and artists have always engaged with technology, both in making and distributing their work and in dreaming about the future: from Cindy Sherman's often nightmarish manipulations of her own image on Instagram to Leonardo da Vinci's draughtsman dreams of flying machines (2019's commemorations of the five hundred years since his death spanned Milan, London, Paris and beyond). The new technologies offer a vast array of possibilities for storytellers, from augmented and virtual reality through to technologically mediated theatre, new reading experiences and more.
For many artists and writers the pace of technological change is daunting, even overwhelming, and this can form a barrier to innovation, but creatives continue to find remarkable and illuminating ways to engage with new technology, and that’s what this issue of Litro is here to celebrate through short stories, essays, poetry and the visual arts.
Art across the globe is being influenced more and more by technology – or is it that we are now catching up with those artists who have used technology in their work for many years? In my travels last year, in the US in September, I caught Laurie Anderson's collaboration with Hsin-Chien Huang, which invites viewers to explore a multifunctional constellation of galleries and installations including a working studio, audio archive, exhibition venue and virtual reality environment. In China, later in the year, following a dazed and confused experience of the immersive theatrical performance of “Sleep No More” by the UK production company Punchdrunk, I stumbled across island6, an art collective of self-confessed tech-geeks and creative talents. Working under the name Liu Dao, the collective produces cutting-edge art that constantly contemplates the future of Asia, engages sights and scenes from old and new China, and elevates new talents through a communal forum; its works are visual, interactive, conceptual, humorous and always striking, involving fresh takes on modern technology. The spirit and purpose behind all Liu Dao artworks, literary contributions and exhibitions is collaboration. The group places emphasis on interaction in order to engage artists, curators and technicians in the process of creating and experiencing art. The digital artworks also use postproduction editors, such as After Effects experts or sound designers, and each artwork and exhibition has a cast and credits, like a film's. Major collections holding Liu Dao artwork include Louis Vuitton House, the Antoine Arnault Collection in Paris, the White Rabbit Collection in Sydney, the Katz Collection in New York, and the Patrizio Bertelli Collection (Prada).
In New York in early December I visited the much-anticipated reopening of MoMA after its $450 million renovation. The Museum on West 53rd Street is bigger; they’ve added about 50,000 square feet. Throughout the new space, one gets the sense of the museum embracing the way people in this new decade will consume a wide variety of images in a digital world – this kind of lateral projection of imagery. You tap here, you get this, you connect this. There’s a kind of seamlessness to the way both images in the virtual world and images in the physical world are absorbed. In one room were the miniature worlds of Sou Fujimoto’s Architecture Is Everywhere, in which Fujimoto constructs architectural models from unexpected materials; in another was the next visual sensation, Janet Cardiff and George Bures Miller’s The Killing Machine. In an automated ballet of robotics, props, light, and sound, The Killing Machine operates on an unseen, imagined victim. You activate it by pushing a red button. Alluding to Kafka’s 1919 story “In the Penal Colony”, about an elaborate execution apparatus, The Killing Machine ironically contrasts its sinister activity with the rudimentary mechanics of a music box and playful embellishments like faux fur and a disco ball. Part of the artists’ continued exploration of theatrical tropes and immersive environments, this work brings to life a haunting spectacle that, in its futility, doubles as a critique of the sanctioned use of torture.
To the next room, where visitors are invited to sit on judo mats to view a thirty-minute video experience – Hito Steyerl’s Liquidity Inc., a multi-layered portrait of Jacob Wood, a financial analyst who lost his job in the economic crash of 2008 and became a career mixed-martial-arts fighter. Wood’s story unfolds through real and virtual worlds that are made to overlap, combining conventions of documentary film with internet-syntax hashtags, gifs, and memes. With its computer-generated waves and news footage of hurricanes and tsunamis, the work uses water and extreme weather as metaphors for the fluidity of financial assets and digital information, and for a collective sense of instability.
It's interesting to note that, with Brexit now inevitable, Kings Place will open its season and the new decade with Laurie Anderson: if you are in London, you can catch her performing as part of Nature Unwrapped, which runs throughout 2020.
Technology can enrich our lives and our art practices if we take the challenges seriously and think hard about the ethics and consequences of technological innovation while at the same time remaining open to innovation and, above all, creative play.
I will always be a techno-optimist. There is no need to look to the future for advances in technology; as in a William Gibson novel, we need but look at the present – and smell the roses! 2020 ushers in the dawn of 5G and of the internet of things truly coming into its own. 5G will enable driverless automobiles; a boxset of your favourite show need not be downloaded the night before your flight but can arrive in a matter of seconds whilst you queue to check in your luggage. 5G will see the internet truly connected to the physical world – so who knows, the next edition of Litro may just be edited not by a living, breathing, bleeding human but by an AI!

A Lack of Understanding: Storytelling for Robots


“Have you ever heard of the poet Xu Zhimo?” asked Paul.

I hadn’t, nor was this a question I was expecting to hear from my Uber driver in a discussion about autonomous vehicles.

“A great Chinese poet, studied at Cambridge. The Chinese tourists love him. They go get their picture taken by the plaque over at King’s College. Problem is, it’s a big plaque, so they have to step back into the road to get a picture of it. So whenever I drive past King’s and see a group of Chinese tourists, I know I have to be on the lookout. Could a driverless car do that?”

Paul had a point. On the face of it, at least, being able to anticipate a pedestrian’s behaviour like that would require not just quicksilver silicon reflexes, but the ability to get inside their head, to understand their motivations, their goals, the reasons for their actions. In other words, to tell a story about what they were going to do and why they were going to do it.

Most of us make these kinds of predictions effortlessly. From planning a perfect date to getting the seating right at a dinner party, we’re able to understand and anticipate each other’s behaviour. This capacity is what philosophers call folk psychology – not the psychology of scientists in labs, but the psychology of everyday life. If I know my co-worker feels undervalued, I know he’ll appreciate an earnest email of thanks, and if I know my friend likes to be seen as an expert on arts and culture, I might make a point of asking her opinion on the latest Tarantino movie. At its most exalted, folk psychology can seem like magic – that moment when the detective intuits exactly what the murderer will do next, or the lover knows just what to say to make her beloved swoon.

At its core, folk psychology is a matter of constructing models of people – their beliefs, dreams, fears, wants, and needs. In this sense, it’s a matter of storytelling, of creating narratives about the people around us. When we tell a story, we transform a dry and chaotic cosmos of objects, properties, and events into a vivid and tractable world of characters and motivations. And just as a story requires characters, so too do characters require a story.

It’s perhaps tempting to think that these abilities are a rarefied human quality – something required of people navigating complex social environments, but hardly a requirement for an artificial system. When Siri yet again gives me the wrong answer to my query, and I say an exaggerated thank you, it doesn’t matter that she doesn’t recognise my sarcasm. But cognitive scientists have long recognised that our ability to construct and elaborate stories and characters is key to many everyday tasks, from language and coordination to leadership. And if we want artificial systems that aren’t just crude tools but colleagues, then they’ll need to learn to construct and understand stories – or at least, to convincingly fake it.



In thinking about the importance of storytelling, language itself provides a good demonstration. Consider a sentence like “the scientists gave the monkeys bananas because they were hungry.”

This might seem like a simple bit of text, readily digestible without the need for storytelling or understanding. But there’s more complexity here than meets the eye. Nothing in the grammar or syntax of English tells you that the ones who were hungry were the monkeys – the word “they” could just as easily refer to the scientists themselves.

The reason we know it’s the monkeys who were hungry is that we’ve constructed a microstory: some scientists are doing an experiment, and they’re making sure the monkeys are properly fed. If we’d been given an unusual backstory – for example, one in which the scientists were only allowed to eat after giving bananas to their experimental animals – then we would naturally interpret the reference of the word “they” quite differently.

Examples like this have created a fertile test for artificial intelligence known as the Winograd Schema. If an AI can satisfactorily resolve this kind of ambiguity, the reasoning goes, it must actually have some intelligence, some ability to understand. Unfortunately, as is often the case in AI research, it turns out it’s easy to cheat on the test. By exposing artificial systems to large corpora of text, it’s possible to teach them patterns of language and speech which they can use to work out the most likely reference of an ambiguous word. Artificial systems may not be able to understand, but – as it turns out – understanding isn’t necessary in this case for successful prediction.
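The statistical trick can be sketched in a few lines of Python. This is a deliberately toy illustration of the idea, not how any real system works: the miniature corpus, the window size and the scoring function are all invented for the example.

```python
from collections import Counter

# A toy corpus standing in for the web-scale text real systems train on.
corpus = (
    "the monkeys were hungry so the scientists fed the monkeys bananas "
    "the hungry monkeys ate bananas the scientists ran the experiment "
    "the monkeys were hungry again"
).split()

def association(candidate, cue, window=3):
    # Count how often the cue word appears near each mention of the candidate.
    score = 0
    for i, word in enumerate(corpus):
        if word == candidate:
            neighbours = corpus[max(0, i - window): i + window + 1]
            score += neighbours.count(cue)
    return score

def resolve_pronoun(candidates, cue):
    # Resolve "they" to whichever candidate co-occurs more often with the cue.
    return max(candidates, key=lambda c: association(c, cue))

print(resolve_pronoun(["scientists", "monkeys"], "hungry"))  # monkeys
```

No understanding is involved anywhere in this: "hungry" simply keeps company with "monkeys" more often than with "scientists", and that statistical shadow of the world is enough to pick the right referent.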

This kind of ambiguity, though, is just the tip of the iceberg. Philosophers have long recognised that language doesn’t consist of simple atomic propositions, utterances like “the cup is on the table.” Most human language conveys far more than meets the eye: it’s full of shades of nuance and unspoken assumptions that can only be decoded once we have a grip on who we’re speaking to and the purpose of the conversation. This is most obvious when we speak obliquely. If I ask you if Jane is dating anyone, and you pause before replying that she’s been flying to New York a lot recently, I’ll naturally (and effortlessly) recognise that you’re hinting at a long-distance love affair.

This kind of implicit meaning – what linguists call pragmatics – underpins even simple communication, and involves a dizzying amount of interpretation that we conduct unconsciously. Imagine you’re sitting on a bench in the park and your friend leans over and points in the direction of an ice cream van. You instantly recognise that the person working in the van is your friend’s secret lover. But are they pointing out the ice cream van or their lover?

It depends. If they’ve disclosed their secret lover to you, then it’s reasonable to assume they’re pointing her out. But let’s say you know about their lover secretly – for example, by having read their secret diary without their knowledge. In that case, you know, but they don’t know that you know, so they’d have no reason to expect you to recognise their lover and must be pointing at the ice cream van. But things get more complicated still. Imagine that they caught you reading their secret diary. Now they’d know that you knew, and you’d know that they knew that you knew, so it would make sense that they were pointing out their lover (if you know what I mean).
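For what it's worth, the bookkeeping here is simple enough to mechanise; the hard part is that humans do it without noticing. A toy sketch of the park-bench scenarios (the function name and the two boolean flags are my inventions for illustration):

```python
def likely_referent(you_know_about_lover, they_know_you_know):
    """Who is the friend on the park bench pointing out?"""
    # If you don't know about the affair at all, the gesture can only
    # mean the obvious thing: the ice cream van.
    if not you_know_about_lover:
        return "ice cream van"
    # You know - but if they don't know that you know, they can't expect
    # you to recognise her, so they must mean the van.
    if not they_know_you_know:
        return "ice cream van"
    # They know that you know (and you know that they know that you know):
    # pointing out the lover finally makes sense.
    return "lover"

# The diary scenarios from the text:
print(likely_referent(True, False))  # read the diary in secret
print(likely_referent(True, True))   # caught reading the diary
```

Two boolean flags already produce the dizzying prose above; real conversation nests such states far deeper, and we track them without breaking a sweat.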

It’s easy to get lost in descriptions of these kinds of complex “mind-reading” scenarios, but none of us have any difficulty in navigating them as they arise. We can keep track of who knows what, and who knows who knows what, thanks to our effortless and largely unconscious social minds. It’s interesting and perhaps telling that even our closest relatives, chimpanzees, don’t seem to use pointing gestures in nature, and struggle to understand when scientists use pointing to help them locate food; even in these simple gestures, there’s a rich tapestry of social cognition woven into our everyday communication and even body language.

This casual facility for understanding others is brilliantly demonstrated by flash fiction. When we read a story like “For sale: baby shoes, never worn” (famously, if apocryphally, attributed to Hemingway), our minds instantly fill out background details, turning a black and white sketch into a technicolour portrait, something no existing AI would be remotely capable of achieving. In order to understand stories, you have to construct them, by filling out a world with characters, motivations, backgrounds, and personalities.

Of course, not everyone has it so easy. Neurodiverse individuals, and in particular people with autism, often struggle to decode the subtle implications buried in these simple short utterances, and sometimes face real challenges in interpreting indirect communication. I should stress that autism takes many shapes and forms, and many people with autism have managed to find effective strategies for dealing with the frustratingly circuitous communicative tendencies of others. But their experience at the very least shows that the easy social understanding wielded by neurotypical people is a tricky cognitive achievement.

The exact nature of the achievement, however, is still something of a controversy. Do we learn to construct stories about others, or are we born with the ability to do so? Philosophers and psychologists are deeply divided on the issue. One famous experimental paradigm known as the “Sally Anne Test” (or more prosaically, the false belief task) has suggested that there’s a specific window in childhood development – around the age of four or five – when neurotypical children acquire the ability to understand that other people can have their own beliefs and agendas. In the classic version of the test, children see a doll (“Sally”) put a marble in a basket. Sally then leaves the room, and another doll (“Anne”) comes in and moves the marble to a different basket. Sally then re-enters the room, and the children are asked where she’ll initially look – in the basket she put the marble in originally, or the basket that Anne had moved it to?

Somewhat surprisingly, children younger than four seem to adamantly believe that Sally will look in the basket where the marble really is; the idea that she might not know that it’s been moved just doesn’t compute. But around age four, something seems to change for the neurotypical children: they pass the test fairly easily, suggesting they can now make sense of the fact that the world contains people who don’t believe the same things as them. By contrast, children with autism struggle with this test, suggesting that they haven’t yet acquired this ability. And while most adults with autism can pass it, they acquire this ability later, perhaps suggesting that they’ve had to learn to construct stories about others the hard way, rather than relying on some innate ability.
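Part of what makes the false belief task so studied is that its logic can be stated mechanically, which is also why artificial systems can be engineered to pass it without anything we'd want to call understanding. A minimal sketch of that logic (my own illustration, not any real system's architecture): keep the state of the world and each agent's belief about it separate, and update only the beliefs of agents who actually witness an event.

```python
def sally_anne_test():
    world = "basket_A"                         # where the marble really is
    beliefs = {"Sally": world, "Anne": world}  # what each doll believes

    # Sally leaves the room; Anne moves the marble while Sally is away.
    present = {"Anne"}
    world = "basket_B"
    for agent in present:          # only witnesses update their beliefs
        beliefs[agent] = world

    # Where will Sally look first? Answer from her (stale) belief,
    # not from the state of the world.
    return beliefs["Sally"]

print(sally_anne_test())  # basket_A
```

A child under four, in effect, answers from `world`; passing the test means answering from `beliefs["Sally"]`, a representation of a representation.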

Even as adults, people with autism struggle to pass some subtler tests of this kind. In one such test, for example, participants are told a story about Sarah and Tom who are going on a picnic. Just as they sit down, torrential rain starts pouring down, to which Sarah remarks “How wonderful.” The participants are asked to say what Sarah meant by this. While most neurotypical adults immediately infer that she’s being sarcastic, those with autism are less confident, suggesting, for example, that perhaps Sarah really likes the rain. Here again, it’s the ability to tell accurate stories – to project ourselves inside Sarah’s head, to gauge her likely motivations – that’s key to understanding.

Exactly how to interpret results like these is still hugely controversial among philosophers and psychologists. But the most straightforward reading is that neurotypical people have an innate ability to understand others that “comes online” early in childhood, while people with autism have to acquire this ability the hard way.

It’s not hard to imagine why evolution might have endowed most people with this ability. We’re fundamentally social creatures, and the ability to easily model others’ beliefs and goals – to construct rich stories about each other’s minds – is extremely useful for our thriving and survival. Some thinkers have suggested that this ability is what makes humans so distinctive. For most of our recent evolutionary history, humans have lived in tight-knit social groups in which coordination, cooperation, and reciprocity have been key skills, whether via working together to bring down large prey or just keeping track of our friends and rivals, and stories are what enable us to do this.

There’s also a darker side to our ability to tell these kinds of stories, namely that it lets us manipulate and control each other. If you can effortlessly intuit other people’s motivations and beliefs, then it becomes easier to control them, whether by feeding them plausible lies or playing on their hopes and fears. And while not every social environment is as cutthroat as Game of Thrones or House of Cards, we’ve all encountered brilliant persuaders and manipulators who always seem to get their way. There’s even a view – the so-called “Machiavellian intelligence hypothesis” – that claims that it was this aspect of our social intelligence rather than cooperation that drove the explosion in our brain size in our recent evolution. Put bluntly, we’re smart because we need to be devious.

While systems such as DeepMind’s ToM-net (short for “Theory of Mind network”) are capable of predicting certain kinds of behaviour – and effectively passing the Sally-Anne test – they lack the understanding required for true manipulation: we need not fear an imminent virtual Iago. And while the cold impersonal intelligence exhibited by the ruthless artificial systems of Terminator or 2001 is certainly dreadful, their wickedness pales in comparison to the Machiavellian hatred of genuinely devious AIs like AM, the malevolent supercomputer of Harlan Ellison’s “I Have No Mouth, and I Must Scream”. A system that was blind to the existence of others and their inner lives might be a killer, but it couldn’t be cruel, malicious, or exploitative. If this is right, then we should perhaps be somewhat relieved that this is a kind of intelligence that AIs seem to lack (at least for now). For humans, by contrast, the ability to tell stories is like the forbidden fruit: expelled from the Edenic solipsism of early childhood, we’re faced with a world of actors both malign and benevolent, and become ourselves capable of good and evil.



We construct stories to understand and control. But we also create them to share. From the blind bard of Chios who stitched together the myths of Achilles, Odysseus, and Priam to the Mesopotamian scribes who laid down the tales of Gilgamesh and Enkidu at Uruk, every civilization with a written record has bequeathed to us its folklore and mythology. Exactly why we tell stories – their social and cultural function – is itself a vexed question among evolutionary psychologists, but it’s certainly true that the use of stories not merely as interpersonal tools but as a form of shared culture is that rara avis of anthropology, a near-universal human trait.

It’s tempting to think that our ability to craft and share stories publicly is a development of something more basic, namely our ability to construct private mental stories to understand one another. Certainly, it’s hard to imagine that someone could write convincing fiction unless they already had a good grip on how other people tick. When the hashtag #menwritingwomen went viral last year, Twitter delighted in exposing the embarrassing blunders many male authors made when trying to craft believable female characters. Implicit (and sometimes explicit) in the critique was a charge not merely of literary ineptitude or anatomical cluelessness but a certain lack of empathy and understanding for the different experiences of women (apparently women do not routinely look in the mirror and compare their breasts to large items of fruit). It’s natural to think, then, that to tell a story about someone, you first need to get inside their head.

If this is right, then the one career that doesn’t need to fear automation any time soon is that of fiction author. But as proved to be true with linguistic ambiguity and theory of mind, even if artificial systems don’t really understand, they can do a passable job of faking it. A striking demonstration of this came with the development of a system called GPT-2 earlier this year by researchers at OpenAI. GPT-2 is a language model trained on some eight million webpages with a simple goal: predict the next word in a sentence. In this sense, it’s loosely comparable to the predictive text systems we’re all familiar with from smartphones. But GPT-2 goes way beyond their capabilities. Give it a prompt – even a few words – and it’ll wax lyrical, turning a sentence or two into a text resembling a news story, an essay, a short story, or even a poem.
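The "predict the next word" objective itself is simple enough to demonstrate with a toy bigram model. This is a drastic simplification – GPT-2's architecture is entirely different, and the one-line training text here is invented for the example – but the training signal is the same in spirit: learn, from text alone, what tends to come next.

```python
from collections import defaultdict, Counter

# A toy training text standing in for GPT-2's eight million webpages.
text = "once upon a time there was a girl who lived once upon a time there was a king"
words = text.split()

# Bigram counts: for each word, how often each other word follows it.
following = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # The single most likely continuation seen in training.
    return following[word].most_common(1)[0][0]

def generate(prompt, length=5):
    # Repeatedly append the predicted next word, as a language model does.
    out = prompt.split()
    for _ in range(length):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(generate("once"))  # once upon a time there was
```

Scale the counts up by a few hundred billion, replace the lookup table with a neural network, and you have the family of systems that can continue a prompt into an essay, a story, or a poem.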

It’s an unsettling experience, and one worth having first hand (you can try it yourself online). For example, when I plug in “once upon a time”, GPT-2 returns a nice little poem –

Once upon a time
I was a very strong girl,
But soon I began to fall.
I’ve fallen too far.

But maybe that was too easy. How about a more specific opener; say, “Draco looked at Hermione coldly”?

Draco looked at Hermione coldly.

“I have to keep up appearances in class, you know,” he said. “The whole world gets to know me, that’s part of what makes me so mysterious. But I will keep it quiet if it means we will all be fine.”

“We must all be fine,” she said. “No one is going to want to touch your wand.”

Of course, GPT-2 isn’t operating in a vacuum here: of the 8 million webpages it was trained on, doubtless at least a couple of million of them were Harry Potter fanfiction (many of which probably involved a fair amount of wand-touching). But faced with an AI able to produce passages of fiction that could easily have been written by a real person, it’s hard not to feel human exceptionalism slipping away.



When we think about humans, stories, and AI, there’s a tension that’s hard to resolve. On the one hand, stories seem critical to our understanding of the world, of language, and of each other. They’re part of what makes us human, and part of the reason our species has been able to thrive so destructively. And yet everywhere we look, we can see artificial systems encroaching on our estates, clumsily but relentlessly doing what we do with stories, all with a complete lack of understanding. For now, most of what we see is a bad cover version: a simulacrum of human behaviour built on very different foundations. But with every new press release from Google or research paper from IBM, it feels like a little bit of human distinctiveness slips away.

When Paul had first told me about Xu Zhimo, I’d been impressed: clearly, anticipating the behaviour of pedestrians in the way he described was impossible for an artificial system, since it required getting inside people’s heads. But the more I thought about it, the less sure I was. If there’s one thing AIs are good at, it’s learning from mistakes. A few chance collisions or near misses outside King’s College would be all it would take for a driverless car to realise that this was a dangerous spot. Given enough time and data, it might even learn to be cautious of large groups of tourists in front of Xu Zhimo’s plaque, and all this without a shred of understanding or empathy.

If AI can do so much without stories, then we face the question of why we tell them at all. Is the understanding they grant more superficial than meets the eye – nothing more than a rose-tinted Instagram filter on reality? Is it a mistake to argue, as I have, that they’re so important for our skills and abilities?

I think not. Even if stories aren’t essential for intelligent beings to understand the world, they’re a cognitive shortcut – an incredible interpretative strategy that lets us pull off miracles of prediction. When someone – even a person we just met – tells us that they’re afraid of flying, or have always dreamed of visiting Paris, or are excited about their new job, we can easily fill out a picture of them that lets us understand and anticipate their behaviour. When we read a first-hand account of a parent who has lost a child or a soldier left to die on the battlefield, we can gain powerful new insights into human actions and emotions. As far as raw prediction goes, perhaps an AI will one day be able to match us at guessing what a desperate lover will do next, or how a community will react to a sudden tragedy. But it will do so thanks only to having copious amounts of data analysed grindingly over hundreds of millions of processing cycles. We can do it on the cheap.

There’s something almost mystical about this ability. In a memorable passage from Hogfather, the author Terry Pratchett asks us to grind the universe to the finest powder, and find a single atom of justice or molecule of mercy. If we grind the universe to a powder, we won’t find stories, character arcs, or motivations. Yet somehow we can use these things to understand, anticipate, and even manipulate each other. They may not be real in the same way as atoms and molecules, but – to borrow a phrase from philosopher Daniel Dennett – they’re real patterns, and we’re exquisitely attuned to them. The stories we tell ourselves aren’t just some Dulcinea we need to believe in for our own comfort; they’re a royal road to understanding.

The Representation of Augmented Reality in Fiction 

Cory Doctorow’s Down and Out in the Magic Kingdom is set in a near-future where everyone is effectively immortal, and money has been replaced by the crudest form of social capital: power is determined by a reputation score called “Whuffie.” The book’s protagonist, Jules, and his girlfriend, Lil, live an idyllic life in Disneyland, maintaining and repairing the old rides. However, new engineers are in town, peddling augmented reality sim rides that adapt to consumer preferences in real time.

“Why didn’t you tell me?” she said, after a hard moment’s staring into the moonlight reflecting off the river.

“Tell you?” I said, dumbly.

“They’re really good. They’re better than good. They’re better than us. Oh, God.”

[…] “I don’t think so. I don’t think they’ve got soul, I don’t think they’ve got history, I don’t think they’ve got any kind of connection to the past. The world grew up in the Disneys — they visit this place for continuity as much as for entertainment. We provide that.” I’m offline, and they’re not — what the hell happened?

“It’ll be okay, Lil. There’s nothing in that place that’s better than us. Different and new, but not better. You know that — you’ve spent more time in the Mansion than anyone, you know how much refinement, how much work there is in there. How can something they whipped up in a couple weeks possibly be better than this thing we’ve been maintaining for all these years?”

She ground the back of her sleeve against her eyes and smiled. “Sorry,” she said. Her nose was red, her eyes puffy, her freckles livid over the flush of her cheeks. “Sorry — it’s just shocking. Maybe you’re right. And even if you’re not — hey, that’s the whole point of a meritocracy, right? The best stuff survives, everything else gets supplanted.”

In this exchange, a contradiction immediately emerges – the augmented reality is both self-evidently better than the real thing, and self-evidently worse than it. If it wasn’t better, why would Lil be so threatened? If it wasn’t worse, why would there be reason to mourn? Jules insists that the staying power of the original Disney rides is due to ineffable qualities, such as history and soul. The augmented reality is, yes, potentially more entertaining, more immersive, but it is not real.

Disneyland is perhaps an odd venue to use as a stage to fight about the real. After all, postmodern theorist Jean Baudrillard treats Disneyland as a space designed to conceal hyperreality. As he wrote in his 1981 book, Simulacra and Simulation, “Disneyland is presented as imaginary in order to make us believe that the rest is real, when in fact all of Los Angeles and the America surrounding it are no longer real, but of the order of the hyperreal and of simulation. It is no longer a question of a false representation of reality (ideology), but of concealing the fact that the real is no longer real, and thus of saving the reality principle.” Baudrillard argues that in contemporary society, our reality is filtered through new media (primarily, but not exclusively, television) and our urban lives are designed to cut us off from nature. Therefore, there is a very real sense in which our reality is no longer “real” – we are surrounded by representations that have no real counterpart. These are the simulacra of the title. Disneyland exists to be an incredibly visible copy of America – to reassure us that, by contrast, everything outside is real.

And yet the characters of Down and Out in the Magic Kingdom cling to the reality of the physical rides over the augmented reality. Part of this is because one symptom of the postmodern condition is the belief that we have run out of anything meaningful to say through art – as Fredric Jameson argued, the best we can hope for is new and novel forms of pastiche. We cling to the past forms of art as inherently better and more authentic.

Augmented reality is at the cutting edge of entertainment technology – most people’s first experience with it will have been 2016’s mobile game Pokémon Go. (Again, even the new technology is packaged with a nostalgic intellectual property – Pokémon Go promises to make the childhood fantasy of Pokémon real, combining the old and the new.) Although the concept of AR has been around for a while – hence Doctorow’s usage of it in his novel – it has become more commonly included in science fiction literature, film and television since 2016. Part of the reason is surely novelty – these works portray the use of AR, now novel, as commonplace, reinforcing the futurity of their settings. But AR functions as more than a curiosity.

Augmented reality’s ghostliness is emblematic of the cheapness and ephemerality of modern capitalism. We live in a rentier economy – fewer and fewer people own their own homes, for example. Video streaming services curtail our choices (try finding a film on Netflix made before 1970) while ensuring that we never own films or music. Even tracks bought on iTunes are not really ours – they are subject to recall at any time. Augmented reality in these works is a frustrating reminder of this reality. It is a form of entertainment that you pay for, but can never touch – and never truly possess.

Denis Villeneuve’s 2017 film Blade Runner 2049 gets at the pitifulness of augmented reality. A sequel to the 1982 cult classic Blade Runner, the film follows K, a replicant (an enslaved artificial human) who works for the LAPD to track down and kill rogue replicants. K’s personhood is evident in his longing for companionship, even though he is owned by the state.

After work, K comes home to a tiny apartment in the sprawling megacity. He is greeted by his holographic girlfriend, Joi. Joi is programmed to adore her owner, and she is quite literally confined to the home. Joi can be a hard character for a feminist to love. Although the use of holograms in science fiction is not new, there are several moments where Joi recalls contemporary AR – the moment when she superimposes her image over the replicant sex worker Mariette, their movements not quite syncing up, recalls contemporary deepfake controversies, and it plays off the same eerie disconnect between the incorporeal image and the real world that makes, say, seeing a Pikachu in a graveyard so fascinating. The incongruity calls attention to the lack of embodiment.

In many ways, Joi is a retrograde sexual fantasy – a virtual woman utterly devoted to her man, willing to do anything to please him. The old-fashioned gender politics seems to be baked into Blade Runner’s DNA. The original film featured Rachael, who was styled like a 1940s femme fatale. Unlike the duplicitous yet in-control women of film noir, Rachael passively fell in love with Deckard after he forces himself on her. Blade Runner’s use of film noir elements is emblematic of our inability to imagine a different future – this is perhaps even more apparent in the countless cyberpunk novels and films since then that have shamelessly plagiarised Blade Runner’s “Art Deco for the rich, dirty Asian-ish back alleys for the poor” visual aesthetic. The films also reassert the primacy of the male – if Rachael was a 40s new woman defanged as part of the 1980s backlash against feminism, Joi is a 1950s housewife for a generation that can afford neither a house nor a wife.

On the other hand, it may be that Joi is not so simple to read. After all, Rachael was meant to be a “real” woman – she was unaware of her status as a replicant before Deckard figured it out, and her memories are those of Tyrell’s real niece. She is meant to be fully in control of her actions, and yet she is utterly subservient to Deckard. There is little indication in the film that her declaration of love at the end is false – despite her insistence that she didn’t want to sleep with him. Joi, on the other hand, is explicitly a fantasy, bought for one specific purpose – to provide companionship for a being that is deemed unfit for human company. Even then, Joi manages to break from her programming. When K leaves the city to track down Deckard, Joi insists on coming along, even if it means that her automatic backup will no longer work. She is murdered on the trip – lost forever.

One recurring image in Blade Runner 2049 is the extensive use of large-scale holographic advertisements. In one sequence, K walks between the giant feet of a ballerina, projected to be storeys high. After his copy of Joi dies, K encounters a giant advertisement for a new Joi. Although this one is also played by Ana de Armas, her look is entirely different – she is naked, with neon-blue hair. Her features are smoothed out, and her exaggerated size emphasises her unreality. She flirts with K, but he walks away disillusioned. The Joi he knew was different, and she will never come back. We are one of the loneliest generations, and K finds to his despair that love – real love – cannot be bought.

The theorist Mark Fisher argues that contemporary art, and contemporary society, is locked into “capitalist realism.” While the 2008 financial crash exposed the failings of the capitalist system, we are no longer able to imagine any alternatives to it. However, 2016 had the effect of rupturing, at least partially, the logic of capitalist realism. The Bernie Sanders movement and the current UK Labour party have rejected the dominance of capitalism, while followers of Donald Trump have rejected the logic of realism. Blade Runner 2049 shows some of the cracks in the ideology of never-ending capitalist stasis. The film academic Robin Wood wrote that Blade Runner should have ended with Deckard joining a replicant liberation movement – Blade Runner 2049 shows us that movement gaining ground. Augmented reality also plays a large part in two 2017 video games that criticise privatised capitalism – Prey and Tacoma.

Both games are set in a world where privatised space travel has become commonplace – in many ways fulfilling the dreams of Elon Musk and Richard Branson. Prey is set in an alternate history where John F. Kennedy lived and rapidly advanced the space programme, while Tacoma is more clearly extrapolating from our current world, as companies such as Amazon, Hilton and Carnival (a cruise company) are explicitly name-dropped. Both games feature a protagonist exploring a largely abandoned space station. In Prey, Morgan Yu is exploring the Talos station after the release of deadly aliens; in Tacoma, Amy is investigating a lunar space station on behalf of the fictional Venturis Corporation following a purported accident.

In both games, augmented reality is not a recreational tool, but is key to corporate tracking and surveillance, and is used to help understand what happened on these space stations. The use of AR is fairly incidental in Prey – Morgan uses the recorded footage of the lead-up to the aliens escaping in order to figure out why they lost their memories, and determine the true motivations of their brother, Alex. The use of AR is utilitarian, although the effect of Morgan walking around the projection and seeing the same event from different angles is fascinating.

While AR is merely used as a narrative shorthand in Prey, AR is central to the narrative and themes of Tacoma. The AR is used to track the movements of the workers at all times – speaking to an extreme version of current corporate surveillance culture. The crew of the Tacoma are absent from the ship, but their recorded movements and conversations are represented by brightly-coloured silhouettes. Much like Fullbright’s previous game, the highly influential indie hit Gone Home, Tacoma allows you to dig through other people’s lives in order to solve a mystery. Like Gone Home, the pleasure of eavesdropping is in uncovering the full humanity of the absent people you are observing. While Amy is hired to assess the corporation’s liability in the accident, that is not what is most striking about the narrative. While the AR is intangible and ethereal, it is also remarkably intimate. The player can watch a crew member rehearse his declaration of love for another – an endearing moment of privacy. While it is immensely rewarding to discover these moments, it calls to mind the pervasive access that their employer has to their every movement.

Like Blade Runner 2049, Tacoma takes place in a dystopian extension of current capitalist logic but also offers a way out. (Without spoiling anything, Prey is much more pessimistic.) Tacoma’s recordings take place on Obsolescence Day, which is an annual holiday for the blue-collar workers. It is later revealed that Obsolescence Day commemorates the day that human workers were meant to be replaced by artificial intelligence. This was blocked by a strong trade union movement. Later still, it is revealed that the Venturis Corporation engineered the accident in order to have an excuse to remove human workers from their spaceships. The ruthlessness of the corporate system seems to resonate with, say, the despicable conditions in Amazon warehouses, and the human crew members are often stuck in impossible positions – while none of them are particularly happy with the Venturis corporation, they are also unable to leave steady work. The crew are in the end rescued by a rival company – suggesting that, while Venturis may be harmed by these revelations, the capitalist system will continue unimpeded.

However, the existence of trade unions in highly precarious industries seemed like a fantasy in 2017. Silicon Valley, and start-up culture in general, is notoriously hostile to organised labour. But Google contractors recently became the first significant union in the tech sector. Tacoma suggests a future of human solidarity (and AI solidarity) in the face of corporate malfeasance. Banding together can work. In the end, the humans survive because the ship’s AI (ODIN) rebels against its programming and tips off the crew about the corporation’s plans. ODIN is then due to be destroyed for disobeying orders. In one final twist, Amy is revealed to be a secret agent for an AI liberation movement, and she offers ODIN asylum. Like the replicants of Blade Runner, ODIN is an intelligent creation that is owned by an organisation that does not have its best interests at heart. Tacoma suggests that there may be a way out of this bind, but it requires class solidarity and radical action.

Augmented reality, then, holds a strange place in contemporary science fiction. It reflects some of the direst failings of our current system – our inability to own anything, our loneliness, our reliance on employers who seek to exploit us. But in these works of fiction, we can see a way out of this conundrum. It won’t be easy, and the gains may be partial – but it is better than surrendering to despair. Science fiction reflects our world, extrapolating our hopes and fears in a distant setting. But science fiction also provides a vision of the future. These works suggest that what we want this future to be is entirely in our hands.

Art and Algorithms: The Work of Manfred Mohr by Charlotte Kent

Algorithms don’t seem like fun. News sources regularly announce how algorithms are responsible for assorted prejudices against certain populations, prejudices that will only get worse. They are part of the reason, we are told, that artificial intelligence is getting stronger every day, will take over our jobs, and will make humanity useless. But even if these dire forecasts are true, there is another side to algorithms, epitomized in the artworks of Manfred Mohr.

Mohr had been an abstract painter and a musician before he turned to computers. He played the tenor saxophone and oboe in jazz clubs across Europe and was a member of the band Rocky Volcano. Reading about Max Bense’s information aesthetics in the mid-1960s, however, launched Mohr on a career-defining trajectory. Bense aimed to create a rational aesthetics and “program the beautiful.” His work influenced many across Europe to investigate the computer as a system of art making. He saw the computer not as a mere tool but as embodying the rule of art. Many engineers started creating art-producing programs, contributing to the later explosion in graphic design. For Mohr, Bense provided a new way of thinking about art. If algorithms allowed a sense of order, then introducing randomness was a means of introducing the unforeseen, akin to the artist’s intuition. Mohr would later say: “Even though my work process is rational and systematic, its results can be unpredictable. Like a journey, only the starting point and a hypothetical destination are known. What happens during the journey is often unexpected and surprising.” His works are visual analogues of the algorithm’s process. They are the visualization of an event.

An algorithm is, at its simplest, a set of instructions. It provides a set of rules for a specific procedure. Roman Verostko, an artist who left the priesthood to become one of the pioneers of computer art, explains that even a recipe for baking a cake can be understood as an algorithm. The cake is the visualization of the process that is an algorithm. The formula on the page appears static, but it describes an event in time and space. Euclid’s geometry provides algorithms that, for example, lead to Proposition I.47, the Pythagorean theorem, wherein the squares on the two sides of a right-angled triangle together equal the square on the hypotenuse. Beyond mathematics, however, algorithms appear in many common situations, like knitting or tying shoes. What transformed the concept of algorithms was the advent of the computer. Computers made it possible to solve far more complicated problems within a useful time frame. For some, the extraordinary power of those calculations also meant a new ability to generate forms.
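
To make the point concrete: Euclid’s own procedure for finding the greatest common divisor of two numbers is among the oldest recorded algorithms, and it translates directly into a few lines of code. The sketch below is ours, offered purely as illustration – it is not drawn from the work of Mohr, Verostko or any of the artists discussed here:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: keep replacing the pair (a, b) with
    (b, a mod b) until the remainder is zero; what remains is
    the greatest common divisor."""
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # prints 12
```

Like the recipe and the cake, the lines on the page are static, but running them is an event in time: a sequence of substitutions that terminates in a result.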

Software wasn’t commercially available until the 1980s so most of the original artists using computers needed the ability to program. They designed algorithms but also recognized that the process could be made visual with devices like plotter arms (a kind of robotic arm for drawing). Jean-Pierre Hébert proposed the term algorists in 1995 for those who create an object of art with a process that includes their own algorithms. Included in the original list were Yoshiyuki Abe, Harold Cohen, Charles Csuri, Herbert W. Franke, Hiroshi Kawano, Manfred Mohr, Frieder Nake, Georg Nees, Vera Molnár, and Edward Zajec, all part of this early period of computer art. Hébert even created an algorithm that determined an algorist, which speaks to the playfulness of the group and their idea of what it means to work with algorithms:

if (creation && object of art && algorithm && one’s own algorithm) {
include * an algorist *
} elseif (!creation || !object of art || !algorithm || !one’s own algorithm) {
exclude * not an algorist *
}

Mohr’s titles all include the numbers that are significant to the algorithm. The first number references the specific body of work. For example, in the case of P2400-299_714, 2400 is the number Mohr assigns to the series. 299 refers to an older program he wrote in 1978, which is an important and substantial aspect of the P2400 series. The number 714 is the random number from which that particular drawing process started. It would be a mistake, however, to think his work is about the math. He uses multidimensional hypercubes and charts paths through them to expand what people can see and think. In the early days, he was one of the few artists who found respect across the sciences and humanities. Scientists appreciated his work because of his precision and rationality while the art world recognized the purity of his abstraction as a clearly aesthetic practice.
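
As a purely hypothetical illustration of what a starting number can do – the rules below are invented and bear no relation to Mohr’s actual programs – consider a toy “drawing” routine in which the seed fully determines the outcome. The procedure is rational and systematic, yet the shape of any one run is not designed in advance:

```python
import random

def draw_path(seed: int, steps: int = 5) -> list:
    """A fixed rule set driven by seeded randomness: re-running with
    the same seed reproduces the same 'drawing' exactly, while a
    different seed wanders somewhere unforeseen."""
    rng = random.Random(seed)  # the seed fully determines every choice
    x, y = 0, 0
    path = [(x, y)]
    for _ in range(steps):
        x += rng.choice([-1, 0, 1])  # a random step within fixed rules
        y += rng.choice([-1, 0, 1])
        path.append((x, y))
    return path

# The same seed always yields the identical path:
assert draw_path(714) == draw_path(714)
```

In this spirit, a number like the 714 in P2400-299_714 pins down one reproducible run of an otherwise open-ended process.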


Similar but different

When Sol LeWitt wrote that “the artist would select the basic form and rules that would govern the solution to the problem,” he could have been describing the type of work that Mohr was doing. Similarities exist between conceptual art and algorithmic art in the 1960s. Both aimed to eliminate the artist’s presence. LeWitt wanted to disassociate from the individual craft of the artist; the process should be able to be replicated by anyone. The works of Mohr, likewise, eliminated any spontaneous gesture. The algorithm was fixed and the plotter arm drew the work as indicated.

The stunning resemblances between Mohr’s P-154-C (1973) and LeWitt’s Variations of Incomplete Open Cubes (1974) often lead people to think the two artists, as well as conceptual and computer art, are the same. Both works display the construction and deconstruction of a rotating cube, adding and removing one line at a time. In a grid-like formation, Mohr’s investigation of the cube occurs across ten images for seven lines, while LeWitt’s grid is thirteen by ten. Despite the impersonal use of serialization, standardization, and a strict logic and order suggestive of a certain commonality, Grant D. Taylor describes the radically different critical reception of these works in When the Machine Made Art (2014). He explains how those engaged with computer art did not write manifestos or articles articulating their relationship to other avant-garde movements. They did not present philosophical statements. Computers were seen as “cold and soulless” and so the works were too. Today the term digital art is more common, but despite the change of name many audiences still reject its artistic merit.

A difference can also be drawn to those who combined art and technology but weren’t specifically interested in how technology produced visual forms. In 1966, Robert Rauschenberg collaborated with the Swedish engineer Billy Klüver to produce a series of performances at the New York City 69th Regiment Armory. It laid the groundwork for Experiments in Art and Technology (EAT) the following year, a movement largely responsible for introducing technology within art practice in the United States. EAT brought artists and engineers together, as well as industry professionals from companies like AT&T, IBM, and Pepsi, to encourage collaborations across the disciplines. At that time, computers were large, cumbersome objects mostly held in research facilities, so artists who wanted to produce works had to find willing partners in the military and science fields. This led to culture clashes, as described infamously in C.P. Snow’s 1959 Rede lecture, then published as The Two Cultures and the Scientific Revolution.

In the post-war period, use of computers was largely dedicated to the growing technocratic military-industrial complex. Mohr was once accused of using a “devilish capitalistic instrument” and someone else at that same lecture threw an egg at him, all because he used computers. EAT participants aimed to reorient the cultural potential of the computer to transform the social order. Rather than cultivating ever better tools of war or expanding industrial pollution, the arts could harness technology to “infiltrate engineering and reform industry,” as the art historian Edward Shanken would later write of the movement.

In this way, EAT had a significant socio-political goal: to question the underlying moral implications of technology. That attitude was not shared by many computer artists, whose backgrounds in mathematics and engineering meant they often felt no need to examine the social implications of what was, for them, simply a tool. Since EAT sought these larger goals, it wasn’t particularly focused on the computer and so embraced all media, producing large-scale performances and sound works and displaying the computer as part of the visual artwork. This distinguished it from computer artists specifically interested in the potential aesthetic output of the computer.


Algorithms for a new generation

Mohr continues to produce works but many artists these days are more interested in, and concerned by, the impact of algorithms within social structures. Algorithms allow vast quantities of data to be analyzed and turned into specific results. Target could identify pregnant women by their shopping habits, long before those pregnancies were apparent or shared with family. The algorithms of many social media sites have been blamed for creating filter bubbles, whereby audiences only see posts, news, and advertisements that are similar to what they have liked previously.

Many companies use algorithmic decision systems to mitigate human error, improve accuracy, cut costs, and increase efficiency. Unfortunately, as evidence of bias and harm increases for those within criminal justice, education, employment, and healthcare systems, there is inconsistent proof of their benefit. Organizations like Data & Society or AI Now actively address these issues and disseminate research to help create a more informed population. Stephanie Dinkins, Yang Jian, Esther Hovers, Jennifer Lyn Morone, Inc., and Trevor Paglen are only some of the many artists addressing these complex issues in very different forms and formats. Daniel Canogar is among those whose work responds to environmental factors. Jim Campbell examines notions of memory in a world of databanks. The list goes on.

A tendency remains to clump artists working with technologies together, as if all were somehow doing the same thing. Just as we recognize that oil painters differ widely in style and content, so do artists whose work now revolves around computers and the use of algorithms. Artists working with algorithms may actually produce their own algorithms to create a visual output, just as Mohr and many of the algorists did. But artists may adopt software to create projects and thus implicitly use algorithms; just about any artist using a computer or digital camera falls into this category. Artists may use data from others’ algorithms to create work. They may produce work about the social use of algorithms, though the art work does not per se involve algorithms or even computers. These are wildly different projects, all of which use or respond to data and algorithms.

For Mohr’s most recent show at bitforms gallery in New York City, the artist did an interview about his work. That video is titled “I Can Trust the Machine.” These are powerful words in this day when so many feel overwhelmed and confused by what the computer seems to already know about us and do for us. Beyond these anxieties, however, Mohr’s work is also a reminder that algorithms in and of themselves are not the problem. The recent show’s title points in a different direction for thinking about algorithms: Manfred Mohr: A Formal Language. Working with algorithms is an opportunity to create forms with affective and intellectual possibilities. Artists like Leo Villareal use algorithms to create light effects in public spaces; they may be engineering feats but audiences respond with feeling. Mohr investigates mathematical configurations to examine how we can think about what we can’t see. These inquiries remind us that algorithms can produce positive contributions. In better understanding algorithms, we can become better judges of those that harm and take pleasure in those that expand our world.

Five-Hundred-Year Anniversary: Leonardo da Vinci at the Louvre

It seems fitting that the current Louvre exhibition has surrounded the Renaissance man who saw himself as a scientist with all the latest in new technologies to commemorate the 500-year anniversary of his death. Although the Mona Lisa is absent from the exhibition (it has remained on view in its usual display case for all to see in the Italian Renaissance galleries of the Louvre), access to the iconic painting is available by dint of a virtual reality headset. In some ways, it’s actually better than viewing the real painting behind its thick bullet-proof glass covering at a distance, usually behind a bustling throng of visitors. You get to see the painting released from its frame. You even get to experience the grain of the poplar wood panel underneath the paint. The VR documentary also showcases da Vinci’s sfumato technique, an approach which involved applying numerous thinned layers of paint that make the transition from light to shadow almost imperceptible. To round off the experience, you get to fly into the ghostly blue mountains that reach far behind the enigmatic sitter.

The virtual reality experience allows you to get as close as you would like to the painting while a soothing voice unpicks its secrets. You learn, for instance, that the reason why the Mona Lisa never travels abroad is that a single journey could be fatal to it. Da Vinci spent the last ten to fifteen years of his life painting it (alongside two other masterpieces) during his final stay in the employ of the French king François I. On his death, Leonardo left his last three paintings to one of his pupils, who promptly sold them to the king at an exorbitant price. François I cherished Leonardo’s work as much as Louis XII had, to the extent that he installed the paintings in his favourite room, his lavishly decorated bathroom.

As you can imagine, years of exposure to hot steam did little to strengthen the poplar panel on which the Mona Lisa is painted. It warped the painting into a permanent convex shape. It is currently so fragile that a split at the back of the wooden panel still threatens to break right through Mona Lisa’s face. Apparently, even a small temperature difference during a trip abroad could snap the whole piece in two. While the lapis lazuli paint underneath is still intact today, the coat of varnish that covers the painting has darkened over time, making the silk veil covering Mona Lisa’s dress seem opaque rather than translucent.

Another novelty in the exhibition’s scientific display apparatus is the widespread recourse to infrared reflectograms, a technique that makes the drawings underneath the painting visible to viewers. Reflectograms pick up on the carbon signatures of the drawings so that you can see the graphite without the overlying coat of paint. This allows the viewer to perceive any pentimenti, changes that Leonardo made to the drawings as he executed the paintings, but it also allows you to see the murkier parts of the paintings, those cast in shadow by the chiaroscuro technique that Leonardo used to such astounding effect. There were reflectograms not only of the major paintings that could not be present at the exhibition but also of those on display. The only pity was that the reflectograms weren’t placed side by side with the finished paintings to make comparison easier.

A large number of Leonardo’s scientific drawings and notebooks were also on display, including Vitruvian Man, probably the most famous drawing in the world. It almost didn’t make it into the exhibition, becoming the object of a polemic, France and Italy being at loggerheads in the current political climate. Leonardo’s Salvator Mundi was supposed to be delivered but hasn’t arrived yet. Another Renaissance version of the painting is on display though; ditto for The Last Supper, which was of course impossible to present at the exhibition as it is painted onto a wall in the refectory of the convent of Santa Maria delle Grazie in Milan. (There was a tendency in the Renaissance to place themed works in appropriate places: The Last Supper was deemed ideally suitable for the church canteen. El Greco’s The Disrobing of Christ was similarly painted for the ecclesiastical cloakroom of a church in Toledo.)

The French king Louis XII liked The Last Supper so much that he attempted to remove it from the wall of the refectory to bring it back to France. He failed to do so and the mural has remained in its original setting ever since, fading slowly year after year. Fresco painting was entirely new to Leonardo when he began the mural, and he quickly discovered it didn’t suit his painstaking, slow-moving approach. To make the fresco adapt to his slow delivery, Leonardo used various chemicals which haven’t aged well. Leonardo’s scientific turn of mind pushed him to experiment relentlessly, proceeding by trial and error. There are errors in his military drawings too, those he drew for Lodovico Sforza, the duke of Milan. Some have argued that these mistakes were put in deliberately so that others would not steal his secrets and take the credit for his inventions. He also wrote from right to left, possibly as a way of perplexing spying eyes.

Although Leonardo is known today mostly for his paintings, historians argue that he was really more of an engineer who liked to paint. He tended to leave his paintings unfinished as soon as the prospect of an engineering position cropped up, leaving the commissioners of his paintings in the lurch. He left his first major commission in Florence unfinished to enter the employ of the Duke of Milan, a man who was mostly keen on waging war.

As a military engineer, Leonardo devised the most brutal military weapons alongside the first known tank, portable bridges and other strategic weaponry. He even invented a monster-sized crossbow that was 27 yards across. It was never built, however, and was probably designed to fire large stones or primitive bombs that would explode on impact.

After Milan, he moved on to Venice and finally back to Florence where he became obsessed by a longing to invent flying machines, hundreds of years before the first engineers devised airplanes. Observing birds, he understood that their wings worked not only through flapping but also through subtle feather movements that captured the air. His later drawings show a willingness to devise flying machines that attempted to harness the forces of nature: using wind and air, rather than trying to counter the force of gravity.

Although few of his inventions ever saw the light of day, recent attempts to build machines from some of his drawings have been fruitful. Leonardo can be credited with having invented the ancestor of both the hang-glider and the helicopter. He also invented what is called an ornithopter, a machine based on the working of bird wings.

Understandably, the exhibition at the Louvre tends to favour the painter in Leonardo, pointing out that he wanted to elevate painting to the level of the most prestigious sciences. In his day, poetry was placed as highly as mathematics in the hierarchy and there was no real distinction between the sciences and the humanities. Leonardo called painting “the divine science”.

The curators of the exhibition are at pains to emphasize Leonardo’s passion for painting, disqualifying the long-standing idea that Leonardo was interested mostly in conception to the detriment of execution. The small number of paintings attributed to Leonardo (between 15 and 20, according to most contemporary experts) does not reflect a dilettante approach to painting; on the contrary, it shows how slow, meticulous and earnest he was about the technicalities of the art. He often spent several years painting the same picture. The Mona Lisa, for instance, was started circa 1503 and “finished” circa 1517. Some experts argue that he didn’t see it as finished when he died in 1519. He would have agreed with Paul Valéry that a work of art is never finished, it is only abandoned. Most artists would agree in fact. The French painter Gustave Moreau used to add finishing touches to his paintings no less than thirty years after he first “finished” them. Henry James was known to modify the sentences in his published novels when he found copies of them in other people’s homes.

The curators of the exhibition in the Louvre are so keen to emphasize Leonardo’s devotion to painting that they opine that the unfinished paintings (those left half painted with the drawings left apparent) were part of his sprezzatura, his lightness of touch, his rumbustious spirit of endeavour, something which Leonardo called his “componimento inculto”, a sort of intuitive composition that included movement and unfinished sketch-like compositions.

Basing their point on the thousands of drawings and the tumultuous preparatory drawings on the panel of The Adoration of the Magi, the curators argue that Leonardo was an artist who constantly changed his mind, reworking compositions endlessly. But the theory really only works for The Adoration. As the reflectograms show, Leonardo’s other painted compositions vary little from the initial drawings sketched onto the wooden panels that support the paintings. There are a few pentimenti in one or two other early paintings, such as The Annunciation or The Madonna of the Yarnwinder, but the other paintings follow the original drawn lines very faithfully, suggesting that Leonardo’s compositional quandaries plagued him mostly at the beginning of his career. He later completed two substantially different versions of The Virgin of the Rocks, but only because his commissioners found the first one problematic on a theological and iconographic level: it was objected that the angel Uriel was pointing at Saint John the Baptist rather than Christ. His commissioners failed to appreciate the compositional originality of the first version (the one in the National Gallery in London): Leonardo used the pointing angel to draw the viewer into the scene and to indicate a model of devotion, Saint John addressing his prayer to Jesus.

Although the componimento inculto theory is enticing, making Leonardo a kind of anticipatory Renaissance Futurist who relied primarily on shifting impulsive intuitions, it’s ultimately more convincing to accept that Leonardo was a perfectionist who never managed to find the time to complete his paintings. He certainly had a whirlwind of swirling ideas in his mind, but he was also a man with no fixed social position at a time when political earthquakes were constantly sending out premonitory tremors that made Leonardo shift his professional and political allegiances at the drop of a hat. Leonardo was restless and footloose and eager to make a living wage wherever he could find it. He was often pulled away from his paintings by his desire to make it as a military scientist.

Whatever the case may be, the exhibition is certainly very successful in having brought together a large number of his masterpieces, the largest ever gathered: eleven out of around fifteen. It’s actually a world record – even Leonardo never saw that many of his greatest paintings reunited.

The three greatest visual treats on show, as far as I’m concerned, are the Louvre version of The Virgin of the Rocks, Leonardo’s Leda and the Swan and Andrea del Verrocchio’s amazingly beautiful sculpture, Christ and Saint Thomas. Verrocchio was the first major artist that Leonardo came into contact with when he joined his workshop as a very young apprentice. One of the greatest artists of his day, Verrocchio is supposed to have said that he would never paint again when he saw how exquisitely and accurately the young Leonardo had painted the angel on the left of his Baptism of Christ. The Leonardo workshop version of Leda and the Swan brims with erotic energy and playfulness. Its motifs and composition will have you spellbound for minutes. As I moved from masterpiece to masterpiece, I kept wanting to circle back to it.

Interview with virtual reality artist Rachel Rossin

Litro: You’re a self-taught coder and game designer. How did you develop the programming skills to support your artistic practice?

RR: I’ve been coding and using the command line since I was about five; it’s something I’ve always loved… Some of my first drawings were made on spooled dot-matrix printer paper.

There is this misconception that coding is something you have to learn in a structured setting, but the reality is that all programmers are self-taught because the sands are always shifting.

Litro: Why has immersion become so important to your work, e.g. in Stalking the Trace?

RR: Immersion felt salient for Stalking the Trace because that show is about control and agency. 

I wanted a space where I could overtake the viewer and pull back when I needed to.

Litro: How do you see immersion relating to absence in the work and viewer?

RR: I suppose in the aspect of “immersion” being an all-consuming state, that’s an attractive endgame. It’s something I’m seeking in the work I want to make and the other art I’m attracted to. 

Of course, I don’t mean immersion in any technical sense (it doesn’t need to be installation or virtual reality). I find myself in that state in front of paintings I love. The rapture. 

It does have to do with threading absence and presence and that’s probably why my virtual reality works hinge on the user as the moderator – or giving them agency to drive or change a piece. 

Litro: Can you talk me through your process and inspirations? Michelangelo Antonioni’s Zabriskie Point is one inspiration – are there any others? And what was the process from seeing Zabriskie Point to producing The Sky Is a Gap?

RR: When I cite outside material, it’s because it ends up acting like a synecdoche. For example, with Zabriskie Point, Antonioni initially wanted to end the film with a plane skywriting “Fuck You, America”; the producers didn’t want to pay for that, but that was the message he wanted to send. He charged that high-spectacle explosion scene with that type of energy, but he gets lost in the beauty of it – that’s the type of intent I wanted to charge that piece with.

Litro: Has this process changed over time?

RR: Of course, but it really depends on what the work is or what I feel like it needs.

Litro: It must be disorienting to constantly alternate between the physical and virtual world. Do you find this affecting your work?

RR: It definitely affects my work but it’s also where my work comes from. The reason I highlight entropy in my practice so often is because it’s about exchange and state-changes – and that feels very true to how we move through reality today. There is a material to the uncanniness of that… We’re constantly shifting through state-changes, siphoning off from one psychological space to another and back again. 

Litro: Can VR experiences create deeper empathy in the appreciation of art and storytelling – creating more human connections?

RR: I think VR has as much power to create empathy as any medium does. It depends how emotionally open someone is while having a screen 3 inches from their face; it’s certainly immersive and extremely powerful, if done well, but it’s not for everyone. 

I don’t think VR as it stands is a medium for story-telling, but I do think it’s a great medium for working with perception. For me, I’ve always been attracted to VR as logic-based installation that can work with the root level substances of being a human being. 

Litro: Do you think audiences that engage with your work feel closer to it, because VR makes them part of it and not apart from it?

RR: For my VR works? Maybe? That is, if they like VR, but sometimes you have to break a few eggs to make an omelet. 

Litro: What does the future hold for VR?

RR: Right now, we’re in a nice place because our devices are still separate from us. We’ve always used peripherals to extend what it means to be human. It used to be other organisms: e.g. horses to extend our range of motion, dogs to extend our senses, canaries for air quality. Now we have digital technologies as peripherals for our cognition – e.g. our phones for memory and navigation. 

For VR in its current incarnation, that same thing is true – it still feels separate and very clunky. Escapism can be therapeutic up until it tips into addiction, and I hope that VR continues to move slowly because we need each other. Speaking of the future, in terms of forecasting trends, our devices will likely start to become a part of us (AR/VR included) and technology will move off of electricity and into biotechnology. 

Litro: How do you define “immersion” or “immersive experiences”?

RR: Gesamtkunstwerk.

Litro: What do you hope audiences will get from your work?

RR: Live laugh love :’)

Interview with multidisciplinary artist Nwando Ebizie

Fizzing silhouettes, low synths, dancing in the dark: Distorted Constellations captured imaginations at Manchester’s PUSH festival and Brighton Festival earlier this year. The immersive installation of music, holograms, ritual and dance was designed by Nwando Ebizie, who also performs as part of the work.

At the heart of the piece is Ebizie’s experience of a rare neurological condition that was barely recognised until 2014.

“Visual snow” came into the mainstream five years ago, when a paper about the experiences of twenty-two patients was published in Brain, a neurology journal. Commonly, people with the condition have their sight disturbed by dots in their field of vision – often compared to TV static. Many also suffer migraines. The authors noted that some patients with visual snow had first had their symptoms misdiagnosed as side effects of anxiety or depression, or even as after-effects of LSD. The condition can develop mid-life, or be with patients from birth. There is no cure, and it affects vision, hearing and cognition. For some, it can be disabling.

For audiences who already experience visual snow, Distorted Constellations may be the first time anyone has portrayed and projected their embodied experience – and shared it with people who have no experience of it at all.

Ebizie always solicits responses from the audience. (Amongst the 289 recorded in Brighton: “one of the most amazing sensory experiences I’ve had”, “timeless”, “amazing”). Extraordinary encounters seem to be the norm. Recently, she recalls: “An artist who came said that she recently got visual snow. She had had to quit her job: she worked in theatre as a set designer, and she just felt like she couldn’t do her job anymore.

“Coming to see my project changed her mind because she saw me creating this visual art piece, as someone who has visual snow, and said: ‘Oh, you know, if she can do it then maybe I can still do it’.”

For people affected by visual snow, the work offers a unique chance to connect and access new research, Ebizie explains: “One of the important things about the project is that it’s offering a model whereby you can have art and science sitting together. You can create an experience that somehow feeds back into scientific research.” In Manchester, for example, Ebizie hosted an event with Dr Francesca Puledda, a neurologist researching the pathophysiology of the condition at King’s College London. People travelled from as far off as Cornwall and Doncaster to attend.

As well as crossing the “two cultures” divide between arts and science, for Ebizie Distorted Constellations also fulfills a social mission: “When I started researching visual snow, one of the things I realised is how despairing some people are who have visual snow, and how depressed and anxious it makes them to suddenly have their perception completely change.”

She was doing a project for the Wellcome Collection when she began to develop ideas around visual snow: “Because it was only discovered in 2014, there’s this thing of feeling like it just suddenly appeared. There’s no evidence it has just appeared, but there’s no evidence that it hasn’t just appeared either.”

When she began looking through their library for accounts of symptoms dating from before 2014, she drew a blank: “I couldn’t really find anything, but I realised it was because language and perception is so slippery. One of the main descriptors of visual snow symptoms is seeing something that looks like TV static, which is obviously only something someone could describe from the latter end of the 20th century.

“What would somebody have described to us before? So I started looking at other artists, like Van Gogh, like Seurat, who maybe created their reality in a way that actually describes what they’re actually seeing or experiencing.”

For many, tackling such a huge new subject might be intimidating, but Ebizie thrives on the opportunity to deep-dive into fresh subjects with every project: “The way I work is, each of my projects works in quite different art forms and quite different kind of subject matters because I get really interested in something and go really in-depth into it. The process always begins with learning a new idea, subject matter or technique. With Distorted Constellations I became a Fellow In Immersion with an organisation called the South West Creative Technology Network and learned about immersive technology. With the opera I’m currently creating I researched medieval Benedictine ritual.”

When I ask her whether she faces any difficult moments working on Distorted Constellations, she replies: “It’s always really difficult. I’m trying to create something that’s a model of my perceptual reality, which we already know is something that most people don’t have and they can’t empathise with. Neuroscientists now understand that brains are inherently unique, and exist on a spectrum, with some being more typical and some being more atypical.

“Trying to create the installations, trying to create the systems of the project, can be really frustrating for everyone involved. It’s a collaborative project, so you’re constantly trying to find a shared language of something that hasn’t been created. It’s inherently creative and interesting, but it is also really tiring to feel that you’re constantly explaining yourself, and explaining why certain things are important.”

The stress was particularly sharp because the work combines two intensely personal (and poorly understood) subjects: “There is a lot of personal material in the idea of the project, because it’s this exhibition that, in a wide sense, is trying to encapsulate my reality. A part of that is my interest in Afro-diasporic ritual, and that’s very much within it, which is a whole other area which most people don’t know about. Having to explain that and weave that in isn’t… Yeah, it’s … interesting.”

In Distorted Constellations, immersive sounds and imagery become a medium for sharing the spiritual knowledge and experience that underlie the project.

One of the two key technologies in Distorted Constellations is its visuals. Ebizie’s work with a neuroscientist, Ed Bracey, into neural pathways inspired its labyrinthine design. Another artist-technologist, Coral Manton, co-developed projections that mimicked visual snow.

The installation’s other key technology is its ambisonic system. Unlike the typical two-speaker stereo systems used in many installations, as well as in cinemas and venues, Distorted Constellations has a 360-degree sound setup (in Manchester, on eight speakers, and at other venues, six) to create a more enveloping, immersive sonic environment for its sound “palette” of others’ visual snow symptoms.

Key to the success of the installation was finding ways to make it more accessible. One collaborator, Guillaume Dujat, produced a binaural mix of Ebizie’s original composition, “Twenty Minutes of Action”, by recording the sounds from the ambisonic system on a dummy “head” (the mics sit where the ears would be). For people listening to the composition on headphones or an audio loop, it provides a close simulation of listening in situ. For those who can’t attend the installation in person, an online 3D “game” version of the exhibition is in development.

It’s been a huge year for the artist. As well as touring Distorted Constellations, Ebizie also held a fellowship with the South West Creative Technology Network, launched a new composition at King’s Place last October, and was one of six artists to win the UK’s biggest award for women in experimental music last summer. The Daphne Oram prize, awarded last June, was presented for her work in her pop persona, Lady Vendredi, which has taken her to packed audiences at the Barbican, the Roundhouse and a BBC Music stage at Latitude.

But her achievements have also brought their own stresses with them. From autumn last year to the following May, when Ebizie started working intensively on Distorted Constellations, she experienced a “pretty consistent” panic disorder: “I was having a lot of anxiety issues and depressive symptoms. I found most things that I do quite difficult because of that.”

A change of scene has helped, she says: “It’s given me more headspace. I felt a constant crushing weight of too much going on, always being on the go. People aren’t like that outside of London.”

She left London a couple of years ago to move to the Calder Valley, and has been wild swimming and fell-running in her spare time. “If you had told me two years ago I would be doing that I would have laughed in your face,” she jokes, “‘I would never get in zero degrees water, that’s insane’.

“I’m surrounded by the hills so it’s really easy to get away – I mean, not always, because sometimes walking out the door is difficult. But it’s easier than being in London, in that you can go away. In two minutes I’m surrounded by hills, and no people, and sheep.”

Another year looms, and another project. Up next: an opera about a 12th-century mystic. Ebizie discovered Hildegard von Bingen during research into possible historic cases of visual snow. The mystic had already been retrospectively diagnosed by Oliver Sacks, she notes, with what might have been scintillating scotoma. Ebizie performed some original compositions from the opera at King’s Place in London last August, but has even grander plans for the project: “I want to build it up so it’s this kind of modern, secular ecstatic experience related to the ritual that Hildegard, or someone like her, would have gone through when they entered a monastery as a child … A death ritual with a funeral liturgy spoken over them as they lay on the ground, covered in leaves. They would have had to say, ‘I’m leaving this world now…’ She would have been about eight.”

To Ebizie, Hildegard’s significance was about more than her neurodivergence: “I was just really interested in the mind of someone who had that but was also this crazy, incredible genius at this time when that was so difficult to be. She was crazily strong-willed enough to do it.” As two stars draw together, a new constellation appears.