Mackenzie Jorgensen interviews Eli Lee (part two)

Mackenzie Jorgensen is a Computer Science doctoral researcher working on the social and ethical implications of Artificial Intelligence. We invited Mackenzie to chat with novelist Eli Lee about her debut, A Strange and Brilliant Light (Jo Fletcher, 2021), and representations of AI and automation in speculative fiction. This is part two. Part one can be read here.

A Strange and Brilliant Light, by Eli Lee

I wanted to ask about Janetta and her research into AI and emotion. There’s been a lot of research done into emotion detection, and a lot of critique. For example, what would it mean for a machine to ‘objectively’ know your emotions, when you may not even know yourself? 

Yes. In the novel, Janetta is aspiring to teach AI about emotions, but she’s learning about emotions herself. She’s had a break-up and a rebound with someone who inspires her, but destabilises her as well. This experience is difficult, but it helps her come into her own. She was a very unemotional person before that – she tried not to have emotions, but it turned out that she did.

So in that sense, the novel is more about Janetta making peace with having emotions than about the idea that emotional intelligence in auts is ever going to happen. I knew that it would be a novel about gaining emotional intelligence – but the one gaining it was always meant to be Janetta, someone who needed to do this.

You definitely see that growth throughout the novel. It’s such a hard thing to learn, but so important. Emotional intelligence, being able to be vulnerable, all of those things.

Thank you, that’s exactly it. Janetta has never been vulnerable. She’s used her work as a shield. I wanted this to be a story about being vulnerable, about screwing up, and about bringing yourself back from that.

“Do you think that AI can be taught to read emotions?”

Right, exactly. But it seemed like Janetta believed it could be done. So I was curious about your views.

I just don’t think it can be done at all, full stop. Some of the research that’s been done is based on facial recognition. But one person could be smiling, while desperately sad inside. Could an AI detect that? Humans don’t just detect emotions by observing from a distance. We interact, we probe, we learn. We use our own emotions to intuit how others are feeling.

So yes, maybe AI can be trained in intersubjective standards of emotion recognition, enough to make reasonable ascriptions. Let’s say, to take a pretty clear emotion, in King Lear when Lear comes back on stage at the end, carrying the body of Cordelia, his beloved child. What does he say? “Howl, howl, howl, howl!” The majority of people can piece the evidence together and understand that he’s upset.

An AI could learn to do that. But in terms of the intricacies of people’s emotions, the depth and the context of them? No, I don’t think so. But what about you? Do you think that AI can be taught to read emotions?

I think researchers will continue to try, but I don’t think it’s really possible. Like you say, someone can be smiling yet struggling inside. And I think the attempts to develop that technology may do more harm than good – in relation to surveillance, for example.

I was thinking about care homes where they have companion AIs, seals and cats and things. That certainly has therapeutic potential. Otherwise, I don’t know how it could possibly read the nuances of human emotion. We don’t even understand our own behaviour sometimes!

I think with a lot of AI, the technology and the science behind it is very interesting. But at the end of the day, the real questions are around how it’s used. Who holds the power? Who has the data that it’s being trained on? That has a major, major impact.

Is that what you’re looking at in your PhD research?

I’m looking at Machine Learning classification settings. So an example of a binary classification setting might be, “Oh, we think this person will repay the bank if given a loan,” versus, “We think this person will default on the loan.” I’m exploring the potential delayed impact of a classification. For example, if you are a false positive, if an AI predicts you’ll repay but instead you default, then your credit score will probably drop. So there will be a negative impact on you too, even though you were given a loan. 
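The setting Mackenzie describes can be sketched in a few lines of Python. Everything here – the threshold, the scores, the labels – is a hypothetical illustration of the loan example, not her actual research model:

```python
# Toy sketch of the binary loan-classification setting described above.
# Threshold, scores, and labels are invented for illustration.

def classify(score, threshold=0.5):
    """Predict 'repay' (the positive class) when the model score clears the threshold."""
    return "repay" if score >= threshold else "default"

def outcome(prediction, actually_repaid):
    """Label a decision with its confusion-matrix cell."""
    if prediction == "repay":
        return "true positive" if actually_repaid else "false positive"
    return "false negative" if actually_repaid else "true negative"

# A false positive here is someone granted a loan who then defaults -
# the delayed impact (e.g. a credit-score drop) lands on them, not just the bank.
print(outcome(classify(0.8), actually_repaid=False))  # false positive
```

The point of the sketch is that each cell of the confusion matrix carries a different downstream harm for the person classified, which is what makes the "delayed impact" question non-trivial.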

How do you investigate this? 

There isn’t much data, and it isn’t easy to track. It involves a lot of assumptions, running simulations, and giving different weights to the false positives and the false negatives. I’m trying to understand, “Okay, maybe in these problems, we need to really focus on the false negatives, versus in these ones, the false positives.” Essentially, I’m exploring how we might mitigate the harm an AI decision has on a person. I’m also interested in investigating the impact on underrepresented or underprivileged groups, because we have a lot of issues with AI classification systems learning bias from our society and perpetuating sexism and racism, for instance.
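A toy version of the simulate-and-weight approach described above might look like the following. The costs, score distribution, and population parameters are all invented for illustration; real work in this area would calibrate them against domain evidence:

```python
import random

def simulate_cost(p_default, n_applicants, fp_cost, fn_cost, threshold=0.5, seed=0):
    """Run a toy loan simulation and total the harm from each error type.

    fp_cost: harm when we lend to someone who then defaults (false positive);
    fn_cost: harm when we deny someone who would have repaid (false negative).
    All parameters are hypothetical.
    """
    rng = random.Random(seed)  # seeded for reproducible runs
    total = 0.0
    for _ in range(n_applicants):
        will_repay = rng.random() > p_default
        # Noisy model score: higher on average for people who will repay.
        score = rng.gauss(0.7 if will_repay else 0.3, 0.15)
        predicted_repay = score >= threshold
        if predicted_repay and not will_repay:
            total += fp_cost
        elif not predicted_repay and will_repay:
            total += fn_cost
    return total
```

Comparing `simulate_cost` under different `fp_cost`/`fn_cost` weightings is one simple way to explore which error type a given problem should prioritise.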

Is it a generally done thing? Say it was about applying for a loan – can the bank automatically exclude the people the AI doesn’t like, because they haven’t got enough income, or their credit’s bad, or because of some other factor?

“Algorithmic fairness has been a field that has really boomed recently, but it’s been around for a while.”

Sure. Algorithmic fairness is a field that has really boomed recently, but it’s been around for a while. It came to light in the ’60s and ’70s, when a lot of Civil Rights work was being done. At the time, the focus was on education and employment settings. Nowadays, it’s still focused on those settings, but also on areas like finance and economics, and many others.

That’s really great, you’re actually doing something that’s potentially making a difference in people’s lives. People who do AI (rather than just write about it in novels!) blow my mind. It’s impressive to have a brain that can do data, logic and mathematics – I’m very jealous.

No, I mean, I think anyone can code and learn about it. I know it seems as if it’s unattainable, but…

That’s a good point. I could learn to code, potentially!

Well, we do need more women in this area, so … ?

I suspect I’ll never get around to it…

What’s your background? Did you do English?

Yeah, I did English at uni, and since then, I’ve worked in editing, comms and publishing. I wrote three novels before this one, but I never sent them to an agent because I thought they weren’t very good. All I ever wanted to do was become a writer, so I’ve ended up with a narrow range of competencies. Writing and editing, essentially. But if that gets automated … what’s it called, GPT-3?

One of the big language models?

Yeah. Janetta’s job, for example, is safe from automation. Right up until AI is able to start consciously self-replicating – like in the movie Her, that sort of singularity moment – Janetta’s job is safe. But in my day job, where I edit publications, how safe is that? My skills are going to become obsolete soon. I might give it fifteen, twenty years. But that’s all I’ve ever trained myself for. It’s not an everyday worry, but it is a distant worry.

I think creativity, especially with regard to novel writing, is not something I can see an AI doing. Most likely they would only be remaking other people’s ideas that they were trained on. I think being a creative thinker is a good place to be.

That’s definitely the pragmatic view! I think the kind of deeply pessimistic, slightly addled-with-dystopia view is that they’re going to be able to recreate Madame Bovary within thirty years, and then all writers will be out of a job. 

But yes, I think the greater question is around how AI might transform creative expression, rather than take it over. There will undoubtedly still be ways for us to bring our humanity to books and music and art.


“Realistically, AI is everywhere.”

Right, right. And going back to the novel, you really showcase auts in hospitality settings. Is that the main place that you see them potentially going? Or do you see them in other settings?

Realistically, AI is everywhere. It’s in our Netflix algorithms, and it’s in our traffic lights. So in that sense, I didn’t portray reality – I didn’t convey all the hidden AI that shapes everyday life. In terms of hospitality, I guess there’s already automation in the supply chains and the logistics, and places like the Ocado warehouses. I don’t know if you know about Ocado, the delivery company that went really heavily automated?

Yes.

Ocado has one of the most automated picking and packing systems in the world; these robot arms just picking up ketchup and putting it in bags! So I touched on that a bit, but yes, mostly cafés. Have you ever read Cloud Atlas by David Mitchell?

David Mitchell, the comedian?

Yet another uncanny double – there are two David Mitchells. There’s the Peep Show comedian, and then there’s a novelist who doesn’t really write sci-fi, but he wrote a novel called Cloud Atlas, which has a chapter set in a very futuristic world. I read it when I was quite young, and its imagery of a utopia masking a scary dystopia beneath has stuck with me ever since.

Also, I love coffee shops. Coffee shops are so warm, cozy and human. I just knew that robot servers in a café were a way to have a real interface with humans coming in to get their coffees and being hit with, once again, uncanniness and unnerving futurity, and a slightly utopian, slightly dystopian vibe. In the novel, one of the characters, Van, sings while he works, and I imagined the coldness of that warmth being replaced by auts.

I’m also someone who loves coffee shops. Their ambiance and conversations with the barista are two of my favourite things about them. They’re always in a very warm setting.

Coffee shops are a classic institution. You’re from Seattle, right?

Yes, I am.

The home of coffee shops!

Yup.

You have the best coffee – all of Seattle is like one big coffee shop. So you know exactly what I mean: a good coffee shop is the most wonderful place.

Can we expect a sequel?

Potentially! I’m curious, would you see it as a free-ranging AI utopia, where they’ve managed to create this benevolent AI that’s also autonomously functioning?

I guess I wondered about Lal’s decision in the last chapter, and seeing what that actually does to Tekna and their world.

That makes sense. To be honest, I found writing this novel so difficult. I’d written a sci-fi novel before, and I think the reason it was difficult was because, well … do you read a lot of sci-fi?

Honestly, this was my first sci-fi book! I’m usually a non-fiction person. Currently, I’m reading non-fiction, Caste: The Origins of Our Discontents by Isabel Wilkerson, which is really good. Very different from this though.

I read quite a bit of literary fiction – writers like Elena Ferrante, Alice Munro. But I feel almost compulsively drawn to sci-fi, like it’s where my imagination wants to go. I realised during the writing of this that it had to be driven by plot, and then the characters react to that. So the automation and conscious AI plots were the engines of the novel. 

Right.

And I wonder if I’m better suited to something where the engine is people living their lives in a more scaled-down way. I haven’t worked it out yet; I only know that when the Guardian called A Strange and Brilliant Light ‘character-driven’, they weren’t wrong!

Having written the novel, though, what are the major takeaways that you want readers to come away with?

I know it sounds potentially counterintuitive, because the novel is about AI. But I think to me, it is more about some of that more mundane, slow-burn stuff. It’s about figuring out who you are and allowing yourself to make messes and wrong choices, and then being able to do something about this. So all three of the girls do make pretty terrible choices, and then they manage to figure out who they need to be in order to make things better. So it’s that hokey stuff about being true to yourself, and having faith in yourself. Because even Lal shows she has faith in herself, in the end.

The other message is about vulnerability and emotional intelligence. Lal shows that at the end, because the person she needs to be vulnerable to is her sister. And Rose needs to stop being vulnerable to powerful men and put some boundaries down. Vulnerability and self-assurance are connected.

It’s a feminist novel. When you’re in your twenties, you go through a lot of self-doubt. Most people I know, unless they’re bizarrely confident, struggled quite a lot internally with who they should be and whether they’re doing the right thing, especially in their twenties. And I wanted to show some women who also struggle, but manage to figure things out.

“I wanted it to be about AI and automation, and I wanted to focus on class”

I loved that. Yeah, the emotional intelligence definitely was shining throughout. And yeah, it did seem like quite a progressive future, which was really cool to see, and very feminist as well.

I’m aware that there are other contemporary feminist issues it could have taken up. It could have contained more trans representation, for example – it could maybe have been more explicitly intersectional. I chose not to mention the main characters’ racial identities, too, beyond them being Iolran.

Yeah, I noticed that.

I think I knew that I wanted it to be about AI and automation, and I wanted to focus on class – you know: “let’s talk about class.” That doesn’t mean I wanted to ignore the other stuff, but not every book can be everything and this novel already packs so much in! And class and economics are deeply worthy of sustained focus, too.

Janetta is a queer character, but her sexuality is in no way definitive of her entire character.

I wanted it to not be an issue at all. There was a flashback scene that I ended up cutting, where she came out to her parents and they were totally unfazed. Partly I felt like, as a straight person … it’s not that I can’t tell that story, but I asked myself: how qualified am I to tell this story?

And related to that, I was cautious of making it Janetta’s main thing. I really built her character around her genius. I wanted her to be a visionary and not be hampered by anything other than her own emotions, and her fear of her own emotions. So that’s why being a lesbian was just part of her, and not a big deal.

I liked that she was still in love and dealing with those relationships throughout the novel as well.

Thank you. I worried I’d made her too involved in relationships. But then I thought: that’s the point. Because she needs to learn how to love and how to grieve. That’s how she becomes the person she needs to be.

Well, speaking of vulnerability, it’s very brave of you to keep going and actually get it published. 

Thank you. I think I reached a point where it was like, “Oh, this is the fourth novel, and it’s now or never.”

And you’re still interested in writing fiction?

Yes, definitely still speculative fiction. But I’m aware that when you write speculative fiction, you have to be open to your imagination going to unexpected places. At first the novel was only about automation. As I went along, though, I realised that when you write fiction about AI, you’re naturally drawn towards the idea of conscious AI – at least, I was. 

I could have written a smaller and more focused novel, but to me, the singularity is an irresistible part of the collective imaginary about AI! And this made things very complex, plot-wise. There was an arc about automation and the loss of jobs, and a second one about conscious AI, and interweaving them was hard!

Before we go – with conscious AI, do you think we should be striving for that, or should we not?

No. It’s fun for movies and books, but that would be a crazy world, no?

Agreed. Yup. We’ve got a lot of problems we need to solve already in the world today. Climate change, poverty, hunger. I don’t think we need a conscious AI to stir the pot even more.

Exactly. Do you think it’s ever likely to happen, though?

I think it could happen. I mean, people are working in that space for sure, but I don’t know if we’ll exactly know when it does. It would probably happen by accident, and surprise people. I think it’s a possibility, but I’m not keen for a world where that does happen.

I couldn’t agree with you more. 

Well, Eli, this has been wonderful speaking to you.

Thank you, it’s been really enjoyable. And your questions were excellent – it’s nice to have what you’ve written about reflected back at you by someone who asks such intelligent, thoughtful questions! So yes, thank you, that was great.

“Do we want that?” Mackenzie Jorgensen interviews Eli Lee

Mackenzie Jorgensen is a Computer Science doctoral researcher working on the social and ethical implications of Artificial Intelligence. We invited Mackenzie to chat with novelist Eli Lee about her debut, A Strange and Brilliant Light (Jo Fletcher, 2021), and representations of AI and automation in speculative fiction. Should we fear or embrace the “rise of the robots”? Or perhaps the robots rose a long time ago, or perhaps that whole paradigm is mistaken? How might AI and automation impact the future of work? What would it mean for emotional work to be automated? How do human and machine stories intersect and blur?

This is part one of two.

A Strange and Brilliant Light, by Eli Lee

Hi Eli, I’m really excited to talk to you today. I gave myself plenty of time to read A Strange and Brilliant Light, but I ended up going through it super quickly, because I enjoyed it so much.

Oh, thank you! 

So I was curious – what made you decide to showcase three women’s stories?

Well, the genesis of the three stories was unexpected even to me. When I started, I wanted to write about a pair of best friends whose lives go in different directions. That’s based on my own relationship with my best friend, who became an incredible political activist whilst I just sat around and watched TV and read books. So that was the real kernel.

But as I wrote, it felt like something was missing. Lal and Rose came to me immediately – Rose was very passionate and active in the world whereas Lal had some of my own flaws – she was bossy, ambitious, and somewhat selfish.

But the dynamic needed a third person who was a contrast to both – and that’s when Lal’s sister Janetta came in. She works in AI, and she’s driven by her own hopes and fears. Once I had those three characters, it felt complete.

Did you see parts of yourself in Lal?

I did. I felt she was a good vehicle for the parts of me I’m less proud of – so she’s a bit selfish and insecure, and she feels belittled by her older sister, stuck in her shadow and ignored, but she’s still a decent person. She wants to work to make money for her family, but she’s just more … petty!

Got it!

And then I put what I would aspire to be in Janetta. Janetta’s very self-sufficient. She’s dedicated to her work and pure of heart. She has insecurities and flaws like the rest of us, but she always works for the greater good. So I kind of separated some of my worst qualities, and the qualities I wish I had, and put them in those two.

And you made them sisters, which works well in that sense.

I’ve got two brothers, but I don’t have a sister. Have you?

No, I have a younger brother.

I mean, this is the thing. Sibling relationships can be so gendered. I wanted to investigate what it’s like if there’s an older sister who is very successful and leaping ahead academically, and then you’re the younger sister in that dynamic. What’s left for you? How do you stand out – how are you different, or memorable? So that was Lal.

“I kind of separated some of my worst qualities, and the qualities I wish I had, and put them in those two.”

How far into the future did you kind of picture the novel to be?

One of the get-outs of setting it in an alternate universe is that you don’t have to specify, “This is ten years in the future,” or, “This is fifteen years in the future.” I could choose the kind of technology that fit with the plot. They’re not mind-reading, they’re using mobile phones.

To me, this says it’s not that far in the future? Eight or ten years, perhaps. I’d be interested to hear what you think, as an AI researcher, about when it could plausibly be set? When that early, deep automation of jobs is filtering through?

Eight to ten years, yeah. End of the 2020s.

Then again, part of me thinks maybe that’s too soon! You know when you watch Back to the Future II, and there’s a flying car. It’s set in 2015. We all watched it in the late ’80s and early ’90s, and there was this sense that 2015 would look futuristic like that. Now we’re past that date, and the changes don’t seem that drastic.

Right.

So in ten years’ time, maybe things will look the same as they do now? Maybe AI will still be in our lives, but in a way that’s similar to what it is now – essentially under the surface and hidden. Ubiquitous, but hidden. The robots still won’t be serving us coffee! So I’m willing to be proved completely wrong with my timeframe.

I think you’re good! I feel like oftentimes AI is portrayed, especially in media and films, as taking over everything in the very near future. It’s often a dystopian presentation. But actual AIs right now, they’re always just good at one thing. They’re very task-specific. We don’t really have anything like what Janetta was trying to work on, like emotional AI.

Exactly.

And there’s another question: do we want that? Because I feel like emotion is something that makes us human. At the end of the day, AI and tech are a bunch of zeros and ones. You can’t really instill that with real human emotion and experiences, in my opinion. There are scientists out there who disagree though.

I should say that, in terms of eight to ten years, I’m not talking about emotional intelligence and AI. Consciousness is way off, if it ever happens at all. I think probably it won’t. But in terms of AI and automation …

Automation, yeah. No, definitely.

My friend works for an AI start-up. He often looks at stuff in my novel, and says, “What the … This is crazy!” And I say, “I know! It’s not meant to be real!” When you watch Ex Machina or Her, there’s a suspension of disbelief. But I guess as an AI researcher it must be even harder, not to just say, “Come on, come on now. That’s not going to happen!”

“Maybe AI will still be in our lives, but in a way that’s similar to what it is now – essentially under the surface and hidden. Ubiquitous, but hidden.”

And that question of whether AI can be human is just such a long-running, fascinating topic, isn’t it? We just can’t let go of it. That uncanny other self, reflected in an AI.

Yeah, definitely. I agree with you that I can see automation coming more into play in the near future, especially with big companies like Amazon. Which is scary, because people do rely on those big corporations for jobs. We’ve seen recently that unionizing doesn’t necessarily work in those scenarios. That’s one reason Rose’s character is very interesting to me. She explores the future of social justice activism, in a near-future world increasingly dominated by automation.

I knew that you can’t talk about automation without talking about Universal Basic Income. But I didn’t want someone who straight out of the gate was like, “You guys, UBI: I’m going to sort it out.” I wanted to make sure that Rose’s activism wasn’t disconnected from the rest of her life.

So much of the novel is about these three women in their early twenties, figuring out who they are, especially who they are in their relationships. With Rose, an important part of this is how she relates to men of power, or men who have power. There’s her father, her brother, and this other guy Alek, and initially she’s unable to get out from under them.

And so she needed to come into her own power. So I thought, Rose is going to be this activist, but she’s also going to be not sure of herself initially. So a lot of it was their inner struggles, intersecting with those larger economic, social, political, or technological stories.

There was a quote I made note of. ‘Alek said, “True leisure, true creativity and true freedom are within our reach for the first time in human history. And so we must set up source gain and welcome the auts.”’ This seemed quite ironic to me because relinquishing more control of the world could seem like the opposite of freedom. And Rose did realize this as time went on, which was cool to see, as she was learning and growing. 

So Alek was with these other two academics at that point in the novel. Alek’s initial point of view is: “Auts are bad, AIs are bad. We need to just destroy this stuff.” But then when these two guys come along, one of them mentions post-work utopias. John Maynard Keynes wrote about something similar in 1930, in an essay called ‘Economic Possibilities for our Grandchildren’, Herbert Marcuse wrote Eros and Civilisation in the 1950s, and there has been lots of writing about post-work more recently.

Maybe machines can do everything, and then you can sit around and play all day, and not have to do things you don’t want to. This idea floats past Alek this evening, and suddenly he’s like, “Oh, wait! Yeah, we can just be free, because auts will do the boring stuff!” 

But that’s obviously not a realistic suggestion, because if you take it a step further, like Rose does, the question is, “Who owns those auts?” Well, if it’s the corporations, that’s not freedom. So that brings Alek back to his original idea: we need source gain. We need some kind of UBI. So in that moment when he talks about post-work leisure, he’s speculating. He’s not thinking about what’s necessary now.

Can you see a world where AI grows in importance alongside human creativity and freedom? Or are they opposing forces?

In a post-work scenario, the AIs are doing the grunt work – the cleaning and tidying, fixing things, and all the behind-the-scenes organisational work – so that humans can play and fulfil themselves. So that’s what Alek would mean by welcoming the auts, I think. But do you mean in terms of AI more as an equal?

I guess, or at least AI growing in social importance, and taking on more and more roles?

The way Alek envisions AI, in that moment, they would be this kind of sub-caste. They’d work away in the background, and you wouldn’t need to worry about them because they wouldn’t be conscious. But I think for us, even without AI consciousness, this could still be a very unsettling and unnerving vision.

We’re already seeing that when AI creeps into more and more areas of life, that ideal of true leisure and creativity gets compromised. You’re surrounded by stuff that’s monitoring you, surveilling you, collecting and analysing your data, perhaps even filtering your reality, and steering you in various ways. It’s almost like the more AI we have, the more inhibited we might feel.

Right, and the more potential problems we might face. On the surveillance point, there’s that moment where Janetta and Taly discuss helping the government with docile spy dogs —

This is one of my cringe moments. I read it now and think, “Spy dogs? What?”

Well Boston Dynamics has a robotic dog. The New York City Police Department had a test run, and there was a huge backlash. So they said, “Okay, actually, no. We are not going to use this.” But about Janetta and Taly’s conversation, I was curious: were you critiquing how governments and the private sector collaborate over surveillance? How do you feel about that? 

Attitudes about surveillance are deeply personal. I’ve got one friend who just does not care about his privacy – he’ll happily give all his data to everything and everyone. It’s not because he believes that it might make society better; he just doesn’t care. I suspect he’s not alone in that.

“We’re already seeing that when AI creeps into more and more areas of life, that ideal of true leisure and creativity gets compromised. You’re surrounded by stuff that’s monitoring you, surveilling you …”

The bird on the front of the novel, illustrated by Sinjin Li, is a CCTV bird. If you look closely, it’s got a little robot-y eye. Taly’s company, Mutants, is all about making stuff that looks friendly and cutesy, but it’s actually spying on you.

Personally, I think we should be very scared about surveillance. And not just visual surveillance, but also the amount of data that we’re giving up to companies more generally. So yes, the book definitely includes a critique of DARPA and agencies like that, who are using AI to further cement their military power.

Early in the book, there’s a humanoid robot that looks like Lal. I wondered if you could talk about that choice? It felt like it might be symbolic of Lal’s almost robotic existence at that point.

That’s a fantastic interpretation of it! Even my editor asked me why I did that. Basically, I just wanted one of the main characters to get the experience of the uncanny valley. It was nothing more than that – a moment of AI spookiness.

It definitely was.

I wanted Lal to have that experience of gazing at a factory-produced version of herself.

Another reason for Lal to have that experience is that she hasn’t quite figured out how she feels about the auts. She wants to be part of that world, so this is saying: “Here are versions of you who are part of that world … but they’re just auts. They’re just nothing. They’re also praised and loved by everyone. But they’re still soulless machines. Do you really want to be a soulless machine, Lal?” So you’re right, it does touch on the idea that she becomes a bit of a soulless machine.

Okay.

People ask about that moment, and whether it’s a clue to a big conspiracy. But it’s not there for plot reasons. It’s more about Lal herself, and about the social experience of sharing a world with these uncanny others.

It was an intriguing thing to include early in the novel.

Well, I learned a lot about novel plotting during the writing of this book. And there are some things I’d probably change, because I think that ended up feeling like a red herring.

Lal goes to Tekna and gets absorbed into that world. She expects it’s going to be this shimmering, exciting experience. But actually it’s quite dreary.

Dhont is like an industrial estate. The Tekna Tower is where all the glamour happens, where Taly works, and where the conferences are. Lal sees that and she thinks, “That’s where I’m going to work! That’s where it’s going to happen for me!” 

And then she’s deposited in the backend of nowhere instead. Dhont is meant to imply precarity and being low down on the chain at Tekna; it’s the opposite of the Tekna Tower.

Dhont has also been denuded of people, because of the automation. I don’t know if you saw the Richard Ayoade film, The Double?

I haven’t.

It’s based on a Dostoyevsky novella, I think. Jesse Eisenberg goes to work at this very grim, dystopian factory. But after a while, he’s kind of struggling. Then there’s a double, like another version of him that turns up and aces everything. The film is about their conflict. It’s really good, and the surroundings are very grim and derelict. So I had that industrial dystopian feel in mind. With automation on the rise, and Lal fighting for her survival, I wanted her to realise that working for a glamorous company might not be so glamorous after all. Work in an Amazon warehouse is horrible. So I wanted to pull the rug out from under her.

And she could see the Tower from afar.

From her sad little room!

She does work her way up. But it doesn’t feel like she’s happy with that.

All that glitters isn’t gold. When she does get promoted, she’s aware that there’s something lurking underneath. Something’s not right. She thinks, “Well, okay. This is great, and I’ve got loads of money, loads of time. But things are a bit off…” But then, she’s also competitive, especially with her sister, so she also wants to believe everything’s great. I wanted capitalism to pull her in with all its glories, and then wring her dry.

Yes, it definitely did. At the end, we don’t quite know for sure what she decided. I got the impression she made the right decision.

I’m glad you think she made the right decision. 




This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Dan Byrne-Smith in conversation with Gordon Cheung

Published as part of Vector 293 exploring Chinese SF.

Born in London to Chinese parents, Gordon Cheung is an artist who will, whenever possible, talk to people who want to know more about his work. I’m very grateful for all of the occasions when he has given his time to discuss his work with me, conversations which often turn to the topic of science fiction. This interview took place on 4th March 2020, as the impact of COVID-19 was beginning to be recognised in the UK, as the streets of central London started to look very quiet, and elbow bumps had replaced handshakes as the acceptable greeting among friends. Before the interview, we discussed COVID-19 and the strange sense of fear that was taking hold. We talked about whether perhaps there was a sense of xenophobia attached to it, relating specifically to China.

The context of the interview was his exhibition ‘Tears of Paradise,’ held at Edel Assanti Gallery in London from 17th January to 18th March 2020. The interview was a chance to explore Cheung’s fascination with science fiction, the ways in which his practice becomes a lens through which to view some extreme conditions of modernity, and the nature of his work as a series of speculative forms. It was also a chance to talk about these interests in the context of an exhibition that very much looked towards China. The show was presented as a reflection on the continuing emergence of China as a global superpower, an act of witnessing which looks towards futurity as well as to historical narratives, such as the Opium Wars. The five paintings in the exhibition offered aerial views of landscapes, equal parts actual and prophetic. These relate to sites of infrastructure projects on an enormous scale. Rendered in a combination of methods, including paint and hardened sand, floating cities coexist with the proposed outlines of new urban realities. These paintings shared the gallery with Home, a sculptural installation made using bamboo and paper from the Financial Times. These sculptures, suspended from the gallery ceiling, were recreated forms of traditional Chinese windows, evoking homes demolished as part of the ongoing process of rapid urbanisation.

Since graduating from the Royal College of Art in 2001, Gordon Cheung has built a practice around painting, while sometimes making use of sculpture, video and elements of installation. He is best known for his paintings, often large in scale, created on a paper laminate surface made up from stock listings cut from the Financial Times. His 2009 exhibition ‘The Four Horsemen of the Apocalypse’ brought together these elements to create a hallucinatory overview of the present, through evocations of both histories and futures. The exhibition demonstrated the extent to which Cheung’s work had become a visual practice of cognitive estrangement. There is not just a demonstration of an interest in science fiction but rather the construction of a science fictional set of operations manifested in a body of extraordinarily rendered imagery, offering a contested arrangement of the future in a form that demands engagement. 

Cheung’s work beguiles and seduces, alluding to the terror of the sublime while exploiting the seductive potential of images and surfaces. He is captivated by the ongoing history of the twenty-first century. Earlier work was preoccupied with his own memories of the promise of a technological revolution, a future that was never to arrive. The hopeful things to come, both social and technological, that Cheung was once led to believe in have been superseded by wave after wave of catastrophe, played out as forces of global capitalism, perpetual conflict, and environmental destruction. Within Cheung’s work, the apocalypse is happening right now. 

The thematic and symbolic territory has moved on since Cheung’s ‘Four Horsemen’ exhibition over a decade ago. For some time he developed something of an obsession with tulips, both as a trope of Western painting and as the embodiment of the first speculative economic bubble. As evidenced in the exhibition ‘Tears of Paradise,’ his practice in recent years has increasingly looked at imagery and narratives derived from his fascination with China as global superpower. 

Gordon Cheung, String of Pearls, courtesy of Edel Assanti gallery, 2020   

History-informed futures

Angela Chan interviews Beatrice Glow. Published as part of Vector 293 exploring Chinese SF.

Artist-researcher Beatrice Glow’s extensive commitment to public history shapes her work across social-botanical history, dispossession, enslavement, migrations and extractive economies. Building long term projects directly with communities, Beatrice maps complexly interconnected colonial histories through grounded investigations and emerging technologies. Currently on a residency, Beatrice chats from Singapore with Angela Chan in the UK about her work and science fiction’s capacity to tell truthful histories and envision just futures together. 

Clay Pipe, Smoke Trails Series, 2021, Beatrice Glow, VR Sculpture. The reference image is from an obsolete 2 dollar bank note from the Timber Cutter’s Bank, Savannah, Georgia, United States, and features a smiling Black woman carrying a child, with tobacco leaves in her apron.

AC: Hello Beatrice, thank you for speaking with me. Given the array of your practice, how would you like to describe yourself as a practitioner, and what are the key themes that guide your outlook and activities? 

BG: It’s constantly going through evolutions, and at the moment, I think of myself as a multidisciplinary artist-researcher in service of public history. I activate many different mediums across art, from sculptural installations to video, to emerging technologies, and all of that with the intent to meet my audience where they are. Public engagement is an important factor in my practice, and for the art, to shift a dominant narrative.

AC: I first experienced your work through your solo exhibition Forts and Flowers (2019) at Taipei Contemporary Art Center, which is part of the larger community-centred project Rhunhattan: A Tale of Two Islands (2016 – ongoing). That was my entry point into the many extended investigations you sensitively spend time with. They often focus on everyday elements of migration, extraction and globalisation, such as etymology, perfumes, tableware, nutmeg, architecture. How did you begin mapping these complex and multiple histories of colonisation, and aligning them with indigenous land sovereignty and climate justice?

BG: I’m glad you got something out of that exhibition, because it was a small attempt at trying to bring that story to my ancestral homeland in terms of the larger history that ties together the different migration flows, the circulation of people, goods, cultures between Asia, the Americas, Europe and the Great Ocean in between. 

Place is very important to me in how we shape ourselves and reflect on who we are through lived experiences. Growing up in North America, with family in South America, and parents from Taiwan, I’ve always had in mind how my family’s experiences are different by place. After university I had an amazing opportunity in receiving a Fulbright grant, and I moved to Peru, where there is the largest Asian Latin American population. The year before that, I had been to Argentina for a few months to meet my family, and that really piqued my interest in the different ways in which we experience belonging and feeling safe as racialised people in this world. My uncle, whom I met there, seemed very much not to have been in a safe place for most of his experience; he slept with a pistol under his pillow. They went through the saqueo in the early 2000s in Argentina so they had a very different idea of what it means to be a racialised minority. It made me interested in this side of history, and I was also surprised about the way I was treated as a romanticised ethnic Other. Experiencing humorous yet strange questions/encounters or microaggressions, I guess, led into my early development as an artist: just trying to poke fun but ask questions around identity and perception, and how we show up as racialised bodies. 

So in Peru, I wanted to look at the longer history of Asian presence in South America, and that brought me to so many homes of people with diasporic histories. I visited many cemeteries, for records of Japanese and Chinese labourers, which uncovered difficult histories. I also traced the railroads from Lima city all the way up to the Andes. I finally took a boat ride in Iquitos, which is a city in a jungle in the Amazonian river basin, looking for the village called Chino, which on a basic level means Chinese. But really the word Chino is an imaginary word to me: it has many definitions in Spanish, the colloquial language and its slang. Meaning orange in Puerto Rico, it can also mean an indigenous person in Central America, 50 cents in Peru, or cannabis, in reference to the squinty eyes one has after smoking. So I was looking for Chino in its plethora of meanings. When I finally arrived, they said I was the first chinita to arrive, but I don’t identify as Chinese. I was placed under that umbrella, and it made me think about how we are read. 

That experience also allowed me as a young person to visit the Guano Islands, where Chinese ‘coolies’ were forced to do labour, and where the first railroads in Latin America were built to transport the guano from these islands. Visiting these horrific places that inflicted violence on those people, and trying to understand that history, also allowed me to see the complexities of where my privileges were, and where my disadvantages were. I met teachers who were of indigenous and mixed race ancestry, white Peruvians, and Afro-Peruvians who also have Chinese ancestry that’s not so much documented, which informed what it means for me to be a visibly racialised settler in South America.   

That set the scene for me, regarding how we tell important stories, and what the artist’s role is in recovering stories that are not told. A lot of people entrusted me with that responsibility, telling me I was the first person to ask them these questions; they allowed me to interview them and shared their family photos. It was a gift that I could stay for two years doing this work with people. When I travelled back to the US, I thought about the stories that slip through the gaps in the archives, and one of the main ones was the pre-Columbian connection between Asia and the Americas, which signalled to me the Great Ocean, known also as the Pacific. There’s one founding myth in the northern coastal region of Peru, of Señor del Naylamp who arrived on a boat, and he had almond-shaped eyes, and many concubines and ‘brought civilisation.’ There are many archaeological references to this character, and people were wanting to tell me that our ancestral heritages are related, like in these stories. Such folklore and artwork allow for more speculative understandings of history than the archives of history books. It made me think about the Great Ocean, and growing up in California, my mother’s brother would say if you look out to the west, you’ll see Taiwan. This sparked my imagination that despite geographical distances, you’re always connected to a place. 

I’m presently on a residency, and I’m in the Malay archipelago, which is a homeland of many Austronesian peoples, and their history is under-discussed in the world. The general consensus in linguistic research, which some contest, is that Austronesian peoples set sail from Taiwan around five to six thousand years ago, and people speak Austronesian languages across Taiwan, Aotearoa, Madagascar, Hawaii, Indonesia, the Philippines, Rapa Nui, just to name a few. So it’s a very beautiful story about human connection that’s also seen in certain foods of the Pacific that are found in the Andes. Those are the stories I’m interested in about Asia and the Americas, in which history doesn’t begin with Columbus; it’s an anti-colonial narrative I began following then, even if I didn’t realise it at the time. So you see, I’m mapping a very big map! 


Maggie Shen King and Chen Qiufan (Stanley Chan) in conversation

Published as part of Vector 293 exploring Chinese SF.

In this cross-interview, we have two prominent writers interview each other about their respective debut novels. Maggie Shen King is the author of An Excess Male, one of The Washington Post’s 5 Best Science Fiction and Fantasy Novels of 2017, a James Tiptree Jr. and Lambda Literary Award Finalist. Chen Qiufan (a.k.a. Stanley Chan) is the author of Waste Tide, which has been praised by Liu Cixin, China’s most prominent science fiction author, as “the pinnacle of near-future SF writing.”

Maggie interviews Stanley

Maggie: For those of us who recycle diligently, it’s easy to become complacent and forget about the magnitude and consequences of our consumption. I really appreciate that Waste Tide brings to the fore the sheer volume of the Western world’s electronic usage and creates in the process a twenty-first century waste land in its electronic recycling center. I understand that you grew up near Guiyu, the town that inspired your novel. What do you hope to accomplish in elevating this issue to center stage? As China becomes a superpower and increasingly begins to turn away this sort of work, what are your thoughts and hopes for the emerging nations of the world? 

Stanley: I try to stir up awareness of the truth that all of us are equally responsible for the grave consequences of mass pollution happening across the globe. In China, the issue escalated over the last four decades along with the high speed of economic growth. We try to live life as Americans do, but we have 1.4 billion people. China has already replaced the USA as the largest producer of e-waste simply because we are so devoted to consumerist ideology. All the trash that China fails to recycle will be transferred to a new trash yard, perhaps somewhere in Southeast Asia, Africa or South America. If we continue to fall into the trap of consumerism and blindly indulge in newer, faster, more expensive industrial products, one day we may face trash that is untransferrable, unavoidable, and unrecyclable. By then, we would all become waste people. Technology might be the cure, but fundamentally it’s all about the lifestyle, the philosophy and the values we believe in. 


Chinese SF industry

By Regina Kanyu Wang et al. Published as part of Vector 293 exploring Chinese SF.

According to Science Fiction World, the concept of the “science fiction (SF) industry” was first proposed in academia in 2012, when a group of experts was brought together by the Sichuan Province Association of Science and Technology to survey and research SF-related industries, producing the Report of Research on the Development of Chinese SF Industry. Narrowly defined, the SF industry includes SF publishing, SF films, SF series, SF games, SF education, SF merchandise, and other SF-related industries, while a broader definition also includes the supporting industries, upstream or downstream in the industry chain.

According to the 2020 Chinese Science Fiction Industry Report, the gross output of the Chinese SF industry in 2019 totalled 65.87 billion RMB (about 7.4 billion GBP), among which games and films led the growth, with publishing and merchandise following (check out more in Chinese here). The SF industry plays an important part in China’s cultural economic growth.

We have invited sixteen organizations, companies, and projects that play a role in China’s SF industry to introduce themselves to English readers. You can see the diversity and vigour in the texts they provided. We’ve tried to keep editing to a minimum in order to show how they posit and define themselves in the SF industry. Here they are, ordered alphabetically.


Chen Qiufan: Why did I Write a Science Fiction Novel about E-waste?

Guangzhao Lyu, Angela Chan and Mia Chen Ma. Published as part of Vector 293 exploring Chinese SF. If you’d like to receive the issue, join the BSFA.

This is a transcription of Chen Qiufan’s public talk at Goodenough College, London, hosted by the London Chinese Science Fiction Group (LCSFG) on 12th August 2019, followed by a conversation with Angela Chan and Mia Chen Ma. This was originally published in Chinese on LCSFG’s WeChat account.[1]

The London Chinese Science Fiction Group (LCSFG) is a community for people interested in Chinese-language (sinophone) science and speculative fiction. Since it was founded in April 2019, LCSFG has been organising monthly reading groups focusing on short stories available both in Chinese and English, and since the pandemic lockdown began in March 2020 it has been inviting established and emerging Chinese SF writers to participate in online discussions. During our meetings, we explore each story’s themes, literary styles and even translation techniques and choices, as a way to better understand the piece, as well as the evolving field of contemporary Chinese SF.


Chen Qiufan:

Firstly, many thanks to the London Chinese Science Fiction Group for inviting me here, and to Goodenough College for providing such a gorgeous place. Today, I would like to talk about my debut novel, and only novel to date, Waste Tide. And don’t worry, there won’t be any spoilers. Before I discuss the story itself, let me give some general background information and my inspiration, that is, why I wanted to write a science fiction novel about China’s near-future in conjunction with e-waste recycling.


Intimate Earthquakes: An interview with Sensory Cartographies

This interview originally appeared in Vector 292.

We’re lucky to be talking today to Jonathan Reus and Sissel Marie Tonn, whose collaborative work appears under the name Sensory Cartographies. Their work includes, among other things, the creation of wearable technologies that explore the nature of sensation and attention. […] So like many great collaborations, there’s quite an interdisciplinary aspect to Sensory Cartographies, is that right?

Sissel: Yes, we both have our different backgrounds. Jon really comes from a music and performance background, as well as instrument building and media archaeology. And my background is more in visual arts and arts research.

So tell us how Sensory Cartographies came to be.

Sissel: It started in 2016, when we got an opportunity to do a residency together in Madeira. Sensory Cartographies really grew out of that residency. I’d been to Madeira before in 2013, and started this drawing project, to do with Madeira’s position in the Age of Exploration, which you could really call the Age of Colonization. 


Emerging from vibrations: An interview with Juliana Huxtable

A sneak peek at Vector 292, the contemporary art issue. Juliana Huxtable’s groundbreaking postdisciplinary artistic practice encompasses cyberculture, portraiture, performance, poetry, transmedia storytelling, critical making, fashion, happenings, and myriad other modes and magics. In September 2020 Vector took the opportunity to chat with Juliana about her work, especially the role played by science fiction.

What were your early encounters with science fiction like?

My father, in particular, was obsessed with science fiction, and so we had a lot of science fiction lying around the house, games, films, magazines. He was really into Heavy Metal magazine, which featured this sci-fi soft-core pornography. For my dad, who was not a religious person, it was as close to a religious practice as we came.

My mom on the other hand was highly religious. But both of my parents really saw technology almost as this necessary gateway to liberation, to cultural and social advancement. There was a strong racial aspect to that. So that was the context in which I grew up, and what’s funny is that when I went to university, I almost had this kind of adolescent “I need to define myself!” moment. I pulled away from science fiction, and would feign disinterest.

How long did that last, that feigned disinterest?

It really was when I moved to New York that I started to develop my own interest in science fiction. Possibilities related especially to gender are so interesting to me. So I found myself naturally drawn to subjects that heavily relied on science fiction, or that were actually a form of science fiction … even if they might not be formally classified as part of that cultural sphere.

For instance, there was my interest in the Nuwaubian Nation. The merger of Ufology and Egyptology, and the literature and contemporary almost pseudo-science which that produces, is essentially a form of science fiction. That reanimated my interest in science fiction more generally. I started engaging with it again almost as a form of art research.

“Maybe the goal is that gender doesn’t have any meaning, because there’s less ascribed to that tethering …”

This morning I saw this tweet where somebody was like, “Describe your gender in five words or less or more, and you can’t use words like masc, fem, androgynous.” People were replying with song lyrics and so on. I guess my question is, Juliana, what is gender?

For me, the struggle for gender that I’m interested in, and the work for gender that I’m interested in, is about expanding beyond inherited gender structures. That means expanding the signifying space that floats right above the concrete materiality of sex. So if ‘sex’ is this literal form of inherited embodiment, whose essence supposedly can’t be modified, then ‘gender’ is the directly corresponding world of cultural, religious, linguistic, and social meanings. Meanings that are, it’s assumed, birthed from that materiality. 

The struggle for gender and the work for gender that I’m interested in is de-linking those two, and then expanding that field, ideally to a point where maybe it doesn’t have any meaning any more. Maybe the goal is that gender doesn’t have any meaning, because there’s less ascribed to that tethering, both of the two parts of a binary to each other, and to the idea of gender as it’s tethered to sex.


Freeing art from the human artist: Hod Lipson speaks to Fiona Moore about AI and creativity

Interview with Hod Lipson

By Fiona Moore

Artist: Pix18, a robot ‘that conceives and creates art on its very own.’ Oil on Canvas. (Image source: http://www.pix18.com)

Hod Lipson is a professor of Engineering and Data Science at Columbia University in New York. With Melba Kurman he is co-author of the award-winning Fabricated: The New World of 3D Printing and Driverless: Intelligent Cars and the Road Ahead. His often provocative work on self-aware and self-replicating robots has been influential across academia, industry, policy, and public discourse more generally (including this very popular TED talk), and his interests also encompass pioneering work in the fields of open-source 3D printing, electronics 3D printing, bio-printing and food printing. Hod directs the Creative Machines Lab at Columbia, where they “build robots that do what you’d least expect robots to do.”

Fiona Moore is a writer and academic whose work, mostly involving self-driving cars and intelligent technology, has appeared in Clarkesworld, Asimov’s, Interzone and many other publications, with reprints in Forever Magazine and two consecutive editions of The Best of British SF. Her story “Jolene” was shortlisted for the 2019 BSFA Award for Shorter Fiction. Her publications include one novel, Driving Ambition, numerous articles and guidebooks on cult television, guidebooks to Blake’s Seven, The Prisoner, Battlestar Galactica and Doctor Who, three stage plays and four audio plays. When not writing, she is a Professor of Business Anthropology at Royal Holloway, University of London.

You are a celebrated figure in the world of artificial intelligence research. Can you tell me how you came to be interested in, and working in, this area?

Thanks. To me, issues like self-awareness, creativity, and sentience are the essence of being human, and understanding them is one of life’s big mysteries – on par with questions like the origin of life and of the universe. There are also many practical reasons to understand and replicate such abilities (like making autonomous machines more resilient to failure). I think that we roboticists are perhaps not unlike ancient alchemists, trying to breathe life into matter. That’s what brings me to this challenge.

My own interest in AI is, in part, as an anthropologist, looking at culture. To what extent will AI “learn” culture, at least initially, from humans, and to what extent do you see them as capable of developing culture on their own?

Yes, AIs learn culture (for better and worse) from humans and from a human-controlled world; but as AIs become more autonomous, they will gather their own data, and develop their own norms, perspectives, and biases.

Do you see this already happening? If so, what do AI cultures look like at present?

AIs today are still like children, and their cultures are heavily controlled by us humans, their “parents.” For example, AIs that generate music are influenced by existing human music genres; AIs that generate human portraits are influenced by images of humans they find on the web, disproportionately favouring certain aesthetics, genders, and ethnicities. AIs that generate text are influenced by the prose that they are trained on, and so forth.

I have not seen AIs that have full autonomy over the data they consume, but this will eventually happen as artificial intelligence becomes more physically autonomous and can collect its own data. But again, we humans are also increasingly subjected to an information diet that is prescribed by the culture we live in, and we have to make a conscious effort to rise above our culture or go against it. 
