Mackenzie Jorgensen interviews Eli Lee (part two)

Mackenzie Jorgensen is a Computer Science doctoral researcher working on the social and ethical implications of Artificial Intelligence. We invited Mackenzie to chat with novelist Eli Lee about her debut, A Strange and Brilliant Light (Jo Fletcher, 2021), and representations of AI and automation in speculative fiction. This is part two. Part one can be read here.

A Strange and Brilliant Light, by Eli Lee

I wanted to ask about Janetta and her research into AI and emotion. There’s been a lot of research done into emotion detection, and a lot of critique. For example, what would it mean for a machine to ‘objectively’ know your emotions, when you may not even know yourself? 

Yes. In the novel, Janetta is aspiring to teach AI about emotions, but she’s learning about emotions herself. She’s had a break-up and a rebound with someone who inspires her, but destabilises her as well. This experience is difficult but it helps her come into her own. She was a very unemotional person before that – she tried not to have emotions, but it turned out that she did. 

So in that sense, the novel is more about Janetta being at peace with having emotions, rather than the idea that emotional intelligence in auts is ever going to happen. I knew that it would be a novel about gaining emotional intelligence – but it was always meant to be in Janetta, someone who needed to do this.

You definitely see that growth throughout the novel. It’s such a hard thing to learn, but so important. Emotional intelligence, being able to be vulnerable, all of those things.

Thank you, that’s exactly it. Janetta has never been vulnerable. She’s used her work as a shield. I wanted this to be a story about being vulnerable, about screwing up, and about bringing yourself back from that.

“Do you think that AI can be taught to read emotions?”

Right, exactly. But it seemed like Janetta believed it could be done. So I was curious about your views.

I just don’t think it can be done at all, full stop. Some of the research that’s been done is based on facial recognition. One person could be smiling, but they could be desperately sad inside. Could an AI detect that? Humans don’t just detect emotions by observing from a distance. We interact, we probe, we learn. We use our own emotions to help us understand how others feel theirs.

So yes, maybe AI can be trained in intersubjective standards of emotion recognition, enough to make reasonable ascriptions. Let’s say, to take a pretty clear emotion, in King Lear when Lear comes back on stage at the end, carrying the body of Cordelia, his beloved child. What does he say? “Howl, howl, howl, howl!” The majority of people can piece the evidence together and understand that he’s upset.

An AI could learn to do that. But in terms of the intricacies of people’s emotions, the depth and the context of them? No, I don’t think so. But what about you? Do you think that AI can be taught to read emotions?

I think researchers will continue to try, but I don’t think it’s really possible. Like you say, someone can be smiling yet struggling inside. And I think the attempts to develop that technology may do more harm than good – in relation to surveillance, for example.

I was thinking about care homes where they have companion AIs, seals and cats and things. That certainly has therapeutic potential. Otherwise, I don’t know how it could possibly read the nuances of human emotion. We don’t even understand our own behaviour sometimes!

I think with a lot of AI, the technology and the science behind it is very interesting. But at the end of the day, the real questions are around how it’s used. Who holds the power? Who has the data that it’s being trained on? That has a major, major impact.

Is that what you’re looking at in your PhD research?

I’m looking at Machine Learning classification settings. So an example of a binary classification setting might be, “Oh, we think this person will repay the bank if given a loan,” versus, “We think this person will default on the loan.” I’m exploring the potential delayed impact of a classification. For example, if you are a false positive, if an AI predicts you’ll repay but instead you default, then your credit score will probably drop. So there will be a negative impact on you too, even though you were given a loan. 
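The four outcomes Mackenzie alludes to, and the delayed impact she describes, can be sketched in code. This is an illustrative toy, not her actual research setup: the outcome labels are standard, but the score-update numbers below are entirely hypothetical.

```python
def classify_outcome(predicted_repay: bool, actually_repaid: bool) -> str:
    """Label the four possible outcomes of a binary loan classifier."""
    if predicted_repay and actually_repaid:
        return "true positive"    # loan granted, repaid
    if predicted_repay and not actually_repaid:
        return "false positive"   # loan granted, defaulted
    if not predicted_repay and actually_repaid:
        return "false negative"   # loan denied, would have repaid
    return "true negative"        # loan denied, would have defaulted


def delayed_score_impact(outcome: str) -> int:
    """Hypothetical change to a credit score after the loan outcome."""
    return {
        "true positive": +20,   # successful repayment improves the score
        "false positive": -60,  # a default damages the score badly
        "false negative": 0,    # denied applicants see no change...
        "true negative": 0,     # ...even when the denial was "correct"
    }[outcome]


# The case Mackenzie describes: predicted to repay, but defaults.
outcome = classify_outcome(predicted_repay=True, actually_repaid=False)
print(outcome)                        # false positive
print(delayed_score_impact(outcome))  # -60
```

The point of the sketch is the asymmetry: a false positive gets the loan and then suffers a delayed penalty, while a false negative is harmed invisibly, since the denial leaves no trace in the data.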

How do you investigate this? 

There isn’t much data, and it isn’t easy to track. It involves a lot of assumptions, and running simulations, and giving more weight to the false positives and the false negatives. I’m trying to understand, “Okay, maybe in these problems, we need to really focus on the false negatives, versus in these ones, the false positives.” Essentially, I’m exploring how we might mitigate the harm an AI decision has on a person. Also, I’m interested in investigating the impact on underrepresented or underprivileged groups, because we have a lot of issues with AI classification systems learning bias and perpetuating sexism and racism, for instance, from our society.
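The weighting idea above can be made concrete with a minimal simulation. This is a hedged sketch, not Mackenzie's methodology: the population model, threshold, and weights are all invented for illustration. It simply shows how changing the relative cost of false positives versus false negatives changes which errors a simulated classifier is penalised for.

```python
import random

random.seed(0)  # deterministic toy population


def simulated_error_cost(n: int, fp_weight: float, fn_weight: float,
                         threshold: float = 0.5) -> float:
    """Total weighted cost of classification errors over n simulated applicants."""
    cost = 0.0
    for _ in range(n):
        p_repay = random.random()            # model's predicted repayment probability
        repaid = random.random() < p_repay   # ground truth drawn from that probability
        predicted = p_repay >= threshold
        if predicted and not repaid:         # false positive: granted, defaulted
            cost += fp_weight
        elif not predicted and repaid:       # false negative: denied, would have repaid
            cost += fn_weight
    return cost


# Weighting false positives more models settings where a default harms the
# applicant (e.g. the credit-score drop); weighting false negatives more
# models settings where a wrongful denial is the greater harm.
print(simulated_error_cost(10_000, fp_weight=3.0, fn_weight=1.0))
print(simulated_error_cost(10_000, fp_weight=1.0, fn_weight=3.0))
```

In a real fairness analysis these costs would also be broken down by demographic group, to see whether one group absorbs a disproportionate share of a particular error type.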

Is it a generally done thing? Say it was about applying for a loan – can the bank automatically exclude the people the AI doesn’t like, because they haven’t got enough income, or their credit’s bad, or because of some other factor?

“Algorithmic fairness has been a field that has really boomed recently, but it’s been around for a while.”

Sure. Algorithmic fairness is a field that has really boomed recently, but it’s been around for a while. It came to prominence in the ’60s and ’70s, when a lot of Civil Rights work was being done. At the time, the focus was on education and employment settings. Nowadays, it’s still focused on those settings, but it has also expanded into areas like finance and economics, and many others.

That’s really great, you’re actually doing something that’s potentially making a difference in people’s lives. People who do AI (rather than just write about it in novels!) blow my mind. It’s impressive to have a brain that can do data, logic and mathematics – I’m very jealous.

No, I mean, I think anyone can code and learn about it. I know it seems as if it’s unattainable, but…

That’s a good point. I could learn to code, potentially!

Well, we do need more women in this area, so … ?

I suspect I’ll never get around to it…

What’s your background? Did you do English?

Yeah, I did English at uni, and since then, I’ve worked in editing, comms and publishing. I wrote three novels before this one, but I never sent them to an agent because I thought they weren’t very good. All I ever wanted to do was become a writer, so I’ve ended up with a narrow range of competencies. Writing and editing, essentially. But if that gets automated … what’s it called, GPT-3?

One of the big language models?

Yeah. Janetta’s job, for example, is safe from automation. Right up until AI is able to start consciously self-replicating – like in the movie Her, that sort of singularity moment – Janetta’s job is safe. But in my day job, where I edit publications, how safe is that? My skills are going to become obsolete soon. I might give it fifteen, twenty years. But that’s all I’ve ever trained myself for. It’s not an everyday worry, but it is a distant worry.

I think creativity, especially with regard to novel writing, is not something I can see an AI doing. They most likely would only be remaking other people’s ideas that they were trained on. I think being a creative thinker is a great spot to be in.

That’s definitely the pragmatic view! I think the kind of deeply pessimistic, slightly addled-with-dystopia view is that they’re going to be able to recreate Madame Bovary within thirty years, and then all writers will be out of a job. 

But yes, I think the greater question is around how AI might transform creative expression, rather than take it over. There will undoubtedly still be ways for us to bring our humanity to books and music and art.

“Realistically, AI is everywhere.”

Right, right. And going back to the novel, you really showcase auts in hospitality settings. Is that the main place that you see them potentially going? Or do you see them in other settings?

Realistically, AI is everywhere. It’s in our Netflix algorithms, and it’s in our traffic lights. So in that sense, I didn’t portray reality – I didn’t convey all the hidden AI that shapes everyday life. In terms of hospitality, I guess there’s already automation in the supply chains and the logistics, and places like the Ocado warehouses. I don’t know if you know about Ocado, the delivery company that went really heavily automated?

Yes.

Ocado has one of the most automated picking and packing systems in the world; these robot arms just picking up ketchup and putting it in bags! So I touched on that a bit, but yes, mostly cafés. Have you ever read Cloud Atlas by David Mitchell?

David Mitchell, the comedian?

Yet another uncanny double. There are two David Mitchells. There’s the Peep Show comedian, and then there’s a novelist who doesn’t really write sci-fi, but he wrote a novel called Cloud Atlas, and there’s a chapter in a very futuristic setting. I read it when I was quite young, and the imagery of a utopia masking a scary dystopia beneath has stuck with me ever since.

Also, I love coffee shops. Coffee shops are so warm, cozy and human. I just knew that robot servers in a café were a way to have a real interface with humans coming in to get their coffees and being hit with, once again, uncanniness and unnerving futurity, and a slightly utopian, slightly dystopian vibe. In the novel, one of the characters, Van, sings while he works, and I imagined that warmth being replaced by the coldness of auts.

I’m also someone who loves coffee shops. Their ambiance and conversations with the barista are two of my favourite things about them. They’re always in a very warm setting.

Coffee shops are a classic institution. You’re from Seattle, right?

Yes, I am.

The home of coffee shops!

Yup.

You have the best coffee – all of Seattle is like one big coffee shop. So you know exactly what I mean: a good coffee shop is the most wonderful place. 

Can we expect a sequel?

Potentially! I’m curious, would you see it as a free-ranging AI utopia, where they’ve managed to create this benevolent AI that’s also autonomously functioning?

I guess I wondered about Lal’s decision in the last chapter, and seeing what that actually does to Tekna and their world.

That makes sense. To be honest, I found writing this novel so difficult. I’d written a sci-fi novel before, and I think the reason it was difficult was because, well … do you read a lot of sci-fi?

Honestly, this was my first sci-fi book! I’m usually a non-fiction person. Currently, I’m reading non-fiction, Caste: The Origins of Our Discontents by Isabel Wilkerson, which is really good. Very different from this though.

I read quite a bit of literary fiction – writers like Elena Ferrante, Alice Munro. But I feel almost compulsively drawn to sci-fi, like it’s where my imagination wants to go. I realised during the writing of this that it had to be driven by plot, and then the characters react to that. So the automation and conscious AI plots were the engines of the novel. 

Right.

And I wonder if I’m better suited to something where the engine is people living their lives in a more scaled-down way. I haven’t worked it out yet; I only know that when the Guardian called A Strange and Brilliant Light ‘character-driven’, they weren’t wrong!

Having written the novel, though, what are the major takeaways that you want readers to come away with?

I know it sounds potentially counterintuitive, because the novel is about AI. But I think to me, it is more about some of that more mundane, slow-burn stuff. It’s about figuring out who you are and allowing yourself to make messes and wrong choices, and then being able to do something about this. So all three of the girls do make pretty terrible choices, and then they manage to figure out who they need to be in order to make things better. So it’s that hokey stuff about being true to yourself, and having faith in yourself. Because even Lal shows she has faith in herself, in the end.

The other message is about vulnerability and emotional intelligence. Lal shows that at the end, because the person she needs to be vulnerable to is her sister. And Rose needs to stop being vulnerable to powerful men and put some boundaries down. Vulnerability and self-assurance are connected.

It’s a feminist novel. When you’re in your twenties, you go through a lot of self-doubt. Most people I know, unless they’re bizarrely confident, struggled quite a lot internally with who they should be and whether they were doing the right thing, especially in their twenties. And I wanted to show some women who also struggle, but manage to figure things out.

“I wanted it to be about AI and automation, and I wanted to focus on class”

I loved that. Yeah, the emotional intelligence definitely was shining throughout. And yeah, it did seem like quite a progressive future, which was really cool to see, and very feminist as well.

I’m aware that there are other contemporary feminist issues it could have taken up. It could have contained more trans representation, for example – it could maybe have been more explicitly intersectional. I chose to not mention the main characters’ racial identities, too, beyond them being Iolran.

Yeah, I noticed that.

I think I knew that I wanted it to be about AI and automation, and I wanted to focus on class – you know: “let’s talk about class.” That doesn’t mean I wanted to ignore the other stuff, but not every book can be everything and this novel already packs so much in! And class and economics are deeply worthy of sustained focus, too.

Janetta is a queer character, but her sexuality is in no way definitive of her entire character.

I wanted it to not be an issue at all. There was a flashback scene that I ended up cutting, where she came out to her parents and they were totally unfazed. Partly I felt like, as a straight person … it’s not that I can’t tell that story, but I asked myself: how qualified am I to tell this story?

And related to that, I was cautious of making it Janetta’s main thing. I really built her character around her genius. I wanted her to be a visionary and not be hampered by anything other than her own emotions, and her fear of her own emotions. So that’s why being lesbian was just part of her, and not a big deal.

I liked that she was still in love and dealing with those relationships throughout the novel as well.

Thank you. I worried I made her too involved in relationships. But then I thought: that’s the point. Because she needs to learn how to love and how to grieve. That’s how she becomes the person she needs to be.

Well, speaking of vulnerability, it’s very brave of you to keep going and actually get it published. 

Thank you. I think I reached a point where it was like, “Oh, this is the fourth novel, and it’s now or never.”

And you’re still interested in writing fiction?

Yes, definitely still speculative fiction. But I’m aware that when you write speculative fiction, you have to be open to your imagination going to unexpected places. At first the novel was only about automation. As I went along, though, I realised that when you write fiction about AI, you’re naturally drawn towards the idea of conscious AI – at least, I was. 

I could have written a smaller and more focused novel, but to me, the singularity is an irresistible part of the collective imaginary about AI! And this made things very complex, plot-wise. There was an arc about automation and the loss of jobs, and a second one about conscious AI, and interweaving them was hard!

Before we go – with conscious AI, do you think we should be striving for that, or should we not?

No. It’s fun for movies and books, but that would be a crazy world, no?

Agreed. Yup. We’ve got a lot of problems we need to solve already in the world today. Climate change, poverty, hunger. I don’t think we need a conscious AI to stir the pot even more.

Exactly. Do you think it’s ever likely to happen, though?

I think it could happen. I mean, people are working in that space for sure, but I don’t know if we’ll exactly know when it does. It would probably happen by accident, and surprise people. I think it’s a possibility, but I’m not keen for a world where that does happen.

I couldn’t agree with you more. 

Well, Eli, this has been wonderful speaking to you.

Thank you, it’s been really enjoyable. And your questions were excellent – it’s nice to have what you’ve written about reflected back at you by someone who asks such intelligent, thoughtful questions! So yes, thank you, that was great.

This Is How You Produce The Time War Part 1: Powder Scofield interviews Amal El-Mohtar and Max Gladstone

Amal El-Mohtar and Max Gladstone’s This Is How You Lose The Time War (Jo Fletcher, 2019) has been gathering a glowing reception. It’s an intense, lyrical, tragicomic novella about two elite warriors, Red and Blue, who strike up a correspondence across the millennia and across enemy lines. Adam Roberts, in his pick of SFF of the year, calls it ‘one of a kind.’ The novella has also made the shortlist for the 2019 BSFA Award. Late in 2019, Powder Scofield joined Amal and Max to shoot the breeze. This interview is a two-parter, with Part 2 dropping next week. Special thanks to Robert Berg for all his help with the interview.

PART I: ‘So we were in this gazebo …’

Powder: You’ve said one of the foundational premises of your friendship was writing physical letters to one another, and obviously that shows up in This Is How You Lose The Time War. Are there other bits of real life embedded in Time War? When you’re working on a project, how much are you intentionally processing past experience? 

Max: Some of it’s intentional, but in my experience, intention is like a raft that’s on an ocean that’s in the middle of a storm. You’re aware of what you can see, but you’re not in control of it as much as you think you are. There’s a little rudder, and you can maybe try to paddle. But if a wave is driving you east, you’re going east. So I think when we sat down to write, we both knew that we were drawing on our experience of writing letters to each other, and of correspondence more generally, and the particular strange kind of time travel that you do when you’re writing a letter, especially a physical letter. But at the same time, there’s the raft, there’s the ocean, and there’s the storm.

Powder: There’s a line in the book, like, “There’s a kind of time travel in letters.” I can see that. The time it takes to write a letter, the time it takes to get there. The way letters can sometimes cross each other in transit.

Max: Exactly. You’re imagining who the other person is that will be receiving this, you’re imagining where you’ll be when they’re receiving the letter in a week or two. You’re wondering sometimes about the many forces that could stand between you dropping the small and very fragile piece of paper into what is basically a confusing and vast and twisty state system, with the hope and trust that the $1.35 stamp will see it across the international border to someone else’s actual house just because you happened to put some words on it. So all of these steps create many different versions of yourself and of the recipient and of your respective spaces. I think that was the intent with Time War. But there are other things that I think were beneath and driving that intent. 

Amal: And to answer really literally, when we were writing the book, we were also in a gazebo with no internet. So we were sitting across from each other and we only had recourse to our own bodies of knowledge. The book is built not out of research, but out of what we both brought to the literal table between us in a literal gazebo as we wrote things! There’s so much in there built out of, for one thing, the surroundings. It was a gorgeous late June, early July in the Midwest. There were trees and birds and plants and things that were finding their ways into the things we were writing, for sure …

Max: Except that I don’t know plants and animals as well as you do. For me: it was green … green was nice …

London Meeting: Dan Abnett

The guest at tonight’s BSFA London meeting is Dan Abnett, author of a lot, including the “Gaunt’s Ghosts” series of Warhammer 40,000 novels, and the recent alternate history Triumff. He will be interviewed by Lee Harris.

As usual, the meeting will be held in the upstairs room of The Antelope: 22 Eaton Terrace, London, SW1W 8EZ. The closest tube station is Sloane Square, and a map is here.

There will be people in the bar from 6-ish, with the interview starting at 7. The meeting is free, and open to any and all — not just BSFA members — and there will be a raffle with a selection of sf books as prizes.

London Meeting: Ian McDonald

The guest at tonight’s BSFA London meeting is Ian McDonald, author of Cyberabad Days, Brasyl, River of Gods and many other books. He will be interviewed by Simon Bradshaw (and not, as previous announcements have indicated, Tony Keen, because Tony is ill. Get well soon, Tony!)

The venue is the upstairs room of The Antelope, 22 Eaton Terrace, London, SW1W 8EZ. The closest tube station is Sloane Square, and a map is here.

As usual, there will be people in the bar from 6-ish, with the interview starting at 7. The meeting is free, and open to any and all, though there will be a raffle with a selection of sf books as prizes. See you there, I hope.

Iain Banks on Open Book

Pointed out to me yesterday: last Sunday’s Open Book features an entertaining interview with Iain Banks about his new novel, Transition. As you’d expect, the sf/non-sf divide comes up, but this time it comes up because Transition is being marketed as a non-M novel, yet features parallel worlds and similar excitements. (And, in fact, in the US, it is an M-Banks novel.) Full marks to Muriel Gray for this exchange:

GRAY: You’re one of Britain’s most popular and best-loved and best-selling writers, and yet something that really really annoys me personally is that you’ve never been nominated for one of the big literary prizes yet. Why do you think that is?

BANKS: I think possibly it’s because I’ve always got a foot in both camps as it were. Put it this way, I think if I’d kept my nose clean, if I hadn’t written science fiction, if I’d got away with The Wasp Factory as a piece of youthful indiscretion and if I’d written respectable novels since then, then maybe you know I’d have had a chance, a crack at the Booker prize by now!

GRAY: You see, I have to interrupt you there. “Respectable novels”, referring to science fiction as not respectable, that’s Margaret Atwood territory –

BANKS: — well, quite, yeah

GRAY: — the woman who refuses to admit she writes science fiction, she calls it “speculative fiction” so she continues to win prizes. This enrages me! Science fiction is perfectly respectable.

Alas, nobody has seen fit to send me a proof copy this time, so it may be a while before I get to it. Sounds promising, though.

London Meeting: Nick Lowe

The guest at tonight’s BSFA London Meeting is Nick Lowe, film reviewer for Interzone. He will be interviewed by Graham Sleight.

As usual, the interview will start at 7pm, though there will be people in the bar from 6-ish; the meeting is free, though there will be a raffle (with sf books as prizes), and it is open to any and all.

The venue is the upstairs room of The Antelope, 22 Eaton Terrace, London, SW1W 8EZ. The closest tube station is Sloane Square, and a map is here.