Mackenzie Jorgensen interviews Eli Lee (part two)

Mackenzie Jorgensen is a Computer Science doctoral researcher working on the social and ethical implications of Artificial Intelligence. We invited Mackenzie to chat with novelist Eli Lee about her debut, A Strange and Brilliant Light (Jo Fletcher, 2021), and representations of AI and automation in speculative fiction. This is part two. Part one can be read here.

A Strange and Brilliant Light, by Eli Lee

I wanted to ask about Janetta and her research into AI and emotion. There’s been a lot of research done into emotion detection, and a lot of critique. For example, what would it mean for a machine to ‘objectively’ know your emotions, when you may not even know yourself? 

Yes. In the novel, Janetta is aspiring to teach AI about emotions, but she’s learning about emotions herself. She’s had a break-up and a rebound with someone who inspires her, but destabilises her as well. This experience is difficult, but it helps her come into her own. She was a very unemotional person before that – she tried not to have emotions, but it turned out that she did.

So in that sense, the novel is more about Janetta being at peace with having emotions than about the idea that emotional intelligence in auts is ever going to happen. I knew that it would be a novel about gaining emotional intelligence – but the one gaining it was always meant to be Janetta, someone who needed to do this.

You definitely see that growth throughout the novel. It’s such a hard thing to learn, but so important. Emotional intelligence, being able to be vulnerable, all of those things.

Thank you, that’s exactly it. Janetta has never been vulnerable. She’s used her work as a shield. I wanted this to be a story about being vulnerable, about screwing up, and about bringing yourself back from that.

“Do you think that AI can be taught to read emotions?”

Right, exactly. But it seemed like Janetta believed it could be done. So I was curious about your views.

I just don’t think it can be done at all, full stop. A lot of the research that’s been done is based on facial recognition. One person could be smiling, but they could be desperately sad inside. Could an AI detect that? Humans don’t just detect emotions by observing from a distance. We interact, we probe, we learn. We use our own emotions to intuit how others feel.

So yes, maybe AI can be trained in intersubjective standards of emotion recognition, enough to make reasonable ascriptions. Let’s say, to take a pretty clear emotion, in King Lear when Lear comes back on stage at the end, carrying the body of Cordelia, his beloved child. What does he say? “Howl, howl, howl, howl!” The majority of people can piece the evidence together and understand that he’s upset.

An AI could learn to do that. But in terms of the intricacies of people’s emotions, the depth and the context of them? No, I don’t think so. But what about you? Do you think that AI can be taught to read emotions?

I think researchers will continue to try, but I don’t think it’s really possible. Like you say, someone can be smiling yet struggling inside. And I think the attempts to develop that technology may do more harm than good – in relation to surveillance, for example.

I was thinking about care homes where they have companion AIs, seals and cats and things. That certainly has therapeutic potential. Otherwise, I don’t know how it could possibly read the nuances of human emotion. We don’t even understand our own behaviour sometimes!

I think with a lot of AI, the technology and the science behind it are very interesting. But at the end of the day, the real questions are around how it’s used. Who holds the power? Who has the data that it’s being trained on? That has a major, major impact.

Is that what you’re looking at in your PhD research?

I’m looking at machine learning classification settings. So an example of a binary classification setting might be, “Oh, we think this person will repay the bank if given a loan,” versus, “We think this person will default on the loan.” I’m exploring the potential delayed impact of a classification. For example, if you are a false positive – if an AI predicts you’ll repay but you instead default – then your credit score will probably drop. So there will be a negative impact on you too, even though you were given a loan.
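To make the setting concrete: below is a minimal Python sketch of the kind of feedback loop described above, in which a score-threshold classifier approves loans and each outcome then feeds back into the borrower’s credit score. All of the numbers – score distribution, approval threshold, score updates – are illustrative assumptions, not details of the actual research.

```python
import numpy as np

# Illustrative population: a credit score and a true repayment outcome.
# Every number here is an assumption made for the example.
rng = np.random.default_rng(0)
n = 1000
scores = rng.normal(650, 75, n)
p_repay = 1 / (1 + np.exp(-(scores - 600) / 50))  # higher score, likelier to repay
repaid = rng.random(n) < p_repay

# The bank's classifier: approve a loan if the score clears a threshold.
approved = scores >= 620

# Delayed impact: approved borrowers who repay gain points; approved
# borrowers who default (the false positives) take a larger hit.
delta = np.zeros(n)
delta[approved & repaid] = 20
delta[approved & ~repaid] = -60
new_scores = scores + delta

false_positives = approved & ~repaid
print(f"approved: {approved.sum()}, false positives: {false_positives.sum()}")
print(f"mean score change among approved: {delta[approved].mean():+.1f}")
```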

How do you investigate this? 

There isn’t much data, and it isn’t easy to track. It involves a lot of assumptions, and running simulations, and weighting the false positives and the false negatives differently. I’m trying to understand, “Okay, maybe in these problems we need to really focus on the false negatives, versus in these ones, the false positives.” Essentially, I’m exploring how we might mitigate the harm an AI decision has on a person. I’m also interested in investigating the impact on underrepresented or underprivileged groups, because we have a lot of issues with AI classification systems learning bias from our society and perpetuating sexism and racism, for instance.
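As a rough illustration of weighting the two kinds of error differently, here is a small Python sketch of a cost-weighted error metric, in which false positives and false negatives carry separate penalties. The costs and the toy data are invented for the example.

```python
import numpy as np

def weighted_error(y_true, y_pred, fp_cost=1.0, fn_cost=1.0):
    """Error rate in which false positives and false negatives
    carry separate, adjustable costs."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    fp = np.sum(~y_true & y_pred)  # predicted repay, actually defaulted
    fn = np.sum(y_true & ~y_pred)  # predicted default, actually repaid
    return (fp_cost * fp + fn_cost * fn) / len(y_true)

# In a lending problem we might weight false positives more heavily,
# since an approved borrower who defaults also takes a credit-score hit.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = repaid
y_pred = [1, 1, 0, 1, 0, 1, 1, 0]   # 1 = predicted to repay
print(weighted_error(y_true, y_pred, fp_cost=3.0, fn_cost=1.0))
```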

Is that something that’s generally done? Say it was about applying for a loan – can the bank automatically exclude the people the AI doesn’t like, because they haven’t got enough income, or their credit’s bad, or because of some other factor?

“Algorithmic fairness is a field that has really boomed recently, but it’s been around for a while.”

Sure. Algorithmic fairness is a field that has really boomed recently, but it’s been around for a while. It first came to prominence in the ’60s and ’70s, when a lot of Civil Rights work was being done. At the time, the focus was on education and employment settings. Nowadays it’s still focused on those settings, but it has also expanded into areas like finance and economics, and many others.

That’s really great, you’re actually doing something that’s potentially making a difference in people’s lives. People who do AI (rather than just write about it in novels!) blow my mind. It’s impressive to have a brain that can do data, logic and mathematics – I’m very jealous.

No, I mean, I think anyone can code and learn about it. I know it seems as if it’s unattainable, but…

That’s a good point. I could learn to code, potentially!

Well, we do need more women in this area, so … ?

I suspect I’ll never get around to it…

What’s your background? Did you do English?

Yeah, I did English at uni, and since then, I’ve worked in editing, comms and publishing. I wrote three novels before this one, but I never sent them to an agent because I thought they weren’t very good. All I ever wanted to do was become a writer, so I’ve ended up with a narrow range of competencies. Writing and editing, essentially. But if that gets automated … what’s it called, GPT-3?

One of the big language models?

Yeah. Janetta’s job, for example, is safe from automation right up until AI is able to start consciously self-replicating – like in the movie Her, that sort of singularity moment. But in my day job, where I edit publications, how safe is that? My skills are going to become obsolete soon. I might give it fifteen, twenty years. But that’s all I’ve ever trained myself for. It’s not an everyday worry, but it is a distant worry.

I think creativity, especially with regards to novel writing, is not something I can see an AI doing. It would most likely only be re-making other people’s ideas that it was trained on. I think being a creative thinker is a great spot to be in.

That’s definitely the pragmatic view! I think the kind of deeply pessimistic, slightly addled-with-dystopia view is that they’re going to be able to recreate Madame Bovary within thirty years, and then all writers will be out of a job. 

But yes, I think the greater question is around how AI might transform creative expression, rather than take it over. There will undoubtedly still be ways for us to bring our humanity to books and music and art.


“Realistically, AI is everywhere.”

Right, right. And going back to the novel, you really showcase auts in hospitality settings. Is that the main place that you see them potentially going? Or do you see them in other settings?

Realistically, AI is everywhere. It’s in our Netflix algorithms, and it’s in our traffic lights. So in that sense, I didn’t portray reality – I didn’t convey all the hidden AI that shapes everyday life. In terms of hospitality, I guess there’s already automation in the supply chains and the logistics, and places like the Ocado warehouses. I don’t know if you know about Ocado, the delivery company that went really heavily automated?

Yes.

Ocado has one of the most automated picking and packing systems in the world; these robot arms just picking up ketchup and putting it in bags! So I touched on that a bit, but yes, mostly cafés. Have you ever read Cloud Atlas by David Mitchell?

David Mitchell, the comedian?

Yet another uncanny double. There are two David Mitchells. There’s the Peep Show comedian, and then there’s a novelist who doesn’t really write sci-fi, but he wrote a novel called Cloud Atlas, and there’s a chapter in a very futuristic setting. I read it when I was quite young, and its imagery of a utopia masking a scary dystopia beneath has stuck with me ever since.

Also, I love coffee shops. Coffee shops are so warm, cosy and human. I just knew that robot servers in a café were a way to have a real interface with humans coming in to get their coffees and being hit with, once again, uncanniness and unnerving futurity, and a slightly utopian, slightly dystopian vibe. In the novel, one of the characters, Van, sings while he works, and I imagined that warmth being replaced by the coldness of the auts.

I’m also someone who loves coffee shops. Their ambience and conversations with the barista are two of my favourite things about them. They’re always such warm settings.

Coffee shops are a classic institution. You’re from Seattle, right?

Yes, I am.

The home of coffee shops!

Yup.

You have the best coffee – all of Seattle is like one big coffee shop. And you know exactly what I mean; a good coffee shop is the most wonderful place.

Can we expect a sequel?

Potentially! I’m curious, would you see it as a free-ranging AI utopia, where they’ve managed to create this benevolent AI that’s also autonomously functioning?

I guess I wondered about Lal’s decision in the last chapter, and seeing what that actually does to Tekna and their world.

That makes sense. To be honest, I found writing this novel so difficult. I’d written a sci-fi novel before, and I think the reason it was difficult was because, well … do you read a lot of sci-fi?

Honestly, this was my first sci-fi book! I’m usually a non-fiction person. Currently, I’m reading non-fiction, Caste: The Origins of Our Discontents by Isabel Wilkerson, which is really good. Very different from this though.

I read quite a bit of literary fiction – writers like Elena Ferrante, Alice Munro. But I feel almost compulsively drawn to sci-fi, like it’s where my imagination wants to go. I realised during the writing of this that it had to be driven by plot, and then the characters react to that. So the automation and conscious AI plots were the engines of the novel. 

Right.

And I wonder if I’m better suited to something where the engine is people living their lives in a more scaled-down way. I haven’t worked it out yet; I only know that when the Guardian called A Strange and Brilliant Light ‘character-driven’, they weren’t wrong!

Having written the novel, though, what are the major takeaways that you want readers to come away with?

I know it sounds potentially counterintuitive, because the novel is about AI. But to me, it’s more about the mundane, slow-burn stuff. It’s about figuring out who you are and allowing yourself to make messes and wrong choices, and then being able to do something about this. So all three of the girls do make pretty terrible choices, and then they manage to figure out who they need to be in order to make things better. So it’s that hokey stuff about being true to yourself, and having faith in yourself. Because even Lal shows she has faith in herself, in the end.

The other message is about vulnerability and emotional intelligence. Lal shows that at the end, because the person she needs to be vulnerable to is her sister. And Rose needs to stop being vulnerable to powerful men and put some boundaries down. Vulnerability and self-assurance are connected.

It’s a feminist novel. When you’re in your twenties, you go through a lot of self-doubt. Most people I know, unless they’re bizarrely confident, struggled quite a lot internally with who they should be and whether they were doing the right thing, especially in their twenties. And I wanted to show some women who also struggle, but manage to figure things out.

“I wanted it to be about AI and automation, and I wanted to focus on class”

I loved that. Yeah, the emotional intelligence definitely shone throughout. And yeah, it did seem like quite a progressive future, which was really cool to see, and very feminist as well.

I’m aware that there are other contemporary feminist issues it could have taken up. It could have contained more trans representation, for example – it could maybe have been more explicitly intersectional. I chose not to mention the main characters’ racial identities, too, beyond them being Iolran.

Yeah, I noticed that.

I think I knew that I wanted it to be about AI and automation, and I wanted to focus on class – you know: “let’s talk about class.” That doesn’t mean I wanted to ignore the other stuff, but not every book can be everything and this novel already packs so much in! And class and economics are deeply worthy of sustained focus, too.

Janetta is a queer character, but her sexuality is in no way definitive of her entire character.

I wanted it not to be an issue at all. There was a flashback scene that I ended up cutting, where she came out to her parents and they were totally unfazed. Partly I felt like, as a straight person … it’s not that I can’t tell that story, but I asked myself: how qualified am I to tell this story?

And related to that, I was cautious of making it Janetta’s main thing. I really built her character around her genius. I wanted her to be a visionary and not be hampered by anything other than her own emotions, and her fear of her own emotions. So that’s why being lesbian was just part of her, and not a big deal.

I liked that she was still in love and dealing with those relationships throughout the novel as well.

Thank you. I worried I’d made her too involved in relationships. But then I thought: that’s the point. She needs to learn how to love and how to grieve. That’s how she becomes the person she needs to be.

Well, speaking of vulnerability, it’s very brave of you to keep going and actually get it published. 

Thank you. I think I reached a point where it was like, “Oh, this is the fourth novel, and it’s now or never.”

And you’re still interested in writing fiction?

Yes, definitely still speculative fiction. But I’m aware that when you write speculative fiction, you have to be open to your imagination going to unexpected places. At first the novel was only about automation. As I went along, though, I realised that when you write fiction about AI, you’re naturally drawn towards the idea of conscious AI – at least, I was. 

I could have written a smaller and more focused novel, but to me, the singularity is an irresistible part of the collective imaginary about AI! And this made things very complex, plot-wise. There was an arc about automation and the loss of jobs, and a second one about conscious AI, and interweaving them was hard!

Before we go – with conscious AI, do you think we should be striving for that, or should we not?

No. It’s fun for movies and books, but that would be a crazy world, no?

Agreed. Yup. We’ve got a lot of problems we need to solve already in the world today. Climate change, poverty, hunger. I don’t think we need a conscious AI to stir the pot even more.

Exactly. Do you think it’s ever likely to happen, though?

I think it could happen. I mean, people are working in that space for sure, but I don’t know if we’ll exactly know when it does. It would probably happen by accident, and surprise people. I think it’s a possibility, but I’m not keen for a world where that does happen.

I couldn’t agree with you more. 

Well, Eli, this has been wonderful speaking to you.

Thank you, it’s been really enjoyable. And your questions were excellent – it’s nice to have what you’ve written about reflected back at you by someone who asks such intelligent, thoughtful questions! So yes, thank you, that was great.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
