By Mark Rohtmaa-Jackson & Allan Hughes / Blue Mountain Arcturus
When not in the tower he haunted the room where he had set up his War Tables – high benches on which rested models of cities and castles occupied by thousands of other models of soldier. In his madness he had commissioned this huge array from Vaiyonn, the local craftsman. […] And Dorian Hawkmoon would move all these pieces about his vast boards, going through one permutation after another; fighting a thousand versions of the same battle in order to see how a battle which followed it might have changed.
In Moorcock’s The Chronicles of Castle Brass, Hawkmoon is consumed by a madness to commission his miniature armies, and finds their permutations and predictions more absorbing than the fine day outside his room of tables. Rather than turning inward like Hawkmoon, we, under the guise of the parafictional games company Blue Mountain Arcturus, find ourselves examining tabletop gaming as a means to turn our inward selves toward the wider world: as a language through which we try to alleviate our anxieties of the fine day. This text summarises how we hope to alter our conditions through an experimental practice. We hope it points towards areas of study that might be useful to others working with tabletop games as a means to learn strategies for survival: the challenge to critical games design in the wake of Guy Debord and Alice Becker-Ho’s Game of War (1987).
Citadel of Chaos (2019) is our case study for this article, an artwork made for the exhibition Polymorph Other at Queens Hall Arts Centre, Hexham, that same year. It was conceptualised, designed and built as a large piece of scenery or terrain for a hypothetical wargame table. It is a background rather than a focus; something that gives a place an environment that enables other things to happen. As such it is about the possibilities of things happening because of what we might have made. But this operates not only on the small scale (a piece of scenery allows a story to be told between players through a game being played) but also in the belief that this kind of work can change things outside the system in which its world is contained (that such stories can lead to possibilities elsewhere).
In ‘Back to the Future: Wells, Sociology, Utopia, and Method,’ Ruth Levitas argues:
[…] we would be better served both as sociologists and as citizens by a more utopian method, one which embraces the Imaginary Reconstruction of Society (IROS) as an active device in reflexive and collective deliberations about possible and desirable futures.
Few activities dovetail better with Levitas’ proposal, one of collective deliberation and active imagination, than tabletop roleplaying. Indeed, both utopianism and tabletop roleplaying are often derided by their detractors as mere frivolity, and unworthy of serious consideration. However, as an interactive medium based on cooperative imagination of the possible, tabletop roleplaying games (TTRPGs) offer a unique opportunity for analysis of the practice of the IROS.
In this article, I analyze one such game: SoulJAR Games’ The Book of Cairn (Cairn). While at first glance Cairn appears to be little more than yet another ‘fantasy heartbreaker,’ I argue that Cairn’s combination of unique rules and use of a pastoralist utopian setting functions as a method of critique of both contemporary social conditions and of the themes embraced by the TTRPG industry more broadly. Specifically, I argue, two interlocking rhetorics are built into the rules of Cairn, producing through play of the game both a sense of what would be necessary to maintain (albeit imperfectly and abstractly) a small pastoralist utopian society, and also an enactment of those activities around the gaming table. Before turning to my analysis of Cairn and the implications of its rules, I first address the theoretical underpinnings of my approach. After my analysis, I conclude by discussing the limits of Cairn’s IROS.
The worldbuilding TTRPG Arium: Create by Adept Icarus promises a utopian procedure for creating gameworlds that are generative, safe, and liberating environments for roleplayers, an undertaking animated by recent debates over the prevalence of harmful, stereotypical, or simply repetitive tropes throughout the TTRPG industry. While the shift away from these problematic tropes is admirable and perhaps overdue within the industry, Arium’s approach to addressing this issue is also notable for its enthusiastic endorsement of creativity techniques stemming from the world of corporate management and innovation consulting. Specifically, Arium’s Lean Worldbuilding approach shares many commonalities with the Lean management philosophy that emerged in the 1990s, largely inspired by Toyota’s operating model. Both Arium’s Lean, and Lean as it is now understood in business, are associated with the pervasive use of Post-it notes for ideation and collaboration.
This article explores how Arium’s utopian solutionism and endorsement of a signature technique of post-Fordist management presents both pitfalls and opportunities for inventive, utopian roleplay. Beginning in the critical mode, I suggest that by adopting techniques that reduce the art of imaginative worldbuilding to a ritualized formula, Arium: Create risks building worlds that are creative merely for the sake of creativity, and consensual mainly in their appeal to the lowest common denominator. Inspired by Adorno and Horkheimer’s critiques of jazz and the culture industry, and following Eitan Wilf’s ethnography of Post-its and critiques of the innovation and creativity industry, the first movement of the article asks whether such strategies of routinized, commodified creativity can only ever produce the ‘freedom to choose what is always the same.’[i] Nevertheless, while this danger should not be ignored, I argue that it would be wrong to dismiss Arium or to label it as utopian in only the pejorative sense. Taking cues from theorists responding to Adorno’s pessimistic stance toward popular culture, notably Adorno’s Frankfurt School colleague and interlocutor Walter Benjamin, the second movement of the article suggests that despite its embrace of corporate solutionism, Arium’s collaborative worldbuilding contains a generative kernel, revealing an additional movement in the dialectic between oppressive technologies of control and the utopian impulse.
For a literature supposedly intent upon the new, the inventive, the futuristic, science fiction seems inordinately interested in its own past. I am as guilty as anyone of this, which may be why I notice the phenomenon so much. But the question is: what are we looking for in the past, and (a very different question) what are we finding? Generally, the past is assumed to hold the key to where we are now and what we might become. That, however, is far from always being the case. The history of science fiction is extraordinarily full of false trails, dead ends, U-turns, twists, side tracks, abrupt changes of direction, and so on. Somehow, where we are today emerged out of the mess of what we once were, but in retrospect the route is neither clear nor consistent. Simply diving willy-nilly into the science fiction of days gone by, shining a light at random onto a story over here, a novel over there, offers no clue as to how or even if those writings are connected. And it offers even less of a clue about the evolution of what came after.
That, in a nutshell, is my problem with this latest selection of hoary tales from the dusty and neglected by-ways of science fiction’s infancy. Or rather, since the various contributors seem wedded to Gary Westfahl’s bizarre argument that true science fiction only came into being with the launch of Amazing Stories, this is science fiction from before there was science fiction. There are seven stories and three novel extracts gathered here, the earliest of which was written in 1826 (though not published until 1863), and the most recent published in 1923. Ten pieces of writing drawn from near enough a century of science fiction, each accompanied by an introduction (to call them “critical essays” as the subtitle does is, to my mind, to over-inflate their role); there should be enough here to forge a narrative, give us a perspective from which to consider where we come from and where we might be going.
Kim Newman is the author of Anno Dracula (1992), a novel set in an alternate Victorian London where Dracula has become the Prince Consort and vampires have emerged as the new ruling class. Since then he has written many more books in the series. Anno Dracula is being reissued by Titan Books in October 2022 as a deluxe signed hardcover edition with an introduction by Neil Gaiman and a new short story by the author. Under the name Jack Yeovil, Newman has also published books which helped to build Games Workshop’s Warhammer Fantasy and Dark Future universes. In addition to writing fiction, Newman is a major critic of horror cinema whose work can be found in Nightmare Movies (1985) and his Sight & Sound columns. He also served as the executive producer of Prano Bailey-Bond’s Censor (2021).
Updates about Newman’s work can be found at his website and on Twitter @annodracula. We are delighted to have Kim back to chat to Vector, as Jordan S. Carroll asks him about Anno Dracula, shared world writing, film criticism, as well as Kim’s latest novel, Something More Than Night (2021), a horror-detective mystery set in Hollywood during the 1930s and starring Boris Karloff and Raymond Chandler …
How did you get started writing?
I hate defaulting to other people’s quotes, but somebody asked George Bernard Shaw that question, and he said he couldn’t remember — because writing was like the taste of the water in his mouth. It was something he’d always done.
I mean, I wrote from childhood. I’m not quite sure at what point that went from writing stories for my own purposes to writing for an audience. I think I always wanted to communicate. It took me a while to consider that this might also be a way of making a living. But as a teenager, I wrote plays and comedy sketches with my friends at school. I wrote novels, or rather novel length manuscripts, in my teens.
The useful thing about starting early is you get a lot of the embarrassing stuff out of the way early on when nobody can see it. Now, you just put your stuff online free for people to read, but it is there forever. It comes back to haunt people. I’m not even sure if I have copies of the stuff I wrote as a kid. I think if I do, it’s in a trunk somewhere very deep.
What drew you to horror in particular?
I started out being interested in monsters, I suppose. I was one of those kids who liked monster movies. I liked the effect of horror, I read a lot of it. But I read a lot of general stuff as well. I’m interested in genre, but I’m not necessarily somebody who wants to be confined by it. I don’t self-identify as a horror writer, or a science fiction writer, a crime writer, a mystery writer. I’ve done all of those things. But I do recognize that I operate best in that kind of arena.
When you tag yourself as a horror writer, that comes with an obligation to be frightening, in the same way that picking comedy comes with an obligation to be amusing. And I think some of my stuff is scary. Certainly other readers have reported that. But I think for a lot of my writing, being frightening is not its primary purpose. I’m interested in exploring other things. I tend to write more about what makes me angry than what makes me frightened. Although obviously there’s an overlap.
So what is it that makes you angry?
The world! And what’s more, I have not calmed down with age. Having written a series of books about what happens when really truly terrible evil people come to power … well, the last ten years have just made me think I overestimated people.
Vector invites proposals for articles for #298, a special issue on speculative fiction and libraries, as well as adjacent themes, e.g. speculative angles on archives, collections, repositories, simulations, antilibraries, catalogues, metadata, preservation, curation, media archaeology, literary publics, open access, search, big data, taxonomies, folksonomies, epistemes, architectures of knowledge, hypomnemata, the history and future of print, oral traditions, embodied knowledge, book stores, index cards, bibliographic management, scholarly apparatuses, indexes, performance archiving, back-ups, more-than-human knowledge systems, data futures, code libraries, toy libraries, tool libraries, etc.
Thanks for chatting! How are you? Are you working on anything at the moment?
Well, I haven’t gotten COVID and my son didn’t get COVID and my parents didn’t get COVID and my sister didn’t get COVID. I am purposefully not working on anything at the moment. I’m watching deadlines crumble like empires.
A while back, you wrote on LiveJournal: “A subculture is not a counterculture. A consumer culture is not a subculture. We are not all in this together.” Recently there were ripples in SFF writer communities over the term “squeecore.” Raquel S. Benedict and JR talk about it on an episode of Rite Gud. They weren’t expecting their words to be gone over with a fine-toothed comb, so their description of squeecore is a grab-bag of gripes and jibes, not some kind of elaborate legal case. But the core of squeecore, as I understand it, is something like a “subculture that thinks it’s a counterculture.” What do you think of the term?
Squeecore seems to be a name for the commercially published writing created by authors who got interested in writing by participating in post-fanfiction.net fan fiction cultures. So, it reads differently from previous writing, including previous fanfic-inflected writing from, say, the K/S photocopy generation. I think the podcasters were essentially right, but made the error of creating a taxonomy in order to dismiss a particular taxon as bad and their own stuff as good.
Yes, there was a lot about the episode I liked — and I fully get why they would want to move from critique to pointing out alternatives — but I did find the recommendations list a wee bit less convincing. To their credit, they are upfront about the personal connections.
This is every new writer’s impulse. I was teaching at an MFA program a decade ago, and had to sit through a meeting of students pitching their academic theses. They had to write one academic thesis, and one creative thesis. Every thesis was “Why do all these books suck, except for the ones that inspired me?” I once asked Rudy Rucker why he created “transrealism” and he said that it was because he was just starting out and hadn’t been published much, so he wanted to get some extra attention. It works every time!
I used to invent a new genre every Wednesday, and none of mine caught on. So not every time. Can squeecore claim any countercultural credentials?
I wanted to ask about Janetta and her research into AI and emotion. There’s been a lot of research done into emotion detection, and a lot of critique. For example, what would it mean for a machine to ‘objectively’ know your emotions, when you may not even know yourself?
Yes. In the novel, Janetta is aspiring to teach AI about emotions, but she’s learning about emotions herself. She’s had a break-up and a rebound with someone who inspires her, but destabilises her as well. This experience is difficult but it helps her come into her own. She was a very unemotional person before that – she tried not to have emotions; but it turned out that she did.
So in that sense, the novel is more about Janetta being at peace with having emotions, rather than about the idea that emotional intelligence in auts is ever going to happen. I knew that it would be a novel about gaining emotional intelligence – but it was always meant to be in Janetta, someone who needed to do this.
You definitely see that growth throughout the novel. It’s such a hard thing to learn, but so important. Emotional intelligence, being able to be vulnerable, all of those things.
Thank you, that’s exactly it. Janetta has never been vulnerable. She’s used her work as a shield. I wanted this to be a story about being vulnerable, about screwing up, and about bringing yourself back from that.
Right, exactly. But it seemed like Janetta believed it could be done. So I was curious about your views.
I just don’t think it can be done at all, full stop. Take the research that’s been done, some of it based on facial recognition. One person could be smiling, but they could be desperately sad inside. Could an AI detect that? Humans don’t just detect emotions by observing from a distance. We interact, we probe, we learn. We use our own emotions to intuit how others feel.
So yes, maybe AI can be trained in intersubjective standards of emotion recognition, enough to make reasonable ascriptions. Let’s say, to take a pretty clear emotion, in King Lear when Lear comes back on stage at the end, carrying the body of Cordelia, his beloved child. What does he say? “Howl, howl, howl, howl!” The majority of people can piece the evidence together and understand that he’s upset.
An AI could learn to do that. But in terms of the intricacies of people’s emotions, the depth and the context of them? No, I don’t think so. But what about you? Do you think that AI can be taught to read emotions?
I think researchers will continue to try, but I don’t think it’s really possible. Like you say, someone can be smiling yet struggling inside. And I think the attempts to develop that technology may do more harm than good – in relation to surveillance, for example.
I was thinking about care homes where they have companion AIs, seals and cats and things. That certainly has therapeutic potential. Otherwise, I don’t know how it could possibly read the nuances of human emotion. We don’t even understand our own behaviour sometimes!
I think with a lot of AI, the technology and the science behind it is very interesting. But at the end of the day, the real questions are around how it’s used. Who holds the power? Who has the data that it’s being trained on? That has a major, major impact.
Is that what you’re looking at in your PhD research?
I’m looking at Machine Learning classification settings. So an example of a binary classification setting might be, “Oh, we think this person will repay the bank if given a loan,” versus, “We think this person will default on the loan.” I’m exploring the potential delayed impact of a classification. For example, if you are a false positive, if an AI predicts you’ll repay but instead you default, then your credit score will probably drop. So there will be a negative impact on you too, even though you were given a loan.
How do you investigate this?
There isn’t much data, and it isn’t easy to track. It involves a lot of presumptions, and running simulations, and giving more weight to the false positives and the false negatives. I’m trying to understand, “Okay, maybe in these problems, we need to really focus on the false negatives, versus in these ones, the false positives.” Essentially, I’m exploring how we might mitigate the harm an AI decision has on a person. Also, I’m interested in investigating the impact on underrepresented or underprivileged groups, because we have a lot of issues with AI classification systems learning bias and perpetuating sexism and racism, for instance, from our society.
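[Editorial note: the cost-weighted error analysis described above can be sketched in a few lines of Python. This is a purely illustrative toy, not the interviewee’s actual research model: the function name, the example data, and the weights are all assumptions made for the sake of the example.]

```python
# Hypothetical sketch of weighting misclassification harms in a binary
# loan-decision setting (1 = repays, 0 = defaults). All names, data,
# and weights here are illustrative assumptions.

def harm_score(actual, predicted, w_false_pos=1.0, w_false_neg=1.0):
    """Weighted count of misclassifications.

    False positive: predicted to repay (1) but actually defaults (0) --
    the borrower takes on a loan they cannot repay and their credit
    score drops. False negative: predicted to default but would have
    repaid -- the applicant is wrongly denied credit.
    """
    fp = sum(1 for a, p in zip(actual, predicted) if p == 1 and a == 0)
    fn = sum(1 for a, p in zip(actual, predicted) if p == 0 and a == 1)
    return w_false_pos * fp + w_false_neg * fn

actual    = [1, 0, 1, 1, 0, 0]
predicted = [1, 1, 0, 1, 0, 1]
# Two false positives (indices 1 and 5), one false negative (index 2).
print(harm_score(actual, predicted))                   # equal weights -> 3.0
print(harm_score(actual, predicted, w_false_neg=3.0))  # weight denials -> 5.0
```

Raising `w_false_neg` corresponds to judging wrongful denials more harmful than wrongful approvals, which is the kind of problem-by-problem weighting the research described above explores.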
Is it a generally done thing? Say it was about applying for a loan – can the bank automatically exclude the people the AI doesn’t like, because they haven’t got enough income, or their credit’s bad, or because of some other factor?
Sure. Algorithmic fairness has been a field that has really boomed recently, but it’s been around for a while. It came into the light in the ’60s and ’70s, when a lot of Civil Rights work was being done. At the time, the focus was on education and employment settings. Nowadays, it’s still focused on those settings, but also in areas like finance and economics, and many others.
That’s really great, you’re actually doing something that’s potentially making a difference in people’s lives. People who do AI (rather than just write about it in novels!) blow my mind. It’s impressive to have a brain that can do data, logic and mathematics – I’m very jealous.
No, I mean, I think anyone can code and learn about it. I know it seems as if it’s unattainable, but…
That’s a good point. I could learn to code, potentially!
Well, we do need more women in this area, so … ?
I suspect I’ll never get around to it…
What’s your background? Did you do English?
Yeah, I did English at uni, and since then, I’ve worked in editing, comms and publishing. I wrote three novels before this one, but I never sent them to an agent because I thought they weren’t very good. All I ever wanted to do was become a writer, so I’ve ended up with a narrow range of competencies. Writing and editing, essentially. But if that gets automated … what’s it called, GPT-3?
One of the big language models?
Yeah. Janetta’s job, for example, is safe from automation. Right up until AI is able to start consciously self-replicating – like in the movie Her, that sort of singularity moment – Janetta’s job is safe. But in my day job, where I edit publications, how safe is that? My skills are going to become obsolete soon. I might give it fifteen, twenty years. But that’s all I’ve ever trained myself for. It’s not an everyday worry, but it is a distant worry.
I think creativity, especially with regards to novel writing, is not something I can see an AI doing. They most likely would only be re-making other people’s ideas that they were trained on. I think being a creative thinker is a great spot to be.
That’s definitely the pragmatic view! I think the kind of deeply pessimistic, slightly addled-with-dystopia view is that they’re going to be able to recreate Madame Bovary within thirty years, and then all writers will be out of a job.
But yes, I think the greater question is around how AI might transform creative expression, rather than take it over. There will undoubtedly still be ways for us to bring our humanity to books and music and art.
Right, right. And going back to the novel, you really showcase auts in hospitality settings. Is that the main place that you see them potentially going? Or do you see them in other settings?
Realistically, AI is everywhere. It’s in our Netflix algorithms, and it’s in our traffic lights. So in that sense, I didn’t portray reality – I didn’t convey all the hidden AI that shapes everyday life. In terms of hospitality, I guess there’s already automation in the supply chains and the logistics, and places like the Ocado warehouses. I don’t know if you know about Ocado, the delivery company that went really heavily automated?
Ocado has one of the most automated picking and packing systems in the world; these robot arms just picking up ketchup and putting it in bags! So I touched on that a bit, but yes, mostly cafés. Have you ever read Cloud Atlas by David Mitchell?
David Mitchell, the comedian?
Yet another uncanny double. There’s two David Mitchells. There’s the Peep Show comedian, and then there’s a novelist who doesn’t really write sci-fi, but he wrote a novel called Cloud Atlas, and there’s a chapter in a very futuristic setting. I read it when I was quite young, and the imagery from it – a utopia that masks a scary dystopia beneath – has stuck with me ever since.
Also, I love coffee shops. Coffee shops are so warm, cozy and human. I just knew that robot servers in a café were a way to have a real interface with humans coming in to get their coffees and being hit with, once again, uncanniness and unnerving futurity, and a slightly utopian, slightly dystopian vibe. In the novel, one of the characters, Van, sings while he works, and I imagined the coldness of having that replaced by auts.
I’m also someone who loves coffee shops. Their ambiance and conversations with the barista are two of my favourite things about them. They’re always in a very warm setting.
Coffee shops are a classic institution. You’re from Seattle, right?
Yes, I am.
The home of coffee shops!
You have the best coffee – all of Seattle is like one big coffee shop. So you know exactly what I mean: a good coffee shop is the most wonderful place.
Can we expect a sequel?
Potentially! I’m curious, would you see it as a free-ranging AI utopia, where they’ve managed to create this benevolent AI that’s also autonomously functioning?
I guess I wondered about Lal’s decision in the last chapter, and seeing what that actually does to Tekna and their world.
That makes sense. To be honest, I found writing this novel so difficult. I’d written a sci-fi novel before, and I think the reason it was difficult was because, well … do you read a lot of sci-fi?
Honestly, this was my first sci-fi book! I’m usually a non-fiction person. Currently, I’m reading non-fiction, Caste: The Origins of Our Discontents by Isabel Wilkerson, which is really good. Very different from this though.
I read quite a bit of literary fiction – writers like Elena Ferrante, Alice Munro. But I feel almost compulsively drawn to sci-fi, like it’s where my imagination wants to go. I realised during the writing of this that it had to be driven by plot, and then the characters react to that. So the automation and conscious AI plots were the engines of the novel.
And I wonder if I’m better suited to something where the engine is people living their lives in a more scaled-down way. I haven’t worked it out yet; I only know that when the Guardian called A Strange and Brilliant Light ‘character-driven’, they weren’t wrong!
Having written the novel, though, what are the major takeaways that you want readers to come away with?
I know it sounds potentially counterintuitive, because the novel is about AI. But I think to me, it is more about some of that more mundane, slow-burn stuff. It’s about figuring out who you are and allowing yourself to make messes and wrong choices, and then being able to do something about this. So all three of the girls do make pretty terrible choices, and then they manage to figure out who they need to be in order to make things better. So it’s that hokey stuff about being true to yourself, and having faith in yourself. Because even Lal shows she has faith in herself, in the end.
The other message is about vulnerability and emotional intelligence. Lal shows that at the end, because the person she needs to be vulnerable to is her sister. And Rose needs to stop being vulnerable to powerful men and put some boundaries down. Vulnerability and self-assurance are connected.
It’s a feminist novel. When you’re in your twenties, you go through a lot of self-doubt. Most people I know, unless they’re bizarrely confident, struggled quite a lot internally with who they should be and whether they’re doing the right thing, especially in their twenties. And I wanted to show some women who also struggle, but manage to figure things out.
I loved that. Yeah, the emotional intelligence definitely was shining throughout. And yeah, it did seem like quite a progressive future, which was really cool to see, and very feminist as well.
I’m aware that there are other contemporary feminist issues it could have taken up. It could have contained more trans representation, for example – it could maybe have been more explicitly intersectional. I chose to not mention the main characters’ racial identities, too, beyond them being Iolran.
Yeah, I noticed that.
I think I knew that I wanted it to be about AI and automation, and I wanted to focus on class – you know: “let’s talk about class.” That doesn’t mean I wanted to ignore the other stuff, but not every book can be everything and this novel already packs so much in! And class and economics are deeply worthy of sustained focus, too.
Janetta is a queer character, but her sexuality is in no way definitive of her entire character.
I wanted it to not be an issue at all. There was a flashback scene that I ended up cutting, where she came out to her parents and they were totally unfazed. Partly I felt like, as a straight person … it’s not that I can’t tell that story, but I asked myself: how qualified am I to tell this story?
And related to that, I was cautious of making it Janetta’s main thing. I really built her character around her genius. I wanted her to be a visionary and not be hampered by anything other than her own emotions, and her fear of her own emotions. So that’s why being lesbian was just part of her, and not a big deal.
I liked that she was still in love and dealing with those relationships throughout the novel as well.
Thank you. I worried I made her too involved in relationships. But then I thought, but that’s the point. Because she needs to learn how to love and how to grieve. That’s how she becomes the person she needs to be.
Well, speaking of vulnerability, it’s very brave of you to keep going and actually get it published.
Thank you. I think I reached a point where it was like, “Oh, this is the fourth novel, and it’s now or never.”
And you’re still interested in writing fiction?
Yes, definitely still speculative fiction. But I’m aware that when you write speculative fiction, you have to be open to your imagination going to unexpected places. At first the novel was only about automation. As I went along, though, I realised that when you write fiction about AI, you’re naturally drawn towards the idea of conscious AI – at least, I was.
I could have written a smaller and more focused novel, but to me, the singularity is an irresistible part of the collective imaginary about AI! And this made things very complex, plot-wise. There was an arc about automation and the loss of jobs, and a second one about conscious AI, and interweaving them was hard!
Before we go – with conscious AI, do you think we should be striving for that, or should we not?
No. It’s fun for movies and books, but that would be a crazy world, no?
Agreed. Yup. We’ve got a lot of problems we need to solve already in the world today. Climate change, poverty, hunger. I don’t think we need a conscious AI to stir the pot even more.
Exactly. Do you think it’s ever likely to happen, though?
I think it could happen. I mean, people are working in that space for sure, but I don’t know if we’ll exactly know when it does. It would probably happen by accident, and surprise people. I think it’s a possibility, but I’m not keen for a world where that does happen.
I couldn’t agree with you more.
Well, Eli, this has been wonderful speaking to you.
Thank you, it’s been really enjoyable. And your questions were excellent – it’s nice to have what you’ve written about reflected back at you by someone who asks such intelligent, thoughtful questions! So yes, thank you, that was great.
One grows accustomed, as someone working on one facet of the challenge of climate change, to keeping an anxious and wary eye out for one’s opponents.
I don’t mean climate change deniers—who it seems may always have been a lot rarer than their well-funded PR campaigns made them appear, and in any case were always fairly scarce in academia. We are now into a different and more difficult struggle, namely the struggle over what to do and how to do it.
To put it another way, while there is broad unity on the challenge itself, there is prodigious disunity on the matter of the response. This has arguably been brewing for a while—ever since the splintering (around the time of the UN’s Brundtland report of 1987) of “sustainable development” (SD) into two conceptual camps, ‘strong’ and ‘weak.’ The camp that elected to hollow out the “sustainability” bit while firmly emphasising the “development” part, predictably enough, has proven far more popular with the worsted-clad elves of the policy machine and the Davosean clades of Business Thought Leadership. Strong SD demanded hard limits on development. Weak SD implied soft limits, so soft that they, along with the evacuated and increasingly unqualified concept of sustainability itself—“sustainable” for exactly how long, and with exactly what consequences, to exactly whose benefit, one might well ask—might be stretched like warm caramel in the hands of a dextrous accountant. (For a more thoroughgoing look at the Strong / Weak split, still very much a live conflict, this briefing paper toward the UN’s 2015 Global Sustainable Development Report sets it out fairly concisely from the perspective of the Strong camp).
We might see this as a struggle between two paradigms of response to anthropogenic climate change—though as with most such binaries, it’s probably better thought of as a spectrum strung out between two extreme positions that almost no one holds as such. Their difference might be illustrated by the current spat over carbon capture and storage (CCS) technologies. Are CCS technologies a necessary and shovel-ready slab in the path to successful mitigation, or a vaporware accounting fudge from the business-as-usual (BAU) crowd that very deliberately leaves the door open for continued fossil fuel usage? From my phrasing, the reader may well be able to deduce on which side of that particular fence I am positioned, though how far from the fence I stand is more a matter of perspective, as well as of time: the fence has moved many times. Indeed, if somewhat paradoxically, BAU seldom wears the same outfit twice, and corporations today are feverishly at work implementing their novel Net Zero strategies, no doubt innovating exciting new forms of heel-dragging, buck-passing, subterfuge, and slipshod dei ex machina as they go. For BAU has never really been a synonym for “do nothing”; doing nothing is anathema to the busyness of business. Rather, BAU refers to a tacit refusal to consider that the fundamental rules of the game are the problem, rather than the fouls of any particular player(s): new approaches to extraction and production in light of the reality of anthropogenic climate change are more than welcome, so long as opportunities for the accumulation of surplus value are left intact.
To reiterate: the struggle between two different paradigms of social and environmental transformation is an old one, with the role of CCS being only one of its latest battlefields. Yet though venerable, it is not timeless. The stakes have been ever-changing, as emissions pump out and temperatures rise. The upper hand has changed many times, and so too have the terms of engagement. And it is this dialectic of green hope that Garforth so thoroughly delineates in Green Utopias: Environmental Hope Before and After Nature in policy, in philosophy, in climate science, and in science fiction.
It’s not a struggle unique to academia, by any means—and for all the accusations of tribalist spats within the ivory tower, I doubt it’s any worse in here than it is elsewhere (particularly given that the remunerative stakes are far higher outside). Besides, the obligatory interface of climate change academia with “policy”—which the more cynical among us might define as the Escheresque process that has come to replace governance in the neoliberal era—means that many researchers are far closer to the political machinery than they might prefer, particularly the hard-science types (who are justifiably somewhat leery of being dragged out into the kangaroo court of public scrutiny, thanks to underhanded exploits such as Climategate, way back in 2009). As Bruno Latour has so elegantly argued, the sciences were themselves somewhat to blame for this, having enjoyed and profited from a closeness to policy during an earlier period when policymakers were glad to encourage a deliberately distorted perception of science as a process by which “truth” (and thus policy itself) might be rubber-stamped with an inarguable sense of impartial authority. That relationship started to sour when scientific “truths” began to misalign with policy goals already chosen for other reasons. Climate change is perhaps the most significant and obvious arena in which this ugly public break-up played out.