Second, Fission, the new annual fiction anthology from the BSFA, will shortly open to submissions. Editors Eugen Bacon and Gene Rowe say:
We’re excited to read your original science fiction stories (genre benders welcome)! The submissions window opens at midnight on 1 February, and closes at 11:59 pm on 15 March. Please submit to email@example.com and put “Fission #2 submission” in the subject header. We invite original stories of up to 5,000 words, and offer a contributor payment rate of 2 pence per word. You don’t need to be a BSFA member to submit. We will also be inviting submissions for cover art.
What has the Vector site been up to this year? Quite a bit, it turns out! Most of the reviews were originally published in The BSFA Review, ed. Sue Oke, and some of the articles originally appeared in Vector print editions. Going forward, fiction reviews have moved to the BSFA main site.
By Monica Evans. This academic article was first published in Vector #291.
Review: This article underwent editorial review from two editors.
License: (c) Monica Evans.
Citation: Evans, Monica. 2020. “The Needle and the Wedge: Digital Games as a Medium for Science Fiction.” Vector #291 (Summer 2020): 15–24.
Keywords: digital games, video games, science fiction, speculative fiction
In 1962, four computer science students at MIT, looking for something interesting to display on their new PDP-1 minicomputer, turned to science fiction. According to Steve Russell, the group’s core programmer, they started with “a two-dimensional maneuvering sort of thing, and decided that naturally the obvious thing to do was spaceships” (Brand 1972). Before long, two ships – one long and thin, the other a squat triangle – could engage in an interactive, physics-based dogfight, and Spacewar!, the world’s first digital game, was born.
Spacewar! may have been the first, but it was hardly the last. A staggering number of successful, influential, and critically acclaimed games can be categorized as science fiction (Krzywinska and MacCallum-Stewart 2009), from classic arcade games like Asteroids and Space Invaders to major franchises like Metroid, Halo, StarCraft, and Mass Effect; critical trailblazers like Portal, Half-Life, and Bioshock; indie darlings like Thomas Was Alone, Soma, and FTL; and recent critical and commercial favorites like Horizon Zero Dawn, Nier: Automata, and even The Legend of Zelda: Breath of the Wild. Where games are not science fiction, an equally staggering number can be classified as fantasy, horror, or broadly speculative – to the point that it’s uncommon, if not rare, for a digital game to be set in a non-speculative, mundane world.
Automation and Utopia: Human Flourishing in a World without Work is crafted as a response to fears over an automated future in which humans are made obsolete by technological developments. Written by John Danaher, senior lecturer of law at the National University of Ireland, Galway, the text consists of two main sections, which cover automation and the possibility of a utopian future, respectively.
After outlining the scope and purpose of his research, in the first chapter Danaher forecasts the obsolescence of humankind in an automated world. But this is not as catastrophic as it may sound since, for Danaher, “Obsolescence is the process or condition of being no longer useful or used; it is not a state of nonexistence or death” (2). In the rest of the automation section, Danaher responds to two propositions: that automation in the workplace is both possible and desirable, and that automation outside of the workplace is potentially dangerous and its threats must therefore be mitigated.
After making his case for why automation should be conditionally embraced, in the second section Danaher turns to two possible ‘improved’ societies with automation fundamental to their economies: the cyborg and virtual utopias. The cyborg utopia would enable humans to remain valuable members of the economy, occupying the cognitive niche that historically gave the species its evolutionary advantage. Yet Danaher posits that such a future would likely preserve the degradations of employment, deepen our dependency upon machines, and disrupt humanist values, while the technological advancements it requires would ensure no worthwhile improvements to human wellbeing in the near future.
Following up this analysis of the cyborg polity, Automation and Utopia concludes with a presentation of what Danaher views as the ideal, improved society, the virtual utopia. This improved society, in which humankind ventures into the virtual world to enhance its flourishing, is presented by Danaher as an ideal goal towards which humankind may aim since, as the author posits, it will ensure human agency, pluralism, stability, a myriad of alternative utopias, and a meaningful connection to the non-virtual, real world.
Pivotal to Danaher’s assessment of automation, and of a possibly utopian future, are his views on labor and the avenue he identifies as optimal for human flourishing, the virtual utopia. For the purposes of his argument, he adopts a definition of work which he acknowledges as unusual and likely controversial, since it excludes “most domestic work (cleaning, cooking, childcare)” as well as “things like subsistence farming or slavery” (29). Defining work as “any activity (physical, cognitive, emotional etc.) performed in exchange for an economic reward, or in the ultimate hope of receiving an economic reward,” Danaher builds the case that obsolescence is almost certain and could leave as little as 10% or as much as 40% of the future population employed (28). Such a development is framed as a positive result since work, he emphasizes, has a negative effect upon employees, and improving it within the current economic milieu is, according to him, a more difficult route than shifting towards a virtual utopia. Specifically, Danaher argues that improving work, which is marked by fissuring, precarity, colonization, classic collective action problems, domination, and distributive injustice, is unlikely in our current system since it “would require reform of the basic rules of capitalism, some suppression or ban of widely used technologies as well as reform of the legal and social norms that apply to work” (83). Though this dismissal of the possibility of improving working conditions is short-sighted and ignores the likelihood that labor organizing will prove necessary as technological advances continue, this weakness of the text stands on its periphery. More important to Danaher’s vision of the future is his adoption of an approach that is interestingly more radical than such efforts to protect workers: the introduction of a universal basic income and the normalization of technological unemployment in current economic systems.
Danaher envisions this radically different distribution of economic power as a salient feature unique to the virtual utopia. He rejects the cyborg utopia, believing it would threaten the prospect of universal basic income and technological unemployment and ensure the continuation of work and the injustices endemic to capitalist systems. In weighing the virtual utopia, Danaher’s audience must consider the ethics and consequences of a society in which utopian games and escape become central to its culture. This ideal society is marked by its focus upon virtual worlds as the mechanism by which human flourishing may take place. Because individuals venture into simulations shaped to satisfy their own desires and needs, it avoids the problem of a single utopian ideal that must be enforced upon all citizens. It can therefore, as Danaher explains, “allow for the highest expressions of human agency, virtue, and talent… and the most stable and pluralistic understanding of the ideal society” (270).
Yet as with the cyborg utopia, the virtual utopia is plagued with ethical complications. The question of what actions are permissible in such a simulated environment is closely related to the ethical considerations surrounding cyborgs and artificial intelligence. In very briefly confronting this topic, Danaher asserts that the same moral constraints that shape human interactions in daily life will impact those occupying the virtual world. He supports this argument by pointing out that some of the characters inhabiting the simulation will be operated by human players and that interactions with such players will have ethical dimensions. In addition, he asserts that other actions may be deemed intrinsically immoral even without a corresponding ‘real-life’ consequence. Danaher asserts that, though there will be some moral frameworks unique to the virtual utopia, there will be no major alteration to human ethics. The virtual utopia, he claims, is therefore a reasonable goal for the post-work society since it enables human flourishing and protects values such as individualism and humanism.
Danaher is also keen to emphasize that “the distinction between the virtual and the real is fluid” (229). He rejects the “stereotypical” science fictional view of virtual reality, as something that is only produced within immersive technological simulations, like the Matrix or Star Trek’s Holodeck. On the other hand, he also rejects the “counterintuitive” view that everything humans experience is virtual reality in that our reality is constructed through language and culture. Instead, Danaher offers a middle position. Some things may be more virtual than others, but nothing is wholly virtual or wholly real. He sees virtual utopia as being filled with emotionally and morally meaningful interactions, but in the context of relatively inconsequential stakes (rather than survival, or struggle for hegemony). A Holodeck-style simulation is only one of many ways this could be accomplished.
Automation and Utopia engages substantially with possible futures at the intersection of ethics, technology, and humanism. It is a valuable resource for scholars, students, and laypeople engaged in conversations about the advance of automation in the 21st century, its impact upon economics and workers, and optimal approaches to accommodating such new technologies through the advent of a post-work society. The work continues discussions at the intersection of technology and labor, but leaves broader questions about the virtual utopia Danaher proposes unanswered. Namely, it does not convincingly explain how the virtual utopia will avoid the ethical pitfalls outlined in relation to the cyborg utopia. It also does not thoroughly discuss how such simulations may be safeguarded from economic exploitation at the hands of those owning or operating these systems, or address the potential for intersectional inequalities. Finally, Danaher does not comprehensively discuss how such escapism, and the further minimization of human interaction with the natural world, may impact climate and the environment. Though difficult to predict accurately, estimates of both the ecological and psychological effects of a society whose main arena of human interaction is a virtual world rather than nature are vital to identifying optimal utopian aims.
Overall, Automation and Utopia productively examines technological advancement and labor policy, proposes thought-provoking socioeconomic responses to the challenges of automation, and invites further discussion of ‘the ideal society,’ its connection to technology, and the impact it may have upon human psychology and the environment.
Mackenzie Jorgensen is a Computer Science doctoral researcher working on the social and ethical implications of Artificial Intelligence. We invited Mackenzie to chat with novelist Eli Lee about her debut, A Strange and Brilliant Light (Jo Fletcher, 2021), and representations of AI and automation in speculative fiction. Should we fear or embrace the “rise of the robots”? Or perhaps the robots rose a long time ago, or perhaps that whole paradigm is mistaken? How might AI and automation impact the future of work? What would it mean for emotional work to be automated? How do human and machine stories intersect and blur?
Hi Eli, I’m really excited to talk to you today. I gave myself plenty of time to read A Strange and Brilliant Light, but I ended up going through it super quickly, because I enjoyed it so much.
Oh, thank you!
So I was curious – what made you decide to showcase three women’s stories?
Well, the genesis of the three stories was unexpected even to me. When I started, I wanted to write about a pair of best friends whose lives go in different directions. That’s based on my own relationship with my best friend, who became an incredible political activist whilst I just sat around and watched TV and read books. So that was the real kernel.
But as I wrote, it felt like something was missing. Lal and Rose came to me immediately – Rose was very passionate and active in the world whereas Lal had some of my own flaws – she was bossy, ambitious, and somewhat selfish.
But the dynamic needed a third person who was a contrast to both – and that’s when Lal’s sister Janetta came in. She works in AI, and she’s driven by her own hopes and fears. Once I had those three characters, it felt complete.
Did you see parts of yourself in Lal?
I did. I felt she was a good vehicle for the parts of me I’m less proud of – so she’s a bit selfish and insecure, and she feels belittled by her older sister, stuck in her shadow and ignored, but she’s still a decent person. She wants to work to make money for her family, but she’s just more … petty!
And then I put what I would aspire to be in Janetta. Janetta’s very self-sufficient. She’s dedicated to her work and pure of heart. She has insecurities and flaws like the rest of us, but she always works for the greater good. So I kind of separated some of my worst qualities, and the qualities I wish I had, and put them in those two.
And you made them sisters, which works well in that sense.
I’ve got two brothers, but I don’t have a sister. Have you?
No, I have a younger brother.
I mean, this is the thing. Sibling relationships can be so gendered. I wanted to investigate what it’s like if there’s an older sister who is very successful and leaping ahead academically, and then you’re the younger sister in that dynamic. What’s for you? How do you stand out – how are you different, or memorable? So that was Lal.
How far into the future did you kind of picture the novel to be?
One of the get-outs of setting it in an alternate universe is that you don’t have to specify, “This is ten years in the future,” or, “This is fifteen years in the future.” I could choose the kind of technology that fit with the plot. They’re not mind-reading, they’re using mobile phones.
To me, this says it’s not that far in the future? Eight or ten years, perhaps. I’d be interested to hear what you think, as an AI researcher, about when it could plausibly be set? When that early, deep automation of jobs is filtering through?
Eight to ten years, yeah. End of the 2020s.
Then again, part of me thinks maybe that’s too soon! You know when you watch Back to the Future II, and there’s a flying car. It’s set in 2015. We all watched it in the late ‘80s, early ‘90s, and there was this sense that 2015 would look futuristic like that. Now we’re past that date, and the changes don’t seem that drastic.
So in ten years’ time, maybe things will look the same as they do now? Maybe AI will still be in our lives, but in a way that’s similar to what it is now – essentially under the surface and hidden. Ubiquitous, but hidden. The robots still won’t be serving us coffee! So I’m willing to be proved completely wrong with my timeframe.
I think you’re good! I feel like oftentimes AI is portrayed, especially in media and films, as taking over everything in the very near future. It’s often a dystopian presentation. But actual AIs right now, they’re always just good at one thing. They’re very task-specific. We don’t really have anything like what Janetta was trying to work on, like emotional AI.
And there’s another question: do we want that? Because I feel like emotion is something that makes us human. At the end of the day, AI and tech are a bunch of zeros and ones. You can’t really instill that with real human emotion and experiences, in my opinion. There are scientists out there who disagree though.
I should say that, in terms of eight to ten years, I’m not talking about emotional intelligence and AI. Consciousness is way off, if it ever will happen. I think probably it won’t. But in terms of AI and automation …
Automation, yeah. No, definitely.
My friend works for an AI start-up. He often looks at stuff in my novel, and says, “What the … This is crazy!” And I say, “I know! It’s not meant to be real!” When you watch Ex Machina or Her, there’s a suspension of disbelief. But I guess as an AI researcher it must be even harder, not to just say, “Come on, come on now. That’s not going to happen!”
And that question of whether AI can be human is just such a long-running, fascinating topic, isn’t it? We just can’t let go of it. That uncanny other self, reflected in an AI.
Yeah, definitely. I agree with you that I can see automation coming more into play in the near future, especially with big companies like Amazon. Which is scary, because people do rely on those big corporations for jobs. We’ve seen recently that unionizing doesn’t necessarily work in those scenarios. That’s one reason Rose’s character is very interesting to me. She explores the future of social justice activism, in a near-future world increasingly dominated by automation.
I knew that you can’t talk about automation without talking about Universal Basic Income. But I didn’t want someone who straight out of the gate was like, “You guys, UBI: I’m going to sort it out.” I wanted to make sure that Rose’s activism wasn’t disconnected from the rest of her life.
So much of the novel is about these three women in their early twenties, figuring out who they are, especially who they are in their relationships. With Rose, an important part of this is how she relates to men of power, or men who have power. There’s her father, her brother, and this other guy Alek, and initially she’s unable to get out from under them.
And so she needed to come into her own power. So I thought, Rose is going to be this activist, but she’s also going to be not sure of herself initially. So a lot of it was their inner struggles, intersecting with those larger economic, social, political, or technological stories.
There was a quote I made note of. ‘Alek said, “True leisure, true creativity and true freedom are within our reach for the first time in human history. And so we must set up source gain and welcome the auts.”’ This seemed quite ironic to me because relinquishing more control of the world could seem like the opposite of freedom. And Rose did realize this as time went on, which was cool to see, as she was learning and growing.
So Alek was with these other two academics at that point in the novel. Alek’s initial point of view is: “Auts are bad, AIs are bad. We need to just destroy this stuff.” But then when these two guys come along, one of them mentions post-work utopias. John Maynard Keynes wrote about something similar in the 1930s, an essay called ‘Economic Possibilities for our Grandchildren’, Herbert Marcuse wrote Eros and Civilisation in the 1950s, and there has been lots of writing about post-work more recently.
Maybe machines can do everything, and then you can sit around and play all day, and not have to do things you don’t want to. This idea floats past Alek this evening, and suddenly he’s like, “Oh, wait! Yeah, we can just be free, because auts will do the boring stuff!”
But that’s obviously not a realistic suggestion, because if you take it a step further, like Rose does, the question is, “Who owns those auts?” Well, if it’s the corporations, that’s not freedom. So that brings Alek back to his original idea: we need source gain. We need some kind of UBI. So in that moment when he talks about post-work leisure, he’s speculating. He’s not thinking about what’s necessary now.
Can you see a world where AI grows in importance alongside human creativity and freedom? Or are they opposing forces?
In a post-work scenario, the AIs are doing the grunt work, doing the kind of cleaning and tidying, and fixing things, and all the behind-the-scenes organisational work, so humans can play and fulfil ourselves. So that’s what Alek would mean by welcoming the auts, I think. But do you mean in terms of AI more as an equal?
I guess, or at least AI growing in social importance, and taking on more and more roles?
The way Alek envisions AI, in that moment, they would be this kind of sub-caste. They’d work away in the background, and you wouldn’t need to worry about them because they wouldn’t be conscious. But I think for us, even without AI consciousness, this could still be a very unsettling and unnerving vision.
We’re already seeing that when AI creeps into more and more areas of life, that ideal of true leisure and creativity gets compromised. You’re surrounded by stuff that’s monitoring you, surveilling you, collecting and analysing your data, perhaps even filtering your reality, and steering you in various ways. It’s almost like the more AI we have, the more inhibited we might feel.
Right, and the more potential problems we might face. On the surveillance point, there’s that moment where Janetta and Taly discuss helping the government with docile spy dogs —
This is one of my cringe moments. I read it now and think, “Spy dogs? What?”
Well Boston Dynamics has a robotic dog. The New York City Police Department had a test run, and there was a huge backlash. So they said, “Okay, actually, no. We are not going to use this.” But about Janetta and Taly’s conversation, I was curious: were you critiquing how governments and the private sector collaborate over surveillance? How do you feel about that?
Attitudes about surveillance are deeply personal. I’ve got one friend who just does not care about his privacy – he’ll happily give all his data to everything and everyone. It’s not because he believes that it might make society better; he just doesn’t care. I suspect he’s not alone in that.
The bird on the front of the novel, illustrated by Sinjin Li, is a CCTV bird. If you look closely, it’s got a little robot-y eye. Taly’s company, Mutants, is all about making stuff that looks friendly and cutesy, but it’s actually spying on you.
Personally, I think we should be very scared about surveillance. And not just visual surveillance, but also the amount of data that we’re giving up to companies more generally. So yes, the book definitely includes a critique of DARPA and agencies like that, who are using AI to further cement their military power.
Early in the book, there’s a humanoid robot that looks like Lal. I wondered if you could talk about that choice? It felt like it might be symbolic of Lal’s almost robotic existence at that point.
That’s a fantastic interpretation of it! Even my editor asked me why I did that. Basically, I just wanted one of the main characters to get the experience of the uncanny valley. It was nothing more than that – a moment of AI spookiness.
It definitely was.
I wanted Lal to have that experience of gazing at a factory-produced version of herself.
Another reason for Lal to have that experience is that she hasn’t quite figured out how she feels about the auts. She wants to be part of that world, so this is saying: “Here are versions of you who are part of that world … but they’re just auts. They’re just nothing. They’re also praised and loved by everyone. But they’re still soulless machines. Do you really want to be a soulless machine, Lal?” So you’re right, it does touch on the idea that she becomes a bit of a soulless machine.
People ask about that moment, and whether it’s a clue to a big conspiracy. But it’s not there for plot reasons. It’s more about Lal herself, and about the social experience of sharing a world with these uncanny others.
It was an intriguing thing to include early in the novel.
Well, I learned a lot about novel plotting during the writing of this book. And there are some things I’d probably change, because I think that ended up feeling like a red herring.
Lal goes to Tekna and gets absorbed into that world. She expects it’s going to be this shimmering, exciting experience. But actually it’s quite dreary.
Dhont is like an industrial estate. The Tekna Tower is where all the glamour happens, where Taly works, and where the conferences are. Lal sees that and she thinks, “That’s where I’m going to work! That’s where it’s going to happen for me!”
And then she’s deposited in the backend of nowhere instead. Dhont is meant to imply precarity and being low down on the chain at Tekna; it’s the opposite of the Tekna Tower.
Dhont has also been denuded of people, because of the automation. I don’t know if you saw the Richard Ayoade film, The Double?
It’s based on a Dostoyevsky novella, I think. Jesse Eisenberg goes to work at this very grim, dystopian factory. But after a while, he’s kind of struggling. Then there’s a double, like another version of him that turns up and aces everything. The film is about their conflict. It’s really good, and the surroundings are very grim and derelict. So I had that industrial dystopian feel in mind. With automation on the rise, and Lal fighting for her survival, I wanted her to realise that working for a glamorous company might not be so glamorous after all. Work in an Amazon warehouse is horrible. So I wanted to pull the rug out from under her.
And she could see the Tower from afar.
From her sad little room!
She does work her way up. But it doesn’t feel like she’s happy with that.
All that glitters isn’t gold. When she does get promoted, she’s aware that there’s something lurking underneath. Something’s not right. She thinks, “Well, okay. This is great, and I’ve got loads of money, loads of time. But things are a bit off…” But then, she’s also competitive, especially with her sister, so she also wants to believe everything’s great. I wanted capitalism to pull her in with all its glories, and then wring her dry.
Yes, it definitely did. At the end, we don’t quite know for sure what she decided. I got the impression she made the right decision.
I’m glad you think she made the right decision.
Keep your surveillance apparatus peeled for part II, coming soon.
In this academic article, Josephine Wideman explores themes of temporality and capital accumulation in Samuel R. Delany’s Dhalgren (1975). As Fredric Jameson suggests, we must find new methods of spatial and social mapping in order to navigate the geographical and cultural landscapes of late capitalism. Delany’s Dhalgren is deeply concerned with the fate of US hegemony, and with the uncertainty that capitalism has produced: the duality of its unsustainability and seeming inevitability. Bellona is a cityscape which has been devastated by the cycle of accumulation and taken off the map. Delany’s creation ultimately should not be read as a prophecy of what will come of late US capitalism, but it gives insight into the complex historical and apocalyptic consciousness that has been cultivated.
Review: This article underwent editorial review from two editors.
License: Copyright Josephine Wideman, all rights reserved.
Keywords: accumulation, Giovanni Arrighi, postmodernism, Samuel R. Delany, temporality, urban space
Samuel R. Delany’s 1975 novel Dhalgren is lengthy, hallucinatory, and at times unnavigable science fiction. Its form is as dense and as wavering as the urban landscape it depicts, where Delany’s protagonist, Kid, can wonder whether ‘there isn’t a chasm in front of me I’ve hallucinated into plain concrete.’1 Bellona – the fictional city where events take place – is a space ‘fixed in the layered landscape, red, brass, and blue, but […] distorted as distance itself,’ a place where ‘the real’ is ‘all masked by pale diffraction.’2 Although the scenery and scenarios of Bellona may be fictional, and perhaps even fantastic, they are also true representations of real experience. The unfixed landscape we live in becomes ‘fixed’ before us in Delany’s book. The distortions and diffractions by which it is fictionalised only increase its representational precision. The gaps in our experience, usually masked, are made visible. For although it takes an unusual form, we can recognise
this timeless city […] this spaceless preserve where any slippage can occur, these closing walls, laced with fire-escapes, gates, and crenellations are too unfixed to hold it in so that, from me as a moving node, it seems to spread, by flood and seepage, over the whole uneasy scape.3
In looking at Dhalgren, I have borrowed from the political theorist and sociologist Giovanni Arrighi in order to trace the presence and effects of capitalist accumulation in Delany’s fiction. Arrighi, in The Long Twentieth Century, describes the ‘interpretative scheme’ of capitalism as a ‘recurrent phenomena.’4 Drawing on work by the historian Fernand Braudel, Arrighi follows the Genoese, the Dutch, and the British cycles of accumulation to the current North American cycle. By examining past economic patterns and anomalies, he suggests that we may be able to gesture at the fate of our current cycle. Arrighi sets out to demonstrate that the rise and fall of these hegemonies, while never identical, tend to follow a set of stages that begin ‘to look familiar.’5 To make his argument, he proposes a new use for Marx’s ‘general formula of capital’:
Marx’s general formula of capital (MCM’) can therefore be interpreted as depicting not just the logic of individual capitalist investments, but also a recurrent pattern of historical capitalism as world system. The central aspect of this pattern is the alternation of epochs of material expansion (MC phases of capital accumulation) with phases of financial rebirth and expansion (CM’ phases).6
In Das Kapital, Marx initially proposes the formula CMC to theorise how capital functions. This theory begins with the assumption that people have needs and desires they can’t satisfy by themselves. Thus we create the commodities we know how to make (C), which are sold for money (M), which allows us to buy the commodities we want (C). As this cycle repeats, those who are skilled presumably accrue more value than others, being able to sell their commodities for a greater profit. This theory centres around the individual and his role in a capitalist system. But Marx then sets CMC aside in favour of another formula – the formula borrowed by Arrighi in The Long Twentieth Century – MCM’. In MCM’, circulation does not begin with the dissatisfied individual, but with capital itself. Money is invested (M) into the materials and labour necessary to produce a commodity (C), which is then sold for more money (M’). The difference between CMC and MCM’ is subtle but crucial. The first formula implies that things are made and exchanged in order to satisfy human desire and need. The second implies that money is in charge, that production and exchange are ultimately subservient to profit, and that money begets more money. For Marx, what drives capitalism is not CMC but MCM’ – the apostrophe signifying ‘prime’ – the concept that money increases in value through circulation. The source of this additional, or ‘surplus’, value is where capital really loses its lustre. This value is created in labour – in the time spent producing a commodity from raw material – and for Marx, its appropriation by capitalists is inherently exploitative.
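For readers who prefer the two circuits spelled out side by side, here is a minimal sketch in conventional notation (the ΔM symbol for surplus value is a standard convention in commentary on Marx, not a notation used by Arrighi or in this article):

```latex
% Simple commodity circulation: selling in order to buy
C \longrightarrow M \longrightarrow C

% General formula of capital: buying in order to sell dearer
M \longrightarrow C \longrightarrow M', \qquad M' = M + \Delta M

% \Delta M > 0 is the surplus value realised through circulation,
% which Marx locates in the labour time embodied in the commodity.
```

The prime mark is the whole argument in miniature: if M’ equalled M, the capitalist’s circuit would be pointless, so the formula only makes sense if circulation systematically yields an increment.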
A roundup of some recent SF and SF-adjacent trailers.
Infinity Chamber, Marjorie Prime, It, Thor: Ragnarok, Justice League, Ready Player One, Blade Runner 2049, Close Encounters of the Third Kind, Wonder, Goodbye Christopher Robin, Flatliners, Legend of the Naga Pearls.
A word about bêtes: in so relentlessly English a novel, in which an outside world is scarcely even mentioned, it is never explained why a French word should be chosen to identify the talking animals. It makes them foreign, alien, but in a work that has more wordplay, puns and malapropisms even than is usual in an Adam Roberts novel, we have to take note of things like this. I suspect, therefore, that we are intended to hear an echo of ‘bet’ in the word: the novel details a huge gamble about the nature of consciousness and the future of humanity.
The same with editors–my editors at Orbit didn’t ask me to change the pronouns at all. It was, rather, one of the things they’d really liked about the novel. […] My takeaway from the whole experience is that laundry lists of what’s “commercial” or not aren’t actually terribly helpful, not in and of themselves. I am not a fan of aspiring writers worrying too much about whether their work is commercial or not, not because I have any sort of disdain for the commercial (I like to sell books as much as the next person!) but because what sells or doesn’t isn’t really that easily predictable.