Automation and Utopia: Human Flourishing in a World without Work by John Danaher

Reviewed by Michael Pitts.

Danaher, John. Automation and Utopia: Human Flourishing in a World without Work. Harvard UP, 2019. Hardcover. 248 pp. $99.95. ISBN 9780674984240.

Automation and Utopia: Human Flourishing in a World without Work is crafted as a response to fears over an automated future in which humans are made obsolete by technological developments. Written by John Danaher, senior lecturer in law at the National University of Ireland, Galway, the text consists of two main sections, which cover automation and the possibility of a utopian future, respectively.

After outlining the scope and purpose of his research, in the first chapter Danaher forecasts the obsolescence of humankind in an automated world. But this is not as catastrophic as it may sound since, for Danaher, “Obsolescence is the process or condition of being no longer useful or used; it is not a state of nonexistence or death” (2). In the rest of the automation section, Danaher responds to two propositions: that automation in the workplace is both possible and desirable, and that automation outside of the workplace is potentially dangerous and its threats must therefore be mitigated.

After making his case for why automation should be conditionally embraced, in the second section Danaher turns to two possible ‘improved’ societies in which automation is fundamental to the economy: the cyborg utopia and the virtual utopia. The cyborg utopia would allow humans to remain valuable members of the economy, continuing to occupy the cognitive niche that historically gave the species its evolutionary advantage. Yet Danaher posits that such a future would likely maintain the degradations of employment, deepen our dependency upon machines, and disrupt humanist values, while the technological advancements it requires mean it would offer no real improvement to human wellbeing in the near future.

Following this analysis of the cyborg polity, Automation and Utopia concludes with a presentation of what Danaher views as the ideal improved society: the virtual utopia. In this society, humankind ventures into virtual worlds to enhance its flourishing. Danaher presents it as a goal worth aiming for because, he argues, it preserves human agency, pluralism, and stability, accommodates a myriad of alternative utopias, and maintains a meaningful connection to the non-virtual, real world.

Pivotal to Danaher’s assessment of automation, and of a possibly utopian future, are his views on labor and the avenue he identifies as optimal for human flourishing, the virtual utopia. For the purposes of his argument, he adopts a definition of work which he acknowledges as unusual and likely controversial, since it excludes “most domestic work (cleaning, cooking, childcare)” as well as “things like subsistence farming or slavery” (29). Defining work as “any activity (physical, cognitive, emotional etc.) performed in exchange for an economic reward, or in the ultimate hope of receiving an economic reward,” Danaher builds the case that obsolescence is almost certain and could leave as little as 10% or as much as 40% of the future population in employment (28). He frames this development as a positive result: work, he emphasizes, has a negative effect upon employees, and improving it within the current economic milieu is, in his view, a more difficult route than shifting towards a virtual utopia. Specifically, Danaher argues that improving work, which is beset by fissuring, precarity, colonization, classic collective-action problems, domination, and distributive injustice, is unlikely in our current system since it “would require reform of the basic rules of capitalism, some suppression or ban of widely used technologies as well as reform of the legal and social norms that apply to work” (83). Though this dismissal of the possibility of improving working conditions is short-sighted, and ignores the likelihood that labor organizing will prove necessary as technological advances continue, the weakness remains peripheral to the text’s argument. More important to Danaher’s vision of the future is his adoption of an approach that is, interestingly, more radical than such efforts to protect workers: the introduction of a universal basic income and the normalization of technological unemployment within current economic systems.

Danaher envisions this radically different distribution of economic power as a salient feature unique to the virtual utopia. He rejects the cyborg utopia because he believes it would threaten the prospect of universal basic income and technological unemployment, ensuring the continuation of work and the injustices endemic to capitalist systems. In weighing the virtual utopia, Danaher’s audience must confront the ethics and consequences of a society in which utopian games and escape become salient features of its culture. This ideal society is marked by its focus upon virtual worlds as the mechanism by which human flourishing may take place. Because individuals venture into simulations shaped to their own desires and needs, the virtual utopia avoids the problem of a single utopian ideal that must be enforced upon all citizens. It can therefore, as Danaher explains, “allow for the highest expressions of human agency, virtue, and talent… and the most stable and pluralistic understanding of the ideal society” (270).

Yet as with the cyborg utopia, the virtual utopia is plagued with ethical complications. The question of what actions are permissible in such a simulated environment is closely related to the ethical considerations surrounding cyborgs and artificial intelligence. Confronting this topic only briefly, Danaher asserts that the same moral constraints that shape human interactions in daily life will bind those occupying the virtual world. He supports this argument by pointing out that some of the characters inhabiting the simulation will be operated by human players, and that interactions with such players will have ethical dimensions. In addition, he suggests that some actions may be deemed intrinsically immoral even without a corresponding ‘real-life’ consequence. Though there will be some moral frameworks unique to the virtual utopia, Danaher maintains, there will be no major alteration to human ethics. The virtual utopia, he claims, is therefore a reasonable goal for the post-work society, since it enables human flourishing and protects values such as individualism and humanism.

Danaher is also keen to emphasize that “the distinction between the virtual and the real is fluid” (229). He rejects the “stereotypical” science-fictional view of virtual reality as something produced only within immersive technological simulations, like the Matrix or Star Trek’s Holodeck. But he also rejects the “counterintuitive” view that everything humans experience is virtual reality because our reality is constructed through language and culture. Instead, Danaher offers a middle position: some things may be more virtual than others, but nothing is wholly virtual or wholly real. He sees the virtual utopia as being filled with emotionally and morally meaningful interactions, but in the context of relatively inconsequential stakes (rather than survival, or struggle for hegemony). A Holodeck-style simulation is only one of many ways this could be accomplished.

Automation and Utopia delves deep into possible futures at the intersection of ethics, technology, and humanism. It is a valuable resource for scholars, students, and laypeople engaged with conversations surrounding the advancement of automation in the 21st century, its impact upon economics and workers, and optimal approaches to accommodating such new technologies through the advent of a post-work society. The work continues discussions at the intersection of technology and labor, but it also invites broader questions about the virtual utopia Danaher proposes. Namely, it does not convincingly explain how the virtual utopia will avoid the ethical pitfalls outlined in relation to the cyborg utopia. It does not thoroughly discuss how such simulations might be safeguarded from economic exploitation at the hands of those who own or operate these systems, nor does it address the potential for intersectional inequalities. Finally, Danaher does not comprehensively discuss how such escapism, and the further minimization of human interaction with the natural world, might affect the climate and the environment. Though difficult to predict accurately, estimations of both the ecological and psychological effects of a society in which the main arena of human interaction is not nature but a virtual world are vital to identifying optimal utopian aims.

Overall, Automation and Utopia productively explores technological advancement and labor policy, proposes thought-provoking socioeconomic responses to the challenges of automation, and prompts further discussion of ‘the ideal society,’ its connection to technology, and the impact it may have upon human psychology and the environment.

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License

“Do we want that?” Mackenzie Jorgensen interviews Eli Lee

Mackenzie Jorgensen is a Computer Science doctoral researcher working on the social and ethical implications of Artificial Intelligence. We invited Mackenzie to chat with novelist Eli Lee about her debut, A Strange and Brilliant Light (Jo Fletcher, 2021), and representations of AI and automation in speculative fiction. Should we fear or embrace the “rise of the robots”? Or perhaps the robots rose a long time ago, or perhaps that whole paradigm is mistaken? How might AI and automation impact the future of work? What would it mean for emotional work to be automated? How do human and machine stories intersect and blur?

This is part one of two.

A Strange and Brilliant Light by Eli Lee

Hi Eli, I’m really excited to talk to you today. I gave myself plenty of time to read A Strange and Brilliant Light, but I ended up going through it super quickly, because I enjoyed it so much.

Oh, thank you! 

So I was curious – what made you decide to showcase three women’s stories?

Well, the genesis of the three stories was unexpected even to me. When I started, I wanted to write about a pair of best friends whose lives go in different directions. That’s based on my own relationship with my best friend, who became an incredible political activist whilst I just sat around and watched TV and read books. So that was the real kernel.

But as I wrote, it felt like something was missing. Lal and Rose came to me immediately – Rose was very passionate and active in the world whereas Lal had some of my own flaws – she was bossy, ambitious, and somewhat selfish.

But the dynamic needed a third person who was a contrast to both – and that’s when Lal’s sister Janetta came in. She works in AI, and she’s driven by her own hopes and fears. Once I had those three characters, it felt complete.

Did you see parts of yourself in Lal?

I did. I felt she was a good vehicle for the parts of me I’m less proud of – so she’s a bit selfish and insecure, and she feels belittled by her older sister, stuck in her shadow and ignored, but she’s still a decent person. She wants to work to make money for her family, but she’s just more … petty!

Got it!

And then I put what I would aspire to be in Janetta. Janetta’s very self-sufficient. She’s dedicated to her work and pure of heart. She has insecurities and flaws like the rest of us, but she always works for the greater good. So I kind of separated some of my worst qualities, and the qualities I wish I had, and put them in those two.

And you made them sisters, which works well in that sense.

I’ve got two brothers, but I don’t have a sister. Have you?

No, I have a younger brother.

I mean, this is the thing. Sibling relationships can be so gendered. I wanted to investigate what it’s like if there’s an older sister who is very successful and leaping ahead academically, and then you’re the younger sister in that dynamic. What’s for you? How do you stand out – how are you different, or memorable? So that was Lal.

“I kind of separated some of my worst qualities, and the qualities I wish I had, and put them in those two.”

How far into the future did you kind of picture the novel to be?

One of the get-outs of setting it in an alternate universe is that you don’t have to specify, “This is ten years in the future,” or, “This is fifteen years in the future.” I could choose the kind of technology that fit with the plot. They’re not mind-reading, they’re using mobile phones.

To me, this says it’s not that far in the future? Eight or ten years, perhaps. I’d be interested to hear what you think, as an AI researcher, about when it could plausibly be set? When that early, deep automation of jobs is filtering through?

Eight to ten years, yeah. End of the 2020s.

Then again, part of me thinks maybe that’s too soon! You know when you watch Back to the Future II, and there’s a flying car. It’s set in 2015. We all watched it in the late ‘80s, early ‘90s, and there was this sense that 2015 would look futuristic like that. Now we’re past that date, and the changes don’t seem that drastic.

Right.

So in ten years’ time, maybe things will look the same as they do now? Maybe AI will still be in our lives, but in a way that’s similar to what it is now – essentially under the surface and hidden. Ubiquitous, but hidden. The robots still won’t be serving us coffee! So I’m willing to be proved completely wrong with my timeframe.

I think you’re good! I feel like oftentimes AI is portrayed, especially in media and films, as taking over everything in the very near future. It’s often a dystopian presentation. But actual AIs right now, they’re always just good at one thing. They’re very task-specific. We don’t really have anything like what Janetta was trying to work on, like emotional AI.

Exactly.

And there’s another question: do we want that? Because I feel like emotion is something that makes us human. At the end of the day, AI and tech are a bunch of zeros and ones. You can’t really instill that with real human emotion and experiences, in my opinion. There are scientists out there who disagree though.

I should say that, in terms of eight to ten years, I’m not talking about emotional intelligence and AI. Consciousness is way off, if it ever will happen. I think probably it won’t. But in terms of AI and automation …

Automation, yeah. No, definitely.

My friend works for an AI start-up. He often looks at stuff in my novel, and says, “What the … This is crazy!” And I say, “I know! It’s not meant to be real!” When you watch Ex Machina or Her, there’s a suspension of disbelief. But I guess as an AI researcher it must be even harder, not to just say, “Come on, come on now. That’s not going to happen!”

“Maybe AI will still be in our lives, but in a way that’s similar to what it is now – essentially under the surface and hidden. Ubiquitous, but hidden.”

And that question of whether AI can be human is just such a long-running, fascinating topic, isn’t it? We just can’t let go of it. That uncanny other self, reflected in an AI.

Yeah, definitely. I agree with you that I can see automation coming more into play in the near future, especially with big companies like Amazon. Which is scary, because people do rely on those big corporations for jobs. We’ve seen recently that unionizing doesn’t necessarily work in those scenarios. That’s one reason Rose’s character is very interesting to me. She explores the future of social justice activism, in a near-future world increasingly dominated by automation.

I knew that you can’t talk about automation without talking about Universal Basic Income. But I didn’t want someone who straight out of the gate was like, “You guys, UBI: I’m going to sort it out.” I wanted to make sure that Rose’s activism wasn’t disconnected from the rest of her life.

So much of the novel is about these three women in their early twenties, figuring out who they are, especially who they are in their relationships. With Rose, an important part of this is how she relates to men of power, or men who have power. There’s her father, her brother, and this other guy Alek, and initially she’s unable to get out from under them.

And so she needed to come into her own power. So I thought, Rose is going to be this activist, but she’s also going to be not sure of herself initially. So a lot of it was their inner struggles, intersecting with those larger economic, social, political, or technological stories.

There was a quote I made note of. ‘Alek said, “True leisure, true creativity and true freedom are within our reach for the first time in human history. And so we must set up source gain and welcome the auts.”’ This seemed quite ironic to me because relinquishing more control of the world could seem like the opposite of freedom. And Rose did realize this as time went on, which was cool to see, as she was learning and growing. 

So Alek was with these other two academics at that point in the novel. Alek’s initial point of view is: “Auts are bad, AIs are bad. We need to just destroy this stuff.” But then when these two guys come along, one of them mentions post-work utopias. John Maynard Keynes wrote about something similar in the 1930s, in an essay called ‘Economic Possibilities for our Grandchildren’; Herbert Marcuse wrote Eros and Civilization in the 1950s; and there has been lots of writing about post-work more recently.

Maybe machines can do everything, and then you can sit around and play all day, and not have to do things you don’t want to. This idea floats past Alek this evening, and suddenly he’s like, “Oh, wait! Yeah, we can just be free, because auts will do the boring stuff!” 

But that’s obviously not a realistic suggestion, because if you take it a step further, like Rose does, the question is, “Who owns those auts?” Well, if it’s the corporations, that’s not freedom. So that brings Alek back to his original idea: we need source gain. We need some kind of UBI. So in that moment when he talks about post-work leisure, he’s speculating. He’s not thinking about what’s necessary now.

Can you see a world where AI grows in importance alongside human creativity and freedom? Or are they opposing forces?

In a post-work scenario, the AIs are doing the grunt work, doing the kind of cleaning and tidying, and fixing things, and all the behind-the-scenes organisational work, so humans can play and fulfil ourselves. So that’s what Alek would mean by welcoming the auts, I think. But do you mean in terms of AI more as an equal?

I guess, or at least AI growing in social importance, and taking on more and more roles?

The way Alek envisions AI, in that moment, they would be this kind of sub-caste. They’d work away in the background, and you wouldn’t need to worry about them because they wouldn’t be conscious. But I think for us, even without AI consciousness, this could still be a very unsettling and unnerving vision.

We’re already seeing that when AI creeps into more and more areas of life, that ideal of true leisure and creativity gets compromised. You’re surrounded by stuff that’s monitoring you, surveilling you, collecting and analysing your data, perhaps even filtering your reality, and steering you in various ways. It’s almost like the more AI we have, the more inhibited we might feel.

Right, and the more potential problems we might face. On the surveillance point, there’s that moment where Janetta and Taly discuss helping the government with docile spy dogs —

This is one of my cringe moments. I read it now and think, “Spy dogs? What?”

Well Boston Dynamics has a robotic dog. The New York City Police Department had a test run, and there was a huge backlash. So they said, “Okay, actually, no. We are not going to use this.” But about Janetta and Taly’s conversation, I was curious: were you critiquing how governments and the private sector collaborate over surveillance? How do you feel about that? 

Attitudes about surveillance are deeply personal. I’ve got one friend who just does not care about his privacy – he’ll happily give all his data to everything and everyone. It’s not because he believes that it might make society better; he just doesn’t care. I suspect he’s not alone in that.

“We’re already seeing that when AI creeps into more and more areas of life, that ideal of true leisure and creativity gets compromised. You’re surrounded by stuff that’s monitoring you, surveilling you …”

The bird on the front of the novel, illustrated by Sinjin Li, is a CCTV bird. If you look closely, it’s got a little robot-y eye. Taly’s company, Mutants, is all about making stuff that looks friendly and cutesy, but it’s actually spying on you.

Personally, I think we should be very scared about surveillance. And not just visual surveillance, but also the amount of data that we’re giving up to companies more generally. So yes, the book definitely includes a critique of DARPA and agencies like that, who are using AI to further cement their military power.

Early in the book, there’s a humanoid robot that looks like Lal. I wondered if you could talk about that choice? It felt like it might be symbolic of Lal’s almost robotic existence at that point.

That’s a fantastic interpretation of it! Even my editor asked me why I did that. Basically, I just wanted one of the main characters to get the experience of the uncanny valley. It was nothing more than that – a moment of AI spookiness.

It definitely was.

I wanted Lal to have that experience of gazing at a factory-produced version of herself.

Another reason for Lal to have that experience is that she hasn’t quite figured out how she feels about the auts. She wants to be part of that world, so this is saying: “Here are versions of you who are part of that world … but they’re just auts. They’re just nothing. They’re also praised and loved by everyone. But they’re still soulless machines. Do you really want to be a soulless machine, Lal?” So you’re right, it does touch on the idea that she becomes a bit of a soulless machine.

Okay.

People ask about that moment, and whether it’s a clue to a big conspiracy. But it’s not there for plot reasons. It’s more about Lal herself, and about the social experience of sharing a world with these uncanny others.

It was an intriguing thing to include early in the novel.

Well, I learned a lot about novel plotting during the writing of this book. And there are some things I’d probably change, because I think that ended up feeling like a red herring.

Lal goes to Tekna and gets absorbed into that world. She expects it’s going to be this shimmering, exciting experience. But actually it’s quite dreary.

Dhont is like an industrial estate. The Tekna Tower is where all the glamour happens, where Taly works, and where the conferences are. Lal sees that and she thinks, “That’s where I’m going to work! That’s where it’s going to happen for me!” 

And then she’s deposited in the backend of nowhere instead. Dhont is meant to imply precarity and being low down on the chain at Tekna; it’s the opposite of the Tekna Tower.

Dhont has also been denuded of people, because of the automation. I don’t know if you saw the Richard Ayoade film, The Double?

I haven’t.

It’s based on a Dostoyevsky novella, I think. Jesse Eisenberg goes to work at this very grim, dystopian factory. But after a while, he’s kind of struggling. Then there’s a double, like another version of him that turns up and aces everything. The film is about their conflict. It’s really good, and the surroundings are very grim and derelict. So I had that industrial dystopian feel in mind. With automation on the rise, and Lal fighting for her survival, I wanted her to realise that working for a glamorous company might not be so glamorous after all. Work in an Amazon warehouse is horrible. So I wanted to pull the rug out from under her.

And she could see the Tower from afar.

From her sad little room!

She does work her way up. But it doesn’t feel like she’s happy with that.

All that glitters isn’t gold. When she does get promoted, she’s aware that there’s something lurking underneath. Something’s not right. She thinks, “Well, okay. This is great, and I’ve got loads of money, loads of time. But things are a bit off…” But then, she’s also competitive, especially with her sister, so she also wants to believe everything’s great. I wanted capitalism to pull her in with all its glories, and then wring her dry.

Yes, it definitely did. At the end, we don’t quite know for sure what she decided. I got the impression she made the right decision.

I’m glad you think she made the right decision. 


Keep your surveillance apparatus peeled for part II, coming soon.


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License

Samuel R. Delany’s Dhalgren: Mapping economic landscapes in science fiction

By Josephine Wideman.

In this academic article, Josephine Wideman explores themes of temporality and capital accumulation in Samuel R. Delany’s Dhalgren (1975). As Fredric Jameson suggests, we must find new methods of spatial and social mapping in order to navigate the geographical and cultural landscapes of late capitalism. Delany’s Dhalgren is deeply concerned with the fate of US hegemony, and with the uncertainty that capitalism has produced: the duality of its unsustainability and seeming inevitability. Bellona is a cityscape which has been devastated by the cycle of accumulation and taken off the map. Delany’s creation ultimately should not be read as a prophecy of what will come of late US capitalism, but it gives insight into the complex historical and apocalyptic consciousness that has been cultivated.



Dhalgren by Samuel R. Delany

Samuel R. Delany’s 1975 novel Dhalgren is lengthy, hallucinatory, and at times unnavigable science fiction. Its form is as dense and as wavering as the urban landscape it depicts, where Delany’s protagonist, Kid, can wonder whether ‘there isn’t a chasm in front of me I’ve hallucinated into plain concrete.’1 Bellona – the fictional city where events take place – is a space ‘fixed in the layered landscape, red, brass, and blue, but […] distorted as distance itself,’ a place where ‘the real’ is ‘all masked by pale diffraction.’2 Although the scenery and scenarios of Bellona may be fictional, and perhaps even fantastic, they are also true representations of real experience. The unfixed landscape we live in becomes ‘fixed’ before us in Delany’s book. The distortions and diffractions by which it is fictionalised only increase its representational precision. The gaps in our experience, usually masked, are made visible. For although it takes an unusual form, we can recognise

this timeless city […] this spaceless preserve where any slippage can occur, these closing walls, laced with fire-escapes, gates, and crenellations are too unfixed to hold it in so that, from me as a moving node, it seems to spread, by flood and seepage, over the whole uneasy scape.3

In looking at Dhalgren, I have borrowed from the political theorist and sociologist Giovanni Arrighi in order to trace the presence and effects of capitalist accumulation in Delany’s fiction. Arrighi, in The Long Twentieth Century, describes the ‘interpretative scheme’ of capitalism as a ‘recurrent phenomena.’4 Drawing on work by the historian Fernand Braudel, Arrighi follows the Genoese, the Dutch, and the British cycles of accumulation to the current North American cycle. By examining past economic patterns and anomalies, he suggests that we may be able to gesture at the fate of our current cycle. Arrighi sets out to demonstrate that the rise and fall of these hegemonies, while never identical, tend to follow a set of stages that begin ‘to look familiar.’5 To make his argument, he proposes a new use for Marx’s ‘general formula of capital’:

Marx’s general formula of capital (MCM’) can therefore be interpreted as depicting not just the logic of individual capitalist investments, but also a recurrent pattern of historical capitalism as world system. The central aspect of this pattern is the alternation of epochs of material expansion (MC phases of capital accumulation) with phases of financial rebirth and expansion (CM’ phases).6

In Das Kapital, Marx initially proposes the formula CMC to theorise how capital functions. This theory begins with the assumption that people have needs and desires they cannot satisfy by themselves. Thus we create the commodities we know how to make (C), which are sold for money (M), which allows us to buy the commodities we want (C). As this cycle repeats, those who are skilled presumably accrue more value than others, being able to sell their commodities for a greater profit. This theory centres on the individual and his role in a capitalist system.

But Marx then sets CMC aside in favour of another formula – the formula borrowed by Arrighi in The Long Twentieth Century – MCM’. In MCM’, circulation does not begin with the dissatisfied individual, but with capital itself. Money is invested (M) into the materials and labour necessary to produce a commodity (C), which is then sold for a greater sum of money (M’). The difference between CMC and MCM’ is subtle but crucial. The first formula implies that capitalism recurs, and things are made and exchanged, in order to satisfy human desire and need. The second implies that money is in charge, that production and exchange are ultimately subservient to profit, and that money begets more money. For Marx, what drives capitalism is not mere MCM but MCM’ – the apostrophe signifying ‘prime’ – the idea that money increases in value through circulation. The source of this additional, or ‘surplus’, value is where capital really loses its lustre. This value is generated within labour – in the time spent on the creation and production of a commodity from raw material – and for Marx, its appropriation by capitalists is inherently exploitative.
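The two circuits can be written schematically as below – a minimal rendering of the standard Marxian notation just described, not a formula taken from Arrighi or Delany:

```latex
% Schematic of the two circuits of circulation discussed above.
% C = commodity, M = money; the prime marks the augmented sum of money.
\begin{align*}
  \text{Simple circulation (CMC):}      \quad & C \longrightarrow M \longrightarrow C \\
  \text{Capitalist circulation (MCM'):} \quad & M \longrightarrow C \longrightarrow M', \qquad M' = M + \Delta M
\end{align*}
% Here \Delta M is the surplus value, generated in production and appropriated by the capitalist.
```

Read this way, the increment ΔM is what Arrighi’s cycles of accumulation repeatedly pursue, first through material expansion (the MC phase) and then through financial expansion (the CM’ phase).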


Vector #9

If you have read the Constitution thoroughly you will have seen that all your Officials are working on a voluntary basis. The work we do for and on behalf of the BSFA is just a small part of our spare time hobby connected with sf. Right from the day the Association was inaugurated it has been Officered by people already involved in voluminous correspondence with other people from all parts of the world who read and enjoy sf. They belong to local clubs which hold regular meetings; they visit one another – distance no object; they publish amateur magazines of their own; many of them are married and with families that demand part of their time should be spent with them; there are all sorts of things going on all the time. So, Jimmy wants a column from me for the Newsletter or VECTOR – whichever is due – I have a houseful of fans visiting who have to be shown some semblance of hospitality. Jimmy has sent out some stencils to have the artistic headings drawn by one of our talented members whose domestic commitments for the next few weeks don’t allow him the chance to do them. Attendance has to be made regularly at club meetings if the club is to flourish. Result: Delayed publication. (Sometimes of course, the paper for printing is sent to the wrong address which doesn’t help either!). Now, I hope none of you are thinking that this is a complaint at all the things we are trying to do in limited spare time; it isn’t.

Ella Parker

Now that we have got that out of the way I’d like to pick a small bone with some of our members living in and near London. I warned you that no personal invitations would be sent out to attend the Friday night meetings at my house for BSFA members. This still applies. Some of you have taken me up on it and come regularly, but there are still more of you who, up to now, haven’t put in an appearance. We have roped in two members since these meetings began. Patrick Kearney who, unfortunately, after only two visits has had to go into hospital (I hope this was only coincidence, Pat?) and Phillip Slater who did the same as Pat and brought his membership fee with him and paid on the spot. Until those two did that the record for joining in the shortest possible time had been held by Mrs. Joyce Shorter (Sorry, Joyce. No pun intended there).

Ella Parker