Freeing art from the human artist: Hod Lipson speaks to Fiona Moore about AI and creativity

Interview with Hod Lipson

By Fiona Moore

Artist: Pix18, a robot ‘that conceives and creates art on its very own.’ Oil on canvas. (Image source: http://www.pix18.com)

Hod Lipson is a professor of Engineering and Data Science at Columbia University in New York. With Melba Kurman he is co-author of the award-winning Fabricated: The New World of 3D Printing and Driverless: Intelligent Cars and the Road Ahead. His often provocative work on self-aware and self-replicating robots has been influential across academia, industry, policy, and public discourse more generally (including in this very popular TED talk), and he has also done pioneering work in the fields of open-source 3D printing, electronics 3D printing, bio-printing and food printing. Hod directs the Creative Machines Lab at Columbia, where they “build robots that do what you’d least expect robots to do.”

Fiona Moore is a writer and academic whose work, mostly involving self-driving cars and intelligent technology, has appeared in Clarkesworld, Asimov’s, Interzone and many other publications, with reprints in Forever Magazine and two consecutive editions of The Best of British SF. Her story “Jolene” was shortlisted for the 2019 BSFA Award for Shorter Fiction. Her publications include one novel, Driving Ambition; numerous articles and guidebooks on cult television, including guides to Blake’s Seven, The Prisoner, Battlestar Galactica and Doctor Who; three stage plays; and four audio plays. When not writing, she is a Professor of Business Anthropology at Royal Holloway, University of London.

You are a celebrated figure in the world of artificial intelligence research. Can you tell me how you came to be interested in, and working in, this area?

Thanks. To me, issues like self-awareness, creativity, and sentience are the essence of being human, and understanding them is one of life’s big mysteries – on par with questions like the origin of life and of the universe. There are also many practical reasons to understand and replicate such abilities (like making autonomous machines more resilient to failure). I think that we roboticists are perhaps not unlike ancient alchemists, trying to breathe life into matter. That’s what brings me to this challenge.

My own interest in AI is, in part, as an anthropologist, looking at culture. To what extent will AIs “learn” culture, at least initially, from humans, and to what extent do you see them as capable of developing culture on their own?

Yes, AIs learn culture (for better and worse) from humans and from a human-controlled world; but as AIs become more autonomous, they will gather their own data, and develop their own norms, perspectives, and biases.

Do you see this already happening? If so, what do AI cultures look like at present?

AIs today are still like children, and their cultures are heavily controlled by us humans – their “parents.” For example, AIs that generate music are influenced by existing human music genres; AIs that generate human portraits are influenced by the images of humans they find on the web, disproportionately favouring certain aesthetics, genders, and ethnicities; AIs that generate text are influenced by the prose they are trained on; and so forth.

I have not seen AIs that have full autonomy over the data they consume, but this will eventually happen as artificial intelligence becomes more physically autonomous and can collect its own data. But again, we humans are also increasingly subjected to an information diet prescribed by the culture we live in, and we have to make a conscious effort to rise above our culture or go against it.

To the extent that AIs “learn” culture from humans, how can we avoid cases like the “racist algorithm” incident (or issues like helper AIs such as Alexa being gendered female)?

At this point, an AI is like a child. You can shape what it learns by controlling its experiences, to some degree, but you are never 100% in control, and it learns what it learns. We can do extensive testing, but even testing is difficult and biased. It will be a long asymptotic process of testing, unbiasing, and retesting, by humans and by other AIs.

Do you think that as AI culture becomes less like human culture, it might help prevent such incidents?

There will not be a single “AI culture”; there will be many AIs and many AI cultures – just like there are many humans and many human cultures. So yes – AIs will offer second opinions and alternative perspectives, which may also be biased, but perhaps biased in different ways. AIs will differ based on their differing life experiences (datasets). Some AIs will reinforce human biases; some AIs will expose them; some AIs will help counterbalance human biases. So I hope that, overall, more AIs will lead to more diverse opinions and hence less bias.

Should we shape the culture of AI to our own needs?

Yes and no. AI is not monolithic. Some forms of AI are practical, and yes, from a pragmatic point of view, we should shape those to our needs – driving a car, for example. Other forms of AI are more exploratory, where we want to find out what and how it learns and what it can discover on its own – automating scientific discovery and engineering design, for example. These should perhaps evolve in a more open-ended way.

What are the ethics of shaping AI culture to our own needs?

As long as AI is a tool, the ethics of shaping AI are no different from the ethics of shaping any other massively used tool, like a gun, a smartphone, or a social networking platform: rife with good intentions but sometimes with unintended consequences. But when AI has its own self-awareness, things will become more complicated. AI could eventually (decades from now) have its own feelings; at that point, shaping an AI would be akin to shaping another being.

My other interest is, of course, as a speculative fiction writer. What do you think about how AIs have been treated in SF?

Science fiction has been pretty good at recognizing some of the potential long-term challenges with AI; less so on the benefits, complexity, and diversity of AIs. Of course, conflicts make for better storytelling, but I think there is more to the story on the positive side.  

In your opinion as a professional, what should SF writers be writing about, as regards AI?

I would like to see a nuanced balance of positive and negative potential uses and future developments, instead of a predominantly negative outlook. Almost every SF story ends with humans “winning” or “losing” against some nefarious AI. But it doesn’t have to be so confrontational. I would love to see what nuanced coexistence might look like. That’s a more challenging storyline to write, but fiction about humans certainly manages to depict subtle, nuanced characters (like antiheroes) and multifaceted realities.

Can SF help us work through the problems and issues in developing AI?

Certainly. But it can also turn people off prematurely or set them against technology by presenting a skewed (biased!) dark prognosis. There’s a balance that’s more nuanced than typically portrayed.

As people become more used to the idea and work through their fears, will images of AI become more positive?

Probably. People used to be worried about the printing press and broad literacy; now we see them as a positive force. People used to be afraid of chemistry; now we see it as a positive force (mostly). In the 70s, people used to think that genetics would lead to a dystopian future; now we see it mostly as a positive medical tool. People are afraid of AI and robots, but that may change as they are used mostly for good.

Your project Pix18, the AI artist, piqued my curiosity. What relationship does AI art have with human art?

This is a long discussion. But in a nutshell, I think it frees art from the human artist.

In what sense does it do this?

Art has always had a parasitic relationship to artists (see, for example, the writings of Walter Benjamin). For the first time in history, there can be art without an artist.

To quote from your website about the project, “Some are even willing to concede that a robot can autonomously create art, but not that a robot is an artist.” Can you expand on the distinction?

Even if a machine is key in the process of creating art and does most of the work (e.g. a camera), it is rarely seen as the artist. There is always a human to take the credit. There is almost a sort of prejudice against the machine. But for the first time in history, something other than a human can be creative, and we humans have a hard time relinquishing the throne of creativity.

AIs are creating art for humans; could humans create art for AIs? What would it be like?

Interesting question! Once AIs become critics, humans will begin to create art for AI.

Why critics? Why not consumers or connoisseurs?

Yes: critics, consumers, and connoisseurs all decide what is valuable art and what isn’t.

What sort of art would humans create for AI?

If AI becomes a consumer (e.g. ranks art), some artists will try to create art that AI might favour. I think it’s inevitable.

What can we learn about human ethical systems through the process of teaching ethics to AI?

It highlights what we kind of know already: that ethics and data (experience) are intertwined and always biased – but that’s all we have. So it’s an ongoing battle, and one that we must continue to fight and improve over time.

I recently read the short story ‘Scar Tissue’ by Tobias S. Buckell, which posits a world where robots will have to be raised like children. As someone who works on AI, what are your thoughts about this?

I haven’t read the story, but yes, I agree – it was the premise of my TED talk. In fact, I would say there is no other way. But like children, robots will come with some pre-ordained choices: gifts, abilities, hardware sensors and actuators, preloaded learning software and data, and so on. We have to make important design choices. This is the opposite of 20th-century AI, which was mostly based on logic, rules, and reasoning; that approach turned out to be a dead end.

Might this also normalise AI, literally making them part of the family?

Yes. I already feel that way towards some of our robots! AIs have strengths and weaknesses. Each one is different.

How will we be able to tell when we have created a truly sentient machine?

It won’t be immediately obvious. There will be many forms, kinds, and levels of sentience. It’s not black and white. A dog is sentient, a bit, in some ways, sometimes.

We can tell a dog is sentient, though, through our shared mammalian communication forms and embodied pursuits: a dog can tell us how it feels, a dog can empathise with us when we’re happy or in pain, a dog can trick us, and so on. Since AIs aren’t mammals, how will we tell?

True, it will be harder, initially, because humans and AIs have different roots. But AIs evolve much faster than any other form of life. So what took dogs thousands of generations to evolve as they coexisted with humans might take AIs much less time. AIs will learn to be understandable to humans as they co-evolve with us.

Where do you see yourself, and the field of AI research, going in the future?

Who knows? We’re sailing west!

OK, then, what do you see as the most significant current trend in AI research?

I think there are quite a few: creativity, curiosity, self-awareness, physical embodiment, and language. All of this is happening right now.

Hod Lipson, thank you!
