Sunday, 18 April 2021

Interview: AI ethicist Kate Darling: ‘Robots can be our partners’

The MIT researcher says that for humans to flourish we must move beyond thinking of robots as potential future competitors

Dr Kate Darling is a research specialist in human-robot interaction, robot ethics and intellectual property theory and policy at the Massachusetts Institute of Technology (MIT) Media Lab. In her new book, The New Breed, she argues that we would be better prepared for the future if we started thinking about robots and artificial intelligence (AI) like animals.

What is wrong with the way we think about robots?
So often we subconsciously compare robots to humans and AI to human intelligence. The comparison limits our imagination. Focused on trying to recreate ourselves, we’re not thinking creatively about how to use robots to help humans flourish.

Why is an animal analogy better?
We have domesticated animals because they are useful to us – oxen to plough our fields, pigeon delivery systems. Animals and robots aren’t the same, but the analogy moves us away from the persistent robot-human one. It opens our minds to other possibilities – that robots can be our partners – and lets us see some of the choices we have in shaping how we use the technology.

What about the fear that robots will take people’s jobs?
There is a threat to people’s jobs. But that threat is not the robots – it is company decisions driven by a broader economic and political system of corporate capitalism. The animal analogy helps illustrate that we have some options. The different ways we’ve harnessed animals’ skills in the past show that we could choose to design and use this technology as a supplement to human labour, instead of just trying to automate people away.

Who should be responsible when a robot causes harm? In the Middle Ages, animals were put on trial and punished…
We did it for hundreds of years of western history: pigs, horses, dogs and plagues of locusts – and rats too. And bizarrely the trials followed the same rules as human trials. It seems so strange today because we don’t hold animals morally accountable for their actions. But my worry when it comes to robots is that, because of the robot-human comparison, we’re going to fall into the same type of Middle Ages animal-trial fallacy, where we try to hold them accountable to human standards. And we are starting to see glimmers of that, where companies and governments say: “Oh, it wasn’t our fault, it was this algorithm.”

Shouldn’t we hold robot manufacturers responsible for any harm?
My concern is that companies are being let off the hook. In the case of the cyclist killed by a self-driving Uber car in 2018, the back-up driver was held responsible instead of the manufacturer. The companies argue that they shouldn’t be responsible for learning technology because they aren’t able to foresee or plan for every possibility. I take inspiration from historical models of how we have assigned legal responsibility when animals cause unanticipated harm. In some cases, for example, we distinguish between dangerous and safer animals, and solutions range from holding owners strictly responsible to allowing some flexibility, depending on the context. If your tiny poodle bites someone on the street, totally unexpectedly and for the first time, you’re not going to be punished the way you would if it were a cheetah. But the main point is that unforeseeable behaviour isn’t a new problem, and we shouldn’t let companies argue that it is.

You don’t have any pets but you have many robots. Tell us about them
I have seven Pleo baby robot dinosaurs, an Aibo robotic dog, a Paro baby seal robot and a Jibo robot assistant. My first Pleo I named Yochai, and I ended up learning from it first-hand about our capacity to empathise with robots. It turned out to mimic pain and distress very well. When I showed it to my friends and they held it up by the tail, I realised it really bothered me if they held it there too long. I knew exactly how the robot worked – that everything was a simulation – but I still felt compelled to make the pain stop. There’s a substantial body of research now showing that we do empathise with robots.

Some people, such as social psychologist Sherry Turkle, worry about companionship robots replacing human relationships. Do you share this fear?
It doesn’t seem to have any foundation in reality. We are social creatures able to develop relationships with all different types of people, animals and things. A relationship with a robot wouldn’t necessarily take away from any of what we already have.

What, if any, are the real issues with robot companions?
I worry that companies may try to take advantage of people who are using this very emotionally persuasive technology – for example, a sex robot exploiting you in the heat of the moment with a compelling in-app purchase. Just as we’ve banned subliminal advertising in some places, we may want to consider restrictions on the emotional manipulation that will be possible with social robots.

What about privacy? Animals can keep your secrets, but a robot may not…
These devices are moving into the intimate spaces of our lives, and much of their functionality comes from their ability to collect and store data in order to learn. There’s not enough protection for the giant datasets these companies are amassing. I also worry that, because a lot of social robotics deals with characters modelled on humans, it raises issues around the gender and racial biases we put into the design: harmful stereotypes get reinforced and embedded in the technology. And I worry that we are looking to these robot companions as a solution to societal problems such as loneliness or a lack of care workers. Just as robots haven’t caused these problems, they also can’t fix them. They should be treated as tools that supplement human care and provide something new.

Will robots ever deserve rights of their own?
This often comes up in science fiction, revolving around the question of whether robots are sufficiently like us. I don’t disagree that robots could, theoretically, deserve rights if they were to become conscious or sentient. But that is a far-future scenario. Animal rights are a much better predictor for how the conversation around robot rights is going to play out in practice, at least in western society. And on animal rights we are hypocrites. We like to believe that we care about animal suffering, but if you look at our actual behaviour, we gravitate towards protecting the animals we relate to emotionally or culturally. In the US you can get a burger at the drive-through, but we don’t eat dog meat. I think it’s likely we will do the same with robots: give rights to some and not others.

Should we have human-looking robots at all?
I don’t think we’re ever going to stop doing it, but for most practical purposes the human form is overrated and overused. We can put emotions into everything from blobs to chairs. People may even respond better to non-human robots, because what’s often disappointing is when something that looks like you doesn’t quite behave the way you expect.

 
