The premise of Philip K. Dick’s famous novel Do Androids Dream of Electric Sheep?, first published in March 1968, is that empathy defines our humanness. The novel and the movie directly inspired by it – Ridley Scott’s Blade Runner – describe androids that are indistinguishable from humans and can be exposed only through the Voight-Kampff empathy test, a kind of lie detector.
In the novel – which is set in what was, at the time of publication, a futuristic 1992 – people need machines to nurture their empathy. They log on to an “empathy machine” that connects them to the suffering of a quasi-religious figure. A domestic “mood organ” controls people’s moods and motivations according to a number which is dialled in to it.
Everyone is also expected to care for a creature. Since animals are virtually extinct in this post-nuclear apocalypse world, people buy artificial creatures to look after. Towards the novel’s end, the main character, Deckard, finds a toad in the wilderness and believes it’s real. His wife ascertains that the toad is a fake, and buys artificial flies to “feed” it.
Current technology is nowhere near producing androids that are indistinguishable from humans. But robotics researchers are apparently close to building machines capable of empathy. A 2011 paper described an experiment in which a human subject wearing an EEG – a network of brain-activity sensors on the scalp – carried out various tasks. The robot (named ROBERT) used the sensors to gauge the person’s mental workload, then delivered verbal information – text-to-speech drawn from a database of personal details about students the person was meeting – in different styles according to the level of mental activity.
But that’s not the kind of empathy Dick had in mind. When scientists talk of empathy, they focus on cognitive empathy, sometimes called “theory of mind”. This is a capacity to understand others’ beliefs, feelings and intentions. It does not mean feeling sympathy or compassion. In 2015, a robot called Pepper received considerable attention for its capacity to read people’s emotional expressions and “offer appropriate content”. Pepper expresses itself by changing the colour of its eyes and tablet, or tone of voice. However, this does not mean that Pepper can feel emotions itself.
Having this theory of mind also enables humans to deceive other people – something that Dick’s androids were doing when passing themselves off as human. So what about real world robots? Are they good enough to deceive?
In September 2010, New Scientist reported that robot car ROVIO had tricked its opponent in a hide and seek game – an achievement that the article presented as “a step towards machines that can intuit our thoughts, intentions and feelings”. Yet, one expert stressed that this was “a far cry from human theory of mind” because ROVIO’s performance was highly specific to the task, and didn’t demonstrate the generalised concept of deception that humans have.
Appearances don’t deceive
When it comes to empathy, there’s a huge gap between what robots can do and what humans do. An important difference between the future we’re entering and the future imagined in Dick’s novel, however, may lie in the fact that real world social robots are not “replicants”. Many robot designers prefer cartoon forms. This sidesteps the “uncanny valley” effect identified by Japanese roboticist Masahiro Mori. This theory holds that highly humanoid artefacts can cause feelings of spookiness and repulsion in actual humans.
Designers opt for cartoon forms because they are fun, too, especially for children. Consider the robot Tega, described as a cross between a Furby and a Teletubby. Tega is promoted as a robot companion that can set a good example for kids when programmed with desirable behaviours.
For these machines, some developers prefer to use the term “socially assistive robot” as opposed to “social robot”. Their aim is to assist carers, therapists and teachers in their work. One such “social assistant” is Paro, which looks like a baby seal and is used as an alternative to pet therapy in residential care homes for the elderly.
Another promising use for these kinds of robots is therapeutic work with autistic children. Autism is associated with difficulties in reading social cues and interacting with others. Studies have shown that games played with an adult or another child through a robot such as the doll-like Kaspar can help autistic children come out of their shell.
The march of robots into our lives seems inevitable, but opinions are polarised on whether this is the best thing for humanity. Enthusiasts emphasise the benefits for learning. Yet research suggests that children growing up immersed in technology might be less likely to regard living animals as having the right not to be harmed.
The premise of Do Androids Dream of Electric Sheep? was that empathy separates human from android. Technology is certainly blurring this boundary, but whatever lies in store for humanity, Dick’s novel remains a poignant commentary on what it means to be truly human.
This article was originally published on The Conversation, first appearing on 6 March 2018. Raya A. Jones (Social Sciences, Cardiff University) formed one part of our panel for our December 2018 event on Philip K. Dick’s Do Androids Dream of Electric Sheep? and this article is based on the paper she gave as part of that event. The original article can be read here.
Further content related to that event is listed below.