Looking to 2020: The rise of robots with a face and a name
People are more likely to accept robots that take on human-like qualities like having a face and a name
| By Assistant Professor Sam Yam |
Humans have long sought to create robots that look like us. In 1495, Leonardo da Vinci designed a mechanical knight, and more recently, Hollywood movies have imagined robots that can speak, walk, and act as if they were human.
Recent technological advances are turning our imagination into reality, with robots now guiding tourists in airports (Changi included), helping doctors perform surgeries, and assisting guests in stores.
Humanoid robots have an important advantage over non-humanoid robots: they offer a sense of familiarity, which helps humans empathise with the machines.
Indeed, the prevalence of robots in the workplace is ever increasing and has considerable economic significance. The McKinsey Global Institute estimates that applying AI to organisational functions such as supply chain, marketing, sales, and manufacturing may generate profound value—upwards of US$2 trillion—over the next 20 years.
But what are the psychological impacts robots have on employees and consumers? My research suggests that employees feel a deep sense of anxiety about the prospect that they might one day be replaced by robots. In my ongoing research, I explore ways to reduce the negative psychological impacts robots have on humans through anthropomorphism – imbuing robots with human-like characteristics, motivations, intentions, and emotions.
Perceiving the ‘mind’ of robots
To understand how anthropomorphism affects humans’ perceptions of robots, I must first introduce a psychological theory called mind perception. Work on mind perception has revealed three important findings.
- First, mind perception is ambiguous, subjective, and open to disagreement. An animal lover may perceive that cattle have minds, but to many people, cattle have no minds and are merely a food source.
- Second, people perceive others’ minds on two main dimensions: agency and experience. Agency refers to the ability to think, plan, and act, whereas experience is the ability to feel emotions and bodily sensations, such as hunger, pleasure, and pain. Research has found that people tend to perceive robots as having medium levels of agency but very low experience. In contrast, people tend to perceive pets as having high levels of experience but low agency.
- Third, perceptions of agency and experience affect how entities are evaluated and treated. When entities are perceived as having agency, for example, they are seen as autonomous — able to make decisions and act intentionally and volitionally — but they are also seen as responsible when things go astray. Conversely, when entities are perceived as having experience, people feel empathy toward them, and harming them is seen as bad and is morally condemned.
Armed with these insights, my NUS Business colleagues and I conducted a field study in the world’s first robot-staffed hotel in Japan, in which we recruited about 200 hotel guests as participants.
Once guests entered the hotel, we verbally instructed them to think and write about the hotel robots in the same way they would think about other people. We also asked guests to treat the robots as if they had human-like traits, emotions, and intentions. We adopted this well-established verbal/written anthropomorphism prime from the social psychology and marketing literature. Then, at check-out the next day, we asked the guests to rate their satisfaction with the robots and with the hotel more generally.
We found that this simple psychological intervention enhanced perceptions of the service robots’ agency and experience, which in turn led guests to report higher satisfaction with both the robots and the hotel. In addition, we found that when robots failed to serve guests properly (for example, delivering the wrong cocktail), guests who had been instructed to anthropomorphise tended to be more forgiving of such service failures. This is important given that robotic technology is still emerging, and service failures are likely and common.
Results replicated in the lab
To replicate our findings and strengthen the practical implications of this work, I collaborated with colleagues at NUS Computing on a lab experiment. In this experiment, participants were served food by a robotic arm, whose humanness we manipulated in three ways.
In the anthropomorphism condition, the robotic arm introduced “herself” as “Allison.” The robotic arm also spoke in a natural female voice with an American accent. Finally, a screen mounted on the robotic arm displayed a smiley face during the study, and its lips would move while it was “speaking.”
In the control condition, the robotic arm introduced itself as “robotic arm 57174,” spoke in a mechanical voice, and displayed only a blank screen. All of our findings replicated – participants who interacted with the anthropomorphised robot perceived it as higher in agency and experience, and in turn reported higher satisfaction with the robot.
We also had a condition in which the robot would deliberately fail by always delivering the wrong food choice to participants. Consistent with our field study in Japan, participants were much more likely to forgive the anthropomorphised robot than the control robot.
Anecdotally, some participants even called the anthropomorphised robot cute when it failed, whereas a handful of participants were really upset when the non-anthropomorphised robot failed!
The New York Times recently published an article titled “Should Robots have a Face?” My research suggests that they definitely should, at least in customer service settings.
As my research demonstrates, there are easy and non-intrusive ways to enhance anthropomorphism, such as giving robots names and faces, and programming them to speak in a local accent or even slang.
No one knows with certainty how robots will shape the future workplace, but my work suggests that anthropomorphism, and the enhanced perceptions of agency and experience it produces, could be useful for organisations and managers seeking to successfully adopt this new technology.
About the author
Assistant Professor Sam Yam is from the Department of Management & Organisation at NUS Business School. His research focuses on behavioural ethics, leadership, humour, and technology and automation. He has published widely in top journals in management and psychology, and is a regular contributor to local and foreign media on these issues. In 2016, Asst Prof Yam was named by Poets and Quants as one of the Best 40 under 40 Business Professors in the world.
Looking to 2020 is a series of commentaries on what readers can expect in the new year. This is the final instalment of the series.
Here are the earlier commentaries in the series:
- Professor Tommy Koh on his global outlook for the year
- Professor Danny Quah on the main risks to the business climate
- Professor Khong Yuen Foong on how Southeast Asian countries are choosing between aligning with the US or China
- Associate Professor Tan Ern Ser on tackling poverty and inequality in Singapore
- Associate Professor Simon Poh on what to expect from the 2020 Singapore Budget
- Professor Simon Chesterman on legal issues posed by the digital economy