Situation: Some people believe if a robot looks similar to a human, it will be trusted more and elicit positive emotions in humans, particularly if it is a robot assistant.
In fact: Beyond a certain point, the more closely a robot resembles a human, the less willing people are to have it work alongside them. This is how the ‘uncanny valley’ effect works.
Researchers from HSE University studied the perception of social robots (robots capable of communicating with people and assisting them with various needs) in everyday situations. They examined factors such as the robots’ appearance and speech, the interaction situation, and the respondents’ characteristics. The scholars found that androids (robots most similar to humans) are less desirable in various situations than humanoids (robots that only vaguely resemble humans). This confirms the ‘uncanny valley’ effect identified at the end of the last century, whereby people dislike robots once they reach a certain level of human resemblance. The results of the study were published in The Journal of Sociology and Social Anthropology.
In 1970, Japanese robot engineer Masahiro Mori published the essay ‘Bukimi No Tani’, later translated into English as ‘Uncanny Valley’. Mori hypothesised that beyond a certain point, the more closely robots resemble humans, the more revulsion they induce.
People feel revulsion when they perceive inanimate objects that closely resemble a human or human body parts. Mori plotted a curve showing that affinity towards a robot grows up to a certain threshold of similarity, beyond which it drops sharply: the so-called ‘uncanny valley’. Anthropomorphic robots elicit emotional reactions similar to those provoked by corpses, hand prostheses with imitation skin, and zombies.
Mori’s concept explains this with the assumption that the unnaturalness of anthropomorphic robots may remind people of death and trigger the associated negative feelings. The twitchy movements, speech out of sync with lip movements, frozen expressions, and similar traits typical of robots are not characteristic of living humans, which is why they cause cognitive dissonance and an ‘uncanny’ feeling.
Half a century has passed since Mori’s essay was published. Robotics has made huge steps forward, but the problem of how people perceive robots is more relevant than ever. The question arises: what characteristics should machines have for successful interaction with humans?
The authors of the study, Roman Abramov of HSE University and Viktoria Katechkina, note that few classifications of robots by level of anthropomorphism exist. However, some key types can be outlined based on several papers.
Intelligent systems that imitate human appearance in maximum detail, both structurally (the general form of the robot’s body) and in terms of materials, are usually classified as androids. Any other robots that copy human appearance at least partially are called humanoids. ‘In this case, we see different levels of resemblance, but most often, there are structural components, such as detectable forms of eyes, head, limbs and their combinations,’ the authors comment.
By this logic, androids are the robots most likely to fall into the ‘uncanny valley’.
Today, there are a number of scientific explanations for the uncanny valley effect. One of them is related to the theory of unmet expectations and negative attitudes. ‘The theory of unmet expectations assumes that interaction with robots, particularly anthropomorphic ones, breaks the pattern of everyday microbehavioural interactions, under which it is important to understand any action as a result of the implementation of correctly understood expectations,’ the paper says. According to the authors, robots break the common pattern, since their human resemblance is imperfect.
‘At the same time, there is a popular perception of robotisation as a negative phenomenon that poses a threat not only to the uniqueness of humans, but to their physical existence. These negative attitudes, which are particularly intensively broadcast via films, books and media, are one reason for the uncanny valley effect in human-to-robot interactions,’ the researchers explain.
Given this, it can be assumed that a humanoid appearance is preferable to an android one, and that a robot’s appearance is the most important factor in assessing its acceptability and viability. The researchers decided to test these assumptions in relation to social robots.
The study included students of Moscow universities aged 18 to 29 (with an average age of 21.7). In the first stage, the researchers compiled a list of 289 Moscow universities, including 29 technical ones and 260 in other fields. Then, a random number generator was used to select five target universities from each of the groups. In the second stage, the researchers looked for respondents via the VK social media groups of the selected universities. The online survey was conducted in 2021 and was sent out via personal messages to the groups’ members.
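The two-stage stratified selection described above can be sketched in a few lines. This is a minimal illustration: the university names, group contents, and fixed seed are assumptions for demonstration, not details from the study.

```python
import random

# Hypothetical stand-ins for the real university lists (29 technical, 260 other)
technical = [f"Technical University {i}" for i in range(1, 30)]
non_technical = [f"University {i}" for i in range(1, 261)]

random.seed(42)  # fixed seed so this sketch is reproducible

# Five target universities drawn at random from each group (stratified sampling)
target_technical = random.sample(technical, 5)
target_non_technical = random.sample(non_technical, 5)

print(len(target_technical), len(target_non_technical))  # prints: 5 5
```

Sampling within each group separately, rather than from the pooled list of 289, guarantees that technical universities are represented despite making up only about a tenth of the list.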
The study used the vignette method: participants were presented with several artificially modelled situations in which an android or a humanoid robot acted as an assistant in a formal or an informal environment. The formal situation modelled applying for a passport; the informal one involved using the robot as a household worker. The respondents answered questions about how acceptable such a robot would be as a companion and how viable it would be to replace a human with a robot in the given situation.
The researchers used statistical analysis to assess the impact of several factors on perceptions: the robot’s appearance, speech, interaction situation, and the respondent’s educational background and gender.
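The experimentally varied conditions form a small factorial design. A minimal sketch, assuming the factor labels below (the authors’ exact wording for the levels is not given in the article):

```python
from itertools import product

# The three experimentally varied vignette factors described in the study
appearance = ["android", "humanoid"]
speech = ["human-like", "computerised"]
situation = ["formal (passport office)", "informal (household)"]

# Every combination of factor levels yields one vignette: 2 x 2 x 2 = 8
vignettes = list(product(appearance, speech, situation))
print(len(vignettes))  # prints: 8
```

Crossing the factors fully means each respondent characteristic (gender, field of study) can be compared across all eight appearance–speech–situation combinations.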
The researchers observed the highest levels of acceptability and viability in the formal interaction situation with a humanoid robot whose speech was identical to that of a human. Informal interaction at home with an android robot speaking in a computerised manner received the lowest average assessment.
The results showed that the most important factor in perceiving a robot is its appearance. The authors say that they chose an android with feminine features, since this may minimise negative perceptions. However, it appears that feminine characteristics do not compensate for the ‘uncanny valley’ effect.
All else being equal, the humanoid robot was considered more acceptable and viable in terms of replacing a human. Regarding other factors, speech that is most similar to that of a human does not induce revulsion in respondents. Speaking of interaction situations, an android robot in a home environment is highly undesirable.
The researchers found certain differences in the perception of robots depending on the respondents’ genders and educational backgrounds. ‘One interesting outcome concerned evaluations by women with different backgrounds. The only case in which an android robot was evaluated higher than a humanoid one was in a formal interaction situation, in which female students of technical fields gave higher grades than female students of non-technical fields.’
Such a difference in perception may be related to the respondents’ education. Women who have studied technology are more likely to expect a robot to help in an everyday situation rather than act as an independent subject.
In terms of the viability of replacing humans with robots, the situation was different. The researchers say that the respondents gave consistently low grades to the viability of androids, independently of the participants’ genders and educational backgrounds.
At the same time, highly contrasting grades for humanoid robots were observed depending on the respondents’ individual characteristics. ‘Male non-technology students believed it more viable to replace humans with humanoids than male technology students, while female non-technology students, on the contrary, believed this replacement less viable than female technology students,’ the paper said.
According to one of the authors, Roman Abramov, Professor at the HSE Faculty of Social Sciences, the results of the study can serve as a practical guideline for developers on how to make social interfaces more acceptable and understandable, as well as which specific perception-influencing factors should be taken into account in development.
The study also has certain limitations related to the projective nature of the method. ‘More reliable and objective results can be obtained by combining real robot interaction practice analysis with a projective approach,’ the researchers commented.
They also say that it is impossible to apply these conclusions to the general population. That is why further research is needed. In the future, the scholars believe, it will be possible to expand the sample and study other age groups in order to determine the specifics of human-robot interaction related to age, technology acceptance, or openness to innovation.