The Dawn of Social Robotics – How Social Robots Are Used

Every science fiction fan has their favorite social robot, whether that is Star Wars’ C-3PO, Star Trek: The Next Generation’s Data, or Pixar’s WALL-E and EVE. A social robot is a machine designed to interact naturally with humans by holding conversations, following social norms, making eye contact, and gesturing.

Social robots vary in their “humanness,” from the online chatbot or the automated voice system that screens your call to robots that people mistake for being human.

As the technology improves, we can expect more interactions with social robots, and those interactions are likely to become even more realistic. The Nao (pronounced “now”) robot demonstrated one aspect of self-awareness. Three Nao robots were told that two had been given a “dumbing” pill that prevented them from speaking while the third received a placebo. In fact, two of the robots were muted. When asked which had received the placebo, one robot responded, “I don’t know.” After a pause, the robot continued, “Sorry, I know now. I was able to prove that I was not given a dumbing pill.”

In 2018, Google Duplex phoned a hairdresser and made an appointment without the hairdresser realizing that she was talking to a computer. Google Duplex used “uhh” and “umm” and paused naturally before speaking. 

Computer scientists continue to debate whether accomplishments like this pass the famous Turing Test. In 1950, Alan Turing suggested that if you held a conversation with a person and a computer and could not tell which was which, the computer could be credited with genuine intelligence, what later philosophers would call “strong” artificial intelligence (AI). Regardless of the outcome of this debate, we can stand in awe of the progress being made in AI.

What Kinds of Social Robots Are There?

We are quite familiar by now with human-like technology in the forms of virtual assistants like Alexa or Siri and the previously mentioned chatbots and automated voice systems, but where else might we expect to find social robots soon?

Developers of Hanson Robotics’ Sophia envision her fulfilling a wide range of roles, including serving as a companion in nursing homes and helping crowds in parks or at large events. Softbank Group’s Nao robot does not share Sophia’s human appearance, but it is also being trained to provide customer service (in banks, for example), deliver training, and serve as a companion.

The use of social robots in healthcare is an active area of research. Social robots have been used to reduce stress among hospitalized children, much as service animals do. Similarly, robots like Paro, which looks like a baby seal, show promise with patients with Alzheimer’s disease who are unable to care for a living pet.

In particular, children with autism spectrum disorder appear to benefit from interacting with a range of social robots. The predictability of the robot’s behavior is well-suited to the children’s way of interacting. Research suggests that indicators of sociability, such as eye contact, can be improved following interactions with a social robot.

Perhaps the most controversial of the social robots is the sex robot. While a fully functional sex robot is not yet available, the sex toy industry is worth many billions of dollars, so resources are certainly available for robot development. Abyss Creations has developed a robotic head, named Harmony, that can be attached to its existing product, the RealDoll body.

Harmony, like Sophia, can produce a number of facial expressions and hold short conversations. Users can program Harmony along 18 personality dimensions. Ethicists are divided regarding the implications of a sex robot. Some worry that interactions with a robot—a possession that does not give consent—could generalize to real-world interactions with other people. They also disagree about the potential of sex robots to reduce or increase problems like sex trafficking and sexual assault. 

Although conscious robots are not currently possible, behavior that merely appears conscious raises its own set of ethical dilemmas. Unfortunately, our ability to sort out the ethical implications of social robots is likely to lag behind their development.

Autonomous weapons are not social robots in the strict sense of machines designed to interact naturally with humans, but ethicists are raising concerns about such Terminator-style technology as well. Inventor Elon Musk warned that it could be developed soon, and he donated ten million dollars to researchers seeking ways to support the beneficial development of artificial intelligence.

Robots are already an important asset in today’s military operations, conducting surveillance and identifying hazards such as improvised explosive devices (IEDs). In these cases, however, a human operator is responsible for decision-making. It is not outside the realm of possibility for programmers to develop a robot that would make those decisions itself. No current laws prohibit countries from using autonomous weapons, and efforts by the United Nations to consider banning them have been blocked by Russia and the United States. China is expected to field an arsenal of autonomous weapons in the very near future, if it has not already done so.

The Campaign to Stop Killer Robots advocates for limits on the use of autonomous weapons. They use the term “digital dehumanization” to describe negative trends associated with the inappropriate use of artificial intelligence and robots.

What is the Uncanny Valley?

In 1970, roboticist Masahiro Mori coined the term “uncanny valley” to describe variations in human responses to robots as their appearance becomes increasingly human-like.

In general, the more human-like a robot appears, the more people like it. This relationship holds, however, only up to the point where a robot begins to look very human. At that point a dramatic dip in liking, the uncanny valley, occurs: all of a sudden, we like that very realistic robot much less.

Why would this occur? Researchers have proposed a perceptual mismatch hypothesis to explain this relationship. We expect appearances to match, so when we see artificial-looking eyes in an otherwise natural-looking face or vice versa, it can upset our expectations and provoke a negative emotional reaction. 

We also might become more sensitive to exaggerated features as a stimulus becomes more natural-looking. A cartoon character with very large eyes, as in anime, does not disturb us. Digitized actors, like the characters in Alita: Battle Angel, however, take some getting used to. The actors’ original performances are completely converted to computer-generated images. One of the most obvious differences is the large eyes of the digitized characters. Although authentic to the story’s manga roots, the characters’ appearance can be somewhat distracting as our minds try to sort out why such a real-looking stimulus isn’t “quite right.” Similar reactions have been observed in response to the outlandish proportions of a Barbie or GI Joe doll.

The uncanny valley might disappear as we become more familiar with realistic social robots. If we simply compile a different set of expectations for what a robot “should” look like, rather than anticipating humanlike appearances, the discrepancies we see in the robot should be less disturbing. Then, too, there is the possibility that social robots that are indistinguishable in appearance from real humans might eventually be developed.

Appearance is not the only challenge faced when interacting with social robots. We have all had frustrating experiences while navigating artificially intelligent chats or phone menus, desperately calling out “representative” so that we can cut through the lengthy process and its many irrelevant twists and turns to speak to a real human being. These programs continue to improve, but still lack the important human connection so highly valued by members of our social species.

The Future of Social Robots

Our relationships with social robots depend on our own situations. For the foreseeable future, I do not anticipate competing with a robot for my job as a professor, but others may not be so lucky.

As robots become more capable of taking on roles in customer service and healthcare, the transition for people employed in those areas is likely to be bumpy indeed. Ideally, we would anticipate which jobs robots are most likely to take over and begin preparing humans to move toward work that robots are less likely to perform.

It is safe to assume that robots will play increasingly active roles in our lives in the future, but this is an area where additional psychological research is badly needed.

Laura Freberg, PhD

Writer & Contributing Expert

Laura Freberg serves as professor of psychology at Cal Poly, San Luis Obispo, where she teaches introductory psychology and behavioral neuroscience.

Dr. Freberg is the author or co-author of several textbooks, including Discovering Psychology: The Science of Mind, Discovering Behavioral Neuroscience, Applied Behavioral Neuroscience, and Research Methods in Psychological Science. She served as President of the Western Psychological Association (WPA) in 2018-2019.