Head image: Geminoid™ HI-4, Osaka University
The Japanese robotics professor whose creations were voted the world’s creepiest on the IEEE (Institute of Electrical and Electronics Engineers) website is not amused — especially since one of the androids is his robot twin. Director of the Intelligent Robotics Laboratory at Osaka University, Professor Hiroshi Ishiguro believes bots have much to offer humans, and like us, they certainly should not be judged solely on appearance.
With robots now doing our household chores, attending to the elderly and even teaching schoolchildren, researchers are exploring ways to make them more humanlike to better integrate them into the everyday lives of the people they serve.
That’s the aim of Ishiguro’s companion robot “Telenoid,” which ranked #1 on the creepy list. Osaka University and Japan’s Advanced Telecommunications Research Institute International (ATR) first released Telenoid in 2010 as a low-cost humanoid conversation bot whose “soft and pleasant skin texture and small, child-like body size allows one to enjoy hugging and communicating with it easily.”
Meanwhile, number nine on the IEEE creepy list is Ishiguro’s robotic doppelgänger “Geminoid HI-1,” which is remotely controlled and can mimic Ishiguro’s voice, face, and head movements.
In an exclusive interview with Synced, the 56-year-old professor shared his thoughts on humanoids, human nature, human-robot interaction, knowledge and intelligence. The interview has been edited for brevity and clarity.
Robots you invented rank No.1 and No.9 in the IEEE list of “creepiest robots.” How do you feel about that?
Obviously the people who gave my Telenoid robot that ranking evaluated it on appearance alone rather than actually interacting with it. Telenoid has become very popular in many other media, as it encourages conversation with adults, especially the elderly. I think if people interacted with my robot, nobody would say it’s creepy.
Also, I don’t know the definition of ‘creepiness’ that people use, maybe ‘uncanny’ or something like a zombie? Usually, when we see a humanlike robot, we expect everything to be humanlike: the voice, the movement and so on. But a zombie doesn’t have those humanlike movements; it’s quite jerky. In that sense we could feel creepiness.
But I think this ranking was just based on pictures. Robots should be evaluated through real interaction.
You’ve said your robot twin Geminoid HI-1 has helped you better understand society. Can you explain how?
I think along with understanding society, this robot was also good for understanding humans. When I created my copy, our staff carefully implemented my appearance, voice and behaviors in the robot. Other people said it was identical to me, but I didn’t think so, because I cannot observe myself objectively. That was an interesting finding: humans cannot observe themselves objectively. For example, I’m hearing my voice right now, but this voice is different from my recorded voice. And my mirror image is a flipped image.
We also researched things such as eye movement, which is very complicated. We can use Geminoid to investigate the meaning of eye movement. There are many things we’re looking at, and we’re writing a lot of technical papers.
You’ve previously said you believe robots might have souls. Can you elaborate on that?
That was an exaggeration (laughs). Basically, Japanese people believe everything has a soul; we never distinguish humans from other things. Someday humans will go back to nature, the same as everything else. So it’s spiritual, not factual or objective.
Can you speak about your research into the distinctions between human-to-human interactions and interactions between humans and robots?
There are so many things. The most important is that we can replicate the feeling of a person by using humanlike robots. This was the first challenge when I created my copy. I sent my copy to foreign countries, and then I teleoperated it. In this way I could adapt to and accept the android body as my own body, and I could exist in a distant place. I think this is an interesting finding: if we operate a humanlike robot through the Internet, we might accept that android robot body as our own body.
For example, we did a psychological test in which a human remotely operated a humanoid that was injected with a syringe. Although the operator was only watching on monitors, they usually had a strong feeling as if they themselves were being injected. We also saw sweat on their palms, etc.
Another interesting finding is that people can trust robots more. Usually we have some doubts about humans, but people never doubt robots. For example, we have many vending machines in Japan, but we never check the change from vending machines; we trust them. Yet we would check the change from human shopkeepers in convenience stores.
What have you learned from robots about what it means to be human?
It’s a difficult question and an endless study for science. Our finding is that robots need to be minimally humanlike, but we don’t care so much about their appearance and movement. The more important thing is interactions and conversations. I can return to the first question here, where some people said Telenoid was the creepiest robot, but everybody who has used Telenoid says it’s friendly and easy to talk to. That means conversation is most important.
Can robots develop their own personalities?
There are many levels. Robots are complicated machines that can have a kind of identity based on some differences from other identical robots because of that complexity. And now of course we can also design robot personalities. So robots can have a natural personality coming from their complexities and can also have personalities given by programmers and designers.
Also, there are many kinds of people in this world, and each may have different preferences about robots. So robots will need to adapt to people, and I think having personalities can help with that adaptation. But that’s the next step; we’re not studying robot personalities yet, we’re focused on basic humanlikeness.
What types of human knowledge are easiest and most difficult to transfer to robots?
Wikipedia and encyclopedias can easily be downloaded onto a robot, and if we ask questions the robot can answer based on knowledge from the encyclopedias. But we don’t know whether the robot understands that knowledge. Understanding knowledge is very difficult. This is the “symbol grounding problem”: without bodies and without experience, we never know whether the robot knows the real meanings of the symbols we use.
And we don’t even know the exact meaning of “knowledge.” For example I am sitting on a chair, and you can imagine that I am sitting on a chair and talking to you. But robots cannot do that. Again, this is a symbol grounding problem — when we hear a word, we can quickly have some imagination about that word, but robots cannot. It’s possible that recent research in deep learning may solve this problem, but we don’t know yet how a robot can have an imagination about words.
We can teach them the text and the words, but we cannot teach imagination about those words. And for deep learning, we need thousands of pictures of a cat or a dog, for example, for training. But a human child given seven or eight pictures can quickly understand what a cat is, and can also imagine cats. Deep learning is completely different from human learning. The human brain has more clever functions, but nobody knows why.
Are you working on any other android projects?
We have a robot called “Ibuki.” The idea was to create a child android for a new approach we call “socially developmental robotics.” An adult android is expected to have adult intelligence, otherwise, people won’t want to talk with it. But it is not so easy to give proper knowledge to an adult android. We thought if the android is a child, everybody will treat it gently and try to teach new things to it. So a child android can gather knowledge through interactions with other people, especially adults in society.
Further information on these and other Hiroshi Ishiguro robots is available on the Ishiguro Lab Project pages.
Journalist: Fangyu Cai | Editor: Michael Sarazen