Story
Jennifer has a seven-year-old daughter. Jennifer is busy at work and cannot spare much time for her. Jennifer's friend Ashley recently bought a humanoid robot to take care of her own child. The robot is designed by a company that targets parents who cannot look after their children during the daytime, and many such parents have chosen to buy humanoid robots to care for their children.
After one month, Ashley was so satisfied with the humanoid's overall performance in child care that she suggested Jennifer purchase the same robot, so that Jennifer would not have to worry about her daughter's well-being while at work. According to Ashley, the robot demonstrates outstanding communication skills and provides excellent entertainment while caring for her child. She further emphasized that the childcare robot's features and external appearance are nearly identical to a human's, which helped her child communicate more naturally with it.
Nonetheless, Jennifer has mixed feelings. On one hand, she needs someone to help take care of her daughter, and the robot's companionship could benefit her child's physical and mental health. On the other hand, she has many worries about these human-looking machines. In the end, she decides to bring one home.
At first, Jennifer has difficulty introducing the robot to her daughter because she does not know whether to present it as a friend or as a machine. A robot's human-like appearance can be confusing, especially to a child whose cognition is still developing. One day, Jennifer's daughter came home crying about an argument with friends at school. The robot was not able to show feeling or compassion; instead, it recited common phrases pre-programmed to comfort children. Hearing those words made the girl even more upset, because she felt that the robot did not understand her situation.
After a few months, Jennifer noticed that her daughter constantly gave other people orders, a habit formed from talking to the robot, and her social skills suffered. Her friends at school stopped spending time with her because of her bossy behavior. She then began to imitate the robot's behavior, from its speech to its facial expressions, which worried Jennifer. The girl learned to hide her emotions and became less expressive than other children her age. Jennifer believes the humanoid robot is gradually undermining her daughter's mental health. If her daughter one day develops an emotional disorder such as alienation or depression, who should take full responsibility?
What Are Humanoid Robots?
A humanoid robot is a robot whose body shape and face resemble a human's, and most humanoid robots can interact with humans. Building robots that look like people has generated heated debate on moral and ethical issues, because this human-like appearance makes it easier for people to form emotional connections with them than with other types of robots. A professional responsibility in this field is to weigh the necessity and applications of anthropomorphization against the potential risks of coexisting with such robots. Preventing discrimination between humans and humanoid robots may also become a challenge.
Discussion Questions
Each question below is followed by related perspectives and potential starting points for considering these ethical concerns.
Can people accept the external appearance of a humanoid robot? How will a robot's human traits influence the way people treat it?[1, 2]
- “When things go wrong with robots in Sci-Fi, they almost always take human form. Beyond the “robots out of control” trope, there’s the psychological factors that come to play as we can’t decide how to treat them, or how they should treat us.”[1]
What do children's social and moral relationships with a humanoid robot look like? How do children perceive humanoid robots?[3]
- “The interview data showed that the majority of children believed that Robovie had mental states (e.g., was intelligent and had feelings) and was a social being (e.g., could be a friend, offer comfort, and be trusted with secrets).”[3]
Should we introduce humanoid robots to young children? If so, could a child's inability to differentiate between humanoids and human beings affect how the child interacts with other people? How might machines affect children's behavior and interactions at different ages and levels of cognitive development?
- “In terms of Robovie’s moral standing, children believed that Robovie deserved fair treatment and should not be harmed psychologically but did not believe that Robovie was entitled to its own liberty (Robovie could be bought and sold) or civil rights (in terms of voting rights and deserving compensation for work performed).”[3]
Since humanoid robots are built and deployed for profit, will their involvement undermine society's trust?[4]
- “Cooperation is a key feature of our species, essential for social life. And trust and generosity are crucial in differentiating successful groups from unsuccessful ones. If everyone pitches in and sacrifices in order to help the group, everyone should benefit. When this behavior breaks down, however, the very notion of a public good disappears, and everyone suffers. The fact that AI might meaningfully reduce our ability to work together is extremely concerning.”[4]
Should we program humanoid robots to have emotions? If so, would they deserve to be treated humanely? Should they possess rights and bear accountability?[5]
- “If we did want to build a robot with real sensations, how should we proceed? When I ask my students this question, they often respond with “Why would anyone want to do that?” That’s a good question that reflects an understanding that robots, as we usually think of them, don’t feel anything, and so can’t suffer. That’s why we think they’re ideal for jobs that would be dangerous to people, like fixing damaged nuclear facilities.”[5]
How could humanoid robots lead some people to believe that the robots have feelings when they do not? What is the potential impact of this mistaken belief?
- “The results revealed that the developed robot has a positive effect on the teacher’s impression about reliability and sympathy.”[6]
Can humanoid robots be moral without sensations? If not, how can people stay in control of fast-developing, complex artificial intelligence? How can people protect themselves against unintended consequences caused by humanoid robots, and who should take responsibility when such consequences occur?
- “The conceptions of morality and creativity interplay with linguistic human beings instead of non-linguistic humanoid robots, as humanoid robots are indeed docile automata that cannot be responsible for their actions.”[7]
Do people hold a humanoid robot morally accountable for the harm it causes? Who should bear more responsibility when a humanoid robot causes harm?[8]
- “Sixty-five percent of the participants attributed some level of moral accountability to Robovie. Statistically, participants held Robovie less accountable than they would a human, but more accountable than they would a vending machine.”[8]
What considerations must we make to maintain a society where humans coexist with humanoid robots? Should humanoid robots be treated as fairly as humans?[9]
- “Do robots have moral and legal rights – the right not to be tortured, the right to consent to sex (can you consent to your own programming)? As Jinks points out, if robots are indistinguishable from humans but have restrictions placed upon their behaviour and movement, “aren’t we legislating discrimination?””[9]
Themes
(Primary) Promotion of Human Values, Anthropomorphization, Fairness and Non-discrimination, Identity
(Secondary) Human Control of Technology, Human Rights, Safety and Security, Accountability, Professional Responsibility, Government
Resources
[1] Thomas Hornigold (December 7, 2017). Why Humanoid Robots Are Still So Hard to Make Useful. Singularity Hub. Retrieved Nov 17, 2020, from https://singularityhub.com/2017/12/07/why-the-most-useful-robots-still-dont-look-much-like-us/
[2] Ryan Hickman (April 10, 2019). Humanoid Robots are the Wrong Answer to the Right Problem. Medium. Retrieved Nov 17, 2020, from https://medium.com/@ryanhickman/humanoid-robots-are-the-wrong-answer-to-the-right-problem-28b6bd8370f4
[3] Kahn, P. H., Jr., Kanda, T., Ishiguro, H., Freier, N. G., Severson, R. L., Gill, B. T., Ruckert, J. H., & Shen, S. (2012). “Robovie, you’ll have to go into the closet now”: Children’s social and moral relationships with a humanoid robot. Developmental Psychology, 48(2), 303–314. Retrieved Nov 17, 2020, from https://doi.org/10.1037/a0027033
[4] Nicholas A. Christakis (August 30, 2019). How AI Will Rewire Us. The Atlantic. Retrieved Nov 17, 2020, from https://www.theatlantic.com/magazine/archive/2019/04/robots-human-relationships/583204/
[5] William S. Robinson (June 6, 2011). Challenges for a Humanoid Robot. On the Human: a project of the National Humanities Center. Retrieved Nov 17, 2020, from https://nationalhumanitiescenter.org/on-the-human/2011/06/challenges-for-a-humanoid-robot/
[6] Kanda, T., Kamasima, M., Imai, M., et al. (2007). A humanoid robot that pretends to listen to route guidance from a human. Autonomous Robots, 22, 87. Retrieved Nov 17, 2020, from https://doi.org/10.1007/s10514-006-9007-6
[7] Chakraborty, Sanjit (2018). Can humanoid robots be moral? Ethics in Science and Environmental Politics, 18, 49–60. Retrieved Nov 17, 2020, from https://www.researchgate.net/publication/338512662_Can_humanoid_robots_be_moral
[8] Kahn, P. H., Jr., Kanda, T., Ishiguro, H., Gill, B. T., Ruckert, J. H., Shen, S., Gary, H. E., Reichert, A. L., Freier, N. G., & Severson, R. L. (2012). Do people hold a humanoid robot morally accountable for the harm it causes? In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI ’12), Association for Computing Machinery, New York, NY, USA, 33–40. Retrieved Nov 17, 2020, from https://doi.org/10.1145/2157689.2157696
[9] Sam Jinks (November 11, 2019). The ethics of human robots: Sam Jinks brings an artist’s perspective to the discourse. The Conversation. Retrieved Nov 19, 2020, from https://theconversation.com/the-ethics-of-human-robots-sam-jinks-brings-an-artists-perspective-to-the-discourse-86228
[10] Stephy Chung (November 2, 2019). Meet Sophia: The robot who laughs, smiles and frowns just like us. CNN. Retrieved Nov 17, 2020, from https://www.cnn.com/style/article/sophia-robot-artificial-intelligence-smart-creativity/index.html