Cheating and unfair? So, it's human. Educational social robots and synthetic ethics
DOI: https://doi.org/10.51302/tce.2024.18841

Keywords: educational technology, social robots, artificial intelligence, cheating behavior, robot abuse, dishonesty, academic integrity, synthetic ethics

Abstract
Education is beginning to make use of emotional artificial intelligence through anthropomorphized educational robots. Evidence shows that students are able to form emotional bonds with these agents. However, a growing number of cases of abusive disinhibition have been observed in such interactions, including racist or sexist degradation, abuse of power, and violence. Some researchers warn of the negative consequences this type of behavior can have in the long term, both for the ethical education of students and for robots that learn from these behaviors. Despite their social and educational relevance, few studies have attempted to understand the mechanisms underlying these immoral or collectively harmful practices. The aim of this article is to review and analyze the research that has studied unethical human behavior through interaction with anthropomorphic social robots. A descriptive bibliometric study was carried out following the criteria of the PRISMA statement. The results show that, under certain circumstances, anthropomorphization and the attribution of intentionality to robotic agents can be disadvantageous, provoking attitudes of rejection, dehumanization and even violence. However, a more realistic view of both the capabilities and limitations of these agents, and of the mechanisms that guide human behavior, could help harness the great potential of this technology to promote students' moral development and ethical awareness.
References
Ahmad, M. I. y Refik, R. (2022). «No chit chat!». A warning from a physical versus virtual robot invigilator: Which matters most? Frontiers in Robotics and AI, 9, 1-11. https://doi.org/10.3389/frobt.2022.908013
Angeli, A. de y Brahnam, S. (2008). I hate you! Disinhibition with virtual partners. Interacting with Computers, 20(3), 302-310. https://doi.org/10.1016/j.intcom.2008.02.004
Arroyo, A. M., Kyohei, T., Koyama, T., Takahashi, H., Rea, F., Sciutti, A., Yoshikawa, Y., Ishiguro, H. y Sandini, G. (2018). Will people morally crack under the authority of a famous wicked robot? 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 35-42). IEEE. https://doi.org/10.1109/ROMAN.2018.8525744
Ayub, A., Hu, H., Zhou, G., Fendley, C., Ramsay, C. M., Jackson, K. L. y Wagner, A. R. (2021). If you cheat, I cheat: cheating on a collaborative task with a social robot. 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN) (pp. 229-235). IEEE. https://doi.org/10.1109/RO-MAN50785.2021.9515321
Bartneck, C., Hoek, M. van der, Mubin, O. y Mahmud, A. A. (2007). «Daisy, Daisy, give me your answer do!»-Switching off a robot. Proceedings of the 2nd ACM/IEEE International Conference on Human-Robot Interaction (pp. 217-222). ACM. https://doi.org/10.1145/1228716.1228746
Bartneck, C. y Keijsers, M. (2020). The morality of abusing a robot. Paladyn. Journal of Behavioral Robotics, 11(1), 271-283. https://doi.org/10.1515/pjbr-2020-0017
Bartneck, C., Rosalia, C., Menges, R. y Deckers, I. (2005). Robot abuse: a limitation of the media equation. In A. De Angeli, S. Brahnam y P. Wallis (Eds.), Abuse: the Darker Side of Human-Computer Interaction: An INTERACT 2005 Workshop (pp. 54-57). http://www.agentabuse.org/Abuse_Workshop_WS5.pdf
Becker, C., Prendinger, H., Ishizuka, M. y Wachsmuth, I. (2005). Evaluating affective feedback of the 3D agent max in a competitive cards game. Affective Computing and Intelligent Interaction: First International Conference, ACII 2005, Beijing, China, October 22-24, 2005. Proceedings 1 (pp. 466-473). Springer Berlin Heidelberg. https://doi.org/10.1007/11573548_60
Behnk, S., Hao, L. y Reuben, E. (2022). Shifting normative beliefs: on why groups behave more antisocially than individuals. European Economic Review, 145. https://doi.org/10.1016/j.euroecorev.2022.104116
Bernotat, J., Eyssel, F. y Sachse, J. (2017). Shape it-The influence of robot body shape on gender perception in robots. Social Robotics: 9th International Conference, ICSR 2017, Tsukuba, Japan, November 22-24, 2017, Proceedings 9 (pp. 75-84). Springer International Publishing. https://doi.org/10.1007/978-3-319-70022-9_8
Bleher, H. y Braun, M. (2022). Diffused responsibility: attributions of responsibility in the use of AI-driven clinical decision support systems. AI and Ethics, 2(4), 747-761. https://doi.org/10.1007/s43681-022-00135-x
Brščić, D., Kidokoro, H., Suehiro, Y. y Kanda, T. (2015). Escaping from children's abuse of social robots. 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 59-66). ACM. https://doi.org/10.1145/2696454.2696468
Cameron, D., Saille, S. de, Collins, E. C., Aitken, J. M., Cheung, H., Chua, A., Loh, E. J. y Law, J. (2020). The effect of social-cognitive recovery strategies on likability, capability and trust in social robots. Computers in Human Behavior, 114, 1-41. https://doi.org/10.1016/j.chb.2020.106561
Darling, K. (2021). The New Breed: How to Think About Robots. Penguin UK.
Darling, K., Nandy, P. y Breazeal, C. (2015). Empathic concern and the effect of stories in human-robot interaction. 24th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 770-775). IEEE. https://doi.org/10.1109/ROMAN.2015.7333675
Esteban, P. G., Bagheri, E., Elprama, S. A., Jewell, C. I. C., Cao, H.-L., Beir, A. de, Jacobs, A. y Vanderborght, B. (2022). Should I be introvert or extrovert? A pairwise robot comparison assessing the perception of personality-based social robot behaviors. International Journal of Social Robotics, 14, 1-11. https://doi.org/10.1007/s12369-020-00715-z
Eyssel, F. A. y Hegel, F. (2012). (S)he's got the look: gender-stereotyping of social robots. Journal of Applied Social Psychology, 42(9), 2.213-2.230. https://doi.org/10.1111/j.1559-1816.2012.00937.x
Eyssel, F. y Kuchenbrandt, D. (2012). Social categorization of social robots: anthropomorphism as a function of robot group membership. The British Journal of Social Psychology, 51(4), 724-731. https://doi.org/10.1111/j.2044-8309.2011.02082.x
Feng, S., Wang, X., Wang, Q., Fang, J., Wu, Y., Yi, L. y Wei, K. (2018). The uncanny valley effect in typically developing children and its absence in children with autism spectrum disorders. PloS One, 13(11), 1-14. https://doi.org/10.1371/journal.pone.0206343
Fink, J., Mubin, O., Kaplan, F. y Dillenbourg, P. (May 2012). Anthropomorphic language in online forums about Roomba, AIBO and the iPad. IEEE Workshop on Advanced Robotics and Its Social Impacts (ARSO) (pp. 54-59). IEEE. https://doi.org/10.1109/ARSO.2012.6213399
Forlizzi, J., Saensuksopa, T., Salaets, N., Shomin, M., Mericli, T. y Hoffman, G. (2016). Let's be honest: a controlled field study of ethical behavior in the presence of a robot. Robot and Human Interactive Communication (ROMAN). 25th IEEE International Symposium on (pp. 769-774). IEEE. https://doi.org/10.1109/ROMAN.2016.7745206
Garcia-Goo, H., Winkle, K., Williams, T. y Strait, M. K. (2022). Robots need the ability to navigate abusive interactions. 2022 ACM/IEEE International Conference on Human-Robot Interaction (pp. 1-9). IEEE. https://scholarworks.utrgv.edu/cs_fac/92/
Gómez-León, M.ª I. (2022). Desarrollo de la empatía a través de la inteligencia artificial socioemocional. Papeles del Psicólogo, 43(3), 218-224. https://doi.org/10.23923/pap.psicol.2996
Gómez-León, M.ª I. (2023). Robots sociales y crecimiento ético en educación infantil. Edutec. Revista Electrónica de Tecnología Educativa, 83, 41-54. https://doi.org/10.21556/edutec.2023.83.2697
Hoffman, G., Forlizzi, J., Ayal, S., Steinfeld, A., Antanitis, J., Hochman, G., Hochendoner, E. y Finkenaur, J. (2015). Robot presence and human honesty: experimental evidence. 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 181-188). ACM. https://doi.org/10.1145/2696454.2696487
Hsieh, T.-Y., Chaudhury, B. y Cross, E. S. (2023). Human-robot cooperation in economic games: people show strong reciprocity but conditional prosociality toward robots. International Journal of Social Robotics, 1-15. https://doi.org/10.1007/s12369-023-00981-7
Hundt, A., Agnew, W., Zeng, V., Kacianka, S. y Gombolay, M. (2022). Robots enact malignant stereotypes. ACM Conference on Fairness, Accountability, and Transparency (pp. 743-756). ACM. https://doi.org/10.1145/3531146.3533138
Jackson, R. B., Williams, T. y Smith, N. (2020). Exploring the role of gender in perceptions of robotic noncompliance. Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (pp. 559-567). ACM. https://doi.org/10.1145/3319502.3374831
Keijsers, M., Bartneck, C. y Eyssel, F. (2022). Pay them no mind: the influence of implicit and explicit robot mind perception on the right to be protected. International Journal of Social Robotics, 14, 499-514. https://doi.org/10.1007/s12369-021-00799-1
Kennedy, J., Baxter, P. E. y Belpaeme, T. (2015). The robot who tried too hard: social behaviour of a robot tutor can negatively affect child learning. 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 67-74). ACM. https://doi.org/10.1145/2696454.2696457
Kirby, R., Forlizzi, J. y Simmons, R. (2010). Affective social robots. Robotics and Autonomous Systems, 58(3), 322-332. https://doi.org/10.1016/j.robot.2009.09.015
Litoiu, A., Ullman, D., Kim, J. y Scassellati, B. (2015). Evidence that robots trigger a cheating detector in humans. 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 165-172). ACM. https://doi.org/10.1145/2696454.2696456
Luria, M., Zoran, A. y Forlizzi, J. (2019). Challenges of designing HCI for negative emotions. arXiv:1908.07577, 1-3. https://doi.org/10.48550/arXiv.1908.07577
Maggi, G., Dell'Aquila, E., Cucciniello, I. y Rossi, S. (2021). «Don't get distracted!»: the role of social robots' interaction style on users' cognitive performance, acceptance, and non-compliant behavior. International Journal of Social Robotics, 13, 2.057-2.069. https://doi.org/10.1007/s12369-020-00702-4
Mamak, K. (2022). Should violence against robots be banned? International Journal of Social Robotics, 14(4), 1.057-1.066. https://doi.org/10.1007/s12369-021-00852-z
Maninger, T. y Shank, D. B. (2022). Perceptions of violations by artificial and human actors across moral foundations. Computers in Human Behavior Reports, 5. https://doi.org/10.1016/j.chbr.2021.100154
Mirnig, N., Stollnberger, G., Miksch, M., Stadler, S., Giuliani, M. y Tscheligi, M. (2017). To Err is robot: how humans assess and act toward an erroneous social robot. Frontiers in Robotics and AI, 21(4), 1-15. https://doi.org/10.3389/frobt.2017.00021
Mubin, O., Cappuccio, M., Alnajjar, F., Ahmad, M. I. y Shahid, S. (December 2020). Can a robot invigilator prevent cheating? AI & Society, 35(4), 981-989. https://doi.org/10.1007/s00146-020-00954-8
Nass, C. y Moon, Y. (2000). Machines and mindlessness: social responses to computers. Journal of Social Issues, 56(1), 81-103. https://doi.org/10.1111/0022-4537.00153
Nomura, T., Kanda, T., Kidokoro, H., Suehiro, Y. y Yamada, S. (2016). Why do children abuse robots? Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems, 17(3), 348-370. https://doi.org/10.1075/is.17.3.02nom
Okanda, M. y Taniguchi, K. (2021). Is a robot a boy? Japanese children's and adults' gender-attribute bias toward robots and its implications for education on gender stereotypes. Cognitive Development, 58, 101044. https://doi.org/10.1016/j.cogdev.2021.101044
Parreira, M. T., Gillet, S., Winkle, K. y Leite, I. (March 2023). How did we miss this? A case study on unintended biases in robot social behavior. Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction (pp. 11-20). ACM.
Petisca, S., Leite, I., Paiva, A. y Esteves, F. (2022). Human dishonesty in the presence of a robot: the effects of situation awareness. International Journal of Social Robotics, 14(5), 1.211-1.222. https://doi.org/10.1007/s12369-022-00864-3
Rajaonah, B. y Zio, E. (2022). Social robotics and synthetic ethics: a methodological proposal for research. International Journal of Social Robotics, 1-11. https://doi.org/10.1007/s12369-022-00874-1
Rehm, M. y Krogsager, A. (2013). Negative affect in human robot interaction: impoliteness in unexpected encounters with robots. Proceedings of the 22nd IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN): Living Together, Enjoying Together, and Working Together with Robots! (pp. 45-50). IEEE Computer Society Press. IEEE RO-MAN Proceedings. https://doi.org/10.1109/ROMAN.2013.6628529
Rhee, S., Lee, S.-Y. y Jung, S.-H. (2017). Ethnic differences in bullying victimization and psychological distress: a test of an ecological model. Journal of Adolescence, 60, 155-160. https://doi.org/10.1016/j.adolescence.2017.07.013
Riddoch, K. A. y Cross, E. S. (2021). «Hit the robot on the head with this mallet»-Making a case for including more open questions in HRI research. Frontiers in Robotics and AI, 8, 1-17. https://doi.org/10.3389/frobt.2021.603510
Salvini, P., Ciaravella, G., Yu, W., Ferri, G., Manzi, A., Mazzolai, B. y Dario, P. (2010). How safe are service robots in urban environments? Bullying a robot. 19th International Symposium in Robot and Human Interactive Communication (pp. 1-7). https://doi.org/10.1109/ROMAN.2010.5654677
Shank, D. B. y DeSanti, A. (2018). Attributions of morality and mind to artificial intelligence after real-world moral violations. Computers in Human Behavior, 86, 401-411. https://doi.org/10.1016/j.chb.2018.05.014
Spatola, N., Anier, N., Redersdorff, S., Ferrand, L., Belletier, C., Normand, A. y Huguet, P. (2019). National stereotypes and robots' perception: the «made in» effect. Frontiers in Robotics and AI, 6, 1-12. https://doi.org/10.3389/frobt.2019.00021
Spatola, N., Belletier, C., Normand, A., Chausse, P., Monceau, S., Augustinova, M., Barra, V., Huguet, P. y Ferrand, L. (2018). Not as bad as it seems: when the presence of a threatening humanoid robot improves human performance. Science Robotics, 3(21). https://doi.org/10.1126/scirobotics.aat5843
Stange, S., Hassan, T., Schröder, F., Konkol, J. y Kopp, S. (2022). Self-explaining social robots: an explainable behavior generation architecture for human-robot interaction. Frontiers in Artificial Intelligence, 5, 1-19. https://doi.org/10.3389/frai.2022.866920
Strait, M., Ramos, A. S., Contreras, V. y Garcia, N. (2018). Robots racialized in the likeness of marginalized social identities are subject to greater dehumanization than those racialized as white. 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 452-457). IEEE. https://doi.org/10.1109/ROMAN.2018.8525610
Tan, X. Z., Vázquez, M., Carter, E. J., Morales, C. G. y Steinfeld, A. (2018). Inducing bystander interventions during robot abuse with social mechanisms. 13th ACM/IEEE International Conference on Human-Robot Interaction (HRI) (pp. 169-177). ACM. https://doi.org/10.1145/3171221.3171247
Veletsianos, G., Scharber, C. y Doering, A. (2008). When sex, drugs, and violence enter the classroom: conversations between adolescents and a female pedagogical agent. Interacting with Computers, 20(3), 292-301. https://doi.org/10.1016/j.intcom.2008.02.007
Wiese, E., Metta, G. y Wykowska, A. (2017). Robots as intentional agents: using neuroscientific methods to make robots appear more social. Frontiers in Psychology, 8, 1-19. https://doi.org/10.3389/fpsyg.2017.01663
Zonca, J., Folsø, A. y Sciutti, A. (2021). The role of reciprocity in human-robot social influence. Iscience, 24(12). https://doi.org/10.1016/j.isci.2021.103424
License
Copyright (c) 2024 María Isabel Gómez-León
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.