Social Engineering and Human-Robot Interactions' Risks

Open Access
Article
Conference Proceedings
Author: Ilenia Mercuri

Abstract: Modern robotics is often said to have taken root in the theories Isaac Asimov set out in 1941. One area of research that has become increasingly popular in recent decades is the study of artificial intelligence, or AI, which aims to use machines to solve problems that, according to current opinion, require intelligence. Closely related is the study of "social robots": robots created to interact with human beings, designed and programmed to engage with people through a "human" appearance and various interaction channels such as speech and non-verbal communication. They therefore readily elicit social responsiveness in people, who often attribute human qualities to them. Social robots exploit the human propensity for anthropomorphism, and humans tend to trust them more and more. Several issues could arise from this kind of trust and from the ability of a "superintelligence" to "self-evolve", which could lead it to violate the purposes for which it was designed, becoming a risk to human security and privacy. This kind of threat concerns social engineering, a set of techniques used to convince users to perform actions that allow cybercriminals to gain access to the victims' resources. The human factor is the weakest link in the security chain, and social engineers can exploit human-robot interaction to persuade an individual to provide private information. An important research area that has produced interesting results on how humans interact with robots is "cyberpsychology". This paper aims to provide insights into how interaction with social robots could be exploited not only in positive ways but also, using the same social engineering techniques borrowed from "bad actors" or hackers, for purposes that are malevolent and harmful to humans themselves. A series of experiments and research results are presented as examples, in particular concerning the ability of robots to gather personal information and to display emotions during interaction with human beings. Is it possible for social robots to feel and show emotions, and can human beings empathize with them? A broad area of research known as "affective computing" aims to design machines that are able to recognize human emotions and respond to them consistently; the goal is to apply human-human interaction models to human-machine interaction. A fine line separates the opinions of those who argue that machines with artificial intelligence could be a valuable aid to humans in the future and those who believe they represent a huge risk that could endanger human protection systems and safety. It is necessary to examine this new field of cybersecurity in depth in order to identify the best path to protect our future. Are social robots a real danger?

Keywords: Human Factor, Cybersecurity, Cyberpsychology, Social Engineering Attacks, Human-Robot Interaction, Robotics, Malicious Artificial Intelligence, Affective Computing, Cyber Threats

DOI: 10.54941/ahfe1002199
