Decision Making in Autonomous Robots: Cybersecurity and Explainability (DMARCE)

Human-robot interaction is increasing rapidly, and the frequency of incidents involving robots keeps growing. Beyond safety and security requirements, this calls for explainability systems that let us understand what happened, and why, so that we can keep trusting autonomous robots and hold them accountable. The goal of this joint research project is to investigate whether these requirements can be fulfilled, and how, in the context of the state of the art of software development frameworks for robots. Autonomous robots are capable of sensing the environment, generating information from the data obtained, and using it to make the decisions that let them interact with the world around them. Robots are thus continuously gathering information about the environment and the humans with whom they share it, which also raises privacy concerns. Additionally, if a robot is compromised, a two-dimensional security problem arises: first, security issues on the virtual side of the robot (data, communications, and so on), and second, problems of physical safety affecting the integrity of both the robot and the humans around it.

In 2019, the European Commission defined its strategy for trustworthy AI in the Ethics Guidelines for Trustworthy AI, based on three elements: trustworthy AI should be lawful, ethical, and robust. The guidelines define seven requirements for trustworthy AI, which all robot systems should therefore meet in order to be deemed trustworthy: Human agency and oversight; Technical robustness and safety; Privacy and data governance; Transparency; Diversity, non-discrimination and fairness; Societal and environmental well-being; and Accountability. This project will provide an answer to four of these requirements: Accountability, Transparency, Privacy, and Technical Robustness and Safety. These answers are grouped into the two subprojects presented here.

The first subproject will be mainly devoted to Accountability. This means modeling and building an engine for guaranteeing the responsibility of each element in the robotic system and for attributing blame to software elements when incidents occur. Transversal to this, it is necessary to deploy a traceability system that supports such attribution. The aim here is to provide a mechanism able to inform all kinds of stakeholders about the system's capabilities, behaviors, and limitations. As a result, we will offer enough information to provide transparency about these elements.

The second subproject will be devoted to guaranteeing the cybersecurity and the technical robustness and safety of the robots. This subproject will transversally cover the privacy of the gathered data. The cybersecurity system, together with the explainability engine, will increase the trustworthiness of robotic systems.

In summary, the aim of the research proposed in this project is the design, development, and evaluation of software systems that provide explainability capabilities to autonomous robotic systems. These systems will translate robot behavior into human language, taking the cybersecurity dimension especially into account. The project will also produce a technical engine capable of countering threats to the system that could potentially impact the safety of the robot and of the people interacting with it.

Explainability in the Decision Making for Autonomous Robots (EDMAR)

The ability to understand why, what, when, where, how, and for whom a particular robot behavior was triggered is a cornerstone of making robots socially acceptable to humans. Every robot action should be explainable and auditable. Beyond that, expected and unexpected robot behaviors should generate a fingerprint identifying the components and events that produced them. A whole field of research, Explainable AI (XAI), tries to address this issue, seeking to better understand the underlying mechanisms of these systems and to find solutions for their explainability.

In this way, this project builds on the concept of accountability, which implies that an agent should be held responsible for its activities and provide verifiable evidence of the decisions made. Therefore, all robot actions should be traceable, and it should also be possible to identify afterwards the events that triggered each action.
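
As an illustration of what such verifiable evidence could look like, here is a minimal sketch of a hash-chained action trace; the class and field names are hypothetical, not the project's actual design. Each recorded action commits to the hash of the previous entry, so altering any past entry invalidates the whole chain:

    import hashlib
    import json
    import time

    class ActionTrace:
        """Hash-chained action log: each entry commits to the previous
        one, so later tampering breaks verification (illustrative only)."""

        def __init__(self):
            self.entries = []
            self.last_hash = "0" * 64  # genesis value

        def record(self, action, triggering_event):
            # Store the action together with the event that triggered it.
            entry = {
                "timestamp": time.time(),
                "action": action,
                "event": triggering_event,
                "prev_hash": self.last_hash,
            }
            digest = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            entry["hash"] = digest
            self.entries.append(entry)
            self.last_hash = digest

        def verify(self):
            # Recompute every hash and check the chain links.
            prev = "0" * 64
            for entry in self.entries:
                body = {k: v for k, v in entry.items() if k != "hash"}
                digest = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()
                ).hexdigest()
                if entry["prev_hash"] != prev or digest != entry["hash"]:
                    return False
                prev = entry["hash"]
            return True

    trace = ActionTrace()
    trace.record("navigate_to(kitchen)", "user_request")
    trace.record("stop", "obstacle_detected")
    assert trace.verify()

Production-grade tamper-evident logging (such as SealFSv2, listed in the publications below) additionally protects keys and storage; this sketch only shows the chaining idea.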

This project proposes a complete lifecycle approach to robotic software: identifying and modeling the main characteristics of an accountable system, providing these models to the roboticist community, analyzing real robot behavior when deployed in robotic competition challenges, and offering the information generated to different members of the robotics community as training pills. Visits to European centers focused on research into the safety and security of robots are envisioned as a way to validate the model. The goal is to promote the framework designed and developed in the project, and to provide the same training established here across Europe. The aim of this project is to generate a framework for conformance explainability in the robot. The project proposes an auditing system based on logs, commonly accepted as the default mechanism in robotics, together with a mechanism for translating this information into language understandable by non-technical users.

This framework will be built as a two-level system. The first layer will deal with the raw information coming from the logs, generating accountability reports useful for developers and deployers. The second layer will generate explanations at the level of robot behaviors, understandable by the general public. The idea is to reduce the fear of the unknown associated with robot deployment and to simplify the understanding of robot behaviors.
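
To make the two levels concrete, the following sketch (the log format, component names, and wording are invented for illustration, not the project's actual design) shows one layer aggregating raw logs into a developer-facing report and a second layer rendering the same events as a plain-language narrative:

    from collections import Counter

    # Illustrative raw log records (component, level, message); the real
    # framework would consume actual robot logs, e.g. from ROS 2.
    RAW_LOGS = [
        ("nav2_planner", "INFO", "path planned to goal (3.2, 1.5)"),
        ("nav2_controller", "WARN", "obstacle detected, replanning"),
        ("nav2_planner", "INFO", "path planned to goal (3.2, 1.5)"),
        ("battery_monitor", "ERROR", "voltage below threshold, aborting"),
    ]

    def accountability_report(logs):
        """Layer 1: aggregate raw logs per component and severity,
        producing a report useful for developers and deployers."""
        report = Counter()
        for component, level, _ in logs:
            report[(component, level)] += 1
        return report

    def explain(logs):
        """Layer 2: turn the same events into a non-technical narrative."""
        phrases = {
            "WARN": "the robot noticed a problem",
            "ERROR": "the robot had to stop what it was doing",
        }
        story = [f"While working, {phrases[lvl]}: {msg}."
                 for _, lvl, msg in logs if lvl in phrases]
        return " ".join(story) or "The robot completed its task without incident."

    print(accountability_report(RAW_LOGS))
    print(explain(RAW_LOGS))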

The project will also face the problem of standardizing the auditing. Most autonomous robots deployed in real-world environments lack standardized mechanisms for auditing the robot while it is autonomously generating behavior, and when such a mechanism does exist, it degrades the robot's performance. It also has to be taken into account that when this assessment is forensic, that is, when it aims to find out who is legally responsible for the actions performed by an autonomous agent, it is necessary to establish monitoring, registration, and secure data-recording mechanisms that guarantee the data has not been tampered with; these problems will be faced in the second subproject.

Cybersecure And Safe Cognitive Architectures for Robots (CASCAR)

Deploying robots in human-inhabited environments is a major security challenge, raising new security issues that have yet to be addressed; this joint project tackles them. While the companion subproject addresses how to analyze the actions and reasoning that lead a robot to cause damage or compromise privacy, this subproject addresses how to detect threats and their effects. These threats come mainly from intrusions that alter a robot's expected behavior or gain access to its sensors. We want to explore the relationship between safety and cybersecurity.

In cybersecurity, there are tools to protect computer systems from viruses and intrusions (which we call threats) in systems, networks, and applications. Robots have the unique feature of having actuators that can damage the environment or harm humans if they are maliciously manipulated. That is why cybersecurity measures specific to a robot's software are necessary. A threat could inject false images to make a robot take wrong decisions, alter the robot's plans for carrying out a mission, generate navigation routes to forbidden or dangerous places, or steal information from the robot's cameras.
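
As a toy example of detecting one such threat, the sketch below flags sensor frames whose mean reading jumps implausibly between consecutive frames, a naive heuristic against injected images or scans; the threshold, window size, and data format are assumptions for illustration, not the project's detection method:

    import statistics

    class SensorGuard:
        """Flags sensor frames whose readings jump implausibly between
        consecutive frames (a simple heuristic, illustrative only)."""

        def __init__(self, max_jump=1.0, window=10):
            self.max_jump = max_jump  # max plausible change per frame (meters)
            self.window = window      # how many past frames to keep
            self.history = []

        def check(self, scan):
            # Compare the mean of this frame against the previous one.
            mean = statistics.fmean(scan)
            if self.history and abs(mean - self.history[-1]) > self.max_jump:
                return False  # suspicious: raise an alert, preserve evidence
            self.history = (self.history + [mean])[-self.window:]
            return True

    guard = SensorGuard()
    print(guard.check([2.0, 2.1, 1.9]))   # True: first frame accepted
    print(guard.check([2.0, 2.0, 2.05]))  # True: plausible change
    print(guard.check([9.9, 9.8, 9.9]))   # False: implausible jump, possible injection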

This subproject focuses on cybersecurity in robot programming frameworks and in cognitive architectures built on robot perception, reasoning, and action. We want to study what types of tools and standards are specifically needed in robot software to detect and mitigate threats. This research includes mechanisms to ensure that the evidence of these threats' activity is not hidden, making a subsequent explainability process reliable.

We also want to study what mechanisms can be applied to ensure the safety of people and of the environment when a cybersecurity problem occurs. In industrial systems, operating modes are used to ensure workers' safety when they work alongside robots. If a cybersecurity problem is detected, similar modes could be triggered to bring the robot to a safe state, as sketched below.
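
A minimal sketch of that idea, with hypothetical mode and event names, could look like this:

    from enum import Enum, auto

    class Mode(Enum):
        AUTONOMOUS = auto()  # normal operation
        REDUCED = auto()     # limited speed/actuation near humans
        SAFE_STOP = auto()   # actuators halted, evidence preserved

    class SafetyManager:
        """Switches operating modes on security events (illustrative sketch)."""

        def __init__(self):
            self.mode = Mode.AUTONOMOUS

        def on_event(self, event):
            # A confirmed intrusion forces a safe stop; a suspicious but
            # unconfirmed anomaly only degrades the operating mode.
            if event == "intrusion_confirmed":
                self.mode = Mode.SAFE_STOP
            elif event == "anomaly_detected" and self.mode is Mode.AUTONOMOUS:
                self.mode = Mode.REDUCED
            return self.mode

    mgr = SafetyManager()
    print(mgr.on_event("anomaly_detected"))     # Mode.REDUCED
    print(mgr.on_event("intrusion_confirmed"))  # Mode.SAFE_STOP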

List of publications:

Journal:

  • Detecting and bypassing frida dynamic function call tracing: exploitation and mitigation. Enrique Soriano-Salvador and Gorka Guardiola-Múzquiz. Journal of Computer Virology and Hacking Techniques, Volume 19, pages 503–513 (2023). DOI: 10.1007/s11416-022-00458-7
  • SealFSv2: combining storage-based and ratcheting for tamper-evident logging. Gorka Guardiola-Múzquiz and Enrique Soriano-Salvador. International Journal of Information Security, Volume 22, pages 447–466 (2023). DOI: 10.1007/s10207-022-00643-1
  • MERLIN2: MachinEd Ros 2 pLanINg. Miguel Á. González-Santamarta, Francisco J. Rodríguez-Lera, Camino Fernández-Llamas, and Vicente Matellán-Olivera. Software Impacts, Volume 15, 100477 (2023). DOI: 10.1016/j.simpa.2023.100477
  • Malicious traffic detection on sampled network flow data with novelty-detection-based models. Campazas-Vega, A., Crespo-Martínez, I.S., Guerrero-Higueras, Á.M., et al. Scientific Reports, Volume 13, 15446 (2023). DOI: 10.1038/s41598-023-42618-9
  • Analyzing the influence of the sampling rate in the detection of malicious traffic on flow data. Campazas-Vega, A., Crespo-Martínez, I.S., Guerrero-Higueras, Á.M., Álvarez-Aparicio, C., Matellán, V., and Fernández-Llamas, C. Computer Networks, Volume 235, 109951 (2023). DOI: 10.1016/j.comnet.2023.109951

Conferences:

  • Accountability and Explainability in Robotics: A Proof of Concept for ROS 2- and Nav2-Based Mobile Robots. Fernández-Becerra, L., González-Santamarta, M.A., Sobrín-Hidalgo, D., Guerrero-Higueras, Á.M., Rodríguez-Lera, F.J., Matellán-Olivera, V. (2023). In: García Bringas, P., et al. (eds) International Joint Conference: 16th International Conference on Computational Intelligence in Security for Information Systems (CISIS 2023) and 14th International Conference on EUropean Transnational Education (ICEUTE 2023). Lecture Notes in Networks and Systems, vol 748. Springer, Cham. DOI: 10.1007/978-3-031-42519-6_1
  • Ciberseguridad en sistemas ciberfísicos: entorno simulado para la evaluación de competencias en ciberseguridad en sistemas con capacidades autónomas. David Sobrín-Hidalgo, Laura Fernández-Becerra, Miguel A. González-Santamarta, Claudia Álvarez-Aparicio, Ángel Manuel Guerrero-Higueras, Miguel Ángel Conde-González, Francisco J. Rodríguez-Lera, Vicente Matellán-Olivera. In: Actas de las VIII Jornadas Nacionales de Investigación en Ciberseguridad, Vigo, June 21-23, 2023. Coordinated by Yolanda Blanco Fernández, Manuel Fernández Veiga, Ana Fernández Vilas, and José María de Fuentes García-Romero de Tejada. ISBN 978-84-8158-970-2, pages 461-467.
  • Using Large Language Models for Interpreting Autonomous Robots Behaviors. González-Santamarta, M.Á., Fernández-Becerra, L., Sobrín-Hidalgo, D., Guerrero-Higueras, Á.M., González, I., Rodríguez-Lera, F.J. (2023). In: García Bringas, P., et al. (eds) Hybrid Artificial Intelligent Systems. HAIS 2023. Lecture Notes in Computer Science, vol 14001. Springer, Cham. DOI: 10.1007/978-3-031-40725-3_45
  • RIPS: Robotics Intrusion Prevention System. Enrique Soriano-Salvador and Gorka Guardiola-Múzquiz. ROSCon Madrid 2023, September 2023.

About

This project is fully funded by Ministerio de Ciencia e Innovación / Agencia Estatal de Investigación under grant PID2021-126592OB-C21.