<!doctype html>
<html>
<head>
<!-- Page setup -->
<meta charset="utf-8">
<title>DMARCE Project</title>
<meta name="description" content="Decision Making in Autonomous Robots: Cybersecurity and Explainability (DMARCE)">
<meta name="author" content="Universidad de León - Universidad Rey Juan Carlos">
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no"/>
<link rel="icon" type="image/png" href="logos/logoDMARCE.png">
<!-- Stylesheets -->
<!-- Reset default styles and add support for google fonts -->
<link href="https://cdnjs.cloudflare.com/ajax/libs/normalize/8.0.1/normalize.min.css" rel="stylesheet" type="text/css" />
<link href="http://fonts.googleapis.com/css?family=Roboto" rel="stylesheet" type="text/css" />
<!-- Custom styles -->
<link href="style.css" rel="stylesheet" type="text/css" />
<!-- jQuery -->
<script src="https://code.jquery.com/jquery-3.4.1.min.js" integrity="sha256-CSXorXvZcTkaix6Yvo6HppcZGetbYMGWSFlBw8HfCJo=" crossorigin="anonymous"></script>
<!-- Want to add Bootstrap? -->
<!-- Visit: https://getbootstrap.com/docs/4.3/getting-started/introduction/ -->
</head>
<body>
<header id="header">
<img src="logos/192087445-9aa45366-1fec-41f5-a7c9-fa612901ecd9b.png">
<h1>DMARCE - Project</h1>
<!-- Menu link fragment #id should match a div id. Example: <a href="#home"> links to <div id="home"></div> -->
<ul class="main-menu">
<li><a href="#home">DMARCE Project</a></li>
<li><a href="#edmar">EDMAR</a></li>
<li><a href="#cascar">CASCAR</a></li>
<li><a href="#publications">Publications</a></li>
<li><a href="#about">About</a></li>
<li><a href="#contact">Contact</a></li>
</ul>
</header>
<div id="container">
<div class="inner">
<div id="content">
<div id="home" class="content-region hide">
<h2>Decision Making in Autonomous Robots: Cybersecurity and Explainability (DMARCE)</h2>
<p>Human-robot interaction is growing rapidly, and the frequency of events involving robots keeps increasing. Beyond safety and security requirements, this calls for explainability systems that let us understand what happened, and why, so that we can keep trusting autonomous robots and hold them accountable. The goal of this joint research project is to investigate whether these requirements can be fulfilled, and how, within the state of the art of software development frameworks for robots.
Autonomous robots are capable of sensing the environment, generating information from the data obtained, and using it to make the decisions that let them interact with the world around them. Robots are thus continuously gathering information about the environment and the humans with whom they share it, which also raises privacy concerns. Additionally, if a robot is compromised, a two-dimensional security problem arises: first, security issues on the virtual side of the robot (data, communications, and so on), and second, problems related to physical safety and to the integrity of both the robot and the humans around it.</p>
<p>In 2019, the European Commission presented the Ethics Guidelines for Trustworthy AI, built on three components: lawful, ethical, and robust. The guidelines define seven requirements that an AI system, and therefore any robotic system, should meet in order to be deemed trustworthy: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. This project addresses four of these requirements: accountability, transparency, privacy, and technical robustness and safety. The answers are grouped into the two subprojects presented here.</p>
<p>The first subproject will be mainly devoted to accountability: modeling and building an engine that guarantees the responsibility of each element in the robotic system and that can attribute blame to software elements when incidents occur. Transversally, a traceability system must be deployed to support this. The aim is to provide a mechanism that offers all kinds of stakeholders information about the system's capabilities, behaviors, and limitations, and thereby to provide transparency of these elements.</p>
<p>The second subproject will be devoted to guaranteeing the cybersecurity, technical robustness, and safety of the robots, and will transversally cover the privacy of the data gathered. The cybersecurity system, together with the explainability engine, will increase the trustworthiness of the robotic systems.</p>
<p>In summary, the aim of the research proposed in this project is the design, development, and evaluation of software systems that provide explainability capabilities to autonomous robotic systems. These systems will translate robot behavior into human language, taking the cybersecurity dimension especially into account. The project will produce a technical engine capable of averting threats to the system that could affect the safety of the robot and of the people interacting with it.</p>
</div>
<div id="edmar" class="content-region hide">
<h2>Explainability in the Decision Making for Autonomous Robots (EDMAR)</h2>
<p>The ability to understand why, what, when, where, how, and for whom a particular robot behavior was triggered is a cornerstone of making robots socially acceptable to humans. Every robot action should be explainable and auditable. Beyond that, both expected and unexpected robot behaviors should generate a fingerprint showing the components and events that produced them. A whole field of research, Explainable AI (XAI), addresses this issue by trying to better understand a system's underlying mechanisms and to find solutions for its explainability.</p>
<p>This project therefore builds on the concept of accountability, which implies that an agent should be held responsible for its activities and provide verifiable evidence of the decisions made. All robot actions should be traceable, and it should also be possible to identify afterwards the events that triggered each action.</p>
<p>This project proposes a complete-lifecycle approach to robotic software: identifying and modeling the main characteristics of an accountable system, providing these models to the robotics community, then analyzing real robot behavior when deployed in robotic competition challenges, and offering the information generated to different members of the robotics community as training pills. Visits to European centers focused on research on the safety and security of robots are envisioned as a way to validate the model, to promote the framework designed and developed in the project, and to deliver the same training across Europe.
The aim of this project is to build a framework that generates conformance explainability in the robot. The project proposes an auditing system based on logs, commonly accepted as the default mechanism in robotics, together with a mechanism for translating this information into language understandable by non-technical users.</p>
<p>This framework will be built as a two-level system. The first layer would deal with the raw information coming from the logs, generating
accountability reports useful for developers and deployers. The second layer would generate the explanations at the level of the robot
behaviors, understandable by the general public. The idea is to reduce the fear of the unknown associated with robot deployment and
simplify the understanding of robot behaviors. </p>
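<p>As a rough illustration of this two-layer idea, the sketch below turns a raw log line into a structured accountability record and then into a sentence readable by a non-technical user. It is a minimal, hypothetical example: the record format, field names, and wording are assumptions for illustration, not the project's actual implementation.</p>
<pre><code># Illustrative sketch only: a two-layer pipeline from raw log entries
# to plain-language explanations. Names and formats are hypothetical.

from dataclasses import dataclass

@dataclass
class AccountabilityRecord:          # Layer 1 output: for developers and deployers
    timestamp: float
    component: str
    event: str
    outcome: str

def layer1_parse(raw_log_line: str) -> AccountabilityRecord:
    """Turn a raw log line 'ts|component|event|outcome' into a structured record."""
    ts, component, event, outcome = raw_log_line.strip().split("|")
    return AccountabilityRecord(float(ts), component, event, outcome)

def layer2_explain(record: AccountabilityRecord) -> str:
    """Layer 2 output: render a record as a sentence for the general public."""
    return (f"At t={record.timestamp:.1f}s the '{record.component}' module "
            f"performed '{record.event}' and the result was '{record.outcome}'.")

print(layer2_explain(layer1_parse("12.5|navigation|replan_route|succeeded")))
</code></pre>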
<p>The project will also face the problem of standardizing auditing. Most autonomous robots deployed in real-world environments do not have standardized mechanisms that allow auditing while the robot is autonomously generating behavior, and when such a mechanism exists, it degrades the robot's performance.
It also has to be taken into account that when this assessment is forensic, that is, aimed at finding out who is legally responsible for the actions performed by an autonomous agent, it is necessary to establish monitoring, registration, and secure data-recording mechanisms that guarantee that the data has not been tampered with; these problems will be faced in the second subproject.</p>
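<p>As a minimal illustration of the kind of tamper-evident recording this requires, the generic hash-chain sketch below makes later modification of any entry detectable. It is only an illustrative example of the general technique, not SealFSv2 or any mechanism developed in the project.</p>
<pre><code># Illustrative sketch only: a generic hash chain makes tampering detectable,
# since altering any entry invalidates every subsequent digest.
# This is not SealFSv2 nor the project's actual mechanism.

import hashlib

def append_entry(chain: list, message: str) -> None:
    """Append a log message, chaining its digest to the previous entry."""
    prev_digest = chain[-1]["digest"] if chain else "0" * 64
    digest = hashlib.sha256((prev_digest + message).encode()).hexdigest()
    chain.append({"message": message, "digest": digest})

def verify_chain(chain: list) -> bool:
    """Recompute the chain; any mismatch means the log was tampered with."""
    prev_digest = "0" * 64
    for entry in chain:
        expected = hashlib.sha256((prev_digest + entry["message"]).encode()).hexdigest()
        if entry["digest"] != expected:
            return False
        prev_digest = entry["digest"]
    return True

log = []
append_entry(log, "12:00 navigation: goal accepted")
append_entry(log, "12:01 navigation: obstacle detected, replanning")
print(verify_chain(log))   # True
log[0]["message"] = "12:00 navigation: nothing happened"
print(verify_chain(log))   # False: tampering detected
</code></pre>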
</div>
<div id="cascar" class="content-region hide">
<h2>Cybersecure And Safe Cognitive Architectures for Robots (CASCAR) </h2>
<p>Deploying robots in human-inhabited environments is a major security challenge. This joint project addresses the new, still unresolved security issues that arise when robots coexist with humans.
While the companion subproject addresses how to analyze the actions and reasoning that lead a robot to cause damage or compromise privacy, this subproject addresses how to detect threats and their effects. These threats come mainly from intrusions that alter a robot's expected behavior or access its sensors. We want to explore the relationship between safety and cybersecurity.</p>
<p>In cybersecurity, there are tools to protect computer systems from viruses and intrusions (which we call threats) in systems, networks, and applications. Robots have the unique feature of having actuators that can damage the environment or harm humans if they are maliciously manipulated. That is why cybersecurity measures are specifically necessary in a robot's software. A threat could inject false images to make a robot take wrong decisions, alter the robot's plans for carrying out a mission, generate navigation routes to forbidden or dangerous places, or steal information from the robot's cameras.</p>
<p>This subproject focuses on cybersecurity in robot programming frameworks and in cognitive architectures built on robot perception, reasoning, and action. We want to study what types of tools and standards are explicitly needed in robot software to detect and mitigate threats. This research includes mechanisms to ensure that the evidence showing these threats' activity is not hidden, making a subsequent explainability process reliable.</p>
<p>We also want to study what mechanisms can be applied to ensure the safety of people and of the environment when a cybersecurity problem occurs. In industrial systems, modes of operation are used to ensure workers' safety when working with robots. If a</p>
</div>
<div id="publications" class="content-region hide">
<h2>List of publications:</h2>
<h2>Journal:</h2>
<h3>2023:</h3>
<ul>
<li><b>Detecting and bypassing frida dynamic function call tracing: exploitation
and mitigation.</b>
Enrique Soriano-Salvador and Gorka Guardiola-Múzquiz.
Journal of Computer Virology and Hacking Techniques.
Volume 19, pages 503–513 (2023). Journal article. DOI:
<a href="https://doi.org/10.1007/s11416-022-00458-7">10.1007/s11416-022-00458-7</a>
<li><b>SealFSv2: combining storage-based and ratcheting for tamper-evident logging.</b>
Gorka Guardiola-Múzquiz and Enrique Soriano-Salvador.
International Journal of Information Security. 2022-12-06.
Volume 22, pages 447–466 (2023). Springer Nature.
DOI: <a href="https://doi.org/10.1007/s10207-022-00643-1">10.1007/s10207-022-00643-1</a>
<li><b> MERLIN2: MachinEd Ros 2 pLanINg.</b>
Miguel Á. González-Santmarta, Francisco J. Rodríguez-Lera, Camino Fernández-Llamas, Vicente Matellán-Olivera,
Software Impacts, Volume 15, 2023, 100477, ISSN 2665-9638,
DOI: <a href="https://doi.org/10.1016/j.simpa.2023.100477">10.1016/j.simpa.2023.100477</a>
<li><b> Malicious traffic detection on sampled network flow data with novelty-detection-based models.</b>
Campazas-Vega, A., Crespo-Martínez, I.S., Guerrero-Higueras, Á.M. et al. Sci Rep 13, 15446 (2023).
DOI: <a href="https://doi.org/10.1038/s41598-023-42618-9">10.1038/s41598-023-42618-9</a>
<li><b> Analyzing the influence of the sampling rate in the detection of malicious traffic on flow data.</b>
Campazas-Vega, A., Crespo-Martínez, I. S., Guerrero-Higueras, Á. M., Álvarez-Aparicio, C., Matellán, V., & Fernández-Llamas, C. (2023). Computer Networks, 235, 109951.
DOI: <a href="https://doi.org/10.1016/j.comnet.2023.109951">10.1016/j.comnet.2023.109951</a>
</ul>
<h3>2024:</h3>
<ul>
<li><b>Accountability as a service for robotics: Performance assessment of different accountability strategies for autonomous robots.</b>
Laura Fernández-Becerra, Ángel Manuel Guerrero-Higueras, Francisco Javier Rodríguez-Lera, Vicente Matellán.
Logic Journal of the IGPL, Volume 32, Issue 2, April 2024, Pages 243–262.
DOI: <a href="https://doi.org/10.1093/jigpal/jzae038">10.1093/jigpal/jzae038</a>
<li><b>A robot-based surveillance system for recognising distress hand signal.</b>
Virginia Riego del Castillo, Lidia Sánchez-González, Miguel Á. González-Santamarta, Francisco J. Rodríguez Lera.
Logic Journal of the IGPL, 2024.
DOI: <a href="https://doi.org/10.1093/jigpal/jzae067">10.1093/jigpal/jzae067</a>
<li><b>Optimized network for detecting burr-breakage in images of milling workpieces.</b>
Virginia Riego del Castillo, Lidia Sánchez-González, Nicola Strisciuglio.
Logic Journal of the IGPL, 2024.
DOI: <a href="https://doi.org/10.1093/jigpal/jzae024">10.1093/jigpal/jzae024</a>
</ul>
<h2>Conferences:</h2>
<h3>2023:</h3>
<ul>
<li><b>Accountability and Explainability in Robotics: A Proof of Concept for ROS 2- And Nav2-Based Mobile Robots.</b>
Fernández-Becerra, L., González-Santamarta, M.A., Sobrín-Hidalgo, D., Guerrero-Higueras, Á.M., Lera, F.J.R., Olivera, V.M. (2023).
In: García Bringas, P., et al. International Joint Conference 16th International Conference on Computational Intelligence
in Security for Information Systems (CISIS 2023) 14th International Conference on EUropean Transnational Education (ICEUTE 2023).
CISIS ICEUTE 2023 2023. Lecture Notes in Networks and Systems, vol 748. Springer, Cham.
DOI: <a href="https://doi.org/10.1007/978-3-031-42519-6_1">10.1007/978-3-031-42519-6_1 </a>
<li><b>Ciberseguridad en sistemas ciberfísicos: entorno simulado para la evaluación de competencias en ciberseguridad en sistemas con capacidades autónomas</b>
David Sobrín Hidalgo; Laura Fernández Becerra; Miguel A. González Santamarta; Claudia Álvarez Aparicio; Ángel Manuel Guerrero Higueras ;
Miguel Ángel Conde González; Francisco J. Rodríguez Lera; Vicente Matellán Olivera
Actas de las VIII Jornadas Nacionales de Investigación en Ciberseguridad: Vigo, 21 a 23 de junio de 2023 / coordinated by
Yolanda Blanco Fernández, Manuel Fernández Veiga, Ana Fernández Vilas,
José María de Fuentes García-Romero de Tejada. 2023, ISBN 978-84-8158-970-2, pp. 461-467
<li><b>Using Large Language Models for Interpreting Autonomous Robots Behaviors.</b>
González-Santamarta, M.Á., Fernández-Becerra, L., Sobrín-Hidalgo, D., Guerrero-Higueras, Á.M., González, I., Lera, F.J.R. (2023).
In: García Bringas, P., et al. Hybrid Artificial Intelligent Systems. HAIS 2023. Lecture Notes in Computer Science(), vol 14001. Springer, Cham.
DOI: <a href="https://doi.org/10.1007/978-3-031-40725-3_45">978-3-031-40725-3_45</a>
<li><b>RIPS: Robotics Intrusion Prevention System. </b>
Enrique Soriano-Salvador and Gorka Guardiola Múzquiz.
ROSCon Madrid 2023. September 2023. <a href="https://gsyc.urjc.es/~esoriano/roscon2023.pdf">Slide deck</a>
<li><b>llama_ros: Unleashing the power of LLMs as Embedded AI in Robotics. </b>
Miguel Á. González-Santamarta
ROSCon Madrid 2023. September 2023. <a href="https://github.com/mgonzs13/llama_ros/blob/main/docs/ROSCon_Spain_2023.pdf">Slide deck</a>
<li><b> Fuzzing Robotic Software Using HPC.</b>
Del Río, F.B.G., Lera, F.J.R., Llamas, C.F., Olivera, V.M. (2023).
In: García Bringas, P., et al. International Joint Conference 16th International Conference on Computational Intelligence in Security for Information Systems (CISIS 2023)
14th International Conference on EUropean Transnational Education (ICEUTE 2023). CISIS ICEUTE 2023 2023.
Lecture Notes in Networks and Systems, vol 748. Springer, Cham.
<a href="https://doi.org/10.1007/978-3-031-42519-6_3">978-3-031-42519-6_3</a>
</ul>
<h3>2024:</h3>
<ul>
<li><b>Performance Impact of Strengthening the Accountability and Explainability System in Autonomous Robots.</b>
Alejandro González Cantón, Miguel Angel Gonzalez Santamarta, Francisco J Rodríguez Lera, Enrique Soriano
Salvador and Gorka Guardiola Muzquiz. Accepted for publication. Proceedings of
the CISIS 2024 (17th International Conference on Computational Intelligence in Security for
Information Systems). To be published in Lecture Notes in Networks and Systems (LNNS), Springer.
</ul>
<h2>Public Repositories:</h2>
<ul>
<li><b> <a href="https://github.com/laurafbec/immutable_explainable_BBR"> GitHub: Immutable & Explainable Black Box Recorder </a>
<li><b> <a href="https://github.com/uleroboticsgroup/nav2_accountability_explainability"> GitHub: Accountability and Explainability: Nav2 </a>
<li><b> <a href="https://github.com/inflfb00/accountability-sysdig-kafka"> GitHub: Accountability using Sysdig y Kafka </a>
<li><b> <a href="https://github.com/Dsobh/explainable_ROS"> GitHub: Explainability in ROS 2 with LLMs </a>
<li><b> <a href="https://github.com/mgonzs13/llama_ros"> GitHub: Llama_ROS y Llava_ROS </a>
<li><b> <a href="https://github.com/inflfb00/accountability-docker-solution"> GitHub: Docker-based accountability solution based on Sysdig, Librdkafka producer, Kafka and MongoDB </a>
<li><b> <a href="https://github.com/DMARCE-PROJECT/rips"> GitHub: RIPS: Robotics Intrusion Prevention System Engine </a> and <a href="https://github.com/DMARCE-PROJECT/ripspy"> monitor</a>
<li><b> <a href="https://github.com/DMARCE-PROJECT/sealfs"> GitHub: SealFSv2 </a> and <a href="https://github.com/DMARCE-PROJECT/rossealfs"> ROS 2 adapter</a>
</ul>
</div>
<div id="about" class="content-region hide">
<h2>About</h2>
<p>
This project is fully funded by Ministerio de Ciencia e Innovación / Agencia Estatal de Investigación under grant PID2021-126592OB-C21.
</p>
<h2>Team EDMAR</h2>
<ul>
<li>Álvarez Aparicio, Claudia
<li>García Sierra, Juan Felipe
<li>Guerrero Higueras, Ángel Manuel
<li>Sánchez González, Lidia
<li>Campazas Vega, Adrián
<li>Fernández Becerra, Laura
<li>González Santamarta, Miguel Ángel
<li>Riego Del Castillo, Virginia
<li>Sobrín Hidalgo, David
<li><b>Matellán Olivera, Vicente (IP)</b>
<li><b>Rodríguez Lera, Francisco J. (IP)</b>
</ul>
</div>
<div id="contact" class="content-region hide">
<h2>Contact</h2>
<p>
<a href="https://robotica.unileon.es/" >
<img src="logos/GR-ule.jpg" >
</a>
<a href="https://robotica.unileon.es/"> Grupo de Robótica - Universidad de León.</a>
</p>
<p>
<a href="https://intelligentroboticslab.gsyc.urjc.es/">
<img src="logos/IRL-urjc.jpg">
</a>
<a href="https://intelligentroboticslab.gsyc.urjc.es/"> Intelligent Robotics Lab - Universidad Rey Juan Carlos.</a>
</p>
</div>
</div>
</div>
</div>
<footer>
<hr>
DMARCE (EDMAR+CASCAR) Project PID2021-126592OB-C21 + PID2021-126592OB-C22 funded by MCIN/AEI/10.13039/501100011033 and by ERDF A way of making Europe
<img src="logos/micin-uefeder-aei.png">
</footer>
<!-- Load additional JS scripts here -->
<script type="text/javascript" src="script.js"></script>
</body>
</html>