Distributed SLAM for a team of planetary robots: the ARCHES moon analogue demonstration mission
German Aerospace Center, Germany
Simultaneous Localization and Mapping (SLAM) is a critically important component for enabling a team of robotic agents to act autonomously in previously unseen environments. Unlike in indoor or man-made environments, mobile robots operating in the field encounter several challenges when performing SLAM, such as unreliable data transmission channels between the agents, or the self-similarity of the environment, which significantly impairs the ability to recognize previously seen places and reduce localization drift. At the DLR Institute of Robotics and Mechatronics, we have developed over the years a robust decentralized SLAM architecture for a heterogeneous team of robots, which is computationally feasible thanks to a submap-based formulation, robust to connectivity losses, and capable of performing place recognition in the absence of stable visual or structural features. This tutorial presents the experience and results obtained from the 2022 Helmholtz project "ARCHES", a large-scale multi-partner (DLR, KIT, ESA) mission to demonstrate complex robotic behaviors, working towards a network of cooperating agents that act as a natural extension of astronauts during the exploration of planetary environments. The four-week mission took place on the volcanic slopes of Mt. Etna, Sicily, at an altitude of 2600 meters, an environment designated as analogous to the Moon and Mars due to its peculiar geological and perceptual properties.
Riccardo Giubilato received a Ph.D. degree in Space Sciences, Technologies and Measurements in 2020 from the Centre of Studies and Activities for Space (CISAS) “G. Colombo,” University of Padova. Since 2019, he has been a Post-Doctoral researcher with the German Aerospace Center (DLR), Institute of Robotics and Mechatronics, Weßling, Germany.
His current research interests include collaborative multi-robot SLAM, sensor fusion for vision and range sensors, and multimodal and learning-based place recognition in unstructured environments.