~~NOTOC~~
====== Simultaneous Localization and Mapping (SLAM) for Indoor Navigation ======

Authors: [[https://github.com/harleylara|Harley Lara]], Rolf Becker

===== Introduction =====

When we humans enter and explore an unknown environment, we are implicitly performing SLAM.

**Mapping:** With our senses we detect features, e.g. landmarks like street name signs, crossroads, remarkable buildings or trees, parks, lakes, etc. We memorize these "features" and set them in a spatial relationship: "The parking lot is behind the tall building. The street leading to it is framed with these beautiful linden trees. We have to take the third turn to the right." This is mapping. This implicit map in our head slowly grows and is densified, corrected and improved as we roam around.

**Localization:** We position ourselves in that map: "I am close to the windmill, three minutes away from the museum."

**SLAM in Robotics:** This simplified behavioral model can be transferred to autonomous mobile robots. To make them truly autonomous they have to do something like SLAM. It is a standard problem in robotics.

===== Methods and Material =====

**Flight Controller and Mission Control Computer:** Our research assistant Harley Lara built the first flying robot (aka drone) in our team which is capable of SLAM. The flight attitude and stability of the drone are controlled by the standard open-source flight controller PX4. It is responsible for the core functionality of the flying platform: it makes it flyable. The SLAM algorithm runs on a companion computer (here an NVIDIA Jetson Xavier NX), which at the final stage of development will be responsible for the mission, i.e. autonomous navigation and path decisions as well as higher-level mission intentions such as search and rescue (SAR) of people in disaster areas. This mission control computer tells the flight controller where to go.

**Perception:** In this first experiment we use a combination of two special cameras which cooperate closely: a tracking camera (Intel T265) and a depth camera (Intel D435i). Together they form the visual perception system. It allows us to measure distances to "obstacles" such as walls and to determine the linear as well as the rotational motion of the robot. This perception system is also equipped with an inertial measurement unit (IMU) on chip. Such an IMU usually consists of an accelerometer measuring linear acceleration (force) along three spatial axes and a gyroscope sensing the rotational speed (angular velocity) about the three spatial axes. Sometimes a magnetometer (compass) is also included in the integrated IMU chip. All of these sensor outputs have to be combined. This is called sensor fusion. It is partly done already in the cameras, and partly it has to be done externally (a tiny illustrative example of such a fusion step is given at the end of this page).

**"Brain":** The mission control computer (in our case the NVIDIA Jetson Xavier NX) is the brain of the flying robot. It collects all the data from the connected sensors and performs the sensor fusion needed for SLAM. We use the Robot Operating System ROS 2 to solve these problems. Despite its name, ROS 2 is not an operating system but a modular middleware designed for multitasking, real-time-capable robotic applications. "ROS 2 is a middleware based on a strongly-typed, anonymous publish/subscribe mechanism that allows for message passing between different processes." (from [[https://docs.ros.org/en/rolling/Concepts/Basic.html|ROS 2 Basic Concepts]])
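To make the publish/subscribe idea a bit more tangible, the following minimal sketch shows two ROS 2 nodes written in Python with rclpy: one publishes a pose message, the other subscribes to it and logs the position. The node names, topic name and message contents are chosen purely for illustration; they are not the interfaces running on our drone.

<code python>
# Minimal sketch of the ROS 2 publish/subscribe pattern (Python, rclpy).
# Node and topic names are illustrative only, not those used on our drone.

import rclpy
from rclpy.node import Node
from rclpy.executors import SingleThreadedExecutor
from geometry_msgs.msg import PoseStamped


class PosePublisher(Node):
    """Publishes a PoseStamped message on the topic 'slam/pose' at 10 Hz."""

    def __init__(self):
        super().__init__('pose_publisher')
        self.pub = self.create_publisher(PoseStamped, 'slam/pose', 10)
        self.create_timer(0.1, self.publish_pose)  # 10 Hz timer

    def publish_pose(self):
        msg = PoseStamped()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'map'
        # In a real system the pose would come from the SLAM pipeline;
        # here we simply publish an identity pose at the origin.
        msg.pose.orientation.w = 1.0
        self.pub.publish(msg)


class PoseListener(Node):
    """Subscribes to 'slam/pose' and logs the received position."""

    def __init__(self):
        super().__init__('pose_listener')
        self.create_subscription(PoseStamped, 'slam/pose', self.on_pose, 10)

    def on_pose(self, msg: PoseStamped):
        p = msg.pose.position
        self.get_logger().info(f'pose: x={p.x:.2f} y={p.y:.2f} z={p.z:.2f}')


def main():
    # Normally each node runs in its own process; for brevity we spin
    # both nodes in one process with a single-threaded executor.
    rclpy.init()
    executor = SingleThreadedExecutor()
    executor.add_node(PosePublisher())
    executor.add_node(PoseListener())
    executor.spin()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
</code>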
Simply speaking: sensors, actuators and algorithms are realized as independent nodes (like processes) communicating with each other by exchanging messages. A sensor node represents a real electronic sensing element, like a digital twin. The same holds for actuators, e.g. motors, servos, etc. The system is highly flexible and extensible: a new sensor or actuator can be integrated by deploying new nodes representing the hardware elements in the ROS 2 communication graph. With this modular software architecture, interoperability is easy to achieve.

===== Results =====

The video shows the first experimental results. The top two video streams show two external observation cameras filming the drone flight in the university's drone cage. The bottom video stream shows the slowly emerging 3D world reconstruction resulting from SLAM in real time. You can see the walls and their elements emerging. It is a dense point cloud, overlaid with color information from the cameras. This is the mapping. The localization of the drone, i.e. its flight track, is indicated by a thin blue line (hardly visible!) running from the drone's starting point to its current position. The drone creates the map and sets itself in relation to it.

BTW: Video and audio were produced and cut by Harley, too. He also played the piano.

| {{youtube>F-jyS8qcMC4?large}} |
| //Fig.: First SLAM test by Harley Lara.// |

| {{ :drones:slam:slam_drone_4_.jpg?400 |}} |

Our self-made drone (a flying robot) for SLAM demonstration consists of:

  * PX4 flight controller
  * NVIDIA Jetson Xavier NX as companion computer running ROS 2
  * Intel T265 Tracking Camera
  * Intel D435i Depth Camera
  * Hexacopter frame DJI Flamewheel F550
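As mentioned in the Methods section, the outputs of the accelerometer and the gyroscope have to be fused. The sketch below is a deliberately simplified, didactic illustration of this idea: a complementary filter that blends the gyroscope (precise over short time spans, but drifting) with the accelerometer (noisy, but providing an absolute gravity reference) to estimate roll and pitch. The function, its parameters and the example values are assumptions made for illustration; the actual visual-inertial fusion on our drone is far more sophisticated and happens partly inside the Intel cameras and partly on the companion computer.

<code python>
# Simplified complementary filter, only to illustrate the idea of sensor fusion.
# This is NOT the fusion running on the T265/D435i or in the ROS 2 SLAM pipeline.

import math


def fuse_roll_pitch(roll, pitch, gyro, accel, dt, alpha=0.98):
    """Update roll and pitch (rad) from one IMU sample.

    gyro  -- (gx, gy, gz) angular velocity in rad/s
    accel -- (ax, ay, az) linear acceleration in m/s^2 (including gravity)
    dt    -- time step in seconds
    alpha -- blending factor: how much we trust the integrated gyroscope
    """
    gx, gy, _ = gyro
    ax, ay, az = accel

    # 1) Short-term estimate: integrate the angular velocity.
    roll_gyro = roll + gx * dt
    pitch_gyro = pitch + gy * dt

    # 2) Long-term reference: tilt angles derived from the gravity vector.
    roll_acc = math.atan2(ay, az)
    pitch_acc = math.atan2(-ax, math.sqrt(ay * ay + az * az))

    # 3) Blend both: the gyro dominates, the accelerometer corrects the drift.
    roll = alpha * roll_gyro + (1.0 - alpha) * roll_acc
    pitch = alpha * pitch_gyro + (1.0 - alpha) * pitch_acc
    return roll, pitch


# Example: a resting IMU measuring only gravity keeps roll and pitch near zero.
roll = pitch = 0.0
for _ in range(100):
    roll, pitch = fuse_roll_pitch(roll, pitch,
                                  gyro=(0.0, 0.0, 0.0),
                                  accel=(0.0, 0.0, 9.81),
                                  dt=0.01)
print(math.degrees(roll), math.degrees(pitch))
</code>

With alpha close to 1 the estimate follows the gyroscope closely and the accelerometer only slowly pulls the accumulated drift back towards the true tilt.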