~~NOTOC~~
====== Simultaneous Localization and Mapping (SLAM) for Indoor Navigation ======

Authors: [[https://github.com/harleylara|Harley Lara]], Rolf Becker

===== Introduction =====

When we humans enter and discover an unknown environment we are implicitly performing SLAM.
**SLAM in Robotics:**
This simplified behavioral model can be transferred to autonomous mobile robots. To make them really autonomous, they have to do something like SLAM. It is a standard problem in robotics.

===== Methods and Material =====

**Flight Controller and Mission Control Computer:**
    
**Perception:**
In this first experiment we use a combination of two special cameras which cooperate closely: a tracking camera (Intel T265) and a depth camera (Intel D435i). Together they form the visual perception system. It makes it possible to measure distances to "obstacles" such as walls, and to determine the linear as well as the rotational motion of the robot. This perception system is also equipped with an inertial measurement unit (IMU) on chip. Such an IMU usually consists of an accelerometer measuring linear acceleration (force) along three spatial axes as well as a gyroscope sensing the rotational speed (angular velocity) about the three spatial axes. Sometimes a magnetometer (compass) is also included in the integrated IMU chip. All of these sensor outputs have to be combined. This is called sensor fusion. This sensor fusion is partly done in the cameras themselves; the rest has to be done externally.

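The idea of combining a drifting gyroscope with a noisy but drift-free accelerometer can be illustrated with a complementary filter, one of the simplest fusion schemes. This is only a minimal sketch of the principle, not the fusion actually running inside the Intel cameras; all numbers are made up for illustration.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse a gyroscope rate with an accelerometer-derived angle.

    The gyroscope term (integrated rate) is smooth but drifts over time;
    the accelerometer term is noisy but has no long-term drift. Blending
    both gives a stable short- and long-term angle estimate.
    """
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Simulated hover: the true pitch angle is 0 degrees, but the filter
# starts from a deliberately wrong initial estimate of 5 degrees.
angle = 5.0
for _ in range(200):
    gyro_rate = 0.0      # deg/s, idealized gyro reading
    accel_angle = 0.0    # deg, idealized accelerometer-derived angle
    angle = complementary_filter(angle, gyro_rate, accel_angle, dt=0.01)

print(round(angle, 2))  # the estimate has converged close to 0
```

The accelerometer term slowly pulls the estimate back toward the true angle, which is exactly why the two sensors are combined instead of used alone.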
 +**"Brain":** The misson control computer (in our case NVIDIA Jetson Xavier NX) is the brain of the flying robot. It collects all the data from the connected sensors and performs the sensor fusion needed for SLAM. We use the Robot Operating System ROS 2 to solve these problems. ROS 2 is a modular multitasking real-time operating system designed for robotic applications. "ROS 2 is a middleware based on a strongly-typed, anonymous publish/subscribe mechanism that allows for message passing between different processes." (from [[https://docs.ros.org/en/rolling/Concepts/Basic.html|ROS 2 Basic Concepts]]). Simply speaking: Sensors, actuators and algorithms are realized as independent nodes (like processes) communicating with each other by exchanging messages. A sensor node represents a real electronic sensing element like a digital twin. The same holds for actuators, e.g. motors, servos, etc. The system is highly flexible and extensible. A new sensor or actuator can be integrated by  deploying new nodes in the ROS 2 communication graph representing their hardware elements. With this modular software architecture interoprability is easy to achieve.  
 + 
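The anonymous publish/subscribe pattern can be sketched in a few lines of plain Python. Note that this toy bus is not ROS 2 itself (a real node would use rclpy and communicate over DDS); the topic names and message contents below are invented purely for illustration.

```python
from collections import defaultdict

class Bus:
    """Toy message bus illustrating anonymous publish/subscribe:
    publishers and subscribers know only topic names, never each other."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every callback registered on this topic.
        for callback in self._subscribers[topic]:
            callback(message)

bus = Bus()
received = []

# A "fusion node" subscribes to two sensor topics (names are made up).
bus.subscribe("imu/data", lambda m: received.append(("imu", m)))
bus.subscribe("camera/depth", lambda m: received.append(("depth", m)))

# Sensor "nodes" publish without knowing who is listening.
bus.publish("imu/data", {"gyro": (0.0, 0.1, 0.0)})
bus.publish("camera/depth", {"range_m": 2.5})

print(received)
```

Because neither side holds a reference to the other, a new sensor node can be added just by publishing to a new topic, which is the flexibility the paragraph above describes.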
===== Results =====
  
-**"Brain":** The misson control computer (in our case NVIDIA Jetson Xavier NX) is the brain of the flying robotIt collects all the data from the connected sensors and performs the sensor fusion needed for SLAMWe use the Robot Operating System ROS 2 to solve these problems. ROS 2 is a modular multitasking real-time operating system designed for robotic applications"ROS 2 is a middleware based on a strongly-typedanonymous publish/subscribe mechanism that allows for message passing between different processes." (from [[https://docs.ros.org/en/rolling/Concepts/Basic.html|ROS 2 Basic Concepts]])+The video shows the first experimental resultsThe top two video streams show two external observation cameras filming the drone flight in the university's drone cageThe bottom video stream shows the slowly emerging 3D world reconstruction resulting from SLAM in real-time. You see the walls and their elements emerging. It is a dense point cloudoverlayed with color information from the camerasThis is the mappingThe localization of the drone, i.e. its flight track, is indicated by a thin blue line (hardly visible!starting form the drone starting point to the current drone position. The drone creates the map and sets itself in realtion to it. 
  
BTW: The video and audio were also produced and cut by Harley. He even played the piano.
    
  
drones/slam/start.1694949152.txt.gz · Last modified: 2023/09/17 13:12 by rolf.becker