Simultaneous Localization and Mapping (SLAM) is a challenging research area in robotics. SLAM estimates the location of a mobile platform while simultaneously building a map of its surrounding environment. The essence of a SLAM algorithm is to create a map of everything the mobile robot senses with its onboard sensors while simultaneously localizing itself on the map being built. SLAM is used in various robotics applications: iRobot's `Roomba' was an early SLAM-based vacuum cleaner, and SLAM also supports the autonomous navigation of mining robots, space exploration rovers, toy robots, and self-driving cars. For path planning or collision avoidance, the robot uses its estimated poses and the mapped environment to perform the specified task. Real-time SLAM systems must capture sensor data and process it in real time. Given limited computational power, policies and algorithms are being developed to obtain correct robot poses and an environment map in real time. The main challenge in SLAM is to design real-time systems with onboard processing for applications such as autonomous navigation, where the robot may need to operate at night or under low-illumination conditions.
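The interdependence between localization and mapping can be summarized as a short loop: the current map is used to localize, and the new pose is used to extend the map. The Python sketch below is purely illustrative; every function is a placeholder standing in for a real SLAM component, not any specific published algorithm.

```python
# Minimal sketch of the localize-then-map loop. All functions are placeholders.
def slam(observations):
    pose = (0.0, 0.0)       # start at the origin (assumed known)
    world_map = set()       # toy map: a set of observed landmarks
    trajectory = []
    for obs in observations:
        pose = localize(obs, world_map, pose)      # use the map to find the pose
        world_map |= project_to_map(obs, pose)     # use the pose to grow the map
        trajectory.append(pose)
    return trajectory, world_map

def localize(obs, world_map, prev_pose):
    # Placeholder: match the observation against the map to refine the pose.
    return prev_pose

def project_to_map(obs, pose):
    # Placeholder: convert sensor readings at this pose into map landmarks.
    return {obs}

trajectory, world_map = slam([(1.0, 2.0), (1.5, 2.1), (2.0, 2.3)])
print(len(trajectory), len(world_map))
```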
This research aims at the development of a monocular visual SLAM system for indoor environments. This chapter presents a literature review of visual SLAM algorithms. The first section discusses the theoretical background of visual SLAM. Feature-based and direct approaches are presented in the second section. The third section describes the embedded platforms used, followed by a review of SLAM algorithms. Finally, the evaluation metrics, benchmarking datasets, applications, and open challenges in visual SLAM are discussed.
Chronology of SLAM
The development of visual SLAM took place in three phases, namely the SLAM problem, visual SLAM, and visual SLAM with IMU integration for robustness, as shown in Figure 2-1. The first stage focused on the mathematical formulations proposed to solve the SLAM problem. During the second phase, the attention of the SLAM community shifted to visual approaches. Various visual SLAM algorithms were presented with RGB-D and stereo cameras, fundamental properties of visual SLAM such as consistency and convergence were studied, and many SLAM methods were developed around the visual formulation. In the third phase, the robustness of visual SLAM algorithms was improved. The goal of this stage was to improve the reliability of visual SLAM for various real-life applications, which led to the development of visual-inertial SLAM methods.
The second phase of SLAM development is often called the golden phase, since most of the fundamental problems in SLAM were solved during it. The biggest achievement was the keyframe-based PTAM (Parallel Tracking and Mapping) system proposed in 2007 [12]. This approach allowed task parallelization, good use of global optimization, reduced computation time, and reduced tracking drift. Today, PTAM's framework underlies almost every visual SLAM algorithm. Visual SLAM became reliable with the integration of global optimization, effective use of loop closing, and keyframe and map-point culling policies that manage memory and computation time. Parallelization using multi-threading helped achieve real-time performance, and sensors such as RGB-D and stereo cameras were integrated into vision-based SLAM algorithms.
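The parallelization idea behind PTAM can be illustrated with a small threading sketch. This is not PTAM's actual implementation (which is far more involved); all geometry functions below are placeholder stubs. Only the two-thread structure reflects the approach described above: tracking runs per frame and stays fast, while mapping runs per keyframe in the background over a shared, lock-protected map.

```python
# Sketch of the PTAM-style split: a fast tracking thread estimates a pose for
# every frame, while a slower mapping thread refines the map over keyframes.
import threading
import queue
import time

keyframe_queue = queue.Queue()   # keyframes handed from tracking to mapping
map_lock = threading.Lock()      # guards the shared sparse map
map_points = []                  # shared map points (placeholder representation)

def estimate_pose(frame, points):
    # Placeholder for feature matching + pose estimation (e.g. PnP).
    return {"frame": frame, "n_points": len(points)}

def is_keyframe(frame):
    # Placeholder keyframe policy, e.g. sufficient parallax or new content.
    return frame % 10 == 0

def triangulate(frame):
    # Placeholder for triangulating new map points from a keyframe.
    return [f"point_from_{frame}"]

def bundle_adjust(points):
    # Placeholder for the expensive global optimization step.
    time.sleep(0.05)

def tracking_thread(frames):
    """Runs per frame; must stay fast enough for real-time pose output."""
    for frame in frames:
        with map_lock:
            pose = estimate_pose(frame, map_points)
        if is_keyframe(frame):
            keyframe_queue.put(frame)
    keyframe_queue.put(None)  # signal the mapping thread to stop

def mapping_thread():
    """Runs in the background; only keyframes trigger map refinement."""
    while True:
        frame = keyframe_queue.get()
        if frame is None:
            break
        new_points = triangulate(frame)
        with map_lock:
            map_points.extend(new_points)
            bundle_adjust(map_points)

t = threading.Thread(target=tracking_thread, args=(range(100),))
m = threading.Thread(target=mapping_thread)
t.start(); m.start(); t.join(); m.join()
print(f"tracked 100 frames, map has {len(map_points)} points")
```

Because the expensive bundle adjustment only touches keyframes and runs off the tracking thread, per-frame pose estimation keeps its real-time budget, which is the key design choice this phase contributed.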
In the third phase, the main focus was on improving the robustness of visual SLAM. The combination of a camera and an IMU became an important research topic, and in the 2010s this combination was used to implement visual-inertial approaches.
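As a rough illustration of why the camera-IMU combination improves robustness, the toy sketch below blends a high-rate but drifting IMU prediction with low-rate camera pose fixes along a single axis. The sensor rates, gain, and stub functions are assumptions made for illustration; real visual-inertial systems use a filter (e.g. an EKF) or nonlinear optimization rather than this fixed-gain blend.

```python
# Toy loosely-coupled fusion along one axis: the IMU integrates acceleration
# at a high rate (fast but drifting), and each slower camera fix pulls the
# position estimate back, correcting the accumulated drift. Illustrative only.
import random

IMU_DT = 0.005     # assumed 200 Hz IMU
CAM_EVERY = 6      # assumed camera fix every 6 IMU steps (~33 Hz)
GAIN = 0.3         # fixed blending gain; a real system would use an EKF

def read_imu():
    # Simulated noisy accelerometer reading (m/s^2) for a stationary platform.
    return random.gauss(0.0, 0.2)

def visual_pose():
    # Simulated camera position estimate: low-rate but drift-free.
    return random.gauss(0.0, 0.01)

pos, vel = 0.0, 0.0
for step in range(1, 1201):
    vel += read_imu() * IMU_DT                 # prediction: integrate the IMU
    pos += vel * IMU_DT
    if step % CAM_EVERY == 0:
        pos += GAIN * (visual_pose() - pos)    # correction: blend toward camera
print(f"position after 6 s: {pos:.4f} m (IMU-only integration would drift)")
```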