Occupancy grid mapping for rover navigation based on semantic segmentation





Obstacle mapping is a fundamental building block of the autonomous navigation pipeline of many robotic platforms, such as planetary rovers. Occupancy grid mapping is nowadays a widely used tool for obstacle perception. It represents the environment as evenly spaced cells whose posterior probability of being occupied is updated from range sensor measurements. In more classical approaches, a cell is marked as occupied at the point where the ray emitted by the range sensor encounters an obstacle, such as a wall. The main limitation of such methods is that they cannot identify planar obstacles, such as slippery, sandy, or rocky soils. In this work, we use the measurements of a stereo camera combined with a pixel-labeling technique based on Convolutional Neural Networks to identify rocky obstacles in a planetary environment. Once identified, the obstacles are converted into a scan-like model. The relative pose between successive frames is estimated using the ORB-SLAM algorithm. The final step updates the occupancy grid map using Bayes' update rule. To evaluate the metrological performance of the proposed method, images from a Martian-analogue dataset, the ESA Katwijk Beach Planetary Rover Dataset, have been used. The evaluation has been performed by comparing the generated occupancy map against a manually segmented orthomosaic map, obtained from a drone survey of the area, used as reference.
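The per-cell Bayesian update mentioned above is commonly carried out in log-odds form, which turns the multiplicative Bayes' rule into a simple addition. The sketch below illustrates this standard formulation; the sensor-model probabilities and grid size are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def logodds(p):
    """Convert a probability to its log-odds representation."""
    return np.log(p / (1.0 - p))

# Hypothetical inverse sensor model (not from the paper):
L_OCC = logodds(0.7)    # evidence added when a cell is observed occupied
L_FREE = logodds(0.3)   # evidence added when a cell is observed free
L_PRIOR = logodds(0.5)  # uninformed prior (log-odds 0)

def make_grid(rows, cols):
    """Initialize a log-odds occupancy grid at the prior."""
    return np.full((rows, cols), L_PRIOR)

def update_cell(grid, i, j, occupied):
    """Bayes' update rule in log-odds form:
    l_t = l_{t-1} + l_meas - l_prior."""
    grid[i, j] += (L_OCC if occupied else L_FREE) - L_PRIOR

def probability(grid):
    """Recover the posterior occupancy probability from log-odds."""
    return 1.0 - 1.0 / (1.0 + np.exp(grid))
```

Repeated consistent observations drive a cell's posterior toward 0 or 1, while the additive form keeps each update cheap enough to run over every cell touched by a scan.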





