Towards semantic maps for mobile robots

Author(s): Nuechter, Andreas; Hertzberg, Joachim
Keywords: 3D mapping; Automation & Control Systems; Computer Science; Computer Science, Artificial Intelligence; 6D SLAM; Object detection; Robotics; Scene interpretation; Semantic mapping
Publication date: 2008
Publisher: Elsevier
Volume: 56
Issue: 11
Start page: 915
End page: 926
Intelligent autonomous action in ordinary environments calls for maps. 3D geometry is generally required for avoiding collisions with complex obstacles and for self-localizing in six degrees of freedom (6 DoF): x, y, z positions and roll, yaw, and pitch angles. Meaning, in addition to geometry, becomes inevitable if the robot is supposed to interact with its environment in a goal-directed way. A semantic stance enables the robot to reason about objects; it helps disambiguate or round off sensor data; and the robot's knowledge becomes reviewable and communicable. The paper describes an approach and an integrated robot system for semantic mapping. The prime sensor is a 3D laser scanner. Individual scans are registered into a coherent 3D geometry map by 6D SLAM. Coarse scene features (e.g., walls and floors in a building) are determined by semantic labeling. More delicate objects are then detected by a trained classifier and localized. In the end, the semantic maps can be visualized for inspection. We sketch the overall architecture of the approach, explain the respective steps and their underlying algorithms, give examples based on a working robot implementation, and discuss the findings. (C) 2008 Elsevier B.V. All rights reserved.
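The semantic labeling step described above assigns coarse labels such as "wall" or "floor" to planar patches in the registered 3D map. As an illustration only (not the authors' implementation), a minimal sketch of such a labeling rule: classify each extracted plane by the angle between its surface normal and the gravity (up) axis, with a hypothetical tolerance parameter `tol_deg`.

```python
import math

def label_plane(normal, up=(0.0, 0.0, 1.0), tol_deg=15.0):
    """Label a planar patch by the angle between its unit normal and the up axis.

    This is an illustrative sketch, not the paper's algorithm: the tolerance
    `tol_deg` and the three-way label set are assumptions for demonstration.
    """
    # Normalize the input normal vector.
    length = math.sqrt(sum(c * c for c in normal))
    nx, ny, nz = (c / length for c in normal)

    # Angle between the normal and the up axis (sign-insensitive).
    cos_angle = abs(nx * up[0] + ny * up[1] + nz * up[2])
    angle = math.degrees(math.acos(min(1.0, cos_angle)))

    if angle < tol_deg:
        # Normal (anti)parallel to up: a horizontal surface.
        return "floor/ceiling"
    if abs(angle - 90.0) < tol_deg:
        # Normal perpendicular to up: a vertical surface.
        return "wall"
    return "unknown"
```

For example, a patch with normal `(0, 0, 1)` is labeled `"floor/ceiling"` and one with normal `(1, 0, 0)` is labeled `"wall"`; a 45-degree slope falls outside both tolerances and stays `"unknown"`.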
IEEE International Conference on Robotics and Automation, Rome, ITALY, APR 10-14, 2007
ISSN: 0921-8890
DOI: 10.1016/j.robot.2008.08.001