Perceiving Systems Conference Paper 2023

Simulation of Dynamic Environments for SLAM


Simulation engines are widely adopted in robotics. However, they lack either full simulation control, ROS integration, realistic physics, or photorealism. Recently, synthetic data generation and realistic rendering have advanced tasks like target tracking and human pose estimation. However, when focusing on vision applications, information such as sensor measurements or time continuity is usually missing. On the other hand, simulations for most robotics tasks are performed in (semi)static environments, with specific sensors and low visual fidelity. To solve this, we introduced in our previous work a fully customizable framework for generating realistic animated dynamic environments (GRADE) [1]. We use GRADE to generate an indoor dynamic environment dataset and then compare multiple SLAM algorithms on different sequences. In doing so, we show how current research over-relies on known benchmarks, failing to generalize. Our tests with refined YOLO and Mask R-CNN models provide further evidence that additional research in dynamic SLAM is necessary. The code, results, and generated data are provided as open source at https://eliabntt.github.io/grade-rr.

Author(s): Bonetto, Elia and Xu, Chenghao and Ahmad, Aamir
Book Title: ICRA 2023 Workshop on the Active Methods in Autonomous Navigation
Year: 2023
Month: June
Bibtex Type: Conference Paper (inproceedings)
Event Name: ICRA 2023
Event Place: London
State: Published
URL: https://robotics.pme.duth.gr/workshop_active2/wp-content/uploads/2023/05/01.-Simulation-of-Dynamic-Environments-for-SLAM.pdf
Electronic Archiving: grant_archive

BibTex

@inproceedings{bonetto2023dynamicSLAM,
  title = {Simulation of Dynamic Environments for SLAM},
  booktitle = {ICRA 2023 Workshop on the Active Methods in Autonomous Navigation},
  abstract = {Simulation engines are widely adopted in robotics. However, they lack either full simulation control, ROS integration, realistic physics, or photorealism. Recently, synthetic data generation and realistic rendering have advanced tasks like target tracking and human pose estimation. However, when focusing on vision applications, information such as sensor measurements or time continuity is usually missing. On the other hand, simulations for most robotics tasks are performed in (semi)static environments, with specific sensors and low visual fidelity. To solve this, we introduced in our previous work a fully customizable framework for generating realistic animated dynamic environments (GRADE) [1]. We use GRADE to generate an indoor dynamic environment dataset and then compare multiple SLAM algorithms on different sequences. In doing so, we show how current research over-relies on known benchmarks, failing to generalize. Our tests with refined YOLO and Mask R-CNN models provide further evidence that additional research in dynamic SLAM is necessary. The code, results, and generated data are provided as open source at https://eliabntt.github.io/grade-rr.},
  month = jun,
  year = {2023},
  slug = {bonetto2023dynamicslam},
  author = {Bonetto, Elia and Xu, Chenghao and Ahmad, Aamir},
  url = {https://robotics.pme.duth.gr/workshop_active2/wp-content/uploads/2023/05/01.-Simulation-of-Dynamic-Environments-for-SLAM.pdf},
  month_numeric = {6}
}