
Visual-Inertial Mapping with Non-Linear Factor Recovery

Cameras and inertial measurement units are complementary sensors for ego-motion estimation and environment mapping. Their combination makes visual-inertial odometry (VIO) systems more accurate and robust. For globally consistent mapping, however, combining visual and inertial information is not straightforward: estimating motion and geometry from a set of images requires large baselines, so most systems operate on keyframes separated by large time intervals. Inertial data, on the other hand, quickly degrades with the length of these intervals; after several seconds of integration it typically contains little useful information. In this paper, we propose to extract the relevant information for visual-inertial mapping from visual-inertial odometry using non-linear factor recovery. We reconstruct a set of non-linear factors that optimally approximate the information on the trajectory accumulated by the VIO. To obtain a globally consistent map, we combine these factors with loop-closing constraints using bundle adjustment. The VIO factors make the roll and pitch angles of the global map observable and improve the robustness and accuracy of the mapping. In experiments on a public benchmark, we demonstrate superior performance of our method over state-of-the-art approaches.
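
The core idea of the abstract, combining motion information recovered from VIO with loop-closing constraints in one joint non-linear optimization, can be illustrated with a small toy example. The sketch below is not the authors' implementation: it uses simplified 2D poses, made-up measurements and weights, and SciPy's generic least-squares solver in place of the paper's bundle adjustment with recovered 3D roll-pitch and relative-pose factors.

# Minimal illustrative sketch (Python, hypothetical values): odometry-derived
# relative-pose factors and a loop-closure factor optimized jointly.
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    # Wrap an angle to (-pi, pi].
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def relative_pose(p_i, p_j):
    # Pose of j expressed in the frame of i, for 2D poses (x, y, yaw).
    dx, dy = p_j[0] - p_i[0], p_j[1] - p_i[1]
    c, s = np.cos(p_i[2]), np.sin(p_i[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(p_j[2] - p_i[2])])

# Toy "recovered" factors: relative poses between consecutive keyframes (as if
# extracted from VIO) plus one loop closure tying the last pose to the first.
odometry_factors = [  # (i, j, measured relative pose, weight)
    (0, 1, np.array([1.0, 0.0, 0.1]), 10.0),
    (1, 2, np.array([1.0, 0.0, 0.1]), 10.0),
    (2, 3, np.array([1.0, 0.0, 0.1]), 10.0),
]
loop_factors = [
    (3, 0, np.array([-3.0, 0.0, -0.3]), 5.0),
]

def residuals(x):
    poses = x.reshape(-1, 3)
    res = [poses[0]]  # prior on the first pose to fix the gauge
    for i, j, z, w in odometry_factors + loop_factors:
        err = relative_pose(poses[i], poses[j]) - z
        err[2] = wrap(err[2])
        res.append(w * err)
    return np.concatenate(res)

x0 = np.zeros(4 * 3)                # initial guess: all keyframes at the origin
sol = least_squares(residuals, x0)  # joint optimization over all keyframe poses
print(sol.x.reshape(-1, 3))

In the paper itself, the recovered factors are non-linear roll-pitch and relative-pose factors in 3D extracted from the VIO marginals, and they are optimized together with visual loop-closure constraints in a full bundle adjustment; the toy above only mirrors the structure of that joint problem.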

Author(s): Vladyslav Usenko and Nikolaus Demmel and David Schubert and Jörg Stückler and Daniel Cremers
Journal: IEEE Robotics and Automation Letters (RA-L)
Volume: 5
Number (issue): 2
Pages: 422--429
Year: 2020
Bibtex Type: Article (article)
State: Published
URL: https://ieeexplore.ieee.org/document/8938825
Electronic Archiving: grant_archive
Note: presented at IEEE International Conference on Robotics and Automation (ICRA) 2020, preprint arXiv:1904.06504
Links:

BibTeX

@article{usenko19vinfr,
  title = {Visual-Inertial Mapping with Non-Linear Factor Recovery},
  journal = {IEEE Robotics and Automation Letters (RA-L)},
  abstract = {Cameras and inertial measurement units are complementary sensors
  for ego-motion estimation and environment mapping. Their combination makes
  visual-inertial odometry (VIO) systems more accurate and robust. For globally
  consistent mapping, however, combining visual and inertial information is not
  straightforward: estimating motion and geometry from a set of images requires
  large baselines, so most systems operate on keyframes separated by large time
  intervals. Inertial data, on the other hand, quickly degrades with the length
  of these intervals; after several seconds of integration it typically contains
  little useful information. In this paper, we propose to extract the relevant
  information for visual-inertial mapping from visual-inertial odometry using
  non-linear factor recovery. We reconstruct a set of non-linear factors that
  optimally approximate the information on the trajectory accumulated by the
  VIO. To obtain a globally consistent map, we combine these factors with
  loop-closing constraints using bundle adjustment. The VIO factors make the
  roll and pitch angles of the global map observable and improve the robustness
  and accuracy of the mapping. In experiments on a public benchmark, we
  demonstrate superior performance of our method over state-of-the-art
  approaches.},
  volume = {5},
  number = {2},
  pages = {422--429},
  year = {2020},
  note = {presented at IEEE International Conference on Robotics and Automation (ICRA) 2020, preprint arXiv:1904.06504},
  slug = {usenko19vinfr},
  author = {Usenko, Vladyslav and Demmel, Nikolaus and Schubert, David and St{\"u}ckler, J{\"o}rg and Cremers, Daniel},
  url = {https://ieeexplore.ieee.org/document/8938825}
}