Perceiving Systems Conference Paper 2022

Deep Residual Reinforcement Learning based Autonomous Blimp Control


Blimps are well suited to perform long-duration aerial tasks as they are energy efficient, relatively silent, and safe. To address the blimp navigation and control task, in previous work we developed a hardware- and software-in-the-loop framework and a PID-based controller for large blimps in the presence of wind disturbance. However, blimps have a deformable structure and their dynamics are inherently non-linear and time-delayed, making PID controllers difficult to tune and often resulting in large tracking errors. Moreover, the buoyancy of a blimp is constantly changing due to variations in ambient temperature and pressure. To address these issues, in this paper we present a learning-based framework for the blimp control task, based on deep residual reinforcement learning (DRRL). Within this framework, we first employ a PID controller to provide baseline performance. Subsequently, the DRRL agent learns to modify the PID decisions by interacting with the environment. We demonstrate in simulation that the DRRL agent consistently improves the PID performance. Through rigorous simulation experiments, we show that the agent is robust to changes in wind speed and buoyancy. In real-world experiments, we demonstrate that the agent, trained only in simulation, is sufficiently robust to control an actual blimp in windy conditions. We openly provide the source code of our approach at https://github.com/robot-perception-group/AutonomousBlimpDRL.
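The core idea of the paper's residual framework can be illustrated with a minimal sketch: the final actuator command is the PID baseline plus a learned residual correction, so an untrained agent degrades gracefully to pure PID control. The class and function names below are hypothetical and simplified for illustration; they are not taken from the authors' repository.

```python
class PID:
    """Minimal discrete-time PID controller (illustrative, not the paper's implementation)."""

    def __init__(self, kp, ki, kd, dt=0.1):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def act(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def residual_control(pid, residual_policy, error, low=-1.0, high=1.0):
    """Combine the PID baseline with a learned residual, then clip to actuator limits."""
    base = pid.act(error)
    residual = residual_policy(error)  # e.g. the output of a trained DRRL agent
    return max(low, min(high, base + residual))


pid = PID(kp=0.5, ki=0.01, kd=0.05)
zero_policy = lambda obs: 0.0  # untrained agent: command falls back to pure PID
u = residual_control(pid, zero_policy, error=0.2)
```

With a zero residual policy the command equals the PID output, which is the "baseline performance" the abstract refers to; during training, the agent's residual shifts this command to compensate for effects the fixed PID gains cannot capture, such as buoyancy drift and wind.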

Author(s): Liu, Yu Tang and Price, Eric and Black, Michael J. and Ahmad, Aamir
Book Title: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022)
Pages: 12566--12573
Year: 2022
Month: October
Publisher: IEEE
Bibtex Type: Conference Paper (conference)
Address: Piscataway, NJ
DOI: 10.1109/IROS47612.2022.9981182
Event Name: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022)
Event Place: Kyoto, Japan
State: Published
ISBN: 978-1-6654-7927-1

BibTex

@conference{Liu_IROS_22,
  title = {Deep Residual Reinforcement Learning based Autonomous Blimp Control},
  booktitle = {2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022)},
  abstract = {Blimps are well suited to perform long-duration aerial tasks as they are energy efficient, relatively silent, and safe. To address the blimp navigation and control task, in previous work we developed a hardware- and software-in-the-loop framework and a PID-based controller for large blimps in the presence of wind disturbance. However, blimps have a deformable structure and their dynamics are inherently non-linear and time-delayed, making PID controllers difficult to tune and often resulting in large tracking errors. Moreover, the buoyancy of a blimp is constantly changing due to variations in ambient temperature and pressure. To address these issues, in this paper we present a learning-based framework for the blimp control task, based on deep residual reinforcement learning (DRRL). Within this framework, we first employ a PID controller to provide baseline performance. Subsequently, the DRRL agent learns to modify the PID decisions by interacting with the environment. We demonstrate in simulation that the DRRL agent consistently improves the PID performance. Through rigorous simulation experiments, we show that the agent is robust to changes in wind speed and buoyancy. In real-world experiments, we demonstrate that the agent, trained only in simulation, is sufficiently robust to control an actual blimp in windy conditions. We openly provide the source code of our approach at https://github.com/robot-perception-group/AutonomousBlimpDRL.},
  pages = {12566--12573},
  publisher = {IEEE},
  address = {Piscataway, NJ},
  month = oct,
  year = {2022},
  slug = {liu_iros_22},
  author = {Liu, Yu Tang and Price, Eric and Black, Michael J. and Ahmad, Aamir},
  month_numeric = {10}
}