Facilitating navigation in pedestrian environments is critical for enabling people who are blind and visually impaired (BVI) to achieve independent mobility. In this study, we designed a deep-reinforcement-learning (DRL)-based assistive guiding robot with ultra-wideband (UWB) beacons that can navigate routes with designated waypoints. Typically, a simultaneous localization and mapping (SLAM) framework is used to estimate the robot pose and the navigational goal; however, SLAM frameworks are vulnerable in certain dynamic environments. The proposed navigation method is a learning approach based on state-of-the-art deep reinforcement learning and can effectively avoid obstacles. Combined with UWB beacons, the proposed strategy is suitable for environments with dynamic pedestrians. We also designed a harness device with an audio interface that enables BVI users to interact with the guiding robot through intuitive feedback. The UWB beacons were equipped with an audio interface to convey environmental information: the on-harness and on-beacon verbal feedback provides point-of-interest (POI) and turn-by-turn information to BVI users. BVI users were recruited to conduct navigation tasks in different scenarios, and a route was designed in a simulated ward to represent daily activities. In real-world situations, SLAM-based state estimation may be affected by dynamic obstacles, and vision-based trail following may suffer from occlusions caused by pedestrians or other obstacles. The proposed system successfully navigated through environments with dynamic pedestrians in which systems based on existing SLAM algorithms failed.
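For illustration, the sketch below shows one way a relative navigation goal could be derived from UWB positioning (rather than a SLAM pose) and packed into a policy observation together with lidar ranges. The function names, observation layout, and dimensions here are hypothetical and are not the repository's actual API; this is only a minimal sketch of the idea described above.

```python
import numpy as np

def relative_goal(robot_xy, robot_yaw, waypoint_xy):
    """Express a waypoint (e.g., from the UWB beacon map) in the robot frame.

    robot_xy    : (x, y) position estimated from UWB ranging, in the map frame
    robot_yaw   : robot heading in the map frame (radians)
    waypoint_xy : (x, y) of the next designated waypoint in the map frame
    """
    dx, dy = np.asarray(waypoint_xy, dtype=float) - np.asarray(robot_xy, dtype=float)
    distance = np.hypot(dx, dy)
    bearing = np.arctan2(dy, dx) - robot_yaw
    bearing = np.arctan2(np.sin(bearing), np.cos(bearing))  # wrap to [-pi, pi]
    return distance, bearing

def build_observation(lidar_ranges, robot_xy, robot_yaw, waypoint_xy):
    """Concatenate lidar ranges with the relative goal, i.e., the kind of state
    a DRL goal-navigation policy typically consumes."""
    distance, bearing = relative_goal(robot_xy, robot_yaw, waypoint_xy)
    return np.concatenate([np.asarray(lidar_ranges, dtype=np.float32),
                           np.array([distance, bearing], dtype=np.float32)])

# Example: robot at (1, 2) facing +x, next waypoint at (4, 6)
obs = build_observation(np.ones(240), (1.0, 2.0), 0.0, (4.0, 6.0))
print(obs.shape, obs[-2:])
```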
We provide the pre-trained weights of the DRL-based goal-navigation policy, along with sample inputs, in a Colab notebook: Google Drive
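As a rough sketch of how such a checkpoint might be loaded and queried, assuming a PyTorch checkpoint: the network class, observation/action dimensions, and file name below are placeholders and may differ from the actual notebook, which remains the authoritative reference.

```python
import torch

# Hypothetical stand-in for the goal-navigation policy; the real architecture
# and observation/action dimensions are defined in the Colab notebook.
class GoalNavPolicy(torch.nn.Module):
    def __init__(self, obs_dim=242, act_dim=2):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(obs_dim, 512), torch.nn.ReLU(),
            torch.nn.Linear(512, 512), torch.nn.ReLU(),
            torch.nn.Linear(512, act_dim), torch.nn.Tanh(),
        )

    def forward(self, obs):
        return self.net(obs)

policy = GoalNavPolicy()
# Plug in the shared checkpoint here (file name is a placeholder):
# policy.load_state_dict(torch.load("goal_nav_policy.pth", map_location="cpu"))
policy.eval()

with torch.no_grad():
    obs = torch.zeros(1, 242)      # stand-in for a real sample input
    action = policy(obs)           # e.g., normalized (linear, angular) velocity
print(action)
```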
In the paper, we compared navigation performance using different localization sources, i.e., SLAM and UWB localization. Ten trials were carried out, five for each localization source. We provide the original data (rosbags) recorded during these experiments. The data include tf, wheel odometry, lidar point clouds, and UWB positioning (regardless of the localization source used in the trial). Thumbnails of side-view captures of the trials are included for reference as well. The experiment was conducted in a corridor with a hallway at its midpoint. A snippet for inspecting the rosbags is given after the table below.
# | Trial | Rosbag | Localization Source | Duration (sec.)
---|---|---|---|---
1 | UWB 1 | Link | UWB Localization | 223
2 | UWB 2 | Link | UWB Localization | 221
3 | UWB 3 | Link | UWB Localization | 229
4 | UWB 4 | Link | UWB Localization | 219
5 | UWB 5 | Link | UWB Localization | 193
6 | SLAM 1 | Link | SLAM (GMapping) | 365
7 | SLAM 2 | Link | SLAM (GMapping) | 288
8 | SLAM 3 | Link | SLAM (GMapping) | 331
9 | SLAM 4 | Link | SLAM (GMapping) | 284
10 | SLAM 5 | Link | SLAM (GMapping) | 317
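The bags can be inspected with the standard `rosbag` Python API (ROS 1). The topic names below are assumptions for illustration; check the actual contents with `rosbag info` or the first loop before relying on them.

```python
import rosbag

BAG_PATH = "uwb_1.bag"   # placeholder file name for one of the linked rosbags

# List what the bag actually contains before assuming topic names.
with rosbag.Bag(BAG_PATH) as bag:
    info = bag.get_type_and_topic_info()
    for topic, meta in info.topics.items():
        print(topic, meta.msg_type, meta.message_count)

# Example: iterate over odometry and UWB position messages
# ("/odom" and "/uwb_position" are assumed topic names).
with rosbag.Bag(BAG_PATH) as bag:
    for topic, msg, t in bag.read_messages(topics=["/odom", "/uwb_position"]):
        print(t.to_sec(), topic)
```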
@article{lu2021assistive,
title={Assistive navigation using deep reinforcement learning guiding robot with UWB/voice beacons and semantic feedbacks for blind and visually impaired people},
author={Lu, Chen-Lung and Liu, Zi-Yan and Huang, Jui-Te and Huang, Ching-I and Wang, Bo-Hui and Chen, Yi and Wu, Nien-Hsin and Wang, Hsueh-Cheng and Giarr{\'e}, Laura and Kuo, Pei-Yi},
  journal={Frontiers in Robotics and AI},
pages={176},
year={2021},
publisher={Frontiers}
}