Title: Computer Vision and Deep Learning for Environment-Adaptive Control of Robotic Lower-Limb Exoskeletons
Publication Type: Conference Paper
Year of Publication: 2021
Authors: Laschowski, B., W. McNally, A. Wong, and J. McPhee
Conference Name: Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC)
Keywords: Biomechatronics, Computer Vision, Deep Learning, Exoskeletons, Rehabilitation, Robotics
Robotic exoskeletons require human control and decision making to switch between different locomotion modes, which can be inconvenient and cognitively demanding. To support the development of automated locomotion mode recognition systems (i.e., high-level controllers), we designed an environment recognition system using computer vision and deep learning. We collected over 5.6 million images of indoor and outdoor real-world walking environments using a wearable camera system, of which ~923,000 images were annotated using a 12-class hierarchical labelling architecture (called the ExoNet database). We then trained and tested the EfficientNetB0 convolutional neural network, designed for efficiency using neural architecture search, to predict the different walking environments. Our environment recognition system achieved ~73% image classification accuracy. While these preliminary results benchmark EfficientNetB0 on the ExoNet database, further research is needed to compare different image classification algorithms to develop an accurate and real-time environment-adaptive locomotion mode recognition system for robotic exoskeleton control.
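As a rough illustration of the classification setup described above, the sketch below builds an EfficientNetB0 network with a 12-class output head using TensorFlow/Keras and runs one forward pass. This is a minimal sketch under assumed settings: the input resolution, optimizer, and random initialization (`weights=None`) are illustrative and are not the configuration reported in the paper, which fine-tuned the network on the ExoNet database.

```python
# Minimal sketch: a 12-class environment classifier built on EfficientNetB0.
# Hyperparameters here are illustrative, not those used in the paper.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 12  # ExoNet uses a 12-class hierarchical labelling architecture

# EfficientNetB0 backbone with a fresh classification head.
# weights=None starts from random weights (the paper's training would
# normally start from pretrained weights and fine-tune on ExoNet images).
model = tf.keras.applications.EfficientNetB0(
    weights=None,
    input_shape=(224, 224, 3),
    classes=NUM_CLASSES,
)
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# A single random array stands in for one wearable-camera frame.
frame = np.random.rand(1, 224, 224, 3).astype("float32") * 255.0
probs = model.predict(frame, verbose=0)   # softmax over the 12 classes
pred_class = int(np.argmax(probs, axis=-1)[0])
print(probs.shape)  # (1, 12)
```

In practice the model would be trained with `model.fit` on the annotated ExoNet images, and the reported ~73% image classification accuracy corresponds to top-1 accuracy of `pred_class` against the ground-truth environment labels on a held-out test set.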