Computer Vision and Deep Learning for Environment-Adaptive Control of Robotic Lower-Limb Exoskeletons

Title: Computer Vision and Deep Learning for Environment-Adaptive Control of Robotic Lower-Limb Exoskeletons
Publication Type: Conference Paper
Year of Publication: 2021
Authors: Laschowski, B., W. McNally, A. Wong, and J. McPhee
Conference Name: Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC)
Date Published: 04/2021
Publisher: IEEE
Keywords: Biomechatronics, Computer Vision, Deep Learning, Exoskeletons, Rehabilitation, Robotics
Abstract

Robotic exoskeletons require human control and decision making to switch between different locomotion modes, which can be inconvenient and cognitively demanding. To support the development of automated locomotion mode recognition systems (i.e., high-level controllers), we designed an environment recognition system using computer vision and deep learning. We collected over 5.6 million images of indoor and outdoor real-world walking environments using a wearable camera system, of which ~923,000 images were annotated using a 12-class hierarchical labelling architecture (called the ExoNet database). We then trained and tested the EfficientNetB0 convolutional neural network, designed for efficiency using neural architecture search, to predict the different walking environments. Our environment recognition system achieved ~73% image classification accuracy. While these preliminary results benchmark EfficientNetB0 on the ExoNet database, further research is needed to compare different image classification algorithms to develop an accurate and real-time environment-adaptive locomotion mode recognition system for robotic exoskeleton control.

URL: https://ieeexplore.ieee.org/document/9630064
DOI: 10.1109/EMBC46164.2021.9630064