Interpretable Deep Learning Models for Single Trial Prediction of Balance Loss
Published in 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), 2020
Recommended citation: Ravindran, A. S., Cestari, M., Malaya, C., John, I., Francisco, G. E., Layne, C., & Vidal, J. L. C. (2020, October). Interpretable Deep Learning Models for Single Trial Prediction of Balance Loss. In 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (pp. 268-273). IEEE.
Wearable robotic devices are being designed to assist the elderly population and other patients with locomotion disabilities. However, wearable robots also increase the risk of falling. Neuroimaging studies have provided evidence for the involvement of frontocentral and parietal cortices in postural control, which opens up the possibility of using electroencephalography (EEG) decoders for early detection of balance loss. This study investigates whether the commonly identified components of the perturbation-evoked response (PEP) are present when a person is in an exoskeleton. We also evaluated the feasibility of predicting loss of balance from single-trial EEG using a convolutional neural network (CNN). Overall, the model achieved a mean 5-fold cross-validation test accuracy of 75.2% across six subjects, with 50% as the chance level. We employed a gradient-weighted class activation mapping (Grad-CAM) visualization technique to interpret the decisions of the CNN and demonstrated that the network learns from PEP components present in these single trials. The high localization ability of Grad-CAM demonstrated here opens up the possibility of deploying CNNs for ERP/PEP analysis while emphasizing model interpretability.
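To illustrate the kind of pipeline the abstract describes, below is a minimal sketch of Grad-CAM applied to a small CNN classifying a single EEG trial, assuming a PyTorch implementation. The toy architecture (`EEGNetSketch`), the channel/sample counts, and the layer names are hypothetical placeholders for illustration only, not the network or data dimensions used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical compact CNN for single-trial EEG of shape (channels x time).
# Layer sizes and names are illustrative, not the architecture from the paper.
class EEGNetSketch(nn.Module):
    def __init__(self, n_channels=64, n_samples=500, n_classes=2):
        super().__init__()
        self.temporal = nn.Conv2d(1, 8, kernel_size=(1, 25), padding=(0, 12))
        self.spatial = nn.Conv2d(8, 16, kernel_size=(n_channels, 1))
        self.pool = nn.AvgPool2d((1, 4))
        self.classifier = nn.Linear(16 * (n_samples // 4), n_classes)

    def forward(self, x):                    # x: (batch, 1, channels, time)
        x = F.elu(self.temporal(x))
        x = F.elu(self.spatial(x))           # last conv layer, hooked for Grad-CAM
        x = self.pool(x)
        return self.classifier(x.flatten(1))

model = EEGNetSketch()
activations, gradients = {}, {}

# Hooks on the last convolutional layer capture its feature maps and gradients.
def fwd_hook(module, inputs, output):
    activations["feat"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    gradients["feat"] = grad_output[0].detach()

model.spatial.register_forward_hook(fwd_hook)
model.spatial.register_full_backward_hook(bwd_hook)

trial = torch.randn(1, 1, 64, 500)           # one synthetic EEG trial (placeholder data)
logits = model(trial)
logits[0, logits.argmax()].backward()        # gradient of the predicted-class score

# Grad-CAM: weight each feature map by the mean of its gradients, sum, then ReLU.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)   # (1, 16, 1, 1)
cam = F.relu((weights * activations["feat"]).sum(dim=1))     # (1, 1, time)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # normalize to [0, 1]
print(cam.squeeze().shape)   # time-resolved relevance over the trial
```

In this sketch the relevance map is one-dimensional over time because the spatial convolution collapses the channel axis; inspecting which post-perturbation latencies receive high relevance is the analogue of checking whether the network attends to PEP components, as reported in the paper.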