Comparing user-dependent and user-independent training of CNN for SSVEP BCI

Citation:

Ravi, A., Heydari Beni, N., Manuel, J., and Jiang, N., "Comparing user-dependent and user-independent training of CNN for SSVEP BCI", Journal of Neural Engineering, 2020.

Abstract:

Objective. We present a comparative study on training methodologies of Convolutional Neural Networks (CNN) for the detection of steady-state visual evoked potentials (SSVEP). Two training scenarios were compared: user-independent (UI) training and user-dependent (UD) training. Approach. The CNN was trained in both UD and UI scenarios on two types of features for SSVEP classification: magnitude spectrum features (M-CNN) and complex spectrum features (C-CNN). Canonical Correlation Analysis (CCA), widely used in SSVEP processing, served as the baseline. Additional comparisons were performed with Task-Related Component Analysis (TRCA) and Filter-bank Canonical Correlation Analysis (FBCCA). The performance of the proposed CNN pipelines, CCA, FBCCA and TRCA was evaluated on two datasets: a seven-class SSVEP dataset collected from 21 healthy participants and a twelve-class publicly available SSVEP dataset collected from 10 healthy participants. Main results. The UD training methods consistently outperformed the UI methods when all other conditions were the same, as one would expect. However, the proposed UI-C-CNN approach performed similarly to the UD-M-CNN across all cases investigated on both datasets. On Dataset 1, the average accuracies of the different methods for a 1 s window length were: CCA: 69.1±10.8%, TRCA: 13.4±1.5%, FBCCA: 64.8±15.6%, UI-M-CNN: 73.5±16.1%, UI-C-CNN: 81.6±12.3%, UD-M-CNN: 87.8±7.6% and UD-C-CNN: 92.5±5%. On Dataset 2, the average accuracies of the different methods for a data length of 1 s were: UD-C-CNN: 92.33±11.1%, UD-M-CNN: 82.77±16.7%, UI-C-CNN: 81.6±18%, UI-M-CNN: 70.5±22%, FBCCA: 67.1±21%, CCA: 62.7±21.5%, TRCA: 40.4±14%. t-SNE visualization of the features extracted by the CNN pipelines further revealed that the C-CNN method likely learned both amplitude- and phase-related information from the SSVEP data for classification, resulting in superior performance over the M-CNN methods.
The results suggest that the UI-C-CNN method proposed in this study offers a good balance between performance and the cost of collecting training data. Significance. The proposed C-CNN based method is a suitable candidate for SSVEP-based BCIs and provides improved performance in both UD and UI training scenarios.
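The distinction between the two CNN input types can be illustrated with a minimal sketch of per-channel spectral feature extraction: the M-CNN sees only FFT magnitudes, while the C-CNN sees the real and imaginary parts, which jointly encode amplitude and phase. Parameter choices here (`nfft`, the 3–35 Hz band) are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def spectrum_features(window, fs, nfft=512, fmin=3.0, fmax=35.0):
    """Compute magnitude (M) and complex (C) spectrum features for one
    EEG channel window.

    NOTE: nfft, fmin and fmax are illustrative defaults, not the
    parameters used in the paper.
    """
    spec = np.fft.rfft(window, n=nfft)               # one-sided FFT
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)         # keep the SSVEP band
    m_features = np.abs(spec[band])                  # M-CNN input: amplitude only
    c_features = np.concatenate([spec[band].real,    # C-CNN input: real and
                                 spec[band].imag])   # imaginary parts, i.e.
    return m_features, c_features                    # amplitude AND phase
```

For a 1 s window of a 10 Hz SSVEP-like signal, the magnitude features peak at the stimulation frequency, while the complex features additionally preserve the phase that the magnitude spectrum discards; this is the information the t-SNE analysis suggests the C-CNN exploits.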

Notes:

Publisher's Version