Training a Convolutional Neural Network to Classify Motor Imagery EEG Signals for Brain-Computer Interface Applications

Abstract

The purpose of this paper is to train a convolutional neural network (CNN) to perform real-time classification of electroencephalography (EEG) signals generated during motor imagery (MI), a cognitive process in which subjects imagine themselves performing a movement without tensing their muscles. Provided that the CNN classifier is reliable, it can be used to generate commands to control a wide variety of systems such as wheelchairs, prosthetics, robots, and computer programs. Traditional methods that do not make use of artificial neural networks (ANN) rely on expensive high-tech lab equipment to counteract the nonlinear, nonstationary, and noisy nature of EEG signals. In contrast, ANNs are more robust to noisy inputs because they rely on the amount of data and training provided to them to find informative relationships hidden in signals like EEG. We propose to assess the robustness of this approach by using cheaper equipment and lower electrode counts to generate a classification model and by testing its ability to control the position of a robot manipulator in two dimensions. To conduct our experiment, we collected EEG data from one subject using eight electrodes while the subject performed MI tasks that involved imagining the movement of different body parts (4 tasks in total). The data was then transformed into a time-frequency representation and saved as a spectrogram image for each electrode, trial, and MI condition. These images were separated into training and testing sets: the first set was used to train the CNN to classify the images into 5 categories (up, down, left, right, and idle), while the second set was used to determine the accuracy of the trained classifier. The best classification accuracy of this model peaked at 40%, which, although significantly better than random guessing, was still too low to reliably generate control commands.
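The time-frequency preprocessing described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the sampling rate, trial length, and window parameters are assumptions, and the synthetic signal simply stands in for a recorded trial.

```python
import numpy as np
from scipy.signal import spectrogram

# Assumed recording parameters (not from the paper):
FS = 250            # sampling rate in Hz
N_CHANNELS = 8      # eight electrodes, as in the experiment
TRIAL_SECONDS = 4   # assumed trial length

def trial_to_spectrograms(trial, fs=FS, nperseg=64, noverlap=48):
    """Convert one multi-channel EEG trial (channels x samples) into a
    stack of per-electrode spectrogram 'images' (channels x freqs x times)."""
    images = []
    for channel in trial:
        freqs, times, sxx = spectrogram(channel, fs=fs,
                                        nperseg=nperseg, noverlap=noverlap)
        # Log-scale the power so the CNN sees a compressed dynamic range,
        # closer to an ordinary image.
        images.append(np.log1p(sxx))
    return np.stack(images)

# Synthetic stand-in for a trial: white noise plus a 10 Hz oscillation.
rng = np.random.default_rng(0)
t = np.arange(FS * TRIAL_SECONDS) / FS
trial = rng.normal(size=(N_CHANNELS, t.size)) + np.sin(2 * np.pi * 10 * t)
specs = trial_to_spectrograms(trial)
print(specs.shape)  # one 2-D spectrogram per electrode
```

Each per-electrode spectrogram can then be saved as an image and fed to the CNN as a training example for the corresponding MI condition.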
To overcome this challenge, we attempted to use a leaky integrate-and-fire (LIF) neuron model to build up commands over time. However, the bias in the network caused all commands to be classified as a dominant category, which in turn caused the robot arm to move in a single direction.
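The idea of accumulating evidence with an LIF model can be sketched as below. This is an illustrative decoder, not the paper's implementation: the class list matches the five categories above, but the leak factor, threshold, and score stream are assumed values.

```python
import numpy as np

# Illustrative LIF-style accumulator over per-class classifier scores.
# LEAK and THRESHOLD are assumed parameters, not taken from the paper.
CLASSES = ["up", "down", "left", "right", "idle"]
LEAK = 0.9        # fraction of accumulated evidence kept each step
THRESHOLD = 2.5   # evidence needed before a command "fires"

def lif_decode(score_stream, leak=LEAK, threshold=THRESHOLD):
    """Integrate per-class scores over time; emit a command when one
    class's accumulated evidence crosses the threshold, then reset."""
    potential = np.zeros(len(CLASSES))
    commands = []
    for scores in score_stream:
        # Leaky integration: old evidence decays, new scores are added.
        potential = leak * potential + np.asarray(scores)
        winner = int(np.argmax(potential))
        if potential[winner] >= threshold:
            commands.append(CLASSES[winner])
            potential[:] = 0.0  # reset after firing, like an LIF spike
    return commands

# Toy stream: weak, noisy per-frame scores that consistently favour "left".
stream = [[0.1, 0.1, 0.6, 0.1, 0.1]] * 10
print(lif_decode(stream))  # -> ['left']
```

The failure mode described above corresponds to one class receiving systematically higher scores: its potential reaches threshold first on every cycle, so the same command fires repeatedly and the arm drifts in one direction.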

How to Cite

Mino, F. R. (2021) “Training a Convolutional Neural Network to Classify Motor Imagery EEG Signals for Brain-Computer Interface Applications”, Capstone, The UNC Asheville Journal of Undergraduate Scholarship 34(1).



Peer Review

This article has been peer reviewed.
