Brain-computer interfaces (BCIs) have the potential to provide transformative
control in prosthetics, assistive technologies such as wheelchairs, robotics,
and human-computer interfaces. While Motor Imagery (MI) offers an intuitive
approach to BCI control, its practical implementation is often limited by the
requirement for expensive devices, extensive training data, and complex
algorithms, leading to user fatigue and reduced accessibility. In this paper,
we demonstrate that effective MI-BCI control of a mobile robot in real-world
settings can be achieved using a fine-tuned Deep Neural Network (DNN) with a
sliding window, eliminating the need for complex feature extraction in
real-time robot control. The fine-tuning process optimizes the convolutional
and attention layers of the DNN to adapt to each user's daily MI data streams,
reducing training data by 70% and minimizing user fatigue from extended data
collection. Using a low-cost (~$3k), 16-channel, non-invasive, open-source
electroencephalogram (EEG) device, four users teleoperated a quadruped robot
over three days. The system achieved 78% accuracy on a single-day dataset and
maintained a 75% accuracy across the three days without extensive retraining
from day to day. For real-world robot command classification, we achieved an
average accuracy of 62%. By providing evidence that MI-BCI systems can maintain
performance over multiple days with reduced training data, thanks to a
fine-tuned DNN and a low-cost EEG device, our work enhances the practicality
and accessibility of BCI technology. This advancement makes BCI applications
more feasible for real-world scenarios, particularly in controlling robotic
systems.
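To make the described pipeline concrete, the following is a minimal sketch, not the authors' released code, of a compact convolution-plus-attention EEG classifier whose convolutional and attention layers are fine-tuned on a small daily calibration set while the output head stays frozen. All layer sizes, the 4-command class count, the 250 Hz sampling rate, the 1 s window, and the hyperparameters are illustrative assumptions.

```python
# Sketch of a conv + attention MI classifier with daily fine-tuning of the
# convolutional and attention layers only. Shapes and hyperparameters are
# assumptions, not values from the paper.
import torch
import torch.nn as nn

N_CHANNELS, WIN_SAMPLES, N_CLASSES = 16, 250, 4  # assumed: 1 s window @ 250 Hz

class MIClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Temporal then spatial convolutions learn features from raw EEG,
        # standing in for hand-crafted feature extraction.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 32), padding=(0, 16)),
            nn.Conv2d(8, 16, kernel_size=(N_CHANNELS, 1)),
            nn.BatchNorm2d(16), nn.ELU(), nn.AvgPool2d((1, 4)),
        )
        self.attn = nn.MultiheadAttention(embed_dim=16, num_heads=4,
                                          batch_first=True)
        self.head = nn.Linear(16, N_CLASSES)

    def forward(self, x):                 # x: (batch, N_CHANNELS, WIN_SAMPLES)
        f = self.conv(x.unsqueeze(1))     # (batch, 16, 1, T')
        f = f.squeeze(2).transpose(1, 2)  # (batch, T', 16)
        f, _ = self.attn(f, f, f)         # self-attention over time steps
        return self.head(f.mean(dim=1))   # logits per MI command

def fine_tune(model, daily_loader, epochs=5):
    """Adapt only the conv and attention layers to a user's daily MI data."""
    for p in model.parameters():
        p.requires_grad = False
    for m in (model.conv, model.attn):
        for p in m.parameters():
            p.requires_grad = True
    opt = torch.optim.Adam((p for p in model.parameters() if p.requires_grad),
                           lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for xb, yb in daily_loader:  # small daily calibration batches
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
```

Freezing everything except the convolutional and attention layers is what lets a small daily calibration set suffice, consistent with the reduced-training-data claim above.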
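The sliding-window inference described above could then look like the sketch below, which reuses `MIClassifier` from the previous snippet. The hop length, `eeg_source`, and `send_command` are hypothetical stand-ins for the EEG stream and robot interface, not APIs from the paper.

```python
# Illustrative sliding-window inference loop over a live 16-channel stream.
# Window/hop sizes are assumptions; eeg_source and send_command are
# hypothetical placeholders for the acquisition and robot-command interfaces.
from collections import deque
import numpy as np
import torch

WIN_SAMPLES = 250  # assumed 1 s window at 250 Hz
HOP = 62           # assumed ~0.25 s hop between classifications

def stream_commands(model, eeg_source, send_command):
    """Classify each window of the live EEG stream into a robot command."""
    model.eval()
    buf = deque(maxlen=WIN_SAMPLES)  # rolling window of per-sample vectors
    since_last = 0
    for sample in eeg_source:        # sample: length-16 array per time step
        buf.append(sample)
        since_last += 1
        if len(buf) == WIN_SAMPLES and since_last >= HOP:
            since_last = 0
            x = torch.as_tensor(np.stack(buf, axis=1), dtype=torch.float32)
            with torch.no_grad():
                cmd = model(x.unsqueeze(0)).argmax(dim=1).item()
            send_command(cmd)        # e.g. forward / back / left / right
```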