The ability to recognize and adapt to errors is a fundamental aspect of cognitive control and is crucial for survival. In the context of motor tasks, errors can be classified into two main categories, Execution errors and Outcome errors, which are believed to arise from different neural mechanisms. Our study builds on previous research by testing the feasibility of classifying these errors in a complex motor task; furthermore, we evaluate the possibility of predicting Outcome errors. We recorded electroencephalography data during a complex arm-reaching task in a visuomotor rotation paradigm. To test the differentiability of these errors, we used four machine learning models to classify the neural correlates of error processing, first differentiating Execution errors and Outcome errors from a no-error signal and then differentiating between the two error types. Event-related potentials exhibit distinct morphologies for Outcome and Execution errors. Our analysis shows that both types of errors can be significantly differentiated from a no-error signal, with accuracies of up to 70\% depending on the model employed. Differentiating between Outcome and Execution errors yielded accuracies of up to 90\%. Importantly, this differentiation is achievable using data from a selected subset of frontocentral and parietal electrodes. Furthermore, we demonstrate the feasibility of outcome prediction, either through accurate classification of Execution errors or from a brief signal segment preceding Outcome errors. These results are consistent across both within-subject and cross-subject paradigms, albeit with lower accuracies in the cross-subject setting. We provide evidence for two distinct types of errors, modulated by the feedback available, which can be successfully classified in a complex motor task. Moreover, our research supports the development of human-in-the-loop systems in which error-related potentials (ErrPs) furnish real-time feedback, enabling more efficient and effective interaction between humans and machines. These classifications could provide real-time feedback to decoding algorithms, enhancing adaptability and learning, with potential applications in brain-computer interfaces and human-computer interaction, where precise decoding of user intention is essential.