pith. machine review for the scientific record.

arxiv: 2603.03181 · v2 · submitted 2026-03-03 · 💻 cs.RO · eess.SP

Recognition: no theorem link

Robotic Grasping and Placement Controlled by EEG-Based Hybrid Visual and Motor Imagery

Authors on Pith: no claims yet

Pith reviewed 2026-05-15 16:41 UTC · model grok-4.3

classification 💻 cs.RO eess.SP
keywords EEG · brain-computer interface · robotic grasping · visual imagery · motor imagery · hybrid BCI · real-time control · human-robot interaction

The pith

EEG visual imagery selects grasp targets while motor imagery sets placement poses for a robot arm.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper builds a system that reads EEG signals from visual imagery to identify which object a robot should grasp and from motor imagery to decide the pose for placing it. Pre-trained decoders run directly in a live streaming setup with no additional calibration or visual cues. This produces a single brain-signal link that handles both the selection and the action steps. A reader would care because the approach shows one path toward thought-based control of physical robots without body movement or external devices.
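
To make that loop concrete, here is a minimal sketch of the dual-channel pipeline as Pith reads it, not the authors' implementation: the class labels, window length, decoder interfaces, and robot calls (send_grasp, send_place) are illustrative assumptions, and the pretrained VI/MI models would replace the stand-in decoders.

```python
# Minimal sketch of the dual-channel intent loop, NOT the paper's code.
# Class labels, window size, and the robot API are assumptions for illustration.
import numpy as np

WINDOW_S, FS, N_CH = 2.0, 250, 59          # assumed epoch length (s), sample rate (Hz), channels
OBJECTS = ["apple", "banana", "orange"]    # 3-class visual-imagery targets (per Figures 3 and 5)
POSES = ["left_bin", "right_bin"]          # assumed 2-class motor-imagery placement poses

def stream_window(n_samples=int(WINDOW_S * FS)):
    """Placeholder for the live EEG stream: one (channels, samples) window."""
    return np.random.randn(N_CH, n_samples)

def vi_decode(window):
    """Stand-in for the offline-pretrained visual-imagery decoder (used zero-shot)."""
    return OBJECTS[int(np.argmax(np.random.rand(len(OBJECTS))))]

def mi_decode(window):
    """Stand-in for the offline-pretrained motor-imagery decoder."""
    return POSES[int(np.argmax(np.random.rand(len(POSES))))]

def run_trial(robot):
    target = vi_decode(stream_window())    # step 1: VI selects WHAT to grasp
    robot.send_grasp(target)               # hypothetical robot interface
    pose = mi_decode(stream_window())      # step 2: MI selects WHERE to place
    robot.send_place(pose)
    return target, pose
```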

Core claim

The framework integrates visual imagery to identify grasp targets and motor imagery to determine placement locations through a cue-free EEG protocol. Offline-pretrained decoders are applied zero-shot in an online streaming pipeline on a robotic platform. This yields online accuracies of 40.23 percent for visual imagery and 62.59 percent for motor imagery, resulting in an end-to-end task success rate of 20.88 percent across scenarios including occlusions and posture changes. The results indicate that high-level visual cognition decoded from EEG can drive executable robot commands for grasping and placement.
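
A quick consistency check a reader can run on these figures: under the strong assumptions (ours, not the paper's) that a trial succeeds only when both decodes are correct and that robot execution is near-perfect, the product of the two online accuracies should roughly bound the end-to-end rate. A minimal sketch using only the numbers quoted above and the chance levels stated in the Figure 3 caption:

```python
# Back-of-envelope check on the reported figures; the independence and
# near-perfect-execution assumptions are editorial, not claims from the paper.
vi_acc, mi_acc = 0.4023, 0.6259
reported_success = 0.2088

expected_if_independent = vi_acc * mi_acc      # ~0.2518
chance_vi, chance_mi = 1 / 3, 1 / 2            # 3-class VI, 2-class MI (per Figure 3)
chance_pipeline = chance_vi * chance_mi        # ~0.167

print(f"VI x MI accuracy:      {expected_if_independent:.4f}")
print(f"Reported end-to-end:   {reported_success:.4f}")
print(f"Chance-level pipeline: {chance_pipeline:.4f}")
```

The reported 20.88 percent sits below the 25.2 percent decoder product, consistent with additional execution failures, and above the roughly 16.7 percent chance pipeline, although without trial counts the margin over chance cannot be tested.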

What carries the argument

The dual-channel intent interface, where visual imagery selects objects and motor imagery sets poses, carried by zero-shot deployment of offline decoders in an online EEG pipeline.

If this is right

  • The system works in varied conditions such as occluded objects or different user postures.
  • Real-time decoding of visual cognition translates directly into robot movements.
  • Imagery-only BCI supports collaborative human-robot tasks without physical input devices.
  • Control covers both selection of grasp target and specification of placement location.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • If accuracy improves, this could enable hands-free operation of robots in industrial or medical settings.
  • Similar dual-imagery setups might apply to controlling other robotic functions like navigation.
  • Further tests with diverse users could show how well the zero-shot approach generalizes.
  • Lower success rates suggest room for combining with other signals or feedback mechanisms.

Load-bearing premise

Pre-trained decoders maintain sufficient accuracy when switched to real-time use without any user-specific adjustment or retraining.

What would settle it

Running the system with new participants on the same online grasping tasks and measuring whether the end-to-end success rate stays near 21 percent or collapses to chance would confirm or refute its practical viability.
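
A sketch of how such a replication could be scored, assuming per-trial outcomes were reported; the trial count, success count, and chance rate below are placeholders, not values from the paper:

```python
# Hedged sketch: one-sided binomial test of end-to-end success against a chance
# baseline. n_trials, n_success, and chance_rate are assumed for illustration.
from scipy.stats import binomtest

n_trials = 90                    # assumed number of online grasp-and-place trials
n_success = 19                   # 19/90 ~ 21.1%, near the reported 20.88%
chance_rate = (1 / 3) * (1 / 2)  # ~0.167 if both decoders guessed at chance and the robot always executed

result = binomtest(n_success, n_trials, chance_rate, alternative="greater")
print(f"success rate = {n_success / n_trials:.3f}, p = {result.pvalue:.3f} vs chance {chance_rate:.3f}")
```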

Figures

Figures reproduced from arXiv: 2603.03181 by Shouyan Wang, Tianyu Wang, Yanwei Fu, Yawei Li, Yichang Liu, Yu-Gang Jiang, Ziyi Ye.

Figure 1. High-level cognitive control pipeline for EEG-based robotic manipulation. The framework consists of three main components: (1) Offline EEG data collection: visual imagery (VI) and motor imagery (MI) EEG data are collected and labeled to train visual and motor decoders. (2) Online EEG acquisition: a dual-channel system is deployed in which VI-EEG is decoded into grasping intentions and MI-EEG determines placement poses… view at source ↗
Figure 2. Visual Imagery and Motor Imagery Paradigms. view at source ↗
Figure 3. Offline results for three tasks: (a) Visual Perception, (b) Visual Imagery, and (c) Motor Imagery. Points show per-subject accuracies across models and frequency settings; the dashed line denotes chance level (VP/VI: 33.3%, MI: 50%). MI consistently outperforms VP/VI, and all tasks exceed chance. view at source ↗
Figure 5. Evoked EEG responses to fruit visual imagery. Brain activity patterns during VI for three stimuli in Sub00 (59 channels): (a) Apple, (b) Banana, (c) Orange. Responses vary over time yet remain broadly similar across categories, reflecting the fine-grained nature of VI decoding. view at source ↗
Figure 6. Online Demos for extended application scenarios. (a) Hidden Object Interaction: Using only VI-EEG, the subject guides the system to identify and reveal a hidden object, revealing an ability to decode mental imagery without visual cues. (b) Direct Human-Robot Interaction: By imagining the object and controlling hand movements through Visual Imagery and Motor Imagery, the subject interacts seamlessly with the… view at source ↗
Original abstract

We present a framework that integrates EEG-based visual and motor imagery (VI/MI) with robotic control to enable real-time, intention-driven grasping and placement. Motivated by the promise of BCI-driven robotics to enhance human-robot interaction, this system bridges neural signals with physical control by deploying offline-pretrained decoders in a zero-shot manner within an online streaming pipeline. This establishes a dual-channel intent interface that translates visual intent into robotic actions, with VI identifying objects for grasping and MI determining placement poses, enabling intuitive control over both what to grasp and where to place. The system operates solely on EEG via a cue-free imagery protocol, achieving integration and online validation. Implemented on a Base robotic platform and evaluated across diverse scenarios, including occluded targets or varying participant postures, the system achieves online decoding accuracies of 40.23% (VI) and 62.59% (MI), with an end-to-end task success rate of 20.88%. These results demonstrate that high-level visual cognition can be decoded in real time and translated into executable robot commands, bridging the gap between neural signals and physical interaction, and validating the flexibility of a purely imagery-based BCI paradigm for practical human-robot collaboration.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper presents a framework integrating EEG-based visual imagery (VI) and motor imagery (MI) for real-time robotic grasping and placement control. Offline-pretrained decoders are deployed zero-shot in a cue-free online streaming pipeline on a Base robot platform. VI identifies grasp targets while MI determines placement poses, with reported online accuracies of 40.23% (VI) and 62.59% (MI) and an end-to-end task success rate of 20.88% across scenarios including occlusions and varying postures. The work claims this establishes a practical dual-channel imagery-based BCI for intuitive human-robot collaboration.

Significance. If the empirical results hold under proper validation, the work would demonstrate a notable advance in BCI-driven robotics by showing that high-level visual and motor cognition can be decoded in real time to drive complex manipulation without external cues or recalibration. This could support more natural multi-intent interfaces for assistive robotics. The zero-shot online deployment and cue-free protocol are distinctive strengths if substantiated.

major comments (2)
  1. [Abstract/Results] Abstract and Results sections: The reported accuracies (40.23% VI, 62.59% MI) and 20.88% end-to-end success rate are given without participant count, trial numbers, chance baselines, error bars, or statistical tests. This is load-bearing for the central claim of usable real-time control, as the figures sit near plausible chance levels for typical class cardinalities and cannot be assessed for reliability without these details.
  2. [Methods] Methods and online validation description: The zero-shot transfer of offline-pretrained VI/MI decoders to the cue-free streaming pipeline is asserted without explicit tests for session drift, inter-trial variability, or online performance metrics (e.g., latency, command reliability). Given the marginal accuracies, this assumption requires concrete validation to support the integration and online validation claims.
minor comments (2)
  1. [Abstract] The abstract is dense and would benefit from briefly stating the number of classes/objects for VI and MI tasks to permit immediate evaluation of chance performance.
  2. [Results] Figure captions and results tables (if present) should include exact trial counts and condition breakdowns to clarify how the 20.88% success rate was computed across diverse scenarios.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments, which highlight important aspects of statistical reporting and validation that strengthen the manuscript. We address each major comment below and will revise the paper to incorporate the requested details and analyses.

Point-by-point responses
  1. Referee: [Abstract/Results] Abstract and Results sections: The reported accuracies (40.23% VI, 62.59% MI) and 20.88% end-to-end success rate are given without participant count, trial numbers, chance baselines, error bars, or statistical tests. This is load-bearing for the central claim of usable real-time control, as the figures sit near plausible chance levels for typical class cardinalities and cannot be assessed for reliability without these details.

    Authors: We agree that these details are essential for assessing reliability. The revised manuscript will explicitly report the number of participants, total trials per condition, chance baselines (e.g., 33.3% for 3-class VI and 50% for 2-class MI), error bars derived from cross-validation or participant-level variability, and statistical comparisons (e.g., one-sample t-tests against chance). These additions will clarify that the observed accuracies exceed chance and support the claims of practical control. revision: yes

  2. Referee: [Methods] Methods and online validation description: The zero-shot transfer of offline-pretrained VI/MI decoders to the cue-free streaming pipeline is asserted without explicit tests for session drift, inter-trial variability, or online performance metrics (e.g., latency, command reliability). Given the marginal accuracies, this assumption requires concrete validation to support the integration and online validation claims.

    Authors: We concur that explicit validation of the zero-shot transfer is needed given the accuracies. The revision will add analyses comparing offline and online performance to quantify session drift, metrics for inter-trial variability (e.g., standard deviation of per-trial accuracies), and online-specific metrics including average command latency and reliability (proportion of stable consecutive predictions). These will be presented in a new subsection on online deployment validation. revision: yes
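
As a concrete rendering of the online-deployment metrics promised in the second response, a minimal sketch in which the input format, the window cadence, and the stability parameter k are editorial assumptions rather than the paper's protocol:

```python
# Hedged sketch of two online metrics named in the rebuttal: average command
# latency and reliability as the proportion of stable consecutive predictions.
import numpy as np

def command_latency_ms(imagery_onsets, command_times):
    """Mean delay between imagery onset and the issued robot command, in ms."""
    return float(np.mean(np.asarray(command_times) - np.asarray(imagery_onsets)) * 1000)

def command_reliability(predictions, k=3):
    """Fraction of time points where the last k sliding-window predictions agree."""
    preds = list(predictions)
    stable = [len(set(preds[i - k + 1:i + 1])) == 1 for i in range(k - 1, len(preds))]
    return sum(stable) / max(len(stable), 1)

# Toy usage with made-up values.
print(command_latency_ms([0.0, 5.0], [1.8, 6.9]))   # -> 1850.0 ms mean latency
print(command_reliability(["apple", "apple", "apple", "banana", "banana"], k=3))  # -> 0.333...
```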

Circularity Check

0 steps flagged

No circularity in empirical BCI-robotics system report

Full rationale

The paper presents an empirical framework for EEG-based visual and motor imagery decoding to control robotic grasping and placement, with offline-pretrained classifiers deployed zero-shot in an online pipeline. No mathematical derivations, equations, fitted parameters presented as predictions, or self-referential definitions appear in the claims. Reported figures (40.23% VI accuracy, 62.59% MI accuracy, 20.88% task success) are direct experimental measurements from online validation, independent of any construction that reduces to the inputs. No load-bearing self-citations, uniqueness theorems, or ansatzes are invoked to force the central results. The work is a standard empirical system report whose outcomes stand on measured performance rather than any derivation chain.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on the domain assumption that scalp EEG contains decodable information about both visual object imagery and motor placement imagery that generalizes across users without recalibration. No free parameters or new entities are explicitly introduced in the abstract.

axioms (1)
  • domain assumption EEG signals recorded during visual and motor imagery contain classifiable patterns that can be decoded by models trained on separate data
    Invoked by the zero-shot online deployment claim; without this the reported accuracies would be impossible.

pith-pipeline@v0.9.0 · 5532 in / 1446 out tokens · 75954 ms · 2026-05-15T16:41:16.351463+00:00 · methodology

