EduGage: Methods and Dataset for Sensor-Based Momentary Assessment of Engagement in Self-Guided Video Learning
Pith reviewed 2026-05-09 17:51 UTC · model grok-4.3
The pith
Sensor signals can predict momentary engagement during self-guided video learning with a mean absolute error of 0.81.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
In a participant-independent evaluation, a multimodal model trained on synchronized PPG, ECG, EDA, EEG, IMU, heart rate, temperature, and eye-tracking data predicts continuous engagement scores with a mean absolute error of 0.81, 83.75 percent within-one accuracy, 73.93 percent binary accuracy, and 68.45 percent binary macro F1, exceeding the performance of sensor-free, statistical, deep temporal, foundation-model, and LLM-based baselines.
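The four headline metrics can be made concrete. Below is a minimal sketch (not the authors' code) of how MAE, within-one accuracy, binary accuracy, and binary macro F1 would be computed from continuous predictions against self-report labels; the midpoint threshold of 3.0 for binarization is an assumption, since the paper's scale cutoff is not given here.

```python
import numpy as np

def engagement_metrics(y_true, y_pred, threshold=3.0):
    """Compute the four metrics named in the claim from continuous
    engagement predictions and self-report labels."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    err = np.abs(y_true - y_pred)
    mae = float(err.mean())
    within1 = float((err <= 1.0).mean())      # "within-1" accuracy
    # Binarize around an assumed midpoint for binary accuracy / macro F1.
    t, p = y_true > threshold, y_pred > threshold
    acc = float((t == p).mean())
    f1s = []
    for cls in (True, False):
        tp = np.sum((p == cls) & (t == cls))
        fp = np.sum((p == cls) & (t != cls))
        fn = np.sum((p != cls) & (t == cls))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return mae, within1, acc, float(np.mean(f1s))
```

Note that macro F1 averages the two per-class F1 scores, so it penalizes a model that only predicts the majority engagement class — which is why it sits below the raw binary accuracy in the reported numbers.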
What carries the argument
The key mechanism is a machine learning pipeline that fuses multimodal physiological and behavioral sensor streams with momentary self-report labels to produce fine-grained engagement estimates.
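One common way to realize such a pipeline is late fusion: summarize each sensor stream over the window preceding a self-report probe, then concatenate the per-modality feature vectors for a regressor. The sketch below is a hypothetical illustration of that pattern, not the paper's actual feature set; the 30-second window and the specific statistics are assumptions.

```python
import numpy as np

def window_features(signal, fs, win_s=30.0):
    """Summary statistics for one sensor stream over the window
    preceding a probe: mean, std, min, max, linear trend."""
    x = np.asarray(signal, float)[-int(fs * win_s):]
    t = np.arange(len(x))
    slope = np.polyfit(t, x, 1)[0] if len(x) > 1 else 0.0
    return np.array([x.mean(), x.std(), x.min(), x.max(), slope])

def fuse(streams):
    """Late fusion: streams maps modality name -> (signal, sampling_rate);
    returns one concatenated feature vector per probe-aligned window."""
    return np.concatenate(
        [window_features(sig, fs) for sig, fs in streams.values()]
    )
```

The fused vector, paired with the probe's self-reported engagement score, becomes one training example; any standard regressor can then map it to the continuous label.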
If this is right
- Fine-grained engagement estimation is feasible with current sensors but remains inherently noisy.
- Lightweight combinations of behavioral and physiological signals are more practical than full multimodal setups for real-world systems.
- The released EduGage dataset enables reproducible research on sensor-based engagement modeling in self-guided learning.
- Adaptive learning platforms could use such estimates to adjust content or provide timely interventions when engagement drops.
Where Pith is reading between the lines
- If the estimates prove stable across longer sessions, they could help surface video segments that consistently lose viewer attention.
- Consumer devices with fewer sensors might achieve similar results, lowering the barrier to widespread use in education.
- Linking these engagement scores to actual learning gains, such as quiz performance, would strengthen the case for using them in practice.
- Continuous sensing raises questions about data privacy and user acceptance in educational settings.
Load-bearing premise
That the brief in-situ self-reports collected during the study serve as reliable and generalizable ground truth for actual learner engagement.
What would settle it
If the trained model failed to predict engagement self-reports collected from a fresh set of participants in a different video-learning context, the central claim of feasible estimation would be falsified.
Original abstract
Engagement, which links to attentional, emotional, and cognitive dimensions, plays an important role in learning. In online and video-based learning environments, learners often need to regulate their own interactions with instructional materials. Measuring and reflecting on engagement can therefore support both learners and adaptive learning systems. In this study, we use wearable and camera-based sensing devices to collect physiological and motion signals, including PPG, ECG, EDA, EEG, IMU, heart rate, temperature, and eye-tracking data, to estimate learner engagement. We conducted a user study with 16 participants in a video-based learning scenario, where participants completed learning tasks and provided repeated in-situ self-reports of engagement through brief probes. We develop and evaluate a system for engagement estimation, compare different sensing modalities, and further analyze the feasibility and effectiveness of multimodal modeling for characterizing learner engagement. Across participant-based cross-validation, our model achieves an MAE of 0.81, 83.75% within-1 accuracy, 73.93% binary accuracy, and 68.45% binary Macro-F1, outperforming sensor-free, statistical, deep temporal, foundation-model, and LLM-based baselines. Our results suggest that fine-grained engagement estimation is feasible but inherently noisy, and that practical systems should prioritize lightweight combinations of behavioral and physiological signals over full multimodal instrumentation. We release the EduGage dataset, including synchronized multimodal sensor signals, probe-aligned momentary engagement labels, video metadata, quizzes, and study materials, to support reproducible research on fine-grained sensor-based engagement modeling in self-guided learning.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The manuscript presents EduGage, a dataset and methods for sensor-based momentary assessment of engagement in self-guided video learning. It describes a study with 16 participants using multimodal sensors (PPG, ECG, EDA, EEG, IMU, heart rate, temperature, eye-tracking) to collect physiological and motion signals during video tasks, with repeated in-situ self-reports serving as ground truth labels. Models are developed and evaluated via participant-based cross-validation, achieving MAE of 0.81, 83.75% within-1 accuracy, 73.93% binary accuracy, and 68.45% binary Macro-F1 while outperforming sensor-free, statistical, deep temporal, foundation-model, and LLM-based baselines; the work concludes that fine-grained estimation is feasible but noisy and recommends lightweight sensor combinations, while releasing the full dataset including synchronized signals, labels, quizzes, and materials.
Significance. If the results hold, the work has moderate significance for HCI and educational technology by demonstrating the feasibility of wearable and camera-based sensing for engagement estimation in online learning and highlighting practical multimodal trade-offs. A clear strength is the public release of the EduGage dataset with synchronized multimodal signals, probe-aligned labels, video metadata, quizzes, and study materials, which directly supports reproducible research and future extensions in sensor-based learning analytics.
major comments (2)
- [User Study and Evaluation] The performance metrics (MAE 0.81, accuracies, and baseline outperformance) are computed exclusively against repeated in-situ self-report probes as ground truth. The manuscript provides no quantification of label reliability such as test-retest consistency, inter-probe agreement, or correlations with external criteria (e.g., quiz scores or behavioral dwell time on video content), despite acknowledging that estimation is 'inherently noisy'. This assumption is load-bearing for the central claim that sensor signals can estimate engagement.
- [Evaluation] The participant-based cross-validation results with N=16 are reported without full details on feature engineering, model architecture choices, data exclusion rules, or error analysis. This limits verification of the claimed superiority over the listed baselines and assessment of robustness given the small sample and potential individual variability.
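For concreteness, "participant-based cross-validation" with N=16 typically means leave-one-participant-out: every sample from one learner is held out per fold, so no individual appears in both splits. A minimal sketch (hypothetical, not the authors' pipeline) that also yields the per-participant error distribution the referee asks for:

```python
import numpy as np

def leave_one_participant_out(X, y, groups, fit, predict):
    """Participant-independent CV: each fold holds out all samples
    from one participant and reports that participant's MAE."""
    per_participant_mae = {}
    for pid in np.unique(groups):
        test = groups == pid
        model = fit(X[~test], y[~test])          # train on other participants
        pred = predict(model, X[test])           # evaluate on held-out one
        per_participant_mae[int(pid)] = float(np.mean(np.abs(y[test] - pred)))
    return per_participant_mae
```

With small N, the spread of these per-fold MAEs matters as much as their mean: a few participants with idiosyncratic physiology can dominate the aggregate, which is exactly the robustness concern raised above.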
minor comments (1)
- The abstract would be strengthened by specifying the total number of probes, tasks, and data points collected to better contextualize the scale and density of the momentary assessment.
Simulated Author's Rebuttal
We thank the referee for their constructive feedback on our manuscript. We address each major comment below and indicate where revisions have been made to improve clarity, reproducibility, and the strength of our claims.
Point-by-point responses
- Referee: [User Study and Evaluation] The performance metrics (MAE 0.81, accuracies, and baseline outperformance) are computed exclusively against repeated in-situ self-report probes as ground truth. The manuscript provides no quantification of label reliability such as test-retest consistency, inter-probe agreement, or correlations with external criteria (e.g., quiz scores or behavioral dwell time on video content), despite acknowledging that estimation is 'inherently noisy'. This assumption is load-bearing for the central claim that sensor signals can estimate engagement.
Authors: We agree that quantifying label reliability strengthens the central claim. Self-report probes are the most direct ground truth for subjective engagement states, and their use is standard in momentary assessment studies. In the revised manuscript we have added a new subsection on label quality that reports (1) intra-class correlation coefficients for repeated probes within each participant to assess test-retest consistency, (2) average inter-probe agreement across the session, and (3) Pearson correlations between per-participant mean engagement scores and both quiz accuracy and video dwell time as external behavioral criteria. These additions show moderate but statistically significant alignment while confirming the expected noise level. We retain the original claim that sensor signals can estimate engagement because the models still outperform all baselines even after accounting for label variability; the added analyses make this argument more transparent rather than altering the core results. revision: yes
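The reliability statistics invoked here are standard. As an illustration (pure NumPy, hypothetical data layout), the one-way random-effects ICC(1,1) for a participants-by-repeated-probes matrix and the Pearson correlation against an external criterion such as quiz accuracy could be computed as:

```python
import numpy as np

def icc1(ratings):
    """One-way random-effects ICC(1,1) for an (n_targets, k_ratings)
    matrix, e.g. participants x repeated probe responses."""
    r = np.asarray(ratings, float)
    n, k = r.shape
    grand = r.mean()
    # Mean square between targets and mean square within targets.
    msb = k * np.sum((r.mean(axis=1) - grand) ** 2) / (n - 1)
    msw = np.sum((r - r.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

def pearson_r(a, b):
    """Pearson correlation, e.g. mean engagement vs. quiz accuracy."""
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```

ICC near 1 would indicate that repeated probes within a participant are consistent; a positive Pearson r against quiz scores would tie the labels to an external learning outcome, which is what "moderate but statistically significant alignment" refers to.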
- Referee: [Evaluation] The participant-based cross-validation results with N=16 are reported without full details on feature engineering, model architecture choices, data exclusion rules, or error analysis. This limits verification of the claimed superiority over the listed baselines and assessment of robustness given the small sample and potential individual variability.
Authors: We accept that additional methodological detail is required for verification. The revised manuscript now includes an expanded Methods section and a new Appendix that fully specify: (a) the complete feature set extracted from each sensor (e.g., time- and frequency-domain PPG features, EEG band powers, eye-tracking fixation metrics), (b) model architectures and hyperparameter grids for the temporal models and foundation-model baselines, (c) explicit data exclusion rules (artifact thresholds, minimum segment length, participant-level filtering), and (d) a per-participant error analysis with MAE distributions, confusion matrices, and discussion of individual variability. These additions allow readers to reproduce the participant-based cross-validation and assess robustness directly. The small N=16 remains a limitation we already note in the Discussion; the added details do not change the reported metrics but improve their interpretability. revision: yes
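Of the feature families named here, EEG band powers are the most standardized: power summed over canonical frequency bands of a windowed channel. The sketch below uses a simple periodogram as the spectral estimator; the paper's exact estimator and band edges are not specified in the text above, so both are assumptions.

```python
import numpy as np

# Canonical EEG bands in Hz (assumed edges; conventions vary slightly).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(x, fs):
    """Per-band spectral power of one EEG channel window via a
    mean-removed periodogram."""
    x = np.asarray(x, float) - np.mean(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum())
            for name, (lo, hi) in BANDS.items()}
```

Time- and frequency-domain PPG features and fixation metrics from eye tracking would follow the same window-then-summarize pattern before entering the fused feature vector.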
Circularity Check
No circularity: purely empirical supervised modeling with independent labels and cross-validation
full rationale
The paper describes sensor data collection paired with repeated in-situ self-report probes as ground-truth labels, followed by standard supervised model training and participant-based cross-validation to produce performance metrics (MAE, accuracies, F1). No equations, derivations, fitted parameters renamed as predictions, or load-bearing self-citations appear in the provided text. The central claims rest on held-out empirical evaluation rather than any reduction to inputs by construction, satisfying the criteria for a self-contained non-circular result.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption Brief in-situ self-reports accurately capture momentary engagement as a ground truth label.
Reference graph
Works this paper leans on
-
[1]
Gerrit Anders, Jürgen Buder, Martin Merkt, Etienne Egger, and Markus Huff. 2024. Associations between mind wandering, viewer interactions, and the meaningful structure of educational videos. Computers & Education 212 (2024), 104996
2024
-
[2]
Andrea Apicella, Pasquale Arpaia, Mirco Frosolone, Giovanni Improta, Nicola Moccaldi, and Andrea Pollastro. 2022. EEG-based measurement system for monitoring student engagement in learning 4.0. Scientific Reports 12, 1 (2022), 5857
2022
-
[3]
James J Appleton, Sandra L Christenson, Dongjin Kim, and Amy L Reschly. 2006. Measuring cognitive and psychological engagement: Validation of the Student Engagement Instrument. Journal of school psychology 44, 5 (2006), 427–445
2006
-
[4]
John Arevalo, Thamar Solorio, Manuel Montes-y Gómez, and Fabio A González. 2017. Gated multimodal units for information fusion. arXiv preprint arXiv:1702.01992 (2017)
2017
-
[5]
Win-Ken Beh, Yi-Hsuan Wu, and An-Yeu Wu. 2021. Robust PPG-based mental workload assessment system using wearable devices. IEEE Journal of Biomedical and Health Informatics 27, 5 (2021), 2323–2333
2021
-
[6]
Gary G Berntson, J Thomas Bigger Jr, Dwain L Eckberg, Paul Grossman, Peter G Kaufmann, Marek Malik, Haikady N Nagaraja, Stephen W Porges, J Philip Saul, Peter H Stone, et al. 1997. Heart rate variability: origins, methods, and interpretive caveats. Psychophysiology 34, 6 (1997), 623–648
1997
-
[7]
Sizhen Bian, Mengxi Liu, Siyu Yuan, Lala Shakti Swarup Ray, Bo Zhou, Bin Guo, Zhiwen Yu, Thomas Ploetz, Paul Lukowicz, and Vitor Fortes Rey. 2026. Foundation Models Defining A New Era In Sensor-based Human Activity Recognition: A Survey And Outlook. arXiv preprint arXiv:2604.02711 (2026)
2026
-
[9]
Paulo Blikstein and Marcelo Worsley. 2016. Multimodal learning analytics and education data mining: Using computational technologies to measure complex learning tasks. Journal of learning analytics 3, 2 (2016), 220–238
2016
-
[10]
Phyllis C. Blumenfeld, Tali M. Kempler, and Joseph S. Krajcik. 2006. Motivation and Cognitive Engagement in Learning Environments. In The Cambridge Handbook of the Learning Sciences , R. Keith Sawyer (Ed.). Cambridge University Press, 475–488
2006
-
[11]
Brandon M Booth, Nigel Bosch, and Sidney K D’Mello. 2023. Engagement detection and its applications in learning: a tutorial and selective review. Proc. IEEE 111, 10 (2023), 1398–1422
2023
-
[12]
Nigel Bosch. 2016. Detecting student engagement: Human versus machine. In Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization. 317–320
2016
-
[13]
Wolfram Boucsein. 2012. Electrodermal activity. Springer science & business media
2012
-
[14]
Maritza Bustos-Lopez, Nicandro Cruz-Ramirez, Alejandro Guerra-Hernandez, Laura Nely Sánchez-Morales, Nancy Aracely Cruz-Ramos, and Giner Alor-Hernandez. 2022. Wearables for engagement detection in learning environments: A review. Biosensors 12, 7 (2022), 509
2022
-
[15]
Rafael A Calvo and Sidney D’Mello. 2010. Affect detection: An interdisciplinary review of models, methods, and their applications. IEEE Transactions on affective computing 1, 1 (2010), 18–37
2010
-
[16]
Michelene TH Chi and Ruth Wylie. 2014. The ICAP framework: Linking cognitive engagement to active learning outcomes. Educational psychologist 49, 4 (2014), 219–243
2014
-
[17]
Burcu Cinaz, Bert Arnrich, Roberto La Marca, and Gerhard Tröster. 2013. Monitoring of mental workload levels during an everyday life office-work scenario. Personal and ubiquitous computing 17, 2 (2013), 229–239
2013
-
[18]
Colin Conrad and Aaron Newman. 2021. Measuring mind wandering during online lectures assessed with EEG. Frontiers in Human Neuroscience 15 (2021), 697532
2021
-
[19]
Hugo D Critchley. 2002. Electrodermal responses: what happens in the brain. The Neuroscientist 8, 2 (2002), 132–142
2002
-
[20]
Mihaly Csikszentmihalyi and Reed Larson. 1987. Validity and reliability of the experience-sampling method. The Journal of nervous and mental disease 175, 9 (1987), 526–536
1987
-
[21]
Alex Dan, Miriam Reiner, et al. 2017. Real time EEG based measurements of cognitive load indicates mental states during learning. Journal of Educational Data Mining 9, 2 (2017), 31–44
2017
-
[22]
Dries De Weerdt, Mathea Simons, and Elke Struyf. 2024. Measuring student engagement in lessons using an experience sampling methodology: The development and validation of the dynamic engagement with learning questionnaire. Journal of Psychoeducational Assessment 42, 5 (2024), 527–539
2024
-
[23]
Egon Dejonckheere, Febe Demeyer, Birte Geusens, Maarten Piot, Francis Tuerlinckx, Stijn Verdonck, and Merijn Mestdagh. 2022. Assessing the reliability of single-item momentary affective measurements in experience sampling. Psychological assessment 34, 12 (2022), 1138. This manuscript is under review. Please write to zleng7@gatech.edu for up-to-date information
2022
-
[24]
Elena Di Lascio, Shkurta Gashi, and Silvia Santini. 2018. Unobtrusive assessment of students’ emotional engagement during lectures using electrodermal activity sensors. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 2, 3 (2018), 1–21
2018
-
[25]
Betsy DiSalvo, Dheeraj Bandaru, Qiaosi Wang, Hong Li, and Thomas Plötz. 2022. Reading the room: Automated, momentary assessment of student engagement in the classroom: Are we there yet? Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, 3 (2022), 1–26
2022
-
[26]
Sidney D’Mello and Art Graesser. 2012. Dynamics of affective states during complex learning. Learning and Instruction 22, 2 (2012), 145–157
2012
-
[27]
Jennifer A Fredricks, Phyllis C Blumenfeld, and Alison H Paris. 2004. School engagement: Potential of the concept, state of the evidence. Review of educational research 74, 1 (2004), 59–109
2004
-
[28]
Nan Gao, Mohammad Saiedur Rahaman, Wei Shao, Kaixin Ji, and Flora D Salim. 2022. Individual and group-wise classroom seating experience: Effects on student engagement in different courses. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6, 3 (2022), 1–23
2022
-
[29]
Nan Gao, Wei Shao, Mohammad Saiedur Rahaman, and Flora D Salim. 2020. n-gage: Predicting in-class emotional, behavioural and cognitive engagement in the wild. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 4, 3 (2020), 1–26
2020
-
[30]
Mononito Goswami, Konrad Szafer, Arjun Choudhry, Yifu Cai, Shuo Li, and Artur Dubrawski. 2024. MOMENT: A Family of Open Time-series Foundation Models. In International Conference on Machine Learning
2024
-
[31]
Philip J Guo, Juho Kim, and Rob Rubin. 2014. How video production affects student engagement: An empirical study of MOOC videos. In Proceedings of the first ACM conference on Learning@scale. 41–50
2014
-
[32]
Jiaman He, Zikang Leng, Dana McKay, Johanne R Trippas, and Damiano Spina. 2025. Characterising Topic Familiarity and Query Specificity Using Eye-Tracking Data. In Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2602–2606
2025
-
[33]
Jiaman He, Marta Micheli, Damiano Spina, Dana McKay, Johanne R Trippas, and Noriko Kando. 2026. Characterizing Personality from Eye-Tracking: The Role of Gaze and Its Absence in Interactive Search Environments. In Proceedings of the 2026 Conference on Human Information Interaction and Retrieval. 193–203
2026
-
[34]
Curtis R Henrie, Lisa R Halverson, and Charles R Graham. 2015. Measuring student engagement in technology-mediated learning: A review. Computers & Education 90 (2015), 36–53
2015
-
[35]
Anne Horvers, Natasha Tombeng, Tibor Bosse, Ard W Lazonder, and Inge Molenaar. 2021. Detecting emotions through electrodermal activity in learning contexts: A systematic review. Sensors 21, 23 (2021), 7869
2021
-
[36]
Stephen Hutt, Jessica Hardey, Robert Bixler, Angela Stewart, Evan Risko, and Sidney K D’Mello. 2017. Gaze-Based Detection of Mind Wandering during Lecture Viewing. International Educational Data Mining Society (2017)
2017
-
[37]
Stephen Hutt, Kristina Krasich, Caitlin Mills, Nigel Bosch, Shelby White, James R Brockmole, and Sidney K D’Mello. 2019. Automated gaze-based mind wandering detection during computerized learning in classrooms. User Modeling and User-Adapted Interaction 29, 4 (2019), 821–867
2019
-
[38]
Robert A Jacobs, Michael I Jordan, Steven J Nowlan, and Geoffrey E Hinton. 1991. Adaptive mixtures of local experts. Neural computation 3, 1 (1991), 79–87
1991
- [39]
-
[40]
Ella R Kahu. 2013. Framing student engagement in higher education. Studies in higher education 38, 5 (2013), 758–773
2013
-
[41]
Ella R Kahu and Karen Nelson. 2018. Student engagement in the educational interface: Understanding the mechanisms of student success. Higher education research & development 37, 1 (2018), 58–71
2018
-
[42]
Fahim Kawsar, Chulhong Min, Akhil Mathur, Alessandro Montanari, Utku Günay Acer, and Marc Van den Broeck. 2018. eSense: Open earable platform for human sensing. In Proceedings of the 16th ACM Conference on Embedded Networked Sensor Systems . 371–372
2018
-
[43]
Juho Kim, Philip J Guo, Daniel T Seaton, Piotr Mitros, Krzysztof Z Gajos, and Robert C Miller. 2014. Understanding in-video dropouts and interaction peaks in online lecture videos. In Proceedings of the first ACM conference on Learning@scale. 31–40
2014
-
[44]
René F Kizilcec, Chris Piech, and Emily Schneider. 2013. Deconstructing disengagement: analyzing learner subpopulations in massive open online courses. In Proceedings of the third international conference on learning analytics and knowledge . 170–179
2013
-
[45]
David R Krathwohl. 2002. A revision of Bloom’s taxonomy: An overview. Theory into practice 41, 4 (2002), 212–218
2002
-
[46]
Shelbi L Kuhlmann, Robert Plumley, Zoe Evans, Matthew L Bernacki, Jeffrey A Greene, Kelly A Hogan, Michael Berro, Kathleen Gates, and Abigail Panter. 2024. Students’ active cognitive engagement with instructional videos predicts STEM learning. Computers & Education 216 (2024), 105050
2024
-
[47]
Farzana Kulsoom, Sanam Narejo, Zahid Mehmood, Hassan Nazeer Chaudhry, Ayesha Butt, and Ali Kashif Bashir. 2022. A review of machine learning-based human activity recognition for diverse applications. Neural Computing and Applications 34, 21 (2022), 18289–18324
2022
-
[48]
Jun Li, Aaron Aguirre, Junior Moura, Che Liu, Lanhai Zhong, Chenxi Sun, Gari Clifford, Brandon Westover, and Shenda Hong. 2024. An electrocardiogram foundation model built on over 10 million recordings with external evaluation across multiple domains. arXiv preprint arXiv:2410.04133 (2024)
-
[49]
Chang Liu, Xiangyang Wang, Chun Yu, Yingtian Shi, Chongyang Wang, Ziqi Liu, Chen Liang, and Yuanchun Shi. 2025. Enhancing Smartphone Eye Tracking with Cursor-Based Interactive Implicit Calibration. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. 1–22
2025
-
[50]
Change Liu, Chun Yu, Xiangyang Wang, Jianxiao Jiang, Tiaoao Yang, Bingda Tang, Yingtian Shi, Chen Liang, and Yuanchun Shi. 2024. Calibread: Unobtrusive eye tracking calibration from natural reading behavior. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, 4 (2024), 1–30
2024
-
[51]
Yunfei Luo, Yuliang Chen, Asif Salekin, and Tauhidur Rahman. 2024. Toward foundation model for multivariate wearable sensing of physiological signals. ACM Transactions on Computing for Healthcare (2024)
2024
-
[52]
Sebastian Mach, Pamela Storozynski, Josephine Halama, and Josef F Krems. 2022. Assessing mental workload with wearable devices–Reliability and applicability of heart rate and motion measurements. Applied ergonomics 105 (2022), 103855
2022
-
[53]
Kristine C Manwaring, Ross Larsen, Charles R Graham, Curtis R Henrie, and Lisa R Halverson. 2017. Investigating student engagement in blended learning settings using experience sampling and structural equation modeling. The Internet and Higher Education 35 (2017), 21–33
2017
-
[54]
Andrew J Martin, Marianne Mansour, and Lars-Erik Malmberg. 2020. What factors influence students’ real-time motivation and engagement? An experience sampling study of high school students using mobile technology. Educational Psychology 40, 9 (2020), 1113–1135
2020
-
[55]
Massachusetts Institute of Technology. 2001. MIT OpenCourseWare. https://ocw.mit.edu
2001
-
[56]
Cynthia J Miller, Jacquee McNear, and Michael J Metz. 2013. A comparison of traditional and engaging lecture methods in a large, professional-level course. Advances in physiology education 37, 4 (2013), 347–355
2013
-
[57]
Hamed Monkaresi, Nigel Bosch, Rafael A Calvo, and Sidney K D’Mello. 2016. Automated detection of engagement using video-based estimation of facial expressions and heart rate. IEEE Transactions on Affective Computing 8, 1 (2016), 15–28
2016
-
[58]
Selene Mota and Rosalind W Picard. 2003. Automated posture analysis for detecting learner’s interest level. In 2003 Conference on computer vision and pattern recognition workshop, Vol. 5. IEEE, 49–49
2003
-
[59]
Michael Noetel, Shantell Griffith, Oscar Delaney, Taren Sanders, Philip Parker, Borja del Pozo Cruz, and Chris Lonsdale. 2021. Video improves learning in higher education: A systematic review. Review of educational research 91, 2 (2021), 204–236
2021
-
[60]
Xavier Ochoa and Marcelo Worsley. 2016. Editorial: Augmenting learning analytics with multimodal sensory data. Journal of Learning Analytics 3, 2 (2016), 213–219
2016
-
[61]
Francisco Javier Ordóñez and Daniel Roggen. 2016. Deep convolutional and lstm recurrent neural networks for multimodal wearable activity recognition. Sensors 16, 1 (2016), 115
2016
-
[62]
Adrian M Owen, Kathryn M McMillan, Angela R Laird, and Ed Bullmore. 2005. N-back working memory paradigm: A meta-analysis of normative functional neuroimaging studies. Human brain mapping 25, 1 (2005), 46–59
2005
-
[63]
Leisi Pei, Morris Siu-Yung Jong, Junjie Shang, and Guang Ouyang. 2025. Design and validation of an electroencephalogram-supported approach to tracking real-time cognitive load variations for adaptive video-based learning. British Journal of Educational Technology 56, 4 (2025), 1553–1572
2025
-
[64]
Paul R Pintrich and Elisabeth V De Groot. 1990. Motivational and self-regulated learning components of classroom academic performance. Journal of educational psychology 82, 1 (1990), 33
1990
-
[65]
Alan T Pope, Edward H Bogart, and Debbie S Bartolome. 1995. Biocybernetic system evaluates indices of operator engagement in automated task. Biological psychology 40, 1-2 (1995), 187–195
1995
-
[66]
Mithun Saha, Maxwell A Xu, Wanting Mao, Sameer Neupane, James M Rehg, and Santosh Kumar. 2025. Pulse-ppg: An open-source field-trained ppg foundation model for wearable applications across lab and field settings. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 9, 3 (2025), 1–35
2025
-
[67]
Reza Sarailoo, Kayhan Latifzadeh, S Hamid Amiri, Alireza Bosaghzadeh, and Reza Ebrahimpour. 2022. Assessment of instantaneous cognitive load imposed by educational multimedia using electroencephalography signals. Frontiers in neuroscience 16 (2022), 744737
2022
-
[68]
Andrey V Savchenko, Lyudmila V Savchenko, and Ilya Makarov. 2022. Classifying emotions and engagement in online learning based on a single facial expression recognition neural network. IEEE transactions on affective computing 13, 4 (2022), 2132–2143
2022
-
[69]
Noah L Schroeder, William L Romine, and Sidney E Kemp. 2023. A scoping review of wrist-worn wearables in education. Computers and Education Open 5 (2023), 100154
2023
-
[70]
Margitta Seeck, Laurent Koessler, Thomas Bast, Frans Leijten, Christoph Michel, Christoph Baumgartner, Bin He, and Sándor Beniczky. 2017. The standardized EEG electrode array of the IFCN. Clinical Neurophysiology 128, 10 (2017), 2070–2077. doi:10.1016/j.clinph.2017.06.254
-
[71]
David J Shernof, Erik A Ruzek, Alexander J Sannella, Roberta Y Schorr, Lina Sanchez-Wall, and Denise M Bressler. 2017. Student engagement as a general factor of classroom experience: Associations with student practices and educational outcomes in a university gateway course. Frontiers in psychology 8 (2017), 994
2017
-
[72]
David J Shernoff, Mihaly Csikszentmihalyi, Barbara Schneider, and Elisa Steele Shernoff. 2014. Student engagement in high school classrooms from the perspective of flow theory. In Applications of flow in human development and education: The collected works of Mihaly Csikszentmihalyi . Springer, 475–494
2014
-
[73]
Saul Shiffman, Arthur A Stone, and Michael R Hufford. 2008. Ecological momentary assessment. Annu. Rev. Clin. Psychol. 4, 1 (2008), 1–32
2008
-
[74]
Satya P. Singh, Madan Kumar Sharma, Aimé Lay-Ekuakille, Deepak Gangwar, and Sukrit Gupta. 2021. Deep ConvLSTM With Self-Attention for Human Activity Decoding Using Wearable Sensors. IEEE Sensors Journal 21, 6 (2021), 8575–8582. doi:10.1109/JSEN.2020.3045135
-
[75]
Jonathan Smallwood and Jonathan W Schooler. 2015. The science of mind wandering: Empirically navigating the stream of consciousness. Annual review of psychology 66, 1 (2015), 487–518
2015
-
[76]
Jiyoung Song, Esther Howe, Joshua R Oltmanns, and Aaron J Fisher. 2023. Examining the concurrent and predictive validity of single items in ecological momentary assessments. Assessment 30, 5 (2023), 1662–1671
2023
-
[77]
Arthur A Stone, Stefan Schneider, and Joshua M Smyth. 2023. Evaluation of pressing issues in ecological momentary assessment. Annual Review of Clinical Psychology 19 (2023), 107–131
2023
-
[78]
Valdemar Švábenskỳ, Brendan Flanagan, Erwin Daniel López Zapata, and Atsushi Shimada. 2026. Open Datasets in Learning Analytics: Trends, Challenges, and Best Practice. ACM Transactions on Knowledge Discovery from Data (2026)
2026
-
[79]
Karl K Szpunar, Novall Y Khan, and Daniel L Schacter. 2013. Interpolated memory tests reduce mind wandering and improve learning of online lectures. Proceedings of the National Academy of Sciences 110, 16 (2013), 6313–6317
2013
-
[80]
Jiankai Tang, Zhe He, Mingyu Zhang, Wei Geng, Chengchi Zhou, Weinan Shi, Yuanchun Shi, and Yuntao Wang. 2025. ?-Ring: A Smart Ring Platform for Multimodal Physiological and Behavioral Sensing. In Companion of the 2025 ACM International Joint Conference on Pervasive and Ubiquitous Computing. 1271–1277
2025
discussion (0)