pith. machine review for the scientific record.

arxiv: 2512.23926 · v3 · submitted 2025-12-30 · 💻 cs.NE · nlin.CD

Recognition: 2 theorem links · Lean Theorem

Identification of fixations and saccades in eye-tracking data using adaptive threshold-based method

Authors on Pith: no claims yet

Pith reviewed 2026-05-16 19:51 UTC · model grok-4.3

classification 💻 cs.NE nlin.CD
keywords eye-tracking · fixation detection · saccade detection · adaptive thresholding · Markov model · noise robustness · velocity threshold · dispersion threshold

The pith

Adaptive thresholds optimized by minimizing Markov state transitions improve fixation and saccade detection in noisy eye-tracking data.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces an adaptive thresholding method that treats eye-gaze as a two-state Markov process with fixations and saccades as states. The threshold is chosen to minimize the K-ratio, which counts state transitions, rather than relying on fixed ad-hoc values. This is tested on velocity, angular velocity, and dispersion algorithms using free-viewing and visual search tasks. In clean data a fixed velocity threshold reaches 90-93 percent accuracy, but all fixed methods drop sharply with added noise; the adaptive versions recover performance, with dispersion staying above 81 percent even at 50-pixel noise. The approach supplies a data-driven way to handle inter-task and inter-individual variability in oculomotor behavior.

Core claim

The authors claim that replacing fixed thresholds with ones that minimize state transitions in a Markov approximation of eye-gaze dynamics yields more accurate and noise-robust classification of fixations versus saccades. When applied to dispersion-based detection, this adaptive rule maintains accuracy above 81 percent at extreme noise levels where fixed thresholds fall below 20 percent, although it trades off some saccade recall for fixation precision.

What carries the argument

K-ratio minimization inside a two-state Markov model of eye-gaze dynamics, where the threshold parameter is selected to reduce the number of transitions between fixation and saccade states.
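The page never states the K-ratio formula (the referee's third minor comment asks for exactly this), so the sketch below assumes the most literal reading: classify each sample by thresholding speed, count fixation/saccade state flips, and keep the threshold that minimizes the flip rate. The function names (`k_ratio`, `adaptive_threshold`) and the degenerate-threshold guard are our assumptions, not the paper's.

```python
import numpy as np

def k_ratio(labels):
    """Assumed K-ratio: fraction of consecutive samples whose
    fixation/saccade state flips, i.e. transitions / (N - 1)."""
    labels = np.asarray(labels, dtype=bool)
    return np.mean(labels[1:] != labels[:-1])

def adaptive_threshold(speed, candidates):
    """Return the candidate threshold minimizing the K-ratio.

    A threshold below (or above) every speed sample yields a single state
    and trivially zero transitions, so such degenerate candidates are skipped.
    """
    best_t, best_k = None, np.inf
    for t in candidates:
        labels = speed > t          # True = saccade, False = fixation
        if labels.all() or not labels.any():
            continue
        k = k_ratio(labels)
        if k < best_k:
            best_t, best_k = t, k
    return best_t

# Toy trace: slow fixation jitter punctuated by two fast saccade bursts.
rng = np.random.default_rng(0)
speed = np.abs(rng.normal(2.0, 0.5, 500))
speed[100:110] = 80.0
speed[300:312] = 95.0
t_star = adaptive_threshold(speed, np.linspace(1, 60, 60))
```

On this trace, any threshold between the fixation jitter and the burst speed produces the same two clean transitions per burst, so the minimizer lands just above the jitter range and both bursts are recovered.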

If this is right

  • Velocity thresholds give the highest accuracy (90-93 percent) when data are clean.
  • Adaptive optimization raises performance for velocity, angular-velocity, and dispersion methods once noise is present.
  • Adaptive dispersion thresholds keep accuracy above 81 percent even at 50-pixel noise.
  • A precision-recall trade-off appears that favors fixation detection over saccade detection.
  • Algorithm choice can be guided by measured data quality and by whether fixation or saccade statistics matter more.
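The precision-recall trade-off in the fourth bullet can be computed directly from any labeling against ground truth. A minimal sketch with hypothetical label arrays (not the paper's data):

```python
import numpy as np

def precision_recall(pred, truth):
    """Precision and recall for the saccade (True) class."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(truth.sum(), 1)
    return precision, recall

# A conservative detector that flags only the core of each saccade misses
# its flanks: high precision, lower recall -- the trade-off described above.
truth = np.array([0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0], dtype=bool)
pred  = np.array([0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0], dtype=bool)
p, r = precision_recall(pred, truth)
```

Here every flagged sample is a true saccade (precision 1.0) but half the saccade samples are missed (recall 0.5), which is the direction of the trade-off the review reports for adaptive dispersion thresholds.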

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same Markov-minimization idea could be tested on smooth-pursuit or microsaccade detection without new labeled data.
  • Real-time implementations might allow automatic retuning during an experiment when noise statistics change.
  • Combining the adaptive threshold with simple signal filters could reduce the precision-recall trade-off observed for dispersion.

Load-bearing premise

That the threshold minimizing the number of state transitions in the Markov model is the right criterion for separating fixations from saccades.

What would settle it

A dataset with hand-labeled or high-precision ground-truth fixations and saccades, corrupted by Gaussian noise of standard deviation 50 pixels, on which the K-ratio-minimizing dispersion threshold fails to reach 81 percent accuracy.
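That probe is straightforward to script. A minimal sketch, assuming an I-DT-style dispersion measure (x-range plus y-range over a sliding window), a transition-minimizing threshold search, and synthetic ground truth; the window size, candidate grid, and trajectory are our choices, not the paper's:

```python
import numpy as np

def dispersion(x, y, half_window=5):
    """I-DT-style dispersion: (max-min of x) + (max-min of y) in a sliding window."""
    n = len(x)
    d = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        d[i] = (x[lo:hi].max() - x[lo:hi].min()) + (y[lo:hi].max() - y[lo:hi].min())
    return d

def transition_minimizing_labels(d, candidates):
    """Label saccades (True) using the threshold that minimizes state flips."""
    best, best_k = None, np.inf
    for t in candidates:
        labels = d > t
        if labels.all() or not labels.any():
            continue                      # single-state labelings are degenerate
        k = np.mean(labels[1:] != labels[:-1])
        if k < best_k:
            best, best_k = labels, k
    return best

# Ground truth: two fixations joined by a 10-sample, 800 px saccade.
n = 400
truth = np.zeros(n, dtype=bool)
truth[195:205] = True
x = np.concatenate([np.full(195, 100.0), np.linspace(100, 900, 10), np.full(195, 900.0)])
y = np.full(n, 300.0)

rng = np.random.default_rng(1)
x += rng.normal(0, 50, n)                 # sigma = 50 px, the paper's extreme case
y += rng.normal(0, 50, n)

d = dispersion(x, y)
# Search between the median and 99th percentile: thresholds at the extremes give
# near-degenerate labelings that minimize flips for the wrong reason.
candidates = np.linspace(np.percentile(d, 50), np.percentile(d, 99), 200)
labels = transition_minimizing_labels(d, candidates)
accuracy = np.mean(labels == truth)
```

If the adaptive dispersion rule failed to clear roughly 81 percent accuracy on traces like this, using real ground-truth labels rather than synthetic ones, the headline robustness claim would be falsified.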

read the original abstract

Properties of ocular fixations and saccades are highly stochastic during many experimental tasks, and their statistics are often used as proxies for various aspects of cognition. Although distinguishing saccades from fixations is not trivial, experimentalists generally use common ad-hoc thresholds in detection algorithms. This neglects inter-task and inter-individual variability in oculomotor dynamics, and potentially biases the resulting statistics. In this article, we introduce and evaluate an adaptive method based on a Markovian approximation of eye-gaze dynamics, using saccades and fixations as states such that the optimal threshold minimizes state transitions. Applying this to three common threshold-based algorithms (velocity, angular velocity, and dispersion), we evaluate the overall accuracy against a multi-threshold benchmark as well as robustness to noise. We find that a velocity threshold achieves the highest baseline accuracy (90-93%) across both free-viewing and visual search tasks. However, velocity-based methods degrade rapidly under noise when thresholds remain fixed, with accuracy falling below 20% at high noise levels. Adaptive threshold optimization via K-ratio minimization substantially improves performance under noisy conditions for all algorithms. Adaptive dispersion thresholds demonstrate superior noise robustness, maintaining accuracy above 81% even at extreme noise levels (σ = 50 px), though a precision-recall trade-off emerges that favors fixation detection at the expense of saccade identification. In addition to demonstrating our parsimonious adaptive thresholding method, these findings provide practical guidance for selecting and tuning classification algorithms based on data quality and analytical priorities.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 3 minor

Summary. The paper introduces an adaptive threshold method for classifying fixations and saccades in eye-tracking data. It approximates gaze dynamics with a two-state Markov model (fixation/saccade) and selects the threshold minimizing the K-ratio of state transitions. This is applied to velocity, angular-velocity, and dispersion algorithms. On free-viewing and visual-search tasks the velocity method reaches 90-93% baseline accuracy against a multi-threshold benchmark; under additive Gaussian noise all fixed-threshold versions degrade sharply, but K-ratio adaptation restores performance, with dispersion thresholds retaining >81% accuracy even at σ=50 px (at the cost of a fixation/saccade precision-recall trade-off).

Significance. If the K-ratio criterion is shown to track benchmark-optimal thresholds, the method supplies a parsimonious, data-driven alternative to hand-tuned fixed thresholds, directly addressing inter-subject and inter-task variability that currently biases oculomotor statistics. The concrete accuracy figures, noise-robustness results, and explicit comparison to a multi-threshold reference provide a usable practical guideline for algorithm selection under varying data quality.

major comments (2)
  1. [Evaluation / Results] The central claim that K-ratio minimization yields the threshold that optimally separates fixations from saccades is not directly verified. The manuscript reports accuracy gains relative to the multi-threshold benchmark but does not demonstrate that the K-minimizing threshold coincides with (or is close to) the benchmark-optimal threshold at each noise level; without this correspondence the reported improvements cannot be attributed to the Markov criterion rather than incidental effects of threshold adjustment.
  2. [Results] No statistical tests, confidence intervals, or participant/trial counts are provided for the accuracy figures (90-93% baseline, >81% at σ=50 px). This omission makes it impossible to judge whether the claimed superiority of adaptive dispersion thresholds is reliable or merely descriptive.
minor comments (3)
  1. The abstract and text refer to a 'multi-threshold benchmark' without describing its construction, the range of thresholds tested, or how the reference labels are obtained; this information is required for reproducibility.
  2. Dataset details (number of participants, recording device, sampling rate, task instructions, and preprocessing steps) are not supplied, preventing assessment of generalizability.
  3. Notation for the K-ratio and the precise definition of 'state transitions' in the two-state Markov approximation should be given explicitly, preferably with a short equation or pseudocode.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback. We address each major comment below and will revise the manuscript to incorporate the suggested improvements.

read point-by-point responses
  1. Referee: [Evaluation / Results] The central claim that K-ratio minimization yields the threshold that optimally separates fixations from saccades is not directly verified. The manuscript reports accuracy gains relative to the multi-threshold benchmark but does not demonstrate that the K-minimizing threshold coincides with (or is close to) the benchmark-optimal threshold at each noise level; without this correspondence the reported improvements cannot be attributed to the Markov criterion rather than incidental effects of threshold adjustment.

    Authors: We agree that a direct verification of the correspondence between K-ratio-minimizing thresholds and benchmark-optimal thresholds at each noise level would strengthen attribution of the gains to the Markov criterion. In the revised manuscript we will add an analysis (new figure and accompanying text) that plots the K-minimizing threshold against the benchmark-optimal threshold across noise levels for all three algorithms, thereby demonstrating the alignment. revision: yes

  2. Referee: [Results] No statistical tests, confidence intervals, or participant/trial counts are provided for the accuracy figures (90-93% baseline, >81% at σ=50 px). This omission makes it impossible to judge whether the claimed superiority of adaptive dispersion thresholds is reliable or merely descriptive.

    Authors: We accept that the current manuscript lacks these details. The revised version will report the exact participant and trial counts, include confidence intervals (or standard errors) around all accuracy figures, and add appropriate statistical comparisons (e.g., paired tests between fixed and adaptive thresholds) where the data permit. revision: yes

Circularity Check

0 steps flagged

No significant circularity in the derivation chain

full rationale

The paper defines an adaptive threshold via K-ratio minimization on a two-state Markov model of gaze dynamics, then evaluates the resulting classifications against an independent multi-threshold benchmark. This benchmark serves as an external reference rather than a quantity derived from the same minimization. No equations reduce the claimed accuracy gains to the K-ratio fit by construction, and no self-citations or prior author results are invoked as load-bearing uniqueness theorems. The method is presented as a heuristic choice whose merit is assessed by separate performance metrics, keeping the derivation self-contained.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The method rests primarily on the domain assumption of a two-state Markov model for gaze dynamics; apart from the K-ratio driving the optimization procedure, no free parameters or invented entities are introduced.

free parameters (1)
  • K-ratio
    Tunable element in the minimization procedure used to select the optimal threshold.
axioms (1)
  • domain assumption Eye-gaze dynamics can be approximated as a two-state Markov process with fixations and saccades as states.
    This approximation directly defines the optimality criterion as minimizing state transitions.
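The domain assumption can be made concrete: any fixation/saccade labeling induces a 2×2 transition matrix, and minimizing state transitions amounts to pushing probability mass onto its diagonal. A sketch of the estimator (our construction; the paper supplies no such code):

```python
import numpy as np

def transition_matrix(labels):
    """Estimate the two-state Markov transition matrix from a
    fixation(0)/saccade(1) label sequence by counting pair frequencies."""
    m = np.zeros((2, 2))
    for a, b in zip(labels[:-1], labels[1:]):
        m[a, b] += 1
    rows = m.sum(axis=1, keepdims=True)
    # Normalize each row to probabilities; all-zero rows stay zero.
    return np.divide(m, rows, out=np.zeros_like(m), where=rows > 0)

seq = [0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0]
P = transition_matrix(seq)
# Minimizing transitions concentrates mass on P[0, 0] and P[1, 1].
```

For this toy sequence the fixation row is (0.75, 0.25) and the saccade row is (2/3, 1/3); a lower-transition labeling would push both diagonal entries toward 1.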

pith-pipeline@v0.9.0 · 5599 in / 1135 out tokens · 48090 ms · 2026-05-16T19:51:49.023237+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
