pith. machine review for the scientific record.

arxiv: 2603.23777 · v1 · submitted 2026-03-24 · 💻 cs.RO · cs.AI · cs.SY · eess.SY

Recognition: 2 theorem links

· Lean Theorem

Human-in-the-Loop Pareto Optimization: Trade-off Characterization for Assist-as-Needed Training and Performance Evaluation

Authors on Pith · no claims yet

Pith reviewed 2026-05-14 23:58 UTC · model grok-4.3

classification 💻 cs.RO · cs.AI · cs.SY · eess.SY
keywords Pareto optimization · human-in-the-loop · assist-as-needed · motor learning · rehabilitation · Bayesian optimization · performance evaluation · haptic feedback

The pith

Human-in-the-loop Pareto optimization characterizes trade-offs between task performance and perceived challenge for adaptive motor training.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes a method to systematically map the inherent trade-off between how well a person performs a motor task and how challenging they find it. This characterization is done in real time with the human providing feedback during the optimization process. Knowing this trade-off surface matters for creating assist-as-needed training programs that adjust help based on individual needs and for fairly assessing whether training has improved a person's capabilities even when they still require assistance. The approach adapts Bayesian optimization to handle both measurable performance scores and subjective challenge ratings from users.

Core claim

By adapting Bayesian multi-criteria optimization to a human-in-the-loop setting, the trade-off between a quantitative performance metric and a qualitative perceived challenge metric can be efficiently characterized as a Pareto front. This is demonstrated in a user study involving a manual skill training task with haptic feedback. The resulting characterization supports designing assist-as-needed protocols, evaluating their group-level efficacy against baseline methods, comparing individual progress before and after training under varying assistance, and making fair performance comparisons across different users by accounting for their best possible performance at all assistance levels.

What carries the argument

Human-in-the-loop Pareto optimization using Bayesian multi-criteria optimization with a hybrid model of quantitative performance and qualitative challenge metrics, which explores the trade-off surface in real time with human participants.
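A rough sketch of this machinery, not the authors' implementation: one Gaussian-process surrogate per objective over the assistance parameter, with a random-scalarization acquisition rule standing in for whichever multi-criteria acquisition the paper actually uses. All assistance levels, performance scores, and challenge ratings below are invented for illustration.

```python
import numpy as np

def gp_posterior(x_train, y_train, x_query, length=0.2, noise=0.1):
    """Posterior mean/variance of a GP with an RBF kernel — a minimal
    stand-in for the surrogate models used in Bayesian optimization."""
    k = lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)
    K = k(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = k(x_query, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    # diag of Ks K^{-1} Ks^T, subtracted from the prior variance (=1 for RBF)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, np.maximum(var, 0.0)

# Hypothetical assistance levels already tried, with a measured
# performance score and a normalized perceived-challenge rating.
x = np.array([0.1, 0.4, 0.7, 0.9])
perf = np.array([0.3, 0.6, 0.8, 0.85])
chal = np.array([0.9, 0.6, 0.5, 0.2])   # lower = less challenging

grid = np.linspace(0, 1, 101)
m_perf, v_perf = gp_posterior(x, perf, grid)
m_chal, v_chal = gp_posterior(x, chal, grid)

# Random-scalarization acquisition: maximize a randomly weighted upper
# confidence bound (reward performance, penalize challenge) to pick the
# assistance level for the next human trial.
rng = np.random.default_rng(0)
w = rng.dirichlet([1, 1])
ucb = w[0] * (m_perf + np.sqrt(v_perf)) - w[1] * (m_chal - np.sqrt(v_chal))
next_assistance = grid[np.argmax(ucb)]
```

Each human trial updates the two surrogates, and the set of non-dominated observations accumulated over trials approximates the Pareto front.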

If this is right

  • The characterized trade-off can be used to design assist-as-needed training protocols for motor learning tasks.
  • Group-level efficacy of the AAN protocol can be evaluated relative to a baseline adaptive assistance protocol.
  • Individual-level comparisons of trade-offs before and after training enable fair evaluation of progress even when users need assistance.
  • The trade-offs allow fair performance comparisons among different users under all feasible assistance levels.
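All four uses rest on extracting the non-dominated trade-off points, which is a standard Pareto filter. A minimal sketch with hypothetical (performance, Likert challenge) observations, where performance is maximized and perceived challenge minimized:

```python
def pareto_front(points):
    """Return indices of non-dominated (performance, challenge) pairs:
    performance is maximized, perceived challenge is minimized."""
    front = []
    for i, (pi, ci) in enumerate(points):
        dominated = any(
            (pj >= pi and cj <= ci) and (pj > pi or cj < ci)
            for j, (pj, cj) in enumerate(points) if j != i
        )
        if not dominated:
            front.append(i)
    return front

# Hypothetical observations: (task performance, 1-7 Likert challenge)
obs = [(0.9, 6), (0.7, 3), (0.5, 2), (0.6, 5), (0.8, 4)]
front = pareto_front(obs)   # (0.6, 5) is dominated by (0.7, 3)
```

Comparing two such fronts (pre vs. post training, or user A vs. user B) is what grounds the "fair comparison" claims: each front records the best achievable performance at every challenge level.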

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • Such real-time trade-off mapping could extend to non-motor tasks like cognitive training where subjective effort and objective accuracy also trade off.
  • Integrating this with machine learning models of user behavior might allow predictive assistance without constant human feedback.
  • Testing in diverse populations, such as stroke patients versus healthy learners, could reveal if the Pareto fronts differ systematically.

Load-bearing premise

The hybrid model combining quantitative performance with qualitative perceived challenge accurately captures the true trade-off surface that matters for training design.

What would settle it

If applying the characterized Pareto-optimal assistance levels in a new training session fails to produce better learning outcomes or user satisfaction than a standard fixed or random assistance schedule, the practical value of the characterization would be undermined.

Figures

Figures reproduced from arXiv: 2603.23777 by Harun Tolasa, Volkan Patoglu.

Figure 1. Proposed HiL Pareto characterization scheme and its application to AAN training, within- and between-participant performance evaluations.
Figure 2. Selection of non-dominated solutions from the Pareto front.
Figure 3. (a) A participant holding a force-feedback joystick (HandsOn-SEA).
Figure 4. Experimental procedure.
Figure 5. In the top rows, the orange and blue lines show the mean of the surrogate function for the perceived challenge and the task performance, respectively.
Figure 6. Results for a sample participant with low performance.
Figure 7. Violin plots of the unassisted pre- and post-training performance.
Figure 8. Comparison of aggregate Pareto solutions of control (a) and test (b) groups during pre- and post-training.
Figure 9. Assistance provided to participants in the control group.
Figure 10. A snapshot during HiL Pareto characterization and AAN training with the AssistOn-Arm upper-extremity exoskeleton.
Original abstract

During human motor skill training and physical rehabilitation, there is an inherent trade-off between task difficulty and user performance. Characterizing this trade-off is crucial for evaluating user performance, designing assist-as-needed (AAN) protocols, and assessing the efficacy of training protocols. In this study, we propose a novel human-in-the-loop (HiL) Pareto optimization approach to characterize the trade-off between task performance and the perceived challenge level of motor learning or rehabilitation tasks. We adapt Bayesian multi-criteria optimization to systematically and efficiently perform HiL Pareto characterizations. Our HiL optimization employs a hybrid model that measures performance with a quantitative metric, while the perceived challenge level is captured with a qualitative metric. We demonstrate the feasibility of the proposed HiL Pareto characterization through a user study. Furthermore, we present the utility of the framework through three use cases in the context of a manual skill training task with haptic feedback. First, we demonstrate how the characterized trade-off can be used to design a sample AAN training protocol for a motor learning task and to evaluate the group-level efficacy of the proposed AAN protocol relative to a baseline adaptive assistance protocol. Second, we demonstrate that individual-level comparisons of the trade-offs characterized before and after the training session enable fair evaluation of training progress under different assistance levels. This evaluation method is more general than standard performance evaluations, as it can provide insights even when users cannot perform the task without assistance. Third, we show that the characterized trade-offs also enable fair performance comparisons among different users, as they capture the best possible performance of each user under all feasible assistance levels.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes a human-in-the-loop Pareto optimization framework that adapts Bayesian multi-criteria optimization to characterize the trade-off between a quantitative performance metric and a qualitative perceived-challenge rating in motor-learning and rehabilitation tasks. Feasibility is demonstrated through a user study, and utility is illustrated via three use cases: AAN protocol design with group-level efficacy comparison, pre/post individual trade-off comparison for training progress, and cross-user performance comparison under varying assistance levels.

Significance. If the recovered Pareto surfaces prove stable, the method supplies a more general evaluation tool than raw performance scores, particularly when users cannot complete tasks without assistance; the three use cases directly address practical needs in AAN design and fair progress assessment.

major comments (2)
  1. [Hybrid Model and User Study] The hybrid objective treats the qualitative challenge rating as a black-box function amenable to Gaussian-process modeling, yet the manuscript provides no test-retest reliability data, intra-user variance estimates, or sensitivity analysis for Likert-scale shifts of one point; such noise directly violates the smoothness and homoscedasticity assumptions required for the acquisition function to locate a stable Pareto front in few trials (see the Bayesian optimization description and the three use-case evaluations).
  2. [Use Cases 2 and 3] The claim that the characterized trade-offs enable fair pre/post and cross-user comparisons rests on the recovered front being a reliable reference; without reporting how observation noise in the qualitative metric propagates into the Pareto surface or showing repeated characterizations at fixed assistance levels, the fairness advantage over standard performance metrics remains unverified.
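The ±1-point sensitivity check the referee asks for could be prototyped along these lines. Everything here is illustrative: the data are synthetic, and the "stability" score (fraction of the baseline front that survives rating jitter) is a construction for this sketch, not a quantity from the paper.

```python
import numpy as np

def pareto_indices(perf, chal):
    """Indices on the Pareto front: maximize perf, minimize chal."""
    idx = []
    for i in range(len(perf)):
        dominated = any(
            perf[j] >= perf[i] and chal[j] <= chal[i]
            and (perf[j] > perf[i] or chal[j] < chal[i])
            for j in range(len(perf))
        )
        if not dominated:
            idx.append(i)
    return idx

# Hypothetical data: performance scores and 1-7 Likert challenge ratings.
perf = np.array([0.3, 0.5, 0.6, 0.7, 0.85])
likert = np.array([2, 3, 4, 5, 6])
baseline = set(pareto_indices(perf, likert))

# Perturb every rating by -1/0/+1 and measure how much of the original
# front survives, averaged over many jitter draws.
rng = np.random.default_rng(1)
agreement = []
for _ in range(200):
    noisy = np.clip(likert + rng.integers(-1, 2, size=likert.size), 1, 7)
    front = set(pareto_indices(perf, noisy))
    agreement.append(len(front & baseline) / len(baseline))

stability = float(np.mean(agreement))
```

A stability near 1 would support the authors' fairness claims; a low value would indicate the front is an artifact of rating noise, as the referee suspects.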
minor comments (2)
  1. [Methods] Specify the exact Likert-scale wording, anchoring, and normalization procedure used for the perceived-challenge metric so that the hybrid objective is reproducible.
  2. [Results] Add error bars or confidence intervals to any plotted Pareto fronts and report the number of trials per participant in the user study.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed comments on the stability of the recovered Pareto fronts and the modeling assumptions. We respond to each major comment below and indicate the revisions we will make to strengthen the manuscript.

Point-by-point responses
  1. Referee: [Hybrid Model and User Study] The hybrid objective treats the qualitative challenge rating as a black-box function amenable to Gaussian-process modeling, yet the manuscript provides no test-retest reliability data, intra-user variance estimates, or sensitivity analysis for Likert-scale shifts of one point; such noise directly violates the smoothness and homoscedasticity assumptions required for the acquisition function to locate a stable Pareto front in few trials (see the Bayesian optimization description and the three use-case evaluations).

    Authors: We acknowledge that subjective Likert-scale challenge ratings introduce potential noise that could affect the Gaussian-process assumptions of smoothness and homoscedasticity. The original study did not collect dedicated test-retest reliability data. However, the optimization runs produced consistent and interpretable fronts across participants. In the revised manuscript we will add a sensitivity analysis that perturbs each challenge rating by ±1 point, recomputes the Pareto surfaces, and quantifies the resulting variation. We will also report intra-user variance estimates computed from the repeated evaluations obtained during each optimization run and include a brief discussion of how these factors influence acquisition-function behavior. revision: yes

  2. Referee: [Use Cases 2 and 3] The claim that the characterized trade-offs enable fair pre/post and cross-user comparisons rests on the recovered front being a reliable reference; without reporting how observation noise in the qualitative metric propagates into the Pareto surface or showing repeated characterizations at fixed assistance levels, the fairness advantage over standard performance metrics remains unverified.

    Authors: We agree that explicit evidence of front stability is required to substantiate the fairness claims in Use Cases 2 and 3. In the revised manuscript we will add a noise-propagation analysis that simulates observation noise on the collected qualitative ratings and shows the resulting envelope of Pareto surfaces. Where the study protocol recorded multiple evaluations at comparable assistance levels, we will report those repeated characterizations to illustrate consistency. These additions will provide quantitative support for the claim that the trade-off surfaces offer a more general and fair basis for pre/post and cross-user evaluation than raw performance scores. revision: yes

Circularity Check

0 steps flagged

No circularity: empirical HiL Pareto characterization is self-contained

full rationale

The paper proposes an empirical human-in-the-loop Pareto optimization framework that adapts standard Bayesian multi-criteria optimization to characterize trade-offs between a quantitative performance metric and a qualitative perceived-challenge rating. No equations, fitted parameters, or derivations are shown that reduce the claimed trade-off surface to its own inputs by construction. The method is demonstrated via a user study with three use cases, and the central construction relies on external Bayesian optimization techniques rather than self-referential definitions or load-bearing self-citations. This is a standard empirical methodology paper with no reduction of predictions to fitted inputs.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract-only review supplies no explicit free parameters, axioms, or invented entities; the method appears to rest on standard assumptions of Bayesian optimization and Pareto dominance that are not detailed here.

pith-pipeline@v0.9.0 · 5601 in / 1204 out tokens · 33693 ms · 2026-05-14T23:58:38.361343+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

60 extracted references · 60 canonical work pages · 1 internal anchor

  1. [1]

    Negative efficacy of fixed gain error reducing shared control for training in virtual environments

    Y. Li, V. Patoglu, and M. K. O’Malley, “Negative efficacy of fixed gain error reducing shared control for training in virtual environments,” ACM Transactions on Applied Perception, vol. 6, no. 1, pp. 1–21, 2009

  2. [2]

    Slacking prevention during assistive contour following tasks with guaranteed coupled stability

    A. Erdogan and V. Patoglu, “Slacking prevention during assistive contour following tasks with guaranteed coupled stability,” in IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, 2012, pp. 1587–1594

  3. [3]

    Velocity field control of robot manipulators by using only position measurements

    J. Moreno-Valenzuela, “Velocity field control of robot manipulators by using only position measurements,” Journal of the Franklin Institute, vol. 344, no. 8, pp. 1021–1038, 2007

  4. [4]

    Passive velocity field control of mechanical manipulators

    P. Y. Li and R. Horowitz, “Passive velocity field control of mechanical manipulators,” IEEE Trans. on Robotics and Automation, vol. 15, no. 4, pp. 751–763, 1999

  5. [5]

    Development of a progressive task regulation algorithm for robot-aided rehabilitation

    R. Colombo, I. Sterpi, A. Mazzone, C. Delconte, and F. Pisano, “Development of a progressive task regulation algorithm for robot-aided rehabilitation,” in Int. Conf. of the IEEE Engineering in Medicine and Biology Society, 2011, pp. 3123–3126

  6. [6]

    Online Generation of Velocity Fields for Passive Contour Following

    A. Erdogan and V. Patoglu, “Online Generation of Velocity Fields for Passive Contour Following,” in IEEE World Haptics Conference, 2011, pp. 245–250

  7. [7]

    Assist-as-needed path control for the PASCAL rehabilitation robot

    U. Keller, G. Rauter, and R. Riener, “Assist-as-needed path control for the PASCAL rehabilitation robot,” in IEEE Int. Conf. on Rehabilitation Robotics, 2013, pp. 1–7

  8. [8]

    Field-Based Assist-as-Needed Control Schemes for Rehabilitation Robots

    H. J. Asl, M. Yamashita, T. Narikiyo, and M. Kawanishi, “Field-Based Assist-as-Needed Control Schemes for Rehabilitation Robots,” IEEE/ASME Trans. on Mechatronics, vol. 25, no. 4, pp. 2100–2111, 2020

  9. [9]

    Assist-as-needed Impedance Control Strategy for a Wearable Ankle Robotic Orthosis

    J. Lopes, C. Pinheiro, J. Figueiredo, L. Reis, and C. Santos, “Assist-as-needed Impedance Control Strategy for a Wearable Ankle Robotic Orthosis,” in IEEE Int. Conf. on Autonomous Robot Systems and Competitions, 2020, pp. 10–15

  10. [10]

    Rehabilitation Robotics: Performance-Based Progressive Robot-Assisted Therapy

    H. Krebs, J. Palazzolo, L. Dipietro, M. Ferraro, J. Krol, K. Rannekleiv, B. Volpe, and N. Hogan, “Rehabilitation Robotics: Performance-Based Progressive Robot-Assisted Therapy,” Autonomous Robots, vol. 15, pp. 7–20, 2003

  11. [11]

    Progressive shared control for training in virtual environments

    Y. Li, J. C. Huegel, V. Patoglu, and M. K. O’Malley, “Progressive shared control for training in virtual environments,” in World Haptics, 2009, pp. 332–337

  12. [12]

    Brain Computer Interface based robotic rehabilitation with online modification of task speed

    M. Sarac, E. Koyas, A. Erdogan, M. Cetin, and V. Patoglu, “Brain Computer Interface based robotic rehabilitation with online modification of task speed,” in IEEE Int. Conf. on Rehabilitation Robotics, 2013, pp. 1–7

  13. [13]

    Electroencephalographic identifiers of motor adaptation learning

    O. Ozdenizci, M. Yalcın, A. Erdogan, V. Patoglu, M. Grosse-Wentrup, and M. Cetin, “Electroencephalographic identifiers of motor adaptation learning,” Journal of Neural Engineering, vol. 14, no. 4, p. 046027, 2017

  14. [14]

    Voluntary Assist-as-Needed Controller for an Ankle Power-Assist Rehabilitation Robot

    R. Yang, Z. Shen, Y. Lyu, Y. Zhuang, L. Li, and R. Song, “Voluntary Assist-as-Needed Controller for an Ankle Power-Assist Rehabilitation Robot,” IEEE Trans. on Biomedical Engineering, vol. 70, no. 6, pp. 1795–1803, 2023

  15. [15]

    Robotic movement training as an optimization problem: designing a controller that assists only as needed

    J. L. Emken, J. E. Bobrow, and D. J. Reinkensmeyer, “Robotic movement training as an optimization problem: designing a controller that assists only as needed,” Int. Conf. on Rehabilitation Robotics, pp. 307–312, 2005

  16. [16]

    Adaptive regulation of assistance ‘as needed’ in robot-assisted motor skill learning and neuro-rehabilitation

    V. Squeri, A. Basteris, and V. Sanguineti, “Adaptive regulation of assistance ‘as needed’ in robot-assisted motor skill learning and neuro-rehabilitation,” in IEEE Int. Conf. on Rehabilitation Robotics, 2011, pp. 1–6

  17. [17]

    AR3n: A Reinforcement Learning-Based Assist-as-Needed Controller for Robotic Rehabilitation

    S. Pareek, H. J. Nisar, and T. Kesavadas, “AR3n: A Reinforcement Learning-Based Assist-as-Needed Controller for Robotic Rehabilitation,” IEEE Robotics & Automation Magazine, pp. 2–10, 2023

  18. [18]

    Bringing Psychological Strategies to Robot-Assisted Physiotherapy for Enhanced Treatment Efficacy

    B. Zhong, W. Niu, E. Broadbent, A. McDaid, T. M. C. Lee, and M. Zhang, “Bringing Psychological Strategies to Robot-Assisted Physiotherapy for Enhanced Treatment Efficacy,” Frontiers in Neuroscience, vol. 13, 2019

  19. [19]

    Psychological state estimation from physiological recordings during robot-assisted gait rehabilitation

    A. Koenig, X. Omlin, L. Zimmerli, M. Sapa, C. Krewer, M. Bolliger, F. Mueller, and R. Riener, “Psychological state estimation from physiological recordings during robot-assisted gait rehabilitation,” Journal of Rehabilitation Research and Development, vol. 48, pp. 367–385, 2011

  20. [20]

    Review of Control Strategies for Lower-limb Exoskeletons to Assist Gait

    R. Baud, A. Manzoori, A. Ijspeert, and M. Bouri, “Review of Control Strategies for Lower-limb Exoskeletons to Assist Gait,” Journal of NeuroEngineering and Rehabilitation, vol. 18, no. 119, 2021

  21. [21]

    A Comprehensive Review of Control Challenges and Methods in End-Effector Upper-Limb Rehabilitation Robots

    D. Mahfouz, O. Shehata, E. Morgan, and F. Arrichiello, “A Comprehensive Review of Control Challenges and Methods in End-Effector Upper-Limb Rehabilitation Robots,” Robotics, vol. 13, no. 12, p. 181, 2024

  22. [22]

    A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning

    E. Brochu, V. M. Cora, and N. de Freitas, “A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning,” CoRR, 2010. [Online]. Available: https://arxiv.org/abs/1012.2599

  23. [23]

    Body-in-the-loop: Optimizing device parameters using measures of instantaneous energetic cost

    W. Felt, J. C. Selinger, J. M. Donelan, and C. D. Remy, “Body-in-the-loop: Optimizing device parameters using measures of instantaneous energetic cost,” PLOS One, vol. 10, no. 8, pp. 1–21, 2015

  24. [24]

    Body-in-the-Loop Optimization of Assistive Robotic Devices: A Validation Study

    J. R. Koller, D. H. Gates, D. P. Ferris, and D. C. Remy, “Body-in-the-Loop Optimization of Assistive Robotic Devices: A Validation Study,” in Robotics: Science and Systems, 2015

  25. [25]

    Biomechanical and physiological evaluation of multi-joint assistance with soft exosuits

    Y. Ding, I. Galiana, A. Asbeck, S. de Rossi, J. Bae, T. Santos, V. de Araujo, S. Lee, K. Holt, and C. Walsh, “Biomechanical and physiological evaluation of multi-joint assistance with soft exosuits,” IEEE Trans. on Neural Systems and Rehab. Eng., vol. 25, no. 2, pp. 119–130, 2016

  26. [26]

    Human-in-the-loop optimization of exoskeleton assistance during walking

    J. Zhang, P. Fiers, K. A. Witte, R. W. Jackson, K. L. Poggensee, C. G. Atkeson, and S. H. Collins, “Human-in-the-loop optimization of exoskeleton assistance during walking,” Science Robotics, vol. 356, no. 6344, pp. 1280–1284, 2017

  27. [27]

    Active Preference-Based Learning of Reward Functions

    D. Sadigh, A. Dragan, S. Sastry, and S. Seshia, “Active Preference-Based Learning of Reward Functions,” in Robotics: Science and Systems, vol. 13, 2017

  28. [28]

    ROIAL: Region of interest active learning for characterizing exoskeleton gait preference landscapes

    K. Li, M. Tucker, E. Biyik, E. Novoseller, J. Burdick, Y. Sui, D. Sadigh, Y. Yue, and A. Ames, “ROIAL: Region of interest active learning for characterizing exoskeleton gait preference landscapes,” in IEEE Int. Conf. on Robotics and Automation, 2021, pp. 3212–3218

  29. [29]

    Active Preference-Based Gaussian Process Regression for Reward Learning

    E. Biyik, N. Huynh, M. Kochenderfer, and D. Sadigh, “Active Preference-Based Gaussian Process Regression for Reward Learning,” in Robotics: Science and Systems, 2020

  30. [30]

    Preference-Based Human-in-the-Loop Optimization for Perceived Realism of Haptic Rendering

    B. Catkin and V. Patoglu, “Preference-Based Human-in-the-Loop Optimization for Perceived Realism of Haptic Rendering,” IEEE Transactions on Haptics, vol. 16, no. 4, pp. 470–476, 2023. [Online]. Available: https://ieeexplore.ieee.org/document/10102327

  31. [31]

    Human-in-the-Loop Optimization of Perceived Realism of Multi-Modal Haptic Rendering under Conflicting Sensory Cues

    H. Tolasa, B. Catkin, and V. Patoglu, “Human-in-the-Loop Optimization of Perceived Realism of Multi-Modal Haptic Rendering under Conflicting Sensory Cues,” IEEE Trans. on Haptics, vol. 18, no. 2, pp. 295–311, 2025

  32. [32]

    Multi-objective constrained Bayesian optimization for structural design

    A. Mathern, O. Steinholtz, A. Sjöberg, M. Önnheim, K. Ek, R. Rempling, E. Gustavsson, and M. Jirstrand, “Multi-objective constrained Bayesian optimization for structural design,” Structural and Multidisciplinary Optimization, vol. 63, 2021

  33. [33]

    Multi-objective Bayesian Optimization using Pareto-frontier Entropy

    S. Suzuki, S. Takeno, T. Tamura, K. Shitara, and M. Karasuyama, “Multi-objective Bayesian Optimization using Pareto-frontier Entropy,” in Int. Conf. on Machine Learning, vol. 119, 2020, pp. 9279–9288

  34. [34]

    A Flexible Framework for Multi-Objective Bayesian Optimization using Random Scalarizations

    B. Paria, K. Kandasamy, and B. Póczos, “A Flexible Framework for Multi-Objective Bayesian Optimization using Random Scalarizations,” in Uncertainty in Artificial Intelligence Conf., vol. 115, 2020, pp. 766–776

  35. [35]

    Uncertainty-aware search framework for multi-objective Bayesian optimization

    S. Belakaria, A. Deshwal, N. K. Jayakodi, and J. R. Doppa, “Uncertainty-aware search framework for multi-objective Bayesian optimization,” in AAAI Conf. on Artificial Intelligence, vol. 34, no. 06, 2020, pp. 10044–10052. An extended version is available online at https://arxiv.org/abs/2204.05944

  36. [36]

    The computation of the expected improvement in dominated hypervolume of Pareto front approximations

    M. Emmerich, J. W. Klinkenberg, and N. Bohrweg, “The computation of the expected improvement in dominated hypervolume of Pareto front approximations,” Leiden University, The Netherlands, Technical Report LIACS-TR9-2008, 2008

  37. [37]

    Predictive Entropy Search for Multi-objective Bayesian Optimization

    D. Hernandez-Lobato, J. Hernandez-Lobato, A. Shah, and R. Adams, “Predictive Entropy Search for Multi-objective Bayesian Optimization,” in Int. Conf. on Machine Learning, vol. 48, 2016, pp. 1492–1501

  38. [38]

    P. Y. Papalambros and D. J. Wilde, Principles of optimal design: modeling and computation. Cambridge University Press, 2000

  39. [39]

    The weighted sum method for multi-objective optimization: New insights

    R. T. Marler and J. S. Arora, “The weighted sum method for multi-objective optimization: New insights,” Structural and Multidisciplinary Optimization, vol. 41, no. 6, pp. 853–862, 2010

  40. [40]

    A multi-criteria design optimization framework for haptic interfaces

    R. Unal, G. Kiziltas, and V. Patoglu, “A multi-criteria design optimization framework for haptic interfaces,” in IEEE Haptics Symposium, 2008, pp. 231–238

  41. [41]

    Preferential Multi-Objective Bayesian Optimization

    R. Astudillo, K. Li, M. Tucker, C. X. Cheng, A. D. Ames, and Y. Yue, “Preferential Multi-Objective Bayesian Optimization,” CoRR,

  42. [42]

    [Online]. Available: https://arxiv.org/abs/2406.14699

  43. [43]

    Active Learning of Fractional-Order Viscoelastic Model Parameters for Realistic Haptic Rendering

    H. Tolasa, G. Gemalmaz, and V. Patoglu, “Active Learning of Fractional-Order Viscoelastic Model Parameters for Realistic Haptic Rendering,” CoRR, 2025. [Online]. Available: https://arxiv.org/abs/2512.00667

  44. [44]

    Biplanar Ankle Assistance for Dropfoot Gait Post-Stroke with Multi-Objective Human-In-the-Loop Optimization: A Case Study

    X. Zhang, A. Fredriksen, S. Palmcrantz, and E. M. Gutierrez-Farewik, “Biplanar Ankle Assistance for Dropfoot Gait Post-Stroke with Multi-Objective Human-In-the-Loop Optimization: A Case Study,” in Int. Conf. on Rehabilitation Robotics, 2025, pp. 1132–1138

  45. [45]

    Musculoskeletal Simulation-Based Multi-criteria Optimization Framework for Exoskeleton Design

    A. K. Bonab and V. Patoglu, “Musculoskeletal Simulation-Based Multi-criteria Optimization Framework for Exoskeleton Design,” IEEE Trans. on Neural Systems and Rehabilitation Engineering, 2026, Early Access. [Online]. Available: https://ieeexplore.ieee.org/document/11367100

  46. [46]

    A Computational Multicriteria Optimization Approach to Controller Design for Physical Human-Robot Interaction

    Y. Aydin, O. Tokatli, V. Patoglu, and C. Basdogan, “A Computational Multicriteria Optimization Approach to Controller Design for Physical Human-Robot Interaction,” IEEE Trans. on Robotics, vol. 36, no. 6, pp. 1791–1804, 2020

  47. [47]

    Bayesian Optimization

    R. Garnett, Bayesian Optimization. Cambridge: Cambridge University Press, 2023

  48. [48]

    Preference learning with Gaussian processes

    W. Chu and Z. Ghahramani, “Preference learning with Gaussian processes,” in Int. Conf. on Machine Learning, 2005, pp. 137–144

  49. [49]

    C. E. Rasmussen and C. Williams, Gaussian Processes for Machine Learning. MIT Press, 2005

  50. [50]

    Human Preference-Based Learning for High-dimensional Optimization of Exoskeleton Walking Gaits

    M. Tucker, M. Cheng, E. Novoseller, R. Cheng, Y. Yue, J. W. Burdick, and A. D. Ames, “Human Preference-Based Learning for High-dimensional Optimization of Exoskeleton Walking Gaits,” in IEEE Int. Conf. on Robotics and Systems, 2020, pp. 3423–3430

  51. [51]

    Physical Human-Robot Interaction Using HandsOn-SEA: An Educational Robotic Platform With Series Elastic Actuation

    A. Otaran, O. Tokatli, and V. Patoglu, “Physical Human-Robot Interaction Using HandsOn-SEA: An Educational Robotic Platform With Series Elastic Actuation,” IEEE Trans. on Haptics, vol. 14, no. 4, pp. 922–929, 2021

  52. [52]

    Necessary and Sufficient Conditions for the Passivity of Impedance Rendering With Velocity-Sourced Series Elastic Actuation

    F. E. Tosun and V. Patoglu, “Necessary and Sufficient Conditions for the Passivity of Impedance Rendering With Velocity-Sourced Series Elastic Actuation,” IEEE Trans. on Robotics, vol. 36, no. 3, pp. 757–772, 2020

  53. [53]

    Passive Realizations of Series Elastic Actuation: Effects of Plant and Controller Dynamics on Haptic Rendering Performance

    C. U. Kenanoglu and V. Patoglu, “Passive Realizations of Series Elastic Actuation: Effects of Plant and Controller Dynamics on Haptic Rendering Performance,” IEEE Trans. on Haptics, vol. 17, no. 4, pp. 882–899, 2024

  54. [54]

    Effect of Inherent Damping of the Series Elastic Element on Rendering Performance and Passivity of Interaction Control

    C. U. Kenanoglu and V. Patoglu, “Effect of Inherent Damping of the Series Elastic Element on Rendering Performance and Passivity of Interaction Control,” ASME Journal of Dynamic Systems, Measurement, and Control, vol. 147, no. 5, p. 051008, 2025

  55. [55]

    Effect of Reduced-Order Modelling on Passivity and Rendering Performance Analysis of Series Elastic Actuation

    C. U. Kenanoglu and V. Patoglu, “Effect of Reduced-Order Modelling on Passivity and Rendering Performance Analysis of Series Elastic Actuation,” IEEE Robotics and Automation Letters, vol. 10, no. 6, pp. 5745–5752, 2025

  56. [56]

    Video game training enhances cognitive control in older adults

    J. Anguera, J. Boccanfuso, J. Rintoul, O. Claflin, F. Faraji, J. Janowich, E. Kong, Y. Larraburo, C. Rolle, E. Johnston, and A. Gazzaley, “Video game training enhances cognitive control in older adults,” Nature, vol. 501, pp. 97–101, 2013

  57. [57]

    Transfer of Training from Virtual to Real Baseball Batting

    R. Gray, “Transfer of Training from Virtual to Real Baseball Batting,” Frontiers in Psychology, vol. 8, 2017

  58. [58]

    Advancing Precision Rehabilitation Through a Sensor-Based 6-DoF Robotic Exoskeleton: Clinical Validation and Ergonomic Assessment

    H. Argunsah, B. Yalcin, M. A. Ergin, G. Coruhlu, M. Yalcin, V. Patoglu, and Z. Güven, “Advancing Precision Rehabilitation Through a Sensor-Based 6-DoF Robotic Exoskeleton: Clinical Validation and Ergonomic Assessment,” Sensors, vol. 26, no. 1, 2026

  59. [59]

    AssistOn-SE: A Self-Aligning Shoulder-Elbow Exoskeleton

    M. Ergin and V. Patoglu, “AssistOn-SE: A Self-Aligning Shoulder-Elbow Exoskeleton,” in IEEE Int. Conf. on Robotics and Automation, 2012, pp. 2479–2485

  60. [60]

    Kinematics and Design of AssistOn-SE: A Self-Adjusting Shoulder-Elbow Exoskeleton

    M. Yalcin and V. Patoglu, “Kinematics and Design of AssistOn-SE: A Self-Adjusting Shoulder-Elbow Exoskeleton,” in IEEE Int. Conf. on Biomedical Robotics and Biomechatronics, 2012, pp. 1579–1585