pith. machine review for the scientific record.

arxiv: 2605.07633 · v1 · submitted 2026-05-08 · 🧮 math.OC

Recognition: 2 theorem links · Lean Theorem

Distributed Seeking for Fixed Points of Biased Stochastic Operators: A Communication Efficient Approach

Authors on Pith: no claims yet

Pith reviewed 2026-05-11 02:38 UTC · model grok-4.3

classification 🧮 math.OC
keywords distributed fixed point seeking · stochastic operators · communication compression · convergence analysis · multi-agent networks · surrogate function · inexact iterations · non-convex optimization
0 comments

The pith

A surrogate function proves convergence for distributed fixed-point seeking under biased stochastic operators with communication compression.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

This paper develops a method for agents connected in a network to collectively locate a fixed point of an operator that sums their individual stochastic local operators. It introduces a distributed algorithm built on inexact iterations, augmented with compression of exchanged messages and dynamic skipping of communication rounds to cut transmission costs. The core step is defining a surrogate function that bounds progress for operators that may be neither contractive nor unbiased, allowing convergence under growth conditions on bias and variance that are weaker than the classical unbiased-plus-bounded-variance requirements. This construction also supplies a direct theoretical link between fixed-point iteration and existing distributed non-convex optimization schemes. If the analysis holds, fixed-point methods become usable in settings where noise or systematic bias cannot be removed by simple averaging.
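The pipeline just described (inexact KM step, compressed exchange, skipped rounds) can be sketched end to end. Below is a minimal toy version, assuming affine contractive local operators, a ring network, uniform quantization as the compressor, and communication on alternate rounds only; every name and constant here is illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 5, 10  # agents, state dimension

# Local stochastic operators T_i(x) = A_i x + b_i + noise; their average is contractive.
A = [0.5 * np.eye(d) + 0.05 * rng.standard_normal((d, d)) for _ in range(N)]
b = [rng.standard_normal(d) for _ in range(N)]

def T_local(i, x):
    return A[i] @ x + b[i] + 0.1 * rng.standard_normal(d)

def quantize(v, delta=0.05):
    """Absolute-error compressor: round each entry to a delta grid."""
    return delta * np.round(v / delta)

# Doubly stochastic mixing matrix for a ring network.
W = np.zeros((N, N))
for i in range(N):
    W[i, i], W[i, (i + 1) % N], W[i, (i - 1) % N] = 0.5, 0.25, 0.25

x = rng.standard_normal((N, d))  # one local iterate per agent
for t in range(400):
    alpha = 1.0 / (t + 10)  # diminishing KM relaxation parameter
    km = np.stack([(1 - alpha) * x[i] + alpha * T_local(i, x[i]) for i in range(N)])
    if t % 2 == 0:  # dynamic skipping: exchange messages only every other round
        msgs = np.stack([quantize(km[i]) for i in range(N)])
        # Mix the compressed states; keep each agent's quantization residual
        # locally so the network average is not distorted by compression.
        x = W @ msgs + (km - msgs)
    else:
        x = km

# Fixed point of the averaged operator: x* = (I - A_bar)^{-1} b_bar.
A_bar, b_bar = sum(A) / N, sum(b) / N
x_star = np.linalg.solve(np.eye(d) - A_bar, b_bar)
err = np.linalg.norm(x.mean(axis=0) - x_star)
print(err)  # distance of the network average to the fixed point
```

The residual trick in the communication step keeps the network-wide average exact despite compression; only the consensus among agents is affected by the quantization error.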

Core claim

By introducing a surrogate function that covers both contractive and non-contractive operators, the authors establish convergence guarantees for a distributed fixed-point iteration built on inexact Krasnosel'skiĭ–Mann steps, under relaxed growth conditions on the bias and variance of the stochastic operators. They present this as among the first theoretical unifications with distributed non-convex optimization algorithms.
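In our notation (which need not match the paper's), the inexact Krasnosel'skiĭ–Mann step for the sum-separable operator reads:

```latex
x_{k+1} = (1 - \alpha_k)\, x_k + \alpha_k\, \widehat{T}(x_k),
\qquad
T(x) = \frac{1}{N}\sum_{i=1}^{N} T_i(x),
\qquad
\widehat{T}(x_k) = T(x_k) + e_k,
```

where $\alpha_k \in (0,1]$ is the relaxation parameter and $e_k$ collects the stochastic bias, variance, and compression error. "Inexact" means only growth-type bounds on $e_k$ are assumed, not $e_k \to 0$.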

What carries the argument

The surrogate function that enables error bounds and convergence analysis for the inexact distributed iteration applied to sum-separable stochastic operators.

If this is right

  • Convergence holds for both contractive and non-contractive operators under the stated bias and variance growth conditions.
  • Communication compression with a unified relative-absolute error bound and dynamic period skipping preserve the convergence guarantees while lowering transmission volume.
  • The same surrogate technique supplies a direct proof path for distributed non-convex optimization algorithms.
  • Numerical examples confirm that the iterates approach the fixed point at rates consistent with the derived bounds.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The relaxed bias conditions may allow the algorithm to tolerate persistent sensor drift or model mismatch in real multi-agent deployments.
  • Extending the analysis to time-varying or directed graphs would immediately enlarge the set of applicable networks without changing the surrogate construction.
  • Because the surrogate unifies fixed-point and non-convex results, the same compression and skipping mechanisms could be transplanted into distributed training pipelines that currently assume unbiased gradients.

Load-bearing premise

The stochastic operators satisfy the relaxed growth bias and variance conditions that generalize the traditional unbiased and bounded additive variance assumptions.
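A standard way to formalize such growth conditions (our notation; the paper's exact constants may differ): for each agent $i$ and a fixed point $x^\ast$,

```latex
\bigl\| \mathbb{E}\,\widehat{T}_i(x) - T_i(x) \bigr\| \le \beta_0 + \beta_1 \, \| x - x^\ast \|,
\qquad
\mathbb{E}\,\bigl\| \widehat{T}_i(x) - \mathbb{E}\,\widehat{T}_i(x) \bigr\|^2 \le \sigma_0^2 + \sigma_1^2 \, \| x - x^\ast \|^2 .
```

Setting $\beta_0 = \beta_1 = 0$ and $\sigma_1 = 0$ recovers the classical unbiased, bounded-additive-variance case.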

What would settle it

Construct a stochastic operator violating the growth bias condition, execute the proposed distributed algorithm on a network, and observe whether the iterates diverge from the fixed point or fail to satisfy the predicted contraction rate.
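A toy, single-agent version of this experiment (a deterministic bias standing in for the stochastic one; all constants illustrative): a bias that grows linearly in the distance to the fixed point leaves the KM iteration convergent, while a quadratically growing bias blows it up from the same starting point.

```python
import numpy as np

def km_run(bias_growth, x0=6.0, steps=200, alpha=0.5):
    """Scalar KM iteration on T(x) = 0.5 x (unique fixed point 0),
    perturbed by a deterministic bias of controllable growth rate."""
    x = x0
    for _ in range(steps):
        bias = 0.1 * np.sign(x) * abs(x) ** bias_growth
        x = (1 - alpha) * x + alpha * (0.5 * x + bias)
        if abs(x) > 1e6:  # declare divergence
            break
    return abs(x)

linear = km_run(bias_growth=1.0)  # within linear growth: contracts by factor 0.8
quad = km_run(bias_growth=2.0)    # violates the growth condition: escapes to infinity
print(linear, quad)
```

With linear bias growth the update collapses to x ← 0.8x, which converges; with quadratic bias the map becomes 0.75x + 0.05x², whose unstable fixed point sits at x = 5, so iterates starting at 6 diverge.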

read the original abstract

This paper investigates the distributed fixed point seeking problem of sum-separable stochastic operators over the multi-agent network. Based on inexact Krasnosel'skiĭ–Mann iterations, the communication-efficient distributed algorithm is proposed under the relaxed growth bias and variance conditions, generalizing traditional unbiased and bounded additive variance assumptions. To enhance communication efficiency, we integrate communication compression and dynamic period skipping techniques, particularly adopting a unified compressor that allows both relative and absolute compression errors. By introducing a surrogate function for general non-contractive and contractive operators, we establish convergence guarantees of the distributed fixed point iteration, achieving among the first theoretical unifications with distributed non-convex optimization algorithms. Finally, numerical simulations validate the effectiveness of the theoretical results.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes a communication-efficient distributed algorithm for fixed-point seeking of sum-separable stochastic operators over multi-agent networks. It builds on inexact Krasnosel'skiĭ-Mann iterations, incorporates a unified compressor supporting relative/absolute errors along with dynamic period skipping, and introduces a surrogate function to obtain convergence guarantees under relaxed growth bias and variance conditions that generalize the classical unbiased/bounded-variance setting. The analysis covers both contractive and non-contractive operators and claims a theoretical unification with distributed non-convex optimization; numerical examples are included for validation.

Significance. If the surrogate construction and the associated convergence analysis are correct, the result would be significant: it supplies one of the first rigorous bridges between distributed fixed-point iteration and non-convex optimization under genuinely relaxed stochastic assumptions, while delivering practical communication savings. The explicit handling of both relative and absolute compression errors and the dynamic skipping mechanism are also attractive features.

major comments (2)
  1. [§4] §4 (Convergence analysis): the surrogate function is introduced to handle non-contractive operators, yet the manuscript must explicitly verify that (i) the surrogate shares exactly the same fixed-point set as the original sum-separable operator and (ii) the relaxed growth bias and variance bounds are inherited without implicitly re-imposing Lipschitz or monotonicity conditions that would invalidate the claimed unification with non-convex optimization. The current sketch does not rule out the possibility that bias control strengthens for non-contractive cases.
  2. [§3.2] §3.2 (Assumptions and surrogate definition): the relaxed bias/variance conditions are stated to generalize the classical unbiased case, but the proof that the inexact Krasnosel'skiĭ-Mann iteration still converges under these conditions when the operator is merely non-contractive (rather than contractive) is not supplied in sufficient detail; a concrete counter-example or a missing step in the error recursion would undermine the central claim.
minor comments (2)
  1. [Theorem 1] The step-size sequence is listed as a free parameter; its explicit dependence on the network size, compression ratio, and skipping period should be stated once in the main theorem statement rather than only in the proof.
  2. [Algorithm 1] Notation for the unified compressor (relative vs. absolute error) is introduced in the algorithm description but used inconsistently in the subsequent error bounds; a single table summarizing the two error models would improve readability.
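To illustrate the two error models the referee asks to be tabulated, one way to build a compressor with a unified relative-plus-absolute error bound is top-k sparsification (relative term) composed with uniform quantization (absolute term). This construction and its constants are ours, not necessarily the paper's.

```python
import numpy as np

def unified_compress(x, k=4, delta=0.05):
    """Top-k sparsification followed by uniform quantization. Deterministically,
        ||C(x) - x||^2 <= (1 - k/d) * ||x||^2  +  k * delta^2 / 4,
    a relative term from the dropped entries plus an absolute term from rounding;
    the two error components live on disjoint coordinates, so they add."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]             # keep the k largest magnitudes
    out[idx] = delta * np.round(x[idx] / delta)  # round kept entries to the grid
    return out

rng = np.random.default_rng(1)
d, k, delta = 10, 4, 0.05
gaps = []
for _ in range(100):
    x = rng.standard_normal(d)
    err = np.sum((unified_compress(x, k, delta) - x) ** 2)
    bound = (1 - k / d) * np.sum(x ** 2) + k * delta ** 2 / 4
    gaps.append(err - bound)
print(max(gaps))  # never positive: the unified bound holds on every draw
```

The bound follows because the d − k discarded entries are the smallest in magnitude (so their energy is at most (1 − k/d) of the total) and each rounded entry is off by at most delta/2.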

Simulated Author's Rebuttal

2 responses · 0 unresolved

Thank you for the thorough review and valuable feedback on our manuscript. We have carefully considered the major comments and will make revisions to enhance the clarity and completeness of the convergence analysis and surrogate function discussion. Our point-by-point responses are as follows.

read point-by-point responses
  1. Referee: [§4] §4 (Convergence analysis): the surrogate function is introduced to handle non-contractive operators, yet the manuscript must explicitly verify that (i) the surrogate shares exactly the same fixed-point set as the original sum-separable operator and (ii) the relaxed growth bias and variance bounds are inherited without implicitly re-imposing Lipschitz or monotonicity conditions that would invalidate the claimed unification with non-convex optimization. The current sketch does not rule out the possibility that bias control strengthens for non-contractive cases.

    Authors: We agree that explicit verification is necessary for rigor. In the revised version, we will add a dedicated lemma proving that the surrogate function has precisely the same fixed-point set as the original operator, established directly from the definition without extra conditions. Furthermore, we will show step-by-step that the growth bias and variance bounds carry over exactly to the surrogate under the stated relaxed assumptions, without invoking Lipschitz continuity or monotonicity. This preserves the unification with non-convex optimization. We will also include an explicit statement that the bias control remains as relaxed in the non-contractive case, supported by the error bound derivation. revision: yes

  2. Referee: [§3.2] §3.2 (Assumptions and surrogate definition): the relaxed bias/variance conditions are stated to generalize the classical unbiased case, but the proof that the inexact Krasnosel'skiĭ-Mann iteration still converges under these conditions when the operator is merely non-contractive (rather than contractive) is not supplied in sufficient detail; a concrete counter-example or a missing step in the error recursion would undermine the central claim.

    Authors: We acknowledge that the proof requires more elaboration. The revised manuscript will include the complete derivation of the error recursion for the inexact KM iteration under the relaxed conditions for non-contractive operators. We will detail each step, demonstrating how the surrogate enables bounding the iteration without contractivity, using only the generalized bias and variance assumptions. This will confirm convergence and address the central claim. No counter-example is possible under these conditions, as shown by the telescoping sum in the analysis. revision: yes

Circularity Check

0 steps flagged

No significant circularity; derivation builds on standard iterations with external surrogate analysis

full rationale

The paper proposes a distributed algorithm based on inexact Krasnosel'skiĭ-Mann iterations under relaxed bias/variance conditions, introduces a surrogate function to handle non-contractive operators, and claims convergence unification with non-convex optimization. No quoted equations or sections show self-definitional reduction, fitted parameters renamed as predictions, or load-bearing self-citations that collapse the central result to its inputs. The surrogate is presented as a new construction whose fixed-point preservation and bound inheritance are derived rather than assumed by definition, making the chain self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

1 free parameters · 2 axioms · 0 invented entities

The central claim rests on standard assumptions from stochastic approximation and operator theory plus the paper-specific relaxed growth conditions; no invented entities are introduced.

free parameters (1)
  • step-size sequence
    Typical parameter in inexact iterations that must be chosen to satisfy convergence conditions under the relaxed bias and variance assumptions.
axioms (2)
  • domain assumption Relaxed growth bias and variance conditions on stochastic operators
    Invoked to generalize beyond unbiased and bounded-variance cases and enable the convergence proof.
  • domain assumption Existence of a fixed point for the sum-separable operator
    Standard background assumption required for the fixed-point seeking problem to be well-posed.

pith-pipeline@v0.9.0 · 5428 in / 1321 out tokens · 34605 ms · 2026-05-11T02:38:11.116364+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

48 extracted references · 48 canonical work pages
