pith. machine review for the scientific record.

arxiv: 2603.26039 · v2 · submitted 2026-03-27 · 🪐 quant-ph · math-ph · math.MP · math.OC


Achieving double-logarithmic precision dependence in optimization-based quantum unstructured search


Pith reviewed 2026-05-14 23:48 UTC · model grok-4.3

classification 🪐 quant-ph · math-ph · math.MP · math.OC

keywords quantum search · Grover algorithm · Riemannian optimization · Newton method · quadratic convergence · unitary manifold · unstructured search

The pith

With the target fraction known, the Riemannian modified Newton method on the unitary manifold makes the gradient an eigenvector of the Hessian, delivering quadratic convergence and O(√(N/M) log log(1/ε)) complexity for quantum unstructured search.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper reformulates Grover search as maximization over the unitary manifold and replaces linearly convergent Riemannian gradient ascent with a Riemannian modified Newton method. Under the assumption that the target density M/N is known, the Riemannian gradient turns out to be an eigenvector of the corresponding Hessian, so the Newton direction is always collinear with the gradient. This alignment produces quadratic convergence in the error ε with no extra quantum cost. The resulting algorithm still uses only the standard Grover oracle and diffusion operators, with parameter updates precomputable classically. Consequently the total complexity improves from O(√(N/M) log(1/ε)) to O(√(N/M) log log(1/ε)).
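The complexity gap is driven entirely by the error recursion: linear contraction ε_{k+1} ≈ c·ε_k needs O(log(1/ε)) iterations to reach error ε, while quadratic contraction ε_{k+1} ≈ C·ε_k² needs only O(log log(1/ε)). A minimal sketch of that arithmetic (the contraction constants and starting error are illustrative placeholders, not values from the paper):

```python
# Iteration counts implied by linear vs. quadratic error contraction.
# The rate (0.5) and starting error (0.5) are illustrative placeholders.

def iters_linear(eps, rate=0.5, e0=0.5):
    """Iterations until error < eps under e_{k+1} = rate * e_k."""
    e, k = e0, 0
    while e >= eps:
        e *= rate
        k += 1
    return k

def iters_quadratic(eps, c=1.0, e0=0.5):
    """Iterations until error < eps under e_{k+1} = c * e_k**2."""
    e, k = e0, 0
    while e >= eps:
        e = c * e * e
        k += 1
    return k

for eps in (1e-4, 1e-8, 1e-16):
    print(f"eps={eps:.0e}  linear={iters_linear(eps):3d}  "
          f"quadratic={iters_quadratic(eps)}")
```

Doubling the number of accurate digits roughly doubles the linear count but adds only about one quadratic iteration.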

Core claim

In the Riemannian optimization formulation of unstructured search with known M/N, the Riemannian gradient is always an eigenvector of the Riemannian Hessian. Therefore the Newton direction remains collinear with the gradient, and the RMN iteration achieves quadratic convergence with respect to ε. The method requires no additional quantum overhead beyond the usual Grover operators and permits classical precomputation of the update sequence, yielding an overall complexity of O(√(N/M) log log(1/ε)).
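The claim's mechanics fit in one line of algebra; a minimal sketch assuming the stated eigenvector property (the eigenvalue λ_k below is simply whatever the Hessian assigns to the gradient at the current iterate):

```latex
% If the gradient is an eigenvector of the Hessian,
% the Newton system Hess f(U_k)[\eta_k] = -grad f(U_k) is solved in closed form:
\operatorname{Hess} f(U_k)\big[\operatorname{grad} f(U_k)\big]
  = \lambda_k \operatorname{grad} f(U_k)
\;\Longrightarrow\;
\eta_k = -\frac{1}{\lambda_k}\,\operatorname{grad} f(U_k),
```

so the Newton direction is a scalar rescaling of the gradient and no linear solve is required; the "modified" step safeguards the case λ_k ≥ 0, where the unmodified direction would point downhill for a maximization problem.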

What carries the argument

Riemannian modified Newton (RMN) method on the unitary manifold, where the known M/N ratio forces the Riemannian gradient to be an eigenvector of the Hessian and thereby aligns the Newton step with the gradient direction.

If this is right

  • Search cost now depends only double-logarithmically on desired precision ε.
  • No extra quantum gates or measurements beyond standard Grover diffusion and oracle calls.
  • All iteration parameters can be precomputed on a classical computer before any quantum execution.
  • The algorithm remains fully Grover-compatible and implementable on existing quantum hardware.
  • Quadratic convergence is observed numerically without requiring line searches or Hessian inversions at each step.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same collinearity may appear in other manifold-constrained quantum variational problems when a density parameter is known.
  • An adaptive estimator for M/N could be combined with the method to remove the known-ratio assumption while preserving most of the speedup.
  • Classical precomputation of the sequence suggests hybrid quantum-classical scheduling strategies for larger search instances.
  • The approach may generalize to other quantum algorithms whose cost landscapes admit low-rank or eigenvector-structured Hessians.

Load-bearing premise

The ratio M/N of target items to total search space size must be known in advance.

What would settle it

A small-N numerical run that measures the observed error reduction factor per RMN iteration and checks whether it scales quadratically with the current error rather than linearly.
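That check amounts to estimating the convergence order p from successive errors via p_k = log ε_{k+1} / log ε_k, which tends to 2 under quadratic convergence and decreases toward 1 under linear convergence. A sketch with synthetic error sequences standing in for measured RGA and RMN data (the sequences are models, not simulation output):

```python
import math

def estimated_orders(errors):
    """Order estimates p_k = log(e_{k+1}) / log(e_k) for errors in (0, 1)."""
    return [math.log(errors[k + 1]) / math.log(errors[k])
            for k in range(len(errors) - 1)]

# Synthetic models: quadratic e_{k+1} = e_k^2, linear e_k = 0.5^(k+1).
quad = [0.5]
for _ in range(4):
    quad.append(quad[-1] ** 2)
lin = [0.5 ** (k + 1) for k in range(5)]

print(estimated_orders(quad))  # each ratio is ~2
print(estimated_orders(lin))   # ratios (k+2)/(k+1) decrease toward 1
```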

Figures

Figures reproduced from arXiv: 2603.26039 by Dong An, Jiang Hu, Zaiwen Wen, Zhijian Lai.

Figure 1. Schematic illustration of a manifold optimization iteration on the unitary manifold U(N). Starting from the current point U_k, a tangent direction η_k ∈ T_{U_k} (e.g., the Riemannian gradient) is chosen in the tangent space. The retraction R_{U_k} then maps the scaled tangent vector t_k η_k back onto the manifold, producing the next iterate U_{k+1} = R_{U_k}(t_k η_k).

Figure 2. Absolute errors of the cost value q_k and expansion coefficients x_k, y_k between the classical simulation and the explicit full-matrix implementation. (a)–(c) display the results for RGA, and (d)–(f) for RMN. All errors remain around machine precision (10⁻¹⁶), verifying that the classical procedures accurately simulate the algorithms.

Figure 3. Convergence comparison between the RGA and RMN methods for problem sizes of n = 5, 10, and 15 qubits. (a)–(c) illustrate the gradient norm, and (d)–(f) show the function value error. The results demonstrate the linear convergence of RGA and the significantly faster, quadratic convergence of RMN.

Figure 4. (caption not recovered)

Figure 5. Illustration of the tangent space of U(N): differentiating the unitarity constraint U(t)U(t)† = I with respect to t at t = 0 yields U U̇(0)† + U̇(0) U† = 0, which characterizes the tangent vectors along curves on the manifold.
Original abstract

Grover's algorithm is a fundamental quantum algorithm that achieves a quadratic speedup for unstructured search problems of size $N$. Recent studies have reformulated this task as a maximization problem on the unitary manifold and solved it via linearly convergent Riemannian gradient ascent (RGA) methods, resulting in a complexity of $O(\sqrt{N/M} \log (1/\varepsilon))$, where $M$ denotes the number of target items. In this work, we adopt the Riemannian modified Newton (RMN) method to solve the quantum search problem, under the assumption that the ratio $ M/N$ is known. We show that, in this setting, the Riemannian Newton direction is collinear with the Riemannian gradient in the sense that the Riemannian gradient is always an eigenvector of the corresponding Riemannian Hessian. As a result, without additional overhead, the proposed RMN method numerically achieves a quadratic convergence rate with respect to the error $\varepsilon$, implying a complexity of $O(\sqrt{N/M} \log\log (1/\varepsilon))$. Furthermore, our approach remains Grover-compatible, namely, it relies exclusively on the standard Grover diffusion and oracle operators to ensure algorithmic implementability, and its parameter update process can be efficiently precomputed on classical computers.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 2 minor

Summary. The paper reformulates Grover unstructured search as maximization on the unitary manifold and applies the Riemannian modified Newton (RMN) method under the assumption that the ratio M/N is known. It establishes that the Riemannian gradient is always an eigenvector of the Riemannian Hessian, so the Newton direction is collinear with the gradient and requires no linear solve. The manuscript states that this yields quadratic convergence numerically, producing complexity O(√(N/M) log log(1/ε)) while remaining implementable with standard Grover oracle and diffusion operators whose parameters can be precomputed classically.

Significance. If the quadratic rate can be placed on a rigorous footing, the result would improve the ε-dependence from logarithmic (prior RGA methods) to double-logarithmic, which is valuable for high-precision regimes. The eigenvector property is a clean structural observation that removes the usual Newton-step overhead without sacrificing Grover compatibility.

major comments (1)
  1. [Convergence analysis (around the statement of quadratic rate)] The central complexity claim O(√(N/M) log log(1/ε)) rests on the assertion that RMN 'numerically achieves' quadratic convergence. The eigenvector property guarantees that the Newton direction equals a scaled gradient, but quadratic convergence of Riemannian Newton methods holds only inside a basin whose radius depends on Hessian Lipschitz constants and sectional curvature; the manuscript provides no theorem bounding the number of initial linear-rate steps or proving that precomputed steps place every iterate inside that basin from an arbitrary initial unitary. A rigorous iteration-count argument is therefore required to support the stated complexity.
minor comments (2)
  1. [Abstract and §1] The assumption that M/N is known should be stated explicitly in the abstract and introduction, together with a brief discussion of how the ratio might be estimated or relaxed in applications.
  2. [Numerical results section] Numerical experiments supporting quadratic convergence should report the tested ranges of N, M, and ε, together with the observed iteration counts and the precise stopping criterion used.
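The basin gap in the major comment has a standard shape. A generic local template for Riemannian Newton methods (not a theorem from the manuscript; the constants r and C depend on Hessian Lipschitz constants and curvature, exactly the quantities the report asks to be bounded):

```latex
\exists\, r, C > 0:\quad
\operatorname{dist}(U_k, U^{*}) \le r
\;\Longrightarrow\;
\operatorname{dist}(U_{k+1}, U^{*}) \le C\,\operatorname{dist}(U_k, U^{*})^{2}.
```

The stated O(√(N/M) log log(1/ε)) bound then requires splitting the iteration count into a bounded number of pre-basin steps plus O(log log(1/ε)) quadratic steps, which is the argument the report finds missing.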

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for the careful reading and constructive feedback. We agree that the convergence analysis requires strengthening to rigorously support the complexity claim and will revise the manuscript accordingly.

read point-by-point responses
  1. Referee: [Convergence analysis (around the statement of quadratic rate)] The central complexity claim O(√(N/M) log log(1/ε)) rests on the assertion that RMN 'numerically achieves' quadratic convergence. The eigenvector property guarantees that the Newton direction equals a scaled gradient, but quadratic convergence of Riemannian Newton methods holds only inside a basin whose radius depends on Hessian Lipschitz constants and sectional curvature; the manuscript provides no theorem bounding the number of initial linear-rate steps or proving that precomputed steps place every iterate inside that basin from an arbitrary initial unitary. A rigorous iteration-count argument is therefore required to support the stated complexity.

    Authors: We agree that the current reliance on numerical evidence for quadratic convergence is insufficient to fully substantiate the stated complexity. In the revised manuscript we will add a rigorous convergence theorem. Leveraging the eigenvector property (which makes the Newton step a scaled gradient), we will show that the algorithm enters the quadratic-convergence basin after a constant number of iterations independent of N and ε, with the precomputed step sizes guaranteeing that every iterate starting from the standard initial unitary remains inside this basin. This will establish the O(√(N/M) log log(1/ε)) bound with a complete iteration-count argument. revision: yes

Circularity Check

0 steps flagged

Derivation self-contained with algebraic proof and standard convergence theory

full rationale

The paper proves the Riemannian gradient is an eigenvector of the Hessian (collinearity), allowing Newton direction to coincide with gradient without linear solve. Quadratic convergence then follows from standard Riemannian Newton theory once inside the basin; the paper reports numerical verification of the rate and combines it with the known O(√(N/M)) iteration scaling. No equation reduces the target complexity to a fitted parameter, self-definition, or unverified self-citation chain. The explicit assumption of known M/N is stated upfront and does not derive from the result itself.

Axiom & Free-Parameter Ledger

0 free parameters · 2 axioms · 0 invented entities

The approach rests on standard Riemannian geometry for the unitary manifold and the explicit assumption that M/N is known; no free parameters fitted to data or new postulated entities are introduced.

axioms (2)
  • domain assumption Quantum unstructured search can be reformulated as a maximization problem on the unitary manifold.
    This is the foundational modeling step that enables application of Riemannian optimization methods.
  • domain assumption The ratio M/N is known a priori.
    Explicitly required for the RMN method to achieve the claimed quadratic convergence without additional overhead.

pith-pipeline@v0.9.0 · 5525 in / 1273 out tokens · 80359 ms · 2026-05-14T23:48:33.215016+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

79 extracted references · 79 canonical work pages · 2 internal anchors
