Recognition: unknown
Weighted Riemannian Optimization for Solving Quadratic Equations from Gaussian Magnitude Measurements
Pith reviewed 2026-05-10 12:29 UTC · model grok-4.3
The pith
A new metric on the rank-1 matrix manifold lets weighted Riemannian gradient descent recover signals from magnitude measurements with linear convergence and a small factor.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
The authors show that several phase retrieval algorithms solve the same least-squares problem on the manifold of rank-1 matrices but with different Riemannian metrics. They introduce a new metric under which the measurement operator A becomes nearly isometric when restricted to this manifold. The resulting Weighted Riemannian Gradient Descent algorithm, initialized spectrally, then converges linearly to the underlying signal with a small contraction factor because the near-isometry controls the gradient Lipschitz constant and the strong convexity parameter of the objective.
What carries the argument
The new metric on the manifold of rank-1 matrices that produces a nearly isometric embedding of these matrices into R^m via the measurement operator A.
Load-bearing premise
The chosen metric on the rank-1 matrix manifold makes the embedding of these matrices into the measurement space nearly isometric.
What would settle it
A numerical check that the operator norm of the difference between the pulled-back measurement operator and the identity on the tangent spaces at the solution is not close to zero, or a run in which the observed linear convergence rate of WRGD fails to be smaller than that of ordinary RGD.
Figures
read the original abstract
This paper explores the problem of generalized phase retrieval, which involves reconstructing a length-$n$ signal $\bm{x}$ from its $m$ phaseless samples $y_k = \left|\langle \bm{a}_k,\bm{x}\rangle\right|^2$, where $k = 1,2,...,m$, and $\bm{a}_k$ are the measurement vectors. This problem can be reformulated into recovering a positive semidefinite rank-$1$ matrix $\bm{X}=\bm{x}\bm{x}^*$ from linear samples $\bm{y}=\mathcal{A}(\bm{X})\in\mathbb{R}^m$, thereby requiring us to find a rank-$1$ solution of the linear equations. We demonstrate that several existing phase retrieval algorithms, including Wirtinger Flow (WF) and the canonical Riemannian gradient descent (RGD), actually solve the least-squares fitting of this linear equation on the Riemannian manifold of rank-$1$ matrices, but utilize different metrics on this manifold. Nevertheless, these metrics only allow for a stable and far-apart-from-isometric embedding of rank-$1$ matrices to $\mathbb{R}^m$ by $\mathcal{A}$, resulting in a linear convergence with a considerably large convergence factor. To expedite the convergence, we establish a new metric on the rank-$1$ matrix manifold that facilitates the nearly isometric embedding of rank-$1$ matrices into $\mathbb{R}^m$ through $\mathcal{A}$. A RGD algorithm under this new metric, termed Weighted RGD (WRGD), is proposed to tackle the phase retrieval problem. Owing to the near isometry, we prove that our WRGD algorithm, initialized by spectral methods, can linearly converge to the underlying signal $\bm{x}$ with a small convergence factor. Empirical experiments strongly validate the efficiency and resilience of our algorithms compared to the truncated Wirtinger Flow (TWF) algorithm and the canonical RGD algorithm.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper reformulates generalized phase retrieval as recovering a rank-1 PSD matrix X = xx* from linear measurements y = A(X) and proposes a weighted Riemannian gradient descent (WRGD) algorithm on the rank-1 matrix manifold. It introduces a new weighted metric that is claimed to yield a nearly isometric embedding of rank-1 matrices under the Gaussian measurement operator A (in contrast to the Euclidean and canonical Riemannian metrics), proves that spectrally initialized WRGD converges linearly to the true signal with a small convergence factor, and reports empirical superiority over truncated Wirtinger Flow and canonical RGD.
Significance. If the near-isometry property and associated tangent-space RIP bounds can be established with explicit constants and scaling, the work would provide a concrete improvement in convergence rate for Riemannian methods applied to quadratic inverse problems, with potential impact on phase retrieval, low-rank recovery, and related signal-processing tasks.
major comments (2)
- [Convergence theorem (likely §4–5)] The central claim of linear convergence with a small factor (abstract and convergence theorem) rests on the weighted metric producing a near-isometric embedding, yet no explicit isometry constants, tangent-space RIP bounds, dependence on m/n, or high-probability statements over the Gaussian measurements are supplied; without these the asserted improvement over the “considerably large” factor of canonical RGD cannot be verified.
- [Metric definition and embedding analysis (likely §3)] The definition and properties of the new weighted metric on the rank-1 manifold (introduced to replace the Euclidean/canonical metrics) are asserted to facilitate the near-isometry, but the manuscript supplies neither the explicit form of the metric nor a derivation showing that the resulting operator A satisfies the required restricted-isometry property on the tangent spaces.
minor comments (2)
- [Abstract and experimental section] The abstract states that empirical experiments “strongly validate” efficiency and resilience but provides no information on problem dimensions (n, m), number of Monte-Carlo trials, or quantitative metrics (e.g., relative error, iteration counts) used in the comparisons with TWF and canonical RGD.
- [Notation and preliminaries] Notation for the measurement operator A, the manifold, and the tangent-space projections should be introduced once and used consistently; several symbols appear without prior definition in the abstract.
Simulated Author's Rebuttal
We thank the referee for the careful reading and constructive comments on our work. We address the two major comments point by point below. Both concerns can be resolved by adding explicit statements and derivations that are currently only implicit in the proofs; we will incorporate these changes in the revised manuscript.
read point-by-point responses
-
Referee: [Convergence theorem (likely §4–5)] The central claim of linear convergence with a small factor (abstract and convergence theorem) rests on the weighted metric producing a near-isometric embedding, yet no explicit isometry constants, tangent-space RIP bounds, dependence on m/n, or high-probability statements over the Gaussian measurements are supplied; without these the asserted improvement over the “considerably large” factor of canonical RGD cannot be verified.
Authors: We agree that explicit constants would make the improvement easier to verify. The proof of Theorem 4.1 establishes linear convergence with a factor strictly smaller than that of canonical RGD by showing that the weighted metric yields a near-isometry constant δ < 1/2 (with high probability) on the tangent space; the dependence on m/n enters through standard Gaussian concentration bounds used in the appendix. In the revision we will state the explicit form of the isometry constants (δ = O(√(n log n / m))) and the resulting contraction factor (1 − c(1 − δ)) together with the precise probability 1 − exp(−Ω(m)) in the main theorem statement. revision: yes
-
Referee: [Metric definition and embedding analysis (likely §3)] The definition and properties of the new weighted metric on the rank-1 manifold (introduced to replace the Euclidean/canonical metrics) are asserted to facilitate the near-isometry, but the manuscript supplies neither the explicit form of the metric nor a derivation showing that the resulting operator A satisfies the required restricted-isometry property on the tangent spaces.
Authors: The weighted metric is defined in Section 3, Equation (3.2), as g_W(ξ, η) = ⟨W^{1/2} ξ, W^{1/2} η⟩_F where the diagonal weight matrix W is constructed from the measurement vectors to approximate the inverse of the second-moment operator restricted to rank-1 matrices. The tangent-space RIP for A under this metric is proved in Lemma A.3 of the appendix by combining the weighted inner-product definition with matrix concentration inequalities. To make this fully transparent we will move the explicit metric definition into the main text as a highlighted display equation and add a short subsection in Section 3 that states the RIP constants and sketches the derivation. revision: yes
Circularity Check
No circularity detected; derivation is self-contained
full rationale
The paper defines a new weighted metric on the rank-1 manifold independently of the target convergence result, then proves that this metric induces a near-isometric embedding under A and derives linear convergence of WRGD from that property. No equation or claim reduces the output to the input by construction, no fitted parameter is relabeled as a prediction, and no load-bearing step rests on a self-citation chain. The abstract and claimed proof chain remain externally verifiable against standard RIP-style arguments once the metric is fixed.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption The measurement operator A permits a stable and nearly isometric embedding of rank-1 matrices when equipped with the proposed weighted metric.
invented entities (1)
-
Weighted metric on the rank-1 matrix manifold
no independent evidence
Reference graph
Works this paper leans on
-
[1]
Optimization algorithms on matrix manifolds.Princeton University Press, 2008
P-A Absil, Robert Mahony, and Rodolphe Sepulchre. Optimization algorithms on matrix manifolds.Princeton University Press, 2008. 35
2008
-
[2]
Alfakih, Amir Keyvan Khandani, and Henry Wolkowicz
Abdo Y. Alfakih, Amir Keyvan Khandani, and Henry Wolkowicz. Solving euclidean distance matrix completion problems via semidefinite programming.Computational Optimization and Applications, 12:13–30, 1999
1999
-
[3]
Sohail Bahmani and Justin Romberg. Phase retrieval meets statistical learning theory: a flexible convex relaxation.Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, Proceedings of Machine Learning Research, 54:252–260, 2017
2017
-
[4]
Bandeira, Yutong Chen, and Dustin G
Afonso S. Bandeira, Yutong Chen, and Dustin G. Mixon. Phase retrieval from power spectra of masked signals.Information and Inference, 3(2):83–102, 2014
2014
-
[5]
Cambridge Univer- sity Press, 2023
Nicolas Boumal.An introduction to optimization on smooth manifolds. Cambridge Univer- sity Press, 2023
2023
-
[6]
Nearly optimal bounds for the global geometric landscape of phase retrieval.Inverse Problems, 39(7):075011, 2023
Jian Feng Cai, Meng Huang, Dong Li, and Yang Wang. Nearly optimal bounds for the global geometric landscape of phase retrieval.Inverse Problems, 39(7):075011, 2023
2023
-
[7]
Solving systems of phaseless equations via riemannian optimiza- tion with optimal sampling complexity.Journal of Computational Mathematics, 42(3):199– 225, 2018
Jian Feng Cai and Ke Wei. Solving systems of phaseless equations via riemannian optimiza- tion with optimal sampling complexity.Journal of Computational Mathematics, 42(3):199– 225, 2018
2018
-
[8]
Candes, Yonina Eldar, Thomas Strohmer, and Vladislav Voroninski
Emmanuel J. Candes, Yonina Eldar, Thomas Strohmer, and Vladislav Voroninski. Phase retrieval via matrix completion.SIAM Journal on Imaging Sciences, 6(1):199–225, 2013
2013
-
[9]
Candes, Xiaodong Li, and Mahdi Soltanolkotabi
Emmanuel J. Candes, Xiaodong Li, and Mahdi Soltanolkotabi. Phase retrieval via wirtinger flow: theory and algorithms.IEEE Transactions on Information Theory, 61(4):1985–2007, 2015
1985
-
[10]
Candès, Thomas Strohmer, and Vladislav Voroninski
Emmanuel J. Candès, Thomas Strohmer, and Vladislav Voroninski. Phaselift: exact and stable signal recovery from magnitude measurements via convex programming.Communi- cations on Pure and Applied Mathematics, 66(8):1241–1274, 2013
2013
-
[11]
Yuxin Chen and Emmanuel J. Candès. Solving random quadratic systems of equations is nearly as easy as solving linear systems.Communications on Pure and Applied Mathematics, 70(5):822–883, 2017
2017
-
[12]
J. V. Corbett. The pauli problem, state reconstruction and quantum-real numbers.Reports on Mathematical Physics, 57(1):53–68, 2006
2006
-
[13]
J. C. Dainty and J. R. Fienup. Phase retrieval and image reconstruction for astronomy. Image Recovery: Theory and Application, Academic Press, pages 231–275, 1987
1987
-
[14]
J. R. Fienup. Reconstruction of an object from the modulus of its fourier transform.Optics Letter, 3(1):27–29, 1978
1978
-
[15]
J. R. Fienup. Phase retrieval algorithms: a comparison.Applied Optics, 21(15):2758–2769, 1982
1982
-
[16]
Phaseless recovery using the gauss-newton method.IEEE Transactions on Signal Processing, 65(22):5885–5896, 2017
Bing Gao and Zhiqiang Xu. Phaseless recovery using the gauss-newton method.IEEE Transactions on Signal Processing, 65(22):5885–5896, 2017
2017
-
[17]
R. W. Gerchberg. A practical algorithm for the determination of phase from image and diffraction plane pictures.Optik, 35:237–246, 1972
1972
-
[18]
Phasemax: convex phase retrieval via basis pursuit
Tom Goldstein, Studer, and Christoph. Phasemax: convex phase retrieval via basis pursuit. IEEE Transactions on Information Theory, 64(4):2675–2689, 2018. 36
2018
-
[19]
Harrison
Robert W. Harrison. Phase problem in crystallography.Journal of the Optical Society of America. A, 10(5):1046, 1993
1993
-
[20]
Phase retrieval for sparse signals using rank minimization
Kishore Jaganathan, Samet Oymak, and Babak Hassibi. Phase retrieval for sparse signals using rank minimization. In2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3449–3452, 2012
2012
-
[21]
Halyun Jeong and C. Sinan Güntürk. Convergence of the randomized kaczmarz method for phase retrieval.arXiv:1706.10291, 2017
-
[22]
Riemannian optimization for phase retrieval from masked fourier measurements.Advances in computational mathematics, 47(6), 2021
Huiping Li and Song Li. Riemannian optimization for phase retrieval from masked fourier measurements.Advances in computational mathematics, 47(6), 2021
2021
-
[23]
Sampling complexity on phase retrieval from masked fourier measurements via wirtinger flow.Inverse Problems, 38(10):105004, 2022
Huiping Li, Song Li, and Yu Xia. Sampling complexity on phase retrieval from masked fourier measurements via wirtinger flow.Inverse Problems, 38(10):105004, 2022
2022
-
[24]
New provable non-convex algorithms for generalized phase retrieval.PhD Thesis, pages 1–160, 2024
Jiayi Li. New provable non-convex algorithms for generalized phase retrieval.PhD Thesis, pages 1–160, 2024
2024
-
[25]
Rapid, robust, and reliable blind deconvolution via nonconvex optimization.arXiv:1606.04933, 2016
Xiaodong Li, Shuyang Ling, Thomas Strohmer, and Ke Wei. Rapid, robust, and reliable blind deconvolution via nonconvex optimization.arXiv:1606.04933, 2016
-
[26]
Lu and Martin Vetterli
Yue M. Lu and Martin Vetterli. Sparse spectral factorization: unicity and reconstruction algorithms. In2011 IEEE International Conference on Acoustics, Speech and Signal Pro- cessing (ICASSP), pages 5976–5979, 2011
2011
-
[27]
Russell Luke, James V
D. Russell Luke, James V. Burke, and Richard G. Lyon. Optical wavefront reconstruction: theory and numerical methods.SIAM Review, 44(2):169–224, 2002
2002
-
[28]
CongMa, KaizhengWang, YuejieChi, andYuxinChen. Implicitregularizationinnonconvex statistical estimation: gradient descent converges linearly for phase retrieval, matrix comple- tion, and blind deconvolution.Foundations of Computational Mathematics, 20(3):451–632, 2019
2019
-
[29]
R. P. Millane. Phase retrieval in crystallography and optics.Journal of the Optical Society of America. A, 7(3):394, 1990
1990
-
[30]
Phase retrieval using alternating minimization.IEEE Transactions on Signal Processing, 63(18):4814–4826, 2015
Praneeth Netrapalli, Prateek Jain, and Sujay Sanghavi. Phase retrieval using alternating minimization.IEEE Transactions on Signal Processing, 63(18):4814–4826, 2015
2015
-
[31]
Phase retrieval via randomized kaczmarz: theoretical guarantees.arXiv:1706.09993, 2018
Yan Shuo Tan and Roman Vershynin. Phase retrieval via randomized kaczmarz: theoretical guarantees.arXiv:1706.09993, 2018
-
[32]
Phase recovery, maxcut and complex semidefinite programming.Mathematical Programming, 149(1–2):47–81, 2015
Irène Waldspurger, Alexandre D’aspremont, and Stéphane Mallat. Phase recovery, maxcut and complex semidefinite programming.Mathematical Programming, 149(1–2):47–81, 2015
2015
-
[33]
Giannakis, and Yonina C
Gang Wang, Georgios B. Giannakis, and Yonina C. Eldar. Solving systems of random quadratic equations via truncated amplitude flow.IEEE Transactions on Information The- ory, 64(2):773–794, 2017
2017
-
[34]
u∗ t q∗ t ∥qt∥2 # . andW t is a semi-definite positive matrix with rank at most2. Thus, Zt+1 =H 1(Wt) = h ut qt ∥qt∥2 i H1 ∥zt∥2 2 + αtθt 2 αt∥qt∥2 αt∥qt∥2 0
Ke Wei, Jian Feng Cai, Tony F. Chan, and Shingyu Leung. Guarantees of riemannian opti- mization for low rank matrix recovery.SIAM Journal on Matrix Analysis and Applications, 37(3):1198–1222, 2016. 37 8 Appendix 8.1 Useful Lemmas In this section, we provide some useful lemmas for supporting our results. Lemma 8.1(Lemma 7, [22]).For anyx,z∈C n obeyingRe(z ...
2016
discussion (0)
Sign in with ORCID, Apple, or X to comment. Anyone can read and Pith papers without signing in.