Eigenvector continuation with subspace learning
Abstract
A common challenge faced in quantum physics is finding the extremal eigenvalues and eigenvectors of a Hamiltonian matrix in a vector space so large that linear algebra operations on general vectors are not possible. There are numerous efficient methods developed for this task, but they generally fail when some control parameter in the Hamiltonian matrix exceeds some threshold value. In this work we present a new technique called eigenvector continuation that can extend the reach of these methods. The key insight is that while an eigenvector resides in a linear space with enormous dimensions, the eigenvector trajectory generated by smooth changes of the Hamiltonian matrix is well approximated by a very low-dimensional manifold. We prove this statement using analytic function theory and propose an algorithm to solve for the extremal eigenvectors. We benchmark the method using several examples from quantum many-body theory.
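The subspace idea in the abstract, sampling eigenvectors at a few training values of the control parameter and then solving a small generalized eigenvalue problem in the span of those vectors, can be sketched as follows. This is a minimal illustration under assumed ingredients: the parametrized Hamiltonian `H(c) = H0 + c*H1`, the training values, and the matrix sizes are all placeholders, not the paper's actual benchmarks.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative parametrized Hamiltonian H(c) = H0 + c*H1 on a small space,
# so we can compare against exact diagonalization (assumption, not the
# paper's benchmark systems).
rng = np.random.default_rng(0)
n = 200
sym = lambda A: (A + A.T) / 2
H0 = sym(rng.standard_normal((n, n)))
H1 = sym(rng.standard_normal((n, n)))
H = lambda c: H0 + c * H1

# 1. Collect ground-state eigenvectors at a few training values of c,
#    where the problem is assumed tractable.
train_c = [0.0, 0.2, 0.4]
B = np.column_stack([eigh(H(c))[1][:, 0] for c in train_c])

# 2. Project the Hamiltonian at the target coupling into the subspace
#    spanned by the training vectors, and solve the small generalized
#    eigenvalue problem  (B^T H B) x = E (B^T B) x.
c_target = 1.0
Ht = B.T @ H(c_target) @ B
N = B.T @ B
E_ec = eigh(Ht, N, eigvals_only=True)[0]

# Exact ground-state energy for comparison; the subspace estimate is a
# variational upper bound on it.
E_exact = eigh(H(c_target), eigvals_only=True)[0]
print(E_ec, E_exact)
```

Because the estimate comes from diagonalizing within a subspace, it is always a variational upper bound on the true ground-state energy; accuracy improves as training vectors are added, provided the overlap matrix `N` stays well conditioned.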
This paper has not been read by Pith yet.
Forward citations
Cited by 3 Pith papers
- Low-rank compression of two-electron reduced density matrices: A structure-preserving low-rank factorization of 2RDMs achieves linear effective rank scaling with ~99% compression for octane while retaining chemical accuracy and enabling quadratic-memory interpolation in ab initio...
- Low-rank compression of two-electron reduced density matrices: A structure-preserving low-rank factorization of 2RDMs achieves linear rank scaling with system size and ~99% compression while retaining chemical accuracy for correlated states.
- Perturbative calculations of light nuclei up to N$^3$LO in chiral effective field theory: Perturbative N3LO calculations in chiral EFT with RG-guided power counting yield robust predictions for light nuclei energies when calibrated on the tritium binding energy.