Pith · machine review for the scientific record

arxiv: 2508.10533 · v5 · submitted 2025-08-14 · 🪐 quant-ph · cs.LG

Recognition: unknown

Mitigating Exponential Mixed Frequency Growth through Frequency Selection

Authors on Pith: no claims yet
classification 🪐 quant-ph cs.LG
keywords: frequency encoding · frequencies · selection · target · approaches · approx · dense
abstract

Angle encoding has emerged as a popular feature map for embedding classical data into quantum models, naturally generating truncated Fourier series with universal function approximation capabilities. Despite this expressive capability, practical training faces significant challenges. Through controlled experiments with white-box target functions, we demonstrate that training failures can occur even when all established parameter sufficiency conditions are satisfied. Building on the redundancy-gradient framework of Duffy and Jastrzebski, we provide systematic experimental evidence that non-unique frequencies dominate the gradient landscape and crowd out target frequencies -- a burden that grows exponentially with encoding depth under unary encoding. Small-angle initialization mitigates this in one-dimensional settings but fails to scale to higher dimensions, where even ternary encoding -- which minimizes per-frequency redundancy -- faces intractable combinatorial growth of unique frequency tuples regardless of initialization or optimizer choice. We introduce frequency selection as a principled solution that restricts the model spectrum to only those frequencies present in the target function. For two-dimensional targets, frequency selection achieves near-optimal performance (median $R^2 \approx 0.95$) where dense approaches struggle, and remains tractable at high-frequency magnitudes where dense approaches fail entirely (median $R^2 \approx 0.85$). Validation on a real-world dataset confirms the approach transfers beyond synthetic settings.
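The core idea of frequency selection — fitting only the frequency tuples actually present in the target, rather than the combinatorially large dense spectrum — can be illustrated with a classical least-squares analogue. This is a hedged sketch, not the paper's quantum-circuit method: the target function, frequency tuples, and coefficients below are invented for illustration, and a linear fit over cos/sin features stands in for training an angle-encoded model.

```python
import numpy as np

rng = np.random.default_rng(0)

# 2-D toy target: a sparse truncated Fourier series with known frequency tuples
# (hypothetical values, chosen only for this illustration).
target_freqs = [(1, 0), (0, 2), (3, 1)]
target_coeffs = [1.0 + 0.5j, -0.7j, 0.3 + 0.2j]

def target(x):
    """Real part of the sparse Fourier series at inputs x of shape (N, 2)."""
    out = np.zeros(len(x), dtype=complex)
    for w, c in zip(target_freqs, target_coeffs):
        out += c * np.exp(1j * (x @ np.array(w, dtype=float)))
    return out.real

def design_matrix(x, freqs):
    """Cos/sin feature columns, one pair per frequency tuple."""
    cols = []
    for w in freqs:
        phase = x @ np.array(w, dtype=float)
        cols.append(np.cos(phase))
        cols.append(np.sin(phase))
    return np.stack(cols, axis=1)

x = rng.uniform(-np.pi, np.pi, size=(400, 2))
y = target(x)

# Frequency selection: restrict the model spectrum to the target frequencies.
A_sel = design_matrix(x, target_freqs)
coef_sel, *_ = np.linalg.lstsq(A_sel, y, rcond=None)

# Dense baseline: every integer tuple up to the same max component — the
# feature count grows combinatorially with dimension and frequency magnitude.
dense_freqs = [(i, j) for i in range(4) for j in range(4) if (i, j) != (0, 0)]
A_dense = design_matrix(x, dense_freqs)

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print(A_sel.shape[1], A_dense.shape[1])  # selected vs dense feature counts
print(r2(y, A_sel @ coef_sel))           # near-perfect fit with the true spectrum
```

With the true frequencies known, the selected model needs only 6 features here, while the dense basis already uses 30 in this tiny 2-D example; the gap is what grows intractably at higher dimension and frequency magnitude in the paper's setting.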

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Architecture Shape Governs QNN Trainability: Jacobian Null Space Growth and Parameter Efficiency

    quant-ph · 2026-05 · unverdicted · novelty 7.0

    At a fixed encoding budget, serial QNN architectures suffer unbounded structural gradient starvation via rank(J) ≤ 2L+1, while parallel architectures keep full Jacobian rank and better parameter efficiency when feature-map layers are added.