pith. machine review for the scientific record.

arxiv: 2603.26931 · v1 · submitted 2026-03-27 · 💻 cs.LG

Recognition: 2 theorem links


Tunable Domain Adaptation Using Unfolding

Authors on Pith: no claims yet

Pith reviewed 2026-05-14 23:24 UTC · model grok-4.3

classification 💻 cs.LG
keywords domain adaptation · unrolled networks · regression · compressed sensing · tunable parameters · sparse signal recovery · phase retrieval

The pith

Unrolled networks enable tunable domain adaptation for regression tasks at inference time.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper proposes two methods based on unrolled networks that adapt regression models to varying domains, such as different noise levels, by linking select parameters to domain variables. Parametric Tunable-Domain Adaptation uses known domain parameters for dynamic tuning during inference, while Data-Driven Tunable-Domain Adaptation infers the necessary adjustments directly from input data. This approach is evaluated on compressed sensing regression problems including noise-adaptive sparse signal recovery, gain calibration, and phase retrieval. A sympathetic reader would care because it offers a flexible alternative to training entirely separate models per domain or forcing one model to handle all domains at once, potentially improving efficiency in settings with shifting data distributions.

Core claim

The central claim is that interpretable unrolled networks, derived from iterative optimization algorithms, achieve effective domain adaptation in regression by exploiting the functional dependence of tunable parameters on domain variables. This yields two concrete methods: P-TDA, which incorporates known domain parameters for controlled adjustment at inference, and DD-TDA, which learns to infer domain adaptation from the input itself. Experiments on noise-adaptive sparse signal recovery and related compressed sensing tasks show these methods match or exceed the accuracy of domain-specific models while outperforming standard joint-training baselines.
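
To make the mechanism concrete, here is a minimal sketch, assuming a LISTA-style unrolling in PyTorch, of one layer whose soft-threshold is an explicit function of a known noise level, in the spirit of P-TDA. The class name, the affine map from sigma to the threshold, and the layer shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def soft_threshold(x, lam):
    # Proximal operator of the l1 norm: shrink each entry toward zero by lam.
    return torch.sign(x) * torch.relu(torch.abs(x) - lam)

class TunableLISTALayer(nn.Module):
    """One unrolled ISTA-style iteration with a noise-dependent threshold
    (a hypothetical P-TDA-flavored sketch, not the paper's architecture)."""
    def __init__(self, n, m):
        super().__init__()
        self.W1 = nn.Linear(m, n, bias=False)  # maps measurements y into signal space
        self.W2 = nn.Linear(n, n, bias=False)  # maps the previous iterate x
        # Tunable parameters: threshold lam(sigma) = a * sigma + b,
        # with a and b shared and learned across all training domains.
        self.a = nn.Parameter(torch.tensor(0.1))
        self.b = nn.Parameter(torch.tensor(0.01))

    def forward(self, x, y, sigma):
        lam = self.a * sigma + self.b          # the domain variable enters here
        return soft_threshold(self.W1(y) + self.W2(x), lam)

# Usage: x_next = layer(x, y, sigma) with x: (batch, n), y: (batch, m),
# and sigma a scalar tensor supplied at inference time.
```

At inference only the scalar sigma changes across domains; the trained weights stay fixed, which is what makes the adaptation tunable rather than retrained.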

What carries the argument

Interpretable unrolled networks that embed domain-dependent tunable parameters to enable controlled adaptation during inference without retraining.

If this is right

  • Outperforms joint training baselines across multiple compressed sensing regression tasks.
  • Achieves accuracy comparable to separately trained domain-specific models.
  • Supports adaptation to varying noise without requiring full retraining per domain.
  • Extends to gain calibration and phase retrieval problems under domain shifts.
  • Preserves interpretability by tying parameter changes directly to domain variables.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same unrolling strategy could reduce storage and compute costs when deploying models across many similar but non-identical environments.
  • Applying DD-TDA to regression tasks outside compressed sensing, such as time-series forecasting with sensor drift, would test whether inferring domain variables at inference time generalizes.
  • Combining these tunable parameters with other iterative algorithms might yield adaptation rules that remain stable even when domain variables are only partially observed.

Load-bearing premise

That the functional dependence of select tunable parameters on domain variables can be leveraged to enable controlled adaptation during inference without degrading performance on the core task.

What would settle it

If a P-TDA or DD-TDA model applied to a held-out domain with an unseen noise level produces markedly higher reconstruction error than a model trained specifically on that domain, the claim of parity with domain-specific models would fail.
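
A minimal sketch of that settling experiment, assuming pretrained `tda_model` and `specialist_model` callables and a held-out noise level (all names hypothetical):

```python
import torch

def nmse_db(x_hat, x):
    # Normalized mean-squared error in dB over a batch.
    return 10.0 * torch.log10(torch.sum((x_hat - x) ** 2) / torch.sum(x ** 2))

@torch.no_grad()
def compare_on_unseen_domain(tda_model, specialist_model, y, x, sigma_unseen):
    # The claim would fail if the tunable model is clearly worse than a
    # specialist trained on this exact held-out noise level.
    err_tda = nmse_db(tda_model(y, sigma_unseen), x)
    err_specialist = nmse_db(specialist_model(y), x)
    return err_tda, err_specialist
```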

Figures

Figures reproduced from arXiv: 2603.26931 by Jayaprakash Katual, Satish Mulleti, Snehaa Reddy.

Figure 1: Layers/iterations of different LISTA architectures: (a) conventional LISTA, (b), (c) NA-LISTA. (b) Single layer of the P-TDA model with input …
Figure 2: Comparison of the proposed methods with the JT and PT methods for NA-LISTA for different SNR ranges. PT's performance is better than JT's …
Figure 3: Comparison of the proposed methods with the JT and PT methods …
Figure 4: Qualitative MNIST-CS reconstructions across SNR domains. Columns show ground truth (GT) and reconstructions from JT (LISTA), Tail-LISTA …
Figure 5: Qualitative MNIST-CS reconstructions at SNR …
Figure 6: Comparison of learned W1 matrices for different domains. Since this experiment evaluates the ability to generalize to unseen domains, we compare our proposed models specifically against the JT method, focusing on their respective generalization capabilities. The results, presented in …
Figure 7: Comparison of the top (15 × 15) block of the learned W1 matrices; panels (a) Domain 1, (b) Domain 2, (c) Domain 3.
Figure 8: Comparison of the top (15 × 15) block of the learned W2 matrices for different domains. (Adjacent plot axes: NMSE (dB) and HR (%) for PT, JT, P-TDA, and DD-TDA on unseen test domains D2, D4, D6.)
Figure 9: Results for generalization experiment: the models (except PT) were …
Figure 10: Tunable model for blind gain calibration with …
Figure 11: Comparison of the proposed methods with the JT and PT methods for the sparse gain calibration problem. P-TDA/DD-TDA has comparable …
Figure 12: Generalization performance of models across domains with …
Figure 13: $\mathrm{PR}_{\alpha,L}(\cdot)$ is the tunable model for sparse phase retrieval. Note: green symbols represent the data available at inference time, red symbols are trainable parameters, and blue symbols are fixed parameters. Here $p_k(S_{k+1})$ is computed using the gradient and Hessian of $\frac{1}{4N_y}\lVert \mathbf{y} - \lvert \mathbf{A}\mathbf{x} \rvert^{2} \rVert_2^2$ at $\mathbf{x}_k$ (cf. [49]); in the updates (27) and (28), $\alpha = [\alpha_1\ \alpha_2]$ are the step sizes.
Figure 14: Comparison of the proposed methods with the JT and PT methods …
Figure 15: A comparison of NMSE against the number of parameters and FLOPS for different methods.
Original abstract

Machine learning models often struggle to generalize across domains with varying data distributions, such as differing noise levels, leading to degraded performance. Traditional strategies like personalized training, which trains separate models per domain, and joint training, which uses a single model for all domains, have significant limitations in flexibility and effectiveness. To address this, we propose two novel domain adaptation methods for regression tasks based on interpretable unrolled networks--deep architectures inspired by iterative optimization algorithms. These models leverage the functional dependence of select tunable parameters on domain variables, enabling controlled adaptation during inference. Our methods include Parametric Tunable-Domain Adaptation (P-TDA), which uses known domain parameters for dynamic tuning, and Data-Driven Tunable-Domain Adaptation (DD-TDA), which infers domain adaptation directly from input data. We validate our approach on compressed sensing problems involving noise-adaptive sparse signal recovery, domain-adaptive gain calibration, and domain-adaptive phase retrieval, demonstrating improved or comparable performance to domain-specific models while surpassing joint training baselines. This work highlights the potential of unrolled networks for effective, interpretable domain adaptation in regression settings.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes two novel domain adaptation methods for regression tasks, Parametric Tunable-Domain Adaptation (P-TDA) and Data-Driven Tunable-Domain Adaptation (DD-TDA), based on interpretable unrolled networks. These leverage the functional dependence of select tunable parameters on domain variables to enable controlled adaptation during inference. The methods are validated on three compressed sensing problems (noise-adaptive sparse signal recovery, domain-adaptive gain calibration, and domain-adaptive phase retrieval), claiming performance that is improved or comparable to domain-specific models while surpassing joint training baselines.

Significance. If the empirical results hold with full details, the work provides a flexible, interpretable alternative to personalized or joint training for handling domain shifts in regression settings, particularly in signal processing applications. The use of unrolled networks for tunable adaptation could advance domain adaptation by avoiding full retraining while maintaining performance.

major comments (2)
  1. [Experiments] Experiments section: the abstract reports validation on three compressed sensing problems with claims of improved performance, but without full details on metrics, baselines, error analysis, or statistical significance, the support for the central claim that the methods match domain-specific models remains unverified. Please provide quantitative tables and ablation studies.
  2. [Method] Method description (P-TDA and DD-TDA): the functional dependence of tunable parameters on domain variables is load-bearing for the adaptation claim; clarify the exact parameterization and training procedure to ensure it does not implicitly rely on target-domain information during inference.
minor comments (2)
  1. [Abstract] Abstract: the description of the three problems is clear but could briefly note the specific domain variables (e.g., noise levels) used in each to aid reader understanding.
  2. [Notation] Notation: ensure consistent use of symbols for domain variables and tunable parameters across equations and text to avoid ambiguity.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive comments and the recommendation for minor revision. We address each major comment below and outline the revisions we will make to strengthen the manuscript.

Point-by-point responses
  1. Referee: [Experiments] Experiments section: the abstract reports validation on three compressed sensing problems with claims of improved performance, but without full details on metrics, baselines, error analysis, or statistical significance, the support for the central claim that the methods match domain-specific models remains unverified. Please provide quantitative tables and ablation studies.

    Authors: We agree that expanded experimental details will better support the claims. In the revised manuscript we will add comprehensive quantitative tables reporting metrics such as MSE and recovery error for P-TDA, DD-TDA, domain-specific models, and joint-training baselines across all three tasks (noise-adaptive sparse recovery, gain calibration, and phase retrieval). We will also include ablation studies on the functional mappings and tunable parameters, plus error bars and statistical significance (paired t-tests or Wilcoxon tests over 10+ random seeds) to verify that performance is improved or comparable to domain-specific models. revision: yes
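
For reference, a minimal sketch of the promised paired significance tests, using synthetic placeholder NMSE values rather than any of the paper's numbers:

```python
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

# Placeholder per-seed NMSE values in dB (synthetic, NOT the paper's results).
rng = np.random.default_rng(0)
nmse_tda = rng.normal(loc=-21.0, scale=0.3, size=10)    # tunable model
nmse_joint = rng.normal(loc=-15.0, scale=0.3, size=10)  # joint-training baseline

# Paired tests: each random seed yields one matched value per method.
t_stat, t_pvalue = ttest_rel(nmse_tda, nmse_joint)
w_stat, w_pvalue = wilcoxon(nmse_tda, nmse_joint)
print(f"paired t-test p = {t_pvalue:.3g}, Wilcoxon signed-rank p = {w_pvalue:.3g}")
```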

  2. Referee: [Method] Method description (P-TDA and DD-TDA): the functional dependence of tunable parameters on domain variables is load-bearing for the adaptation claim; clarify the exact parameterization and training procedure to ensure it does not implicitly rely on target-domain information during inference.

    Authors: We will clarify the parameterization and training procedure in the revised Method section. For P-TDA the tunable parameters (e.g., step sizes or thresholds in the unrolled iterations) are expressed as an explicit function of the known domain variable (noise level, gain factor, etc.), implemented as a small neural network or polynomial whose weights are learned during multi-domain training; at inference only the scalar domain variable is supplied and no target-domain samples or labels are used. For DD-TDA an auxiliary network predicts the domain variable (or directly the parameters) solely from the input measurement vector. We will add explicit equations, a training pseudocode block, and a statement confirming that inference uses neither target data nor target labels. revision: yes
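
As a sketch of what the clarified DD-TDA parameterization might look like, an auxiliary network could estimate the domain variable from the measurement vector alone and drive the same tunable layers P-TDA would drive with the true value. The estimator name and architecture below are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class DomainEstimator(nn.Module):
    """Hypothetical DD-TDA auxiliary net: predicts a nonnegative scalar
    domain variable (e.g., a noise level) from the measurements only."""
    def __init__(self, m, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(m, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Softplus(),  # keeps the estimated noise level nonnegative
        )

    def forward(self, y):
        return self.net(y).squeeze(-1)

# DD-TDA-style inference uses no target-domain labels and no known sigma:
# sigma_hat = DomainEstimator(m)(y), and sigma_hat then feeds the tunable
# unrolled layers in place of the true domain variable.
```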

Circularity Check

0 steps flagged

No significant circularity in derivation chain

Full rationale

The paper proposes P-TDA and DD-TDA as modeling choices that leverage the functional dependence of tunable parameters on domain variables within standard unrolled network architectures. This is presented as an empirical design for controlled adaptation rather than a derivation that reduces to fitted inputs by construction. Validation consists of direct performance comparisons on three compressed sensing tasks against domain-specific and joint-training baselines, with no self-definitional equations, no predictions that are statistically forced by prior fits, and no load-bearing self-citations whose content is itself unverified. The central claims rest on external empirical results and architectural inspiration, measured against independent baselines.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

Based solely on the abstract, the central claim rests on the assumption that unrolled networks can be extended with domain-dependent tunable parameters. No free parameters, invented entities, or additional axioms are explicitly detailed.

axioms (1)
  • domain assumption Unrolled networks can incorporate functional dependence of tunable parameters on domain variables for controlled adaptation
    This is the core mechanism enabling P-TDA and DD-TDA as described in the abstract.

pith-pipeline@v0.9.0 · 5490 in / 1154 out tokens · 22377 ms · 2026-05-14T23:24:46.433187+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches: The paper's claim is directly supported by a theorem in the formal canon.
supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses: The paper appears to rely on the theorem as machinery.
contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

50 extracted references · 50 canonical work pages · 3 internal anchors

  1. [1]

    How transferable are features in deep neural networks?

    J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, “How transferable are features in deep neural networks?” in Proc. Adv. Neural Info. Process. Sys. (NeurIPS), 2014, pp. 3320–3328

  2. [2]

    Deep residual learning for image recognition,

    K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. IEEE Int. Conf. Comput. Vision and Pattern Recognition (CVPR), 2016, pp. 770–778

  3. [3]

    Bert: Pre-training of deep bidirectional transformers for language understanding,

    J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” in Proc. North American Chap. Asso. Comput. Linguistics (NAACL-HLT), 2019, pp. 4171–4186

  4. [4]

    A survey on transfer learning,

    S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE Trans. Knowl. Data Eng., vol. 22, no. 10, pp. 1345–1359, 2010

  5. [5]

    Interpretations of domain adaptations via layer variational analysis,

    H.-H. Tseng, H.-Y. Lin, K.-H. Hung, and Y. Tsao, “Interpretations of domain adaptations via layer variational analysis,” arXiv preprint arXiv:2302.01798, 2023

  6. [6]

    Gradual domain adaptation: Theory and algorithms,

    Y. He, H. Wang, B. Li, and H. Zhao, “Gradual domain adaptation: Theory and algorithms,” J. Machine Learning Res., vol. 25, no. 361, pp. 1–40, 2024

  7. [7]

    A brief review of domain adaptation,

    A. Farahani, S. Voghoei, K. Rasheed, and H. R. Arabnia, “A brief review of domain adaptation,” in Proc. Adv. Data Sci. Info. Engg. Cham: Springer International Publishing, 2021, pp. 877–894

  8. [8]

    Deep domain adaptation for regression,

    A. Singh and S. Chakraborty, “Deep domain adaptation for regression,” in Development and Analysis of Deep Learning Architectures. Springer, 2019, pp. 91–115

  9. [10]

    Transfer learning for high-dimensional linear regression: Prediction, estimation and minimax optimality,

    S. Li, T. T. Cai, and H. Li, “Transfer learning for high-dimensional linear regression: Prediction, estimation and minimax optimality,” J. Royal Stat. Society Series B: Statistical Methodology, vol. 84, no. 1, pp. 149–173, 2022

  10. [11]

    Self-supervised deep tensor domain-adversarial regression adaptation for online remaining useful life prediction across machines,

    W. Mao, K. Liu, Y. Zhang, X. Liang, and Z. Wang, “Self-supervised deep tensor domain-adversarial regression adaptation for online remaining useful life prediction across machines,” IEEE Trans. Inst. Meas., vol. 72, pp. 1–16, 2023

  11. [12]

    Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing,

    V. Monga, Y. Li, and Y. C. Eldar, “Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing,” IEEE Mag. Signal Processing, vol. 38, no. 2, pp. 18–44, 2021

  12. [13]

    Compressed sensing,

    D. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, 2006

  13. [14]

    An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,

    I. Daubechies, M. Defrise, and C. De Mol, “An iterative thresholding algorithm for linear inverse problems with a sparsity constraint,” Commun. Pure Applied Math., vol. 57, no. 11, pp. 1413–1457, 2004

  14. [15]

    Learning fast approximations of sparse coding,

    K. Gregor and Y. LeCun, “Learning fast approximations of sparse coding,” in Proc. Int. Conf. Machine Learn. (ICML), 2010, pp. 399–406

  15. [16]

    Ideal spatial adaptation by wavelet shrinkage,

    D. L. Donoho and I. M. Johnstone, “Ideal spatial adaptation by wavelet shrinkage,” Biometrika, vol. 81, no. 3, pp. 425–455, 1994

  16. [17]

    Spot-tune: transfer learning through adaptive fine-tuning,

    Y. Guo, H. Shi, A. Kumar, K. Grauman, T. Rosing, and R. Feris, “Spot-tune: transfer learning through adaptive fine-tuning,” in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2019, pp. 4805–4814

  17. [18]

    Transfer feature learning with joint distribution adaptation,

    M. Long, J. Wang, G. Ding, J. Sun, and P. S. Yu, “Transfer feature learning with joint distribution adaptation,” in Proc. Int. Conf. Computer Vision (ICCV), 2013, pp. 2200–2207

  18. [19]

    Learning transferable features with deep adaptation networks,

    M. Long, Y. Cao, J. Wang, and M. I. Jordan, “Learning transferable features with deep adaptation networks,” in Proc. Int. Conf. on Machine Learning (ICML), 2015, pp. 97–105

  19. [20]

    Deep Domain Confusion: Maximizing for Domain Invariance

    E. Tzeng, J. Hoffman, N. Zhang, K. Saenko, and T. Darrell, “Deep domain confusion: Maximizing for domain invariance,” arXiv preprint arXiv:1412.3474, 2014

  20. [21]

    Unified deep supervised domain adaptation and generalization,

    S. Motiian, M. Piccirilli, D. A. Adjeroh, and G. Doretto, “Unified deep supervised domain adaptation and generalization,” in Proc. Int. Conf. Computer Vision (ICCV), 2017, pp. 5715–5725

  21. [22]

    Transfer learning for linear regression: A statistical test of gain,

    B. Tolooshams, X. Wang, X. He, Y. Zhang, and M. Jacob, “Transfer learning for linear regression: A statistical test of gain,” arXiv preprint arXiv:2102.09504, 2021

  22. [23]

    Deep domain adaptation for regression,

    Y. Lu, J. Qin, and Y. Wang, “Deep domain adaptation for regression,” in Proc. Int. Conf. Machine Learning (ICML), 2019, pp. 97–105

  23. [24]

    Transfer learning for high-dimensional linear regression: Prediction via information borrowing,

    H. Li, Y. Wang, and X. Xie, “Transfer learning for high-dimensional linear regression: Prediction via information borrowing,” J. Royal Statistical Soc.: Series B, vol. 84, no. 1, pp. 149–175, 2023

  24. [25]

    Representation transfer learning for semiparametric regression,

    Y. Zhang, X. Wang, and X. He, “Representation transfer learning for semiparametric regression,” arXiv preprint arXiv:2406.13197, 2024

  25. [26]

    Self-supervised deep domain-adversarial regression adaptation for remaining useful life prediction,

    W. Chen, Y. Li, and Y. Zhou, “Self-supervised deep domain-adversarial regression adaptation for remaining useful life prediction,” IEEE Trans. Ind. Electron., 2022, p. 9769904

  26. [27]

    Boosting for regression transfer,

    D. Pardoe and P. Stone, “Boosting for regression transfer,” in Proc. Int. Conf. on Machine Learning (ICML), 2010, pp. 863–870

  27. [28]

    Algorithm-Induced Prior for Image Restoration

    S. H. Chan, “Algorithm-induced prior for image restoration,” arXiv preprint arXiv:1602.00715, 2016

  28. [29]

    A fast iterative shrinkage-thresholding algorithm with application to wavelet-based image deblurring,

    A. Beck and M. Teboulle, “A fast iterative shrinkage-thresholding algorithm with application to wavelet-based image deblurring,” in Proc. IEEE Int. Conf. Acoust., Speech, and Signal Process. (ICASSP), 2009, pp. 693–696

  29. [30]

    Ada-lista: Learned solvers adaptive to varying models,

    A. Aberdam, A. Golts, and M. Elad, “Ada-lista: Learned solvers adaptive to varying models,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 12, pp. 9222–9235, 2021

  30. [31]

    Theoretical linear convergence of unfolded ISTA and its practical weights and thresholds,

    X. Chen, J. Liu, Z. Wang, and W. Yin, “Theoretical linear convergence of unfolded ISTA and its practical weights and thresholds,” Adv. Neural Info. Process. Syst., vol. 31, 2018

  31. [32]

    ALISTA: Analytic weights are as good as learned weights in LISTA,

    J. Liu and X. Chen, “ALISTA: Analytic weights are as good as learned weights in LISTA,” in Int. Conf. Learning Representations (ICLR), 2019

  32. [33]

    Hyperparameter tuning is all you need for LISTA,

    X. Chen, J. Liu, Z. Wang, and W. Yin, “Hyperparameter tuning is all you need for LISTA,” Adv. Neural Info. Process. Syst., vol. 34, pp. 11678–11689, 2021

  33. [34]

    Learned ISTA with error-based thresholding for adaptive sparse coding,

    Z. Li, K. Wu, Y. Guo, and C. Zhang, “Learned ISTA with error-based thresholding for adaptive sparse coding,” in Int. Conf. Acoust. Speech Signal Process. (ICASSP). IEEE, 2024, pp. 9301–9305

  34. [35]

    Methods for choosing the regularization parameter and estimating the noise variance in image restoration and their relation,

    N. P. Galatsanos and A. K. Katsaggelos, “Methods for choosing the regularization parameter and estimating the noise variance in image restoration and their relation,” IEEE Trans. Image Process., vol. 1, no. 3, pp. 322–336, 1992

  35. [36]

    Automatic estimation and removal of noise from a single image,

    C. Liu, R. Szeliski, S. B. Kang, C. L. Zitnick, and W. T. Freeman, “Automatic estimation and removal of noise from a single image,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 2, pp. 299–314, 2008

  36. [37]

    Noise level estimation using weak textured patches of a single noisy image,

    X. Liu, M. Tanaka, and M. Okutomi, “Noise level estimation using weak textured patches of a single noisy image,” in Int. Conf. Image Process. IEEE, 2012, pp. 665–668

  37. [38]

    Generalized cross-validation as a method for choosing a good ridge parameter,

    G. H. Golub, M. Heath, and G. Wahba, “Generalized cross-validation as a method for choosing a good ridge parameter,” Technometrics, vol. 21, no. 2, pp. 215–223, 1979

  38. [39]

    Deep unfolding of tail-based methods for robust sparse recovery under noise and model mismatch,

    Y. Kvich, P. Reshma, P. Pradhan, R. Randhi, and Y. C. Eldar, “Deep unfolding of tail-based methods for robust sparse recovery under noise and model mismatch,” IEEE Trans. on Neural Networks and Learning Systems, 2025

  39. [40]

    Denoising Diffusion Implicit Models

    J. Song, C. Meng, and S. Ermon, “Denoising diffusion implicit models,” arXiv preprint arXiv:2010.02502, 2020

  40. [41]

    MNIST handwritten digit classifier (handwritten-digit-recognition),

    A. Jhawar, “MNIST handwritten digit classifier (handwritten-digit-recognition),” https://github.com/aakashjhawar/handwritten-digit-recognition, 2018, GitHub repository

  41. [42]

    Calibration of time-of-flight range imaging cameras,

    O. Steiger, J. Felder, and S. Weiss, “Calibration of time-of-flight range imaging cameras,” in Proc. Int. Conf. on Image Process. IEEE, 2008, pp. 1968–1971

  42. [43]

    Calibration of time-of-flight cameras for accurate intraoperative surface reconstruction,

    S. Mersmann, A. Seitel, M. Erz, B. Jähne, F. Nickel, M. Mieth, A. Mehrabi, and L. Maier-Hein, “Calibration of time-of-flight cameras for accurate intraoperative surface reconstruction,” Medical Physics, vol. 40, no. 8, p. 082701, 2013

  43. [44]

    Direction of arrival estimation by eigenstructure methods with unknown sensor gain and phase,

    A. Paulraj and T. Kailath, “Direction of arrival estimation by eigenstructure methods with unknown sensor gain and phase,” in Proc. IEEE Int. Conf. Acoust., Speech, and Signal Process. (ICASSP), vol. 10, 1985, pp. 640–643

  44. [45]

    Blind calibration in compressed sensing using message passing algorithms,

    C. Schulke, F. Caltagirone, F. Krzakala, and L. Zdeborová, “Blind calibration in compressed sensing using message passing algorithms,” Adv. Neural Info. Process. Syst., vol. 26, 2013

  45. [46]

    Unrolled compressed blind-deconvolution,

    B. Tolooshams, S. Mulleti, D. Ba, and Y. C. Eldar, “Unrolled compressed blind-deconvolution,” IEEE Trans. Signal Process., vol. 71, pp. 2118–2129, 2023

  46. [47]

    Sparse phase retrieval via truncated amplitude flow,

    G. Wang, L. Zhang, G. B. Giannakis, M. Akçakaya, and J. Chen, “Sparse phase retrieval via truncated amplitude flow,” IEEE Transactions on Signal Processing, vol. 66, no. 2, pp. 479–491, 2017

  47. [48]

    Unfolded algorithms for deep phase retrieval,

    N. Naimipour, S. Khobahi, M. Soltanalian, H. Safavi, and H. C. Shaw, “Unfolded algorithms for deep phase retrieval,” Algorithms, vol. 17, no. 12, p. 587, 2024

  48. [49]

    A fast and provable algorithm for sparse phase retrieval,

    J. F. Cai, Y. Long, R. Wen, and J. Ying, “A fast and provable algorithm for sparse phase retrieval,” in The Twelfth International Conference on Learning Representations, 2024

  49. [50]

    Estimating unknown sparsity in compressed sensing,

    M. Lopes, “Estimating unknown sparsity in compressed sensing,” in International Conference on Machine Learning. PMLR, 2013, pp. 217–225

  50. [51]

    Sparsity order estimation for compressed sensing system using sparse binary sensing matrix,

    S. Thiruppathirajan, S. Sreelal, B. Manoj et al., “Sparsity order estimation for compressed sensing system using sparse binary sensing matrix,” IEEE Access, vol. 10, pp. 33370–33392, 2022