pith. machine review for the scientific record.

arxiv: 2605.11475 · v1 · submitted 2026-05-12 · 💻 cs.CV

Recognition: 2 theorem links · Lean Theorem

Deep Probabilistic Unfolding for Quantized Compressive Sensing

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 01:14 UTC · model grok-4.3

classification 💻 cs.CV
keywords quantized compressive sensing · deep unfolding · likelihood gradient projection · Mamba module · probabilistic guidance · image reconstruction · multi-scale feature fusion

The pith

A closed-form likelihood gradient projection respects true quantization physics within deep unfolding for compressive sensing.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper develops a deep probabilistic unfolding model for quantized compressive sensing that improves reconstruction accuracy and efficiency over prior unfolding approaches. It replaces standard L2 projections on measurements with a derived closed-form, numerically stable likelihood gradient projection. This step converts the hard quantization constraint into soft probabilistic guidance that aligns with the actual physics of quantization. The model pairs this projection with a dual-domain Mamba module that captures and fuses multi-scale local and global features from correlated regions. Experiments indicate the resulting reconstructions outperform previous methods.

Core claim

By deriving a closed-form, numerically stable likelihood gradient projection inside an unfolding framework, the model respects the true quantization physics of compressive sensing and converts the hard quantization constraint into soft probabilistic guidance. An efficient dual-domain Mamba module is added to dynamically capture and fuse multi-scale local and global features while modeling interactions between distant but correlated regions, yielding state-of-the-art reconstruction performance.

What carries the argument

The closed-form likelihood gradient projection that supplies soft probabilistic guidance from true quantization physics, together with the dual-domain Mamba module that fuses multi-scale local and global features.

If this is right

  • Reconstructions align more closely with physical quantization effects instead of relying on L2 approximation.
  • Multi-scale correlations across distant image regions are modeled through dynamic feature fusion.
  • Overall accuracy and efficiency improve for quantized compressive sensing tasks.
  • Real-world deployment of quantized compressive sensing becomes more practical due to higher performance.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same closed-form projection technique could be adapted to other inverse problems that involve discretization.
  • Mamba-based dual-domain fusion may transfer to additional image reconstruction settings that require both local detail and long-range context.
  • Iterative stability of the projection supports scaling to deeper unfolding networks.

Load-bearing premise

The closed-form likelihood gradient projection stays accurate and stable across unfolding iterations while the dual-domain Mamba module captures required multi-scale correlations without artifacts.

What would settle it

Whether reconstruction error grows or numerical instability appears when the model is tested across quantization bit depths and different sensing matrices, relative to standard L2-projection baselines.

Figures

Figures reproduced from arXiv: 2605.11475 by Gang Qu, Ping Wang, Siming Zheng, Xin Yuan.

Figure 1. The pipeline of the proposed DPUNet. (a) The overall architecture of the proposed QCS reconstruction model, which is a DUN framework with K stages. (b) The design of the deep denoiser in (a). (c) The proposed dual-domain Mamba block in (b).
Figure 2. The reconstruction results from 1-bit measurements of the CelebA dataset (64×64, the sampling number is 4000).
Figure 3. The reconstruction results from 1-bit measurements of the FFHQ dataset (256 × 256 × 3, the sampling number is 24576).
Figure 4. The reconstruction results of the CelebA dataset from multi-bit measurements (64 × 64 × 3, the sampling number is 4000).
Figure 5. The reconstruction results of the FFHQ dataset from multi-bit measurements (256 × 256 × 3, the sampling number is 24576).
Figure 6. The OOD reconstruction results of the CSet8 dataset from one- and two-bit measurements (256 × 256 × 3, the sampling number is 24576).
read the original abstract

We propose a deep probabilistic unfolding model to address the classical quantized compressive sensing problem that leverages an unfolding framework to enhance the reconstruction accuracy and efficiency. Unlike previous unfolding methods that apply L2 projection to measurements, we derive a closed-form, numerically stable likelihood gradient projection, which allows the model to respect the true quantization physics, turning the hard quantization constraint into a soft probabilistic guidance. Furthermore, an efficient, dual-domain Mamba module is specifically designed to dynamically capture and fuse the multi-scale local and global features, ensuring the interactions between the distant but correlated regions. Extensive experiments demonstrate the state-of-the-art performance of the proposed method over previous works, which is capable of promoting the application of quantized compressive sensing in real life.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The paper proposes a deep probabilistic unfolding model for quantized compressive sensing. It derives a closed-form, numerically stable likelihood gradient projection to replace L2 projections, converting hard quantization constraints into soft probabilistic guidance within the unfolding iterations. A dual-domain Mamba module is introduced to capture and fuse multi-scale local and global features. Extensive experiments are reported to demonstrate state-of-the-art reconstruction performance over prior methods.

Significance. If the closed-form derivation is correct and the projection remains stable, the approach could meaningfully advance quantized CS by better respecting quantization physics rather than relying on heuristic projections, with the Mamba integration offering efficiency gains for multi-scale correlations. This has potential for practical sensing applications if the SOTA claims hold under rigorous validation.

major comments (2)
  1. [Method (derivation of likelihood gradient projection)] The central claim of a closed-form, numerically stable likelihood gradient projection (abstract and method) requires explicit verification that it does not accumulate errors or become unstable across unfolding iterations. No iteration-wise error monitoring, finite-difference gradient comparisons, or ablations on iteration count/bit-depth are described, yet these checks are load-bearing for the claim that the model respects true quantization physics without drift.
  2. [Experiments and ablation studies] The dual-domain Mamba module's ability to capture multi-scale correlations without artifacts or extra distributional assumptions is asserted but not tested via controlled ablations (e.g., vs. standard attention or CNN baselines) in the experiments section; this load-bearing assumption underpins the efficiency and SOTA claims.
minor comments (2)
  1. [Abstract] The abstract claims 'state-of-the-art performance' but should include specific quantitative metrics (e.g., PSNR/SSIM gains) and dataset details for immediate clarity.
  2. [Method] Notation for the projection operator and likelihood gradient should be defined more explicitly with equation numbers to aid reproducibility.
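The iteration-wise monitoring asked for in major comment 1 is straightforward to prototype. A toy sketch, assuming a standard probit-style 1-bit likelihood rather than the paper's exact Mills-ratio-based form (all names here are illustrative):

```python
import math
import numpy as np

def sign_loglik_grad(z, A, y, sigma, eps=1e-12):
    """Gradient of sum_i log Phi(y_i (Az)_i / sigma), the usual probit-style
    likelihood for 1-bit (sign) measurements y_i in {-1, +1}."""
    t = y * (A @ z) / sigma
    pdf = np.exp(-0.5 * t ** 2) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(t))
    return A.T @ (y * pdf / (sigma * np.maximum(cdf, eps)))

def unfold_with_monitor(A, y, sigma=0.5, stages=50, step=0.002):
    """K-stage toy loop that records per-stage gradient norms, the kind of
    iteration-wise diagnostic a stability claim should report."""
    z = np.zeros(A.shape[1])
    grad_norms = []
    for _ in range(stages):
        g = sign_loglik_grad(z, A, y, sigma)
        z = z + step * g
        grad_norms.append(float(np.linalg.norm(g)))
    return z, grad_norms
```

Plotting `grad_norms` across stages (and repeating across bit depths for the multi-bit case) would directly show whether errors accumulate or the projection drifts.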

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and insightful comments. We address each major point below and will revise the manuscript to incorporate additional verification and ablation studies as outlined.

read point-by-point responses
  1. Referee: [Method (derivation of likelihood gradient projection)] The central claim of a closed-form, numerically stable likelihood gradient projection (abstract and method) requires explicit verification that it does not accumulate errors or become unstable across unfolding iterations. No iteration-wise error monitoring, finite-difference gradient comparisons, or ablations on iteration count/bit-depth are described, yet these checks are load-bearing for the claim that the model respects true quantization physics without drift.

    Authors: We thank the referee for emphasizing the importance of empirical verification for the stability claim. The closed-form likelihood gradient projection is derived to ensure numerical stability by replacing direct L2 operations with a bounded probabilistic update that respects quantization intervals without matrix inversion. While the paper presents the derivation and overall performance, we acknowledge the absence of the requested diagnostics. In the revised manuscript, we will add iteration-wise error monitoring, finite-difference gradient comparisons, and ablations across iteration counts and bit-depths to demonstrate that errors do not accumulate and the projection remains faithful to quantization physics. revision: yes

  2. Referee: [Experiments and ablation studies] The dual-domain Mamba module's ability to capture multi-scale correlations without artifacts or extra distributional assumptions is asserted but not tested via controlled ablations (e.g., vs. standard attention or CNN baselines) in the experiments section; this load-bearing assumption underpins the efficiency and SOTA claims.

    Authors: We agree that controlled ablations are essential to validate the dual-domain Mamba module's contributions. The current experiments demonstrate overall SOTA results, but we will strengthen the manuscript by adding targeted ablations in the revised version: replacing the Mamba blocks with standard attention and CNN baselines while keeping other components fixed, and reporting reconstruction quality, runtime, and parameter efficiency. Feature map visualizations will also be included to illustrate multi-scale fusion without artifacts or additional assumptions. revision: yes

Circularity Check

0 steps flagged

No circularity: closed-form derivation starts from quantization likelihood

full rationale

The paper's central step is deriving a closed-form numerically stable likelihood gradient projection directly from the quantization likelihood function, which converts the hard constraint into soft probabilistic guidance inside the unfolding iterations. This is presented as a first-principles derivation rather than a fit to data or a self-citation. No equations reduce by construction to fitted parameters, prior self-cited results, or renamed empirical patterns. The dual-domain Mamba module is an architectural choice for feature fusion, not a load-bearing mathematical claim that collapses to inputs. The derivation chain remains independent of the target reconstruction performance, consistent with the reader's assessment of no obvious reduction.

Axiom & Free-Parameter Ledger

1 free parameter · 1 axiom · 0 invented entities

The central claim rests on the validity of modeling quantization via a likelihood function whose gradient can be computed in closed form and remains stable inside unfolding iterations; the Mamba module adds learned parameters whose behavior depends on training data.

free parameters (1)
  • neural network weights and Mamba parameters
    Learned during end-to-end training; their values are not fixed by the derivation.
axioms (1)
  • domain assumption: The quantization process admits a likelihood function whose gradient projection is closed-form and numerically stable.
    Invoked to replace L2 projection with the proposed soft probabilistic guidance.

pith-pipeline@v0.9.0 · 5413 in / 1246 out tokens · 52389 ms · 2026-05-13T01:14:22.662292+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

51 extracted references · 51 canonical work pages · 2 internal anchors
