pith · machine review for the scientific record

arxiv: 2605.12168 · v1 · submitted 2026-05-12 · 💻 cs.LG

Recognition: 2 Lean theorem links

On What We Can Learn from Low-Resolution Data

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 05:22 UTC · model grok-4.3

classification 💻 cs.LG
keywords low-resolution data · high-resolution data · Kullback-Leibler divergence · data scarcity · machine learning · vision transformer · convolutional neural network

The pith

Low-resolution data improves model performance on high-resolution tasks when high-resolution samples are scarce.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper investigates whether low-resolution data retains useful information for training models that will be evaluated on high-resolution inputs. It develops a theoretical analysis using the Kullback-Leibler divergence to characterize how the influence of data points changes with resolution and derives bounds on the relative contributions of high- and low-resolution observations based on information lost under downsampling. Empirically, experiments with a vision transformer and a convolutional neural network show that adding low-resolution data to the training set consistently improves results specifically when high-resolution data is limited. This addresses real-world constraints in domains where high-resolution collection or sharing is restricted by storage, privacy, or device limitations.
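
For reference, the divergence the analysis is built on is the standard Kullback-Leibler divergence; a minimal statement in our notation, not a formula taken from the paper:

    \[
      D_{\mathrm{KL}}(P \,\|\, Q) \;=\; \int p(x)\,\log\frac{p(x)}{q(x)}\,dx \;\ge\; 0,
    \]

with equality if and only if P = Q almost everywhere.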

Core claim

Low-resolution observations from the same distribution contribute positively to training even when the final model is tested on high-resolution inputs, with their relative value bounded by Kullback-Leibler divergence measures of influence change and information loss under downsampling, leading to measurable performance gains when high-resolution data is scarce.

What carries the argument

A Kullback-Leibler divergence measure of how a data point's influence on the trained model changes with its resolution, used to derive bounds relating high- and low-resolution contributions to the information lost under downsampling.
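
The paper's exact formulation is not reproduced on this page; one standard way to write such an influence measure (our notation, an illustrative sketch rather than the paper's definition) is the KL divergence between the model distributions trained with and without the datapoint at resolution r:

    \[
      I(z_r) \;=\; D_{\mathrm{KL}}\!\big(\, p(\theta \mid \mathcal{D} \cup \{z_r\}) \,\big\|\, p(\theta \mid \mathcal{D}) \,\big),
    \]

with the derived bounds then relating the ratio I(z_low) / I(z_high) to the information lost under downsampling.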

If this is right

  • Training sets can be usefully expanded with low-resolution samples to raise high-resolution accuracy when high-resolution data volume is limited (a minimal sketch of such a mixed set follows this list).
  • Data collection in constrained environments can favor greater volume at lower resolution without complete loss of training value.
  • The performance benefit from low-resolution augmentation holds across architectures including vision transformers and convolutional networks.
  • Theoretical bounds on relative contributions can guide decisions on which low-resolution samples to include in a mixed-resolution training set.
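
On the first point, here is a minimal sketch of assembling a mixed-resolution training set. The shapes, the downsampling factor, and the bilinear resampling are illustrative assumptions, not the paper's protocol.

    # Minimal sketch: combine scarce high-resolution samples with abundant
    # low-resolution ones (hypothetical shapes and factors, not the paper's setup).
    import torch
    import torch.nn.functional as F

    def make_mixed_resolution_set(x_high, x_low_source, down_factor=4):
        """x_high: (N_h, C, H, W) scarce high-res images.
        x_low_source: (N_l, C, H, W) images only available at low resolution,
        simulated here by downsampling and then upsampling back to (H, W)."""
        h, w = x_low_source.shape[-2:]
        # Simulate low-resolution acquisition (e.g., a bandwidth-limited device).
        x_small = F.interpolate(x_low_source,
                                size=(h // down_factor, w // down_factor),
                                mode='bilinear', align_corners=False)
        # Upsample back so both populations share one input geometry.
        x_low = F.interpolate(x_small, size=(h, w),
                              mode='bilinear', align_corners=False)
        return torch.cat([x_high, x_low], dim=0)

    # Usage: 100 high-res plus 1000 low-res CIFAR10-sized tensors.
    mixed = make_mixed_resolution_set(torch.randn(100, 3, 32, 32),
                                      torch.randn(1000, 3, 32, 32))
    print(mixed.shape)  # torch.Size([1100, 3, 32, 32])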

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • In settings with data from multiple devices or institutions, this could support mixing resolutions more effectively if the shared-distribution premise holds.
  • The KL-based bounds might be adapted to other modalities such as time-series or audio where downsampling is routine.
  • Training procedures could incorporate the derived influence measures to dynamically weight low-resolution samples during optimization (one way this could look is sketched below).
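
On the last point, a minimal sketch of influence-based weighting, assuming a scalar per-sample weight stands in for the information a low-resolution sample retains; the constant weight here is a placeholder, not the paper's derived measure.

    # Minimal sketch: down-weight low-resolution samples in the loss by a
    # scalar proxy for retained information (placeholder rule, not the
    # paper's derived influence measure).
    import torch
    import torch.nn.functional as F

    def weighted_loss(logits, targets, is_low_res, low_res_weight=0.5):
        """Per-sample cross-entropy, down-weighting low-resolution samples.
        is_low_res: bool tensor of shape (N,) marking upsampled samples."""
        per_sample = F.cross_entropy(logits, targets, reduction='none')
        weights = torch.where(is_low_res,
                              torch.full_like(per_sample, low_res_weight),
                              torch.ones_like(per_sample))
        return (weights * per_sample).sum() / weights.sum()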

Load-bearing premise

The low-resolution observations come from the same underlying distribution as the high-resolution targets, and the KL-based influence measure accurately reflects practical information loss under downsampling.
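
One standard way to make the premise precise (our notation, not the paper's): if g is the deterministic downsampling map and the high-resolution data follow P, then the low-resolution data follow the pushforward g_#P, and the data-processing inequality guarantees that divergences can only shrink under g:

    \[
      D_{\mathrm{KL}}\big(\, g_{\#}P \,\|\, g_{\#}Q \,\big) \;\le\; D_{\mathrm{KL}}(P \,\|\, Q).
    \]

The gap between the two sides is one formalization of the information lost under downsampling.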

What would settle it

An experiment in which adding low-resolution data from the same distribution fails to improve, or degrades, performance on a high-resolution test set, or in which measured gains fall outside the theoretical bounds derived from the KL analysis.

Figures

Figures reproduced from arXiv: 2605.12168 by Hiba Nassar, Niels Henrik Pontoppidan, Theresa Dahl Frehr, Tommy Sonne Alstrøm.

Figure 1: Illustration of the formation of a mixed-resolution dataset. A central party (the requester) …
Figure 2: Simulation of bounds as a function of progressively removed high-frequency content. Error …
Figure 3: Left: variance of low-dimensional representations as a function of image resolution, computed from PCA projections of progressively downsampled CIFAR10 images. Right: synthetic two-class data separated using linear discriminant analysis; marker size given by the magnitude of the change in model parameters when including a datapoint in the training set. The black line is the decision boundary given by …
Figure 4: Effect of adding low-resolution data under varying levels of high-resolution scarcity.
Figure 5: Trade-off between accuracy gain and storage cost for the Size experiment compared to the …
Figure 6: Simulation of bounds as a function of progressively removed high-frequency content. Error …
Figure 7: (a) DWT of the time series seen in (c); (b) reconstructions from only a single band, showing how the DWT ensures the additive condition in eq. (64) is satisfied.
Figure 8: Illustration of the effect of removing detail-coefficient content using DWT on an image.
Figure 2: Levels refer to the number of frequency bands removed in the scalogram generated from …
Figure 9: Relative error of the variance approximation in Proposition 4 as a function of input …
Figure 10: Tightness of the KL-ratio bounds from Proposition …
Figure 11: Tightness of the KL-difference bounds from Proposition …
read the original abstract

Artificial intelligence systems typically rely on large, centrally collected datasets, a premise that does not hold in many real-world domains such as healthcare and public institutions. In these settings, data sharing is often constrained by storage, privacy, or resource limitations. For example, small wearable devices may lack the bandwidth or energy capacity needed to store and transmit high-resolution data, leading to aggregation during data collection and thus a loss of information. As a result, datasets collected from different sources may consist of a mixture of high- and low-resolution samples. Despite the prevalence of this setting, it remains unclear how informative low-resolution data is when models are ultimately evaluated on high-resolution inputs. We provide a theoretical analysis based on the Kullback-Leibler divergence that characterises how the influence of a datapoint changes with resolution, and derive bounds that relate the relative contribution of high- and low-resolution observations to the information lost under downsampling. To support this analysis, we empirically demonstrate, using both a vision transformer and a convolutional neural network, that adding low-resolution data to the training set consistently improves performance when high-resolution data is scarce.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

2 major / 2 minor

Summary. The manuscript claims that low-resolution data remains informative for models ultimately evaluated on high-resolution inputs. It provides a KL-divergence analysis characterizing how a datapoint's influence varies with resolution, derives bounds relating the relative contributions of high- and low-resolution samples to downsampling-induced information loss, and empirically shows on both a vision transformer and a CNN that adding low-resolution samples consistently improves performance when high-resolution data is scarce.

Significance. If the theoretical bounds hold under the stated assumptions and the empirical gains are robust, the work addresses a practically relevant setting in domains such as healthcare and edge computing where mixed-resolution data arises from storage, privacy, or bandwidth constraints. The combination of a divergence-based characterization with experiments on standard architectures (ViT, CNN) offers both conceptual insight and actionable guidance for training under data scarcity.

major comments (2)
  1. [§3] §3 (theoretical analysis): the KL-based influence measure and subsequent bounds are derived under the assumption that low-resolution observations are drawn from the same underlying measure as the high-resolution targets (i.e., a simple marginal). The manuscript does not analyze the effect of a deterministic many-to-one downsampling operator inducing a pushforward measure, nor does it show that the scalar KL term tracks the scale-specific features a neural network can still exploit; this assumption is load-bearing for the claim that the derived bounds quantify usable training signal.
  2. [Experiments] Experimental section and associated tables: the reported consistent improvements lack explicit statements of the number of independent runs, the precise rule for selecting or excluding low-resolution samples, and whether error bars or statistical tests support the 'consistently improves' statement across different scarcity levels; without these controls it is unclear whether post-hoc choices affect the central empirical claim.
minor comments (2)
  1. [§2] Notation for the downsampling operator and the induced distributions could be introduced earlier and used consistently to avoid ambiguity when relating the KL term to practical information loss.
  2. [Abstract] The abstract states the architectures used but the main text would benefit from a brief reminder of the exact ViT and CNN variants and the resolution pairs tested.

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive and detailed report. We address each major comment point-by-point below, indicating where revisions will be made.

read point-by-point responses
  1. Referee: [§3] §3 (theoretical analysis): the KL-based influence measure and subsequent bounds are derived under the assumption that low-resolution observations are drawn from the same underlying measure as the high-resolution targets (i.e., a simple marginal). The manuscript does not analyze the effect of a deterministic many-to-one downsampling operator inducing a pushforward measure, nor does it show that the scalar KL term tracks the scale-specific features a neural network can still exploit; this assumption is load-bearing for the claim that the derived bounds quantify usable training signal.

    Authors: We thank the referee for highlighting this foundational aspect of the analysis. The low-resolution observations are generated by applying a deterministic downsampling operator to high-resolution samples, which by definition induces the pushforward measure; our reference to a 'simple marginal' is intended to denote exactly this pushforward distribution of the low-resolution data. We agree that the manuscript would benefit from an explicit discussion of this equivalence and of the relationship between the scalar KL term and the features retained at lower resolution. In the revision we will add a short subsection in §3 that (i) formally identifies the low-resolution distribution as the pushforward measure and (ii) clarifies that the derived bounds quantify the relative contribution of this marginal without claiming to isolate scale-specific features; the empirical results on ViT and CNN are then presented as evidence that the retained signal remains usable by standard architectures. This is a partial revision: the core KL bounds themselves are unchanged, but their interpretation and grounding are strengthened. revision: partial

  2. Referee: [Experiments] Experimental section and associated tables: the reported consistent improvements lack explicit statements of the number of independent runs, the precise rule for selecting or excluding low-resolution samples, and whether error bars or statistical tests support the 'consistently improves' statement across different scarcity levels; without these controls it is unclear whether post-hoc choices affect the central empirical claim.

    Authors: We agree that these experimental details are necessary for reproducibility and to substantiate the central claim. In the revised manuscript we will add: (1) an explicit statement that all reported results are averages over 5 independent runs with distinct random seeds; (2) the precise selection rule—low-resolution samples are drawn uniformly at random from the low-resolution pool to reach the target scarcity ratio, with no post-hoc exclusion or cherry-picking; and (3) error bars showing standard deviation across runs together with paired t-test p-values confirming that the observed improvements are statistically significant (p < 0.05) at the majority of scarcity levels examined. These additions will be incorporated into the experimental section and the associated tables/figures. revision: yes
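
    A minimal sketch of the statistical check described above (five seeds, paired t-test); the accuracy values are placeholders, not results from the paper.

        # Minimal sketch: paired t-test over seeds comparing high-res-only
        # training against mixed-resolution training (placeholder accuracies).
        from scipy.stats import ttest_rel

        acc_high_only = [0.712, 0.708, 0.715, 0.709, 0.713]  # 5 seeds
        acc_mixed     = [0.731, 0.728, 0.735, 0.726, 0.733]  # same 5 seeds

        stat, pvalue = ttest_rel(acc_mixed, acc_high_only)
        print(f"t = {stat:.2f}, p = {pvalue:.4f}")  # significant if p < 0.05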

Circularity Check

0 steps flagged

No circularity: theory uses standard KL properties; empirical results are independent validation

full rationale

The paper derives bounds on high- versus low-resolution contributions via the KL divergence between distributions under downsampling, starting from the standard definition of KL and the assumption that low-res samples are pushforwards of the same underlying measure. This is not self-referential: the influence characterization follows directly from the KL formula, without fitting to the performance claim or importing conclusions from the authors' prior work. The empirical section on ViT/CNN training then tests the predicted improvement when high-res data is scarce, rather than re-deriving the same quantity from the fitted parameters. No step reduces by construction to its inputs, and the derivation chain is validated against external benchmarks rather than against itself.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The work relies on standard information-theoretic properties of KL divergence and conventional neural network training without introducing fitted parameters, new axioms, or postulated entities beyond those already established in the literature.

axioms (1)
  • Domain assumption: Kullback-Leibler divergence quantifies information loss under downsampling in a manner relevant to model influence. Invoked to characterize how datapoint influence changes with resolution.

pith-pipeline@v0.9.0 · 5504 in / 1114 out tokens · 41821 ms · 2026-05-13T05:22:16.603806+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
  • matches: the paper's claim is directly supported by a theorem in the formal canon.
  • supports: the theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
  • extends: the paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
  • uses: the paper appears to rely on the theorem as machinery.
  • contradicts: the paper's claim conflicts with a theorem or certificate in the canon.
  • unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

138 extracted references · 138 canonical work pages · 2 internal anchors
