pith. machine review for the scientific record.

arxiv: 2605.10009 · v1 · submitted 2026-05-11 · 💻 cs.CV

Recognition: no theorem link

Hystar: Hypernetwork-driven Style-adaptive Retrieval via Dynamic SVD Modulation

Authors on Pith: no claims yet

Pith reviewed 2026-05-12 03:39 UTC · model grok-4.3

classification 💻 cs.CV
keywords query-based image retrieval · hypernetwork · style adaptation · singular value perturbation · contrastive loss · vision-language models · parameter-efficient adaptation · dynamic modulation

The pith

A hypernetwork generates per-query singular-value perturbations in attention layers to adapt vision-language models to diverse query styles for improved image retrieval.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents Hystar as a lightweight way to adapt image retrieval systems to stylistically varied queries such as sketches, artworks, or low-resolution images. It uses a hypernetwork to create small, input-specific changes to the singular values inside attention layers while applying fixed offsets to MLP layers for overall stability. A custom contrastive loss called StyleNCE helps the model focus on hard negatives that cross style boundaries. If correct, this would let retrieval tools handle real-world input diversity more reliably without retraining entire models for each new style or style combination.

Core claim

Hystar employs a hypernetwork to generate singular-value perturbations for attention layers, enabling per-input style adaptation, and pairs this with static singular-value offsets on MLP layers for stability. It further incorporates StyleNCE, an optimal-transport-weighted contrastive loss, to mitigate semantic confusions, yielding superior performance on multi-style retrieval tasks compared to baselines.

What carries the argument

Hypernetwork that generates dynamic singular-value perturbations ΔS for attention layers to enable flexible per-input style adaptation while static offsets maintain cross-style stability.
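The mechanism above can be sketched numerically. Nothing below comes from the paper's implementation: the layer size, the hypernetwork's form (a single linear map `H` from a style feature to a ΔS vector), and the perturbation scale are all illustrative assumptions; the point is only that the hypernetwork moves the singular values while the singular vectors U and V stay frozen.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen weight of a (toy) attention projection layer.
W = rng.standard_normal((8, 8))
U, S, Vt = np.linalg.svd(W)

# Hypothetical hypernetwork: a single linear map from a style feature
# (e.g. a DINOv2 embedding of the query) to a singular-value increment.
# The real hypernetwork's architecture is not specified in this summary.
style_dim = 16
H = 0.01 * rng.standard_normal((S.size, style_dim))

def modulated_weight(style_feat):
    """Rebuild the layer weight with a per-query singular-value perturbation,
    leaving the singular vectors U and V untouched."""
    delta_S = H @ style_feat  # dynamic increment, one entry per singular value
    return U @ np.diag(S + delta_S) @ Vt

style_feat = rng.standard_normal(style_dim)
W_adapted = modulated_weight(style_feat)
```

Because only the diagonal spectrum moves, the adapted weight stays close to the frozen one, which is where the parameter-efficiency and stability claims would come from.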

If this is right

  • Outperforms strong baselines on multi-style retrieval and cross-style classification benchmarks.
  • Achieves state-of-the-art results while remaining parameter-efficient.
  • Maintains stable performance across different query styles.
  • Reduces semantic confusions between styles through the optimal-transport-weighted contrastive loss.
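The last bullet can be made concrete with a toy loss. StyleNCE's actual optimal-transport weighting is not spelled out in this summary, so the sketch below substitutes a simpler stand-in: negatives are re-weighted by a softmax over their similarity to the query (sharpness `gamma`), so harder negatives dominate the denominator — the qualitative behavior the paper attributes to StyleNCE.

```python
import numpy as np

def weighted_info_nce(q, pos, negs, tau=0.1, gamma=5.0):
    """InfoNCE whose negatives are re-weighted toward hard examples.

    Stand-in for StyleNCE: the paper uses optimal-transport weights, which
    are not given here, so weights are a softmax over negative-query
    similarity instead. gamma=0 recovers plain InfoNCE.
    """
    def cos(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    s_pos = cos(q, pos)
    s_neg = np.array([cos(q, n) for n in negs])

    # Hard-negative weights, normalized to sum to len(negs).
    w = np.exp(gamma * s_neg)
    w = w / w.sum() * len(negs)

    logits = np.concatenate(([s_pos], s_neg)) / tau
    weights = np.concatenate(([1.0], w))
    m = logits.max()
    # Weighted log-sum-exp denominator minus the positive logit.
    return float(np.log(np.sum(weights * np.exp(logits - m))) + m - logits[0])
```

With one hard negative sitting closer to the query than the positive and one easy negative far away, raising `gamma` increases the loss, i.e. the gradient signal concentrates on the hard cross-style negative rather than the easy one.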

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The approach could extend to other vision-language tasks where input style shifts occur, such as zero-shot classification or captioning.
  • Focusing modulation on singular values in attention layers may prove sufficient for many domain adaptations, lowering the cost of updating large models.
  • If the hypernetwork generalizes to novel styles, retrieval systems could operate with less style-specific training data.

Load-bearing premise

The hypernetwork-generated singular-value perturbations will generalize effectively to truly unseen query styles without causing instability or overfitting.

What would settle it

Measuring retrieval accuracy on a benchmark of query styles completely withheld from training data and checking whether performance stays near the reported state-of-the-art levels on seen styles.
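As a protocol sketch, that experiment reduces to computing per-style Top-1 retrieval accuracy on held-out styles. The metric choice (cosine similarity over L2-normalized embeddings, Top-1, shared class labels between query and gallery) is an assumption of this sketch, not taken from the paper.

```python
import numpy as np

def top1_retrieval_accuracy(query_emb, gallery_emb, query_labels, gallery_labels):
    """Fraction of queries whose nearest gallery item shares their class."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    nearest = (q @ g.T).argmax(axis=1)  # cosine similarity, Top-1
    return float((gallery_labels[nearest] == query_labels).mean())

# Run once per held-out style, with the evaluation styles strictly disjoint
# from the hypernetwork's training styles, then compare the unseen-style
# numbers against the reported seen-style scores.
```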

Figures

Figures reproduced from arXiv: 2605.10009 by Boxuan Li, Chenghao Xu, Jiexi Yan, Yujia Cai.

Figure 1. Overview of the Hystar framework. For multi-style queries, style features are first extracted using DINOv2. These features are fed into a hypernetwork to produce dynamic singular-value increments for attention layers, enabling style-conditioned modulation of the feature encoder. Additionally, static singular-value increments are applied to the MLP layers, serving as a fixed parameter modulation. Together, … view at source ↗

Figure 2. Motivation behind StyleNCE. In multi-style queries, most negatives lie far from the query and provide little training signal, whereas hard negatives, though fewer, may be closer to the query than positives due to style-induced abstraction. StyleNCE focuses on these hard negatives, preventing gradients from being dominated by easy samples and ensuring effective optimization for style-diverse retrieval. Cros… view at source ↗

Figure 3. t-SNE visualization of feature embeddings derived by different methods on the DSR… view at source ↗

Figure 4. Examples of images with extreme styles. … view at source ↗

Figure 5. Style-aware feature distribution learned by Hystar. Hystar maps images from different styles (Art, Sketch, and their Mixture) into a coherent embedding space. As shown in the t-SNE plot, the mixed-style samples (blue) form a transitional manifold between Art (black) and Sketch (red), demonstrating that Hystar's representations smoothly capture style blending while maintaining semantic alignment. For each… view at source ↗

Figure 6. Effect of the positive–negative balance coefficient… view at source ↗

Figure 7. Sensitivity analysis of the hard-negative weight… view at source ↗

Figure 8. Qualitative retrieval examples on the DSR dataset. We illustrate three common error types… view at source ↗

Figure 9. Qualitative Top-10 retrieval results on the DomainNet dataset across unseen styles (Clipart, … view at source ↗

Figure 10. Qualitative Top-10 retrieval results on the DomainNet dataset across unseen styles (Paint… view at source ↗

Figure 11. Qualitative Top-10 retrieval results on the DomainNet dataset across unseen styles (In… view at source ↗
read the original abstract

Query-based image retrieval (QBIR) requires retrieving relevant images given diverse and often stylistically heterogeneous queries, such as sketches, artworks, or low-resolution previews. While large-scale vision-language representation models (VLRMs) like CLIP offer strong zero-shot retrieval performance, they struggle with distribution shifts caused by unseen query styles. In this paper, we propose the Hypernetwork-driven Style-adaptive Retrieval (Hystar), a lightweight framework that dynamically adapts model weights to each query's style. Hystar employs a hypernetwork to generate singular-value perturbations ($\Delta S$) for attention layers, enabling flexible per-input adaptation, while static singular-value offsets on MLP layers ensure cross-style stability. To better handle semantic confusions across styles, we design StyleNCE as part of Hystar, an optimal-transport-weighted contrastive loss that emphasizes hard cross-style negatives. Extensive experiments on multi-style retrieval and cross-style classification benchmarks demonstrate that Hystar consistently outperforms strong baselines, achieving state-of-the-art performance while being parameter-efficient and stable across styles.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 0 minor

Summary. The paper proposes Hystar, a lightweight framework for query-based image retrieval under stylistic distribution shifts. It uses a hypernetwork to dynamically generate singular-value perturbations (ΔS) for attention layers per query style, applies static singular-value offsets to MLP layers for stability, and introduces StyleNCE, an optimal-transport-weighted contrastive loss to emphasize hard cross-style negatives. The manuscript claims that this yields state-of-the-art performance on multi-style retrieval and cross-style classification benchmarks while remaining parameter-efficient and stable across styles.

Significance. If the central claims hold after verification, the work would be significant for practical QBIR systems that must handle heterogeneous query styles (sketches, artworks, low-res previews) without full model retraining. The hypernetwork-driven SVD modulation offers a parameter-efficient adaptation route for VLRMs such as CLIP, and StyleNCE provides a targeted contrastive objective for cross-style semantic confusion. The combination is novel and directly addresses a recognized limitation of zero-shot VLRM retrieval.

major comments (3)
  1. [Abstract] Abstract: the central claim that the hypernetwork-generated ΔS perturbations enable effective adaptation to 'unseen query styles' and 'cross-style stability' is load-bearing, yet the abstract supplies no information on style diversity, train/test partitioning, or whether test styles are strictly disjoint from the hypernetwork's training distribution. Without these details the reported gains could arise from style memorization rather than the dynamic mechanism.
  2. [Abstract] Abstract and experimental description: no quantitative metrics, baseline details, ablation studies, or error analysis are supplied to support the SOTA and stability assertions. This absence prevents assessment of whether the dynamic SVD component, the static MLP offsets, or StyleNCE is responsible for the claimed improvements.
  3. [Method] Method (hypernetwork and StyleNCE sections): the generalization assumption that hypernetwork-generated singular-value perturbations will remain stable and useful for truly unseen styles lacks any described validation protocol, failure-case analysis, or ablation that removes style overlap. This is a load-bearing point for the parameter-efficiency and stability claims.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive feedback. We address each major comment point by point below and agree to revisions that strengthen the abstract and method descriptions without altering the core contributions.

read point-by-point responses
  1. Referee: [Abstract] Abstract: the central claim that the hypernetwork-generated ΔS perturbations enable effective adaptation to 'unseen query styles' and 'cross-style stability' is load-bearing, yet the abstract supplies no information on style diversity, train/test partitioning, or whether test styles are strictly disjoint from the hypernetwork's training distribution. Without these details the reported gains could arise from style memorization rather than the dynamic mechanism.

    Authors: We agree that the abstract should explicitly address this to rule out memorization concerns. The manuscript details that the hypernetwork is trained exclusively on a diverse subset of styles drawn from the training partition, with all evaluation performed on held-out styles that share no overlap with the training styles. This disjoint split is central to the cross-style benchmarks. We will revise the abstract to include a brief statement on style diversity and the strict train/test style disjointness. revision: yes

  2. Referee: [Abstract] Abstract and experimental description: no quantitative metrics, baseline details, ablation studies, or error analysis are supplied to support the SOTA and stability assertions. This absence prevents assessment of whether the dynamic SVD component, the static MLP offsets, or StyleNCE is responsible for the claimed improvements.

    Authors: The full experimental section contains quantitative tables with metrics, multiple baselines, component-wise ablations (isolating dynamic SVD, static MLP offsets, and StyleNCE), and stability/error analysis across styles. The abstract, however, remains high-level. We will update the abstract to reference key quantitative gains and improve cross-references in the experimental description so that the contribution of each element is clearer. revision: partial

  3. Referee: [Method] Method (hypernetwork and StyleNCE sections): the generalization assumption that hypernetwork-generated singular-value perturbations will remain stable and useful for truly unseen styles lacks any described validation protocol, failure-case analysis, or ablation that removes style overlap. This is a load-bearing point for the parameter-efficiency and stability claims.

    Authors: The method section describes the hypernetwork training protocol on style subsets and reports generalization results on disjoint test styles, with existing ablations on the SVD modulation. We acknowledge that a more explicit validation protocol, dedicated failure-case analysis, and an ablation enforcing complete removal of style overlap would further substantiate the stability and efficiency claims. We will add these elements to the revised method section. revision: yes

Circularity Check

0 steps flagged

No significant circularity; proposal is self-contained

full rationale

The paper introduces Hystar as a novel architecture combining a hypernetwork for generating per-query singular-value perturbations ΔS in attention layers, static offsets on MLPs, and a new StyleNCE loss. No equations or claims reduce by construction to fitted inputs, self-citations, or renamed prior results. The central claims rest on empirical benchmarks rather than definitional equivalence or load-bearing self-citation chains. The derivation chain is independent of its own outputs.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 2 invented entities

Review based solely on abstract; full details unavailable. The approach assumes that SVD-based modulation can capture style variations and that the hypernetwork learns useful perturbations from limited style data.

invented entities (2)
  • Hypernetwork for generating ΔS no independent evidence
    purpose: Produce per-query singular-value perturbations for attention layers
    Core adaptation mechanism described in abstract
  • StyleNCE loss no independent evidence
    purpose: Emphasize hard cross-style negatives via optimal transport weighting
    New contrastive loss component introduced for semantic confusion handling

pith-pipeline@v0.9.0 · 5486 in / 1078 out tokens · 36229 ms · 2026-05-12T03:39:50.410630+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

32 extracted references · 32 canonical work pages · 5 internal anchors
