pith. machine review for the scientific record.

Cycle consistency as reward: Learning image-text alignment without human preferences

3 Pith papers cite this work. Polarity classification is still indexing.

citation-role summary

  • method · 1

citation-polarity summary

fields

  • cs.CV · 3

years

  • 2026 · 3

roles

  • method · 1

polarities

  • use method · 1

representative citing papers

Bias at the End of the Score

cs.CV · 2026-04-14 · unverdicted · novelty 6.0

Reward models used as quality scorers in text-to-image generation encode demographic biases that cause reward-guided training to sexualize female subjects, reinforce stereotypes, and reduce diversity.

citing papers explorer

Showing 3 of 3 citing papers.

  • Back into Plato's Cave: Examining Cross-modal Representational Convergence at Scale · cs.CV · 2026-04-20 · unverdicted · none · ref 3

    Evidence for cross-modal representational convergence weakens substantially at scale and in realistic many-to-many settings, indicating models learn rich but distinct representations.

  • Bias at the End of the Score · cs.CV · 2026-04-14 · unverdicted · none · ref 3

    Reward models used as quality scorers in text-to-image generation encode demographic biases that cause reward-guided training to sexualize female subjects, reinforce stereotypes, and reduce diversity.

  • VERTIGO: Visual Preference Optimization for Cinematic Camera Trajectory Generation · cs.CV · 2026-04-02 · conditional · none · ref 2

    VERTIGO post-trains camera trajectory generators with visual preference signals from Unity-rendered previews scored by a cinematically fine-tuned VLM, cutting character off-screen rates from 38% to near zero while improving framing and prompt adherence.