pith. machine review for the scientific record.

arXiv: 1703.07737 · v4 · submitted 2017-03-22 · cs.CV · cs.NE

Recognition: unknown

In Defense of the Triplet Loss for Person Re-Identification

Authors on Pith: no claims yet
classification: cs.CV, cs.NE
keywords: learning, loss, triplet, deep, end-to-end, large, metric, person
original abstract

In the past few years, the field of computer vision has gone through a revolution fueled mainly by the advent of large datasets and the adoption of deep convolutional neural networks for end-to-end learning. The person re-identification subfield is no exception to this. Unfortunately, a prevailing belief in the community seems to be that the triplet loss is inferior to using surrogate losses (classification, verification) followed by a separate metric learning step. We show that, for models trained from scratch as well as pretrained ones, using a variant of the triplet loss to perform end-to-end deep metric learning outperforms most other published methods by a large margin.
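The "variant of the triplet loss" the abstract refers to can be illustrated with a minimal sketch of batch-hard mining: for each anchor in a batch, take the farthest same-identity example as the positive and the closest different-identity example as the negative. The function name, margin value, and NumPy implementation below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
    """Sketch of a batch-hard triplet loss (illustrative, not the paper's code).

    For each anchor, use the hardest (farthest) positive and the hardest
    (closest) negative within the batch, then apply a hinge with a margin.

    embeddings: (N, D) float array; labels: (N,) integer identity labels.
    """
    # Pairwise Euclidean distances; small epsilon keeps sqrt differentiable-ish at 0.
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1) + 1e-12)

    same = labels[:, None] == labels[None, :]           # same-identity mask (incl. self)
    pos_dist = np.where(same, dist, -np.inf).max(axis=1)  # hardest positive per anchor
    neg_dist = np.where(~same, dist, np.inf).min(axis=1)  # hardest negative per anchor

    # Hinge with margin; a soft-margin variant would use log1p(exp(pos - neg)) instead.
    return np.maximum(pos_dist - neg_dist + margin, 0.0).mean()
```

With well-separated identity clusters the loss is zero; with collapsed embeddings it reduces to the margin, which is why mining the hardest pairs inside the batch keeps the gradient signal informative.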

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 10 Pith papers

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. From Global to Local: Rethinking CLIP Feature Aggregation for Person Re-Identification

    cs.CV 2026-04 conditional novelty 7.0

SAGA-ReID improves CLIP-based person ReID by using structured anchor-guided aggregation of patch tokens, delivering up to 10.6-point Rank-1 gains over global pooling on occluded benchmarks.

  2. MELD: Multi-Task Equilibrated Learning Detector for AI-Generated Text

    cs.CL 2026-05 unverdicted novelty 6.0

    MELD is a multi-task AI-text detector using auxiliary heads, uncertainty-weighted losses, EMA distillation, and pairwise ranking that reaches 99.9% TPR at 1% FPR on a new held-out benchmark while remaining competitive...

  3. Prompt-Anchored Vision-Text Distillation for Lifelong Person Re-identification

    cs.CV 2026-05 unverdicted novelty 6.0

    PAD uses prompt distillation on the text side and domain-adaptive EMA prompts on the visual side to balance stability and plasticity in lifelong person re-identification.

  4. ICPR 2026 Competition on Privacy-Preserving Person Re-Identification from Top-View RGB-Depth Camera (TVRID)

    cs.CV 2026-05 accept novelty 6.0

A new benchmark dataset and competition for top-view RGB-Depth person re-identification is released, with competition results showing RGB-based re-identification to be easier than both depth-based and cross-modal retrieval.

  5. Complexity of Linear Regions in Self-supervised Deep ReLU Networks

    cs.LG 2026-04 unverdicted novelty 6.0

    Self-supervised ReLU networks form substantially fewer linear regions than supervised models for comparable accuracy, with contrastive methods rapidly expanding regions and self-distillation consolidating them, enabli...

  6. Thinking Before Matching: A Reinforcement Reasoning Paradigm Towards General Person Re-Identification

    cs.CV 2026-04 unverdicted novelty 6.0

    ReID-R achieves competitive person re-identification performance using chain-of-thought reasoning and reinforcement learning with only 14.3K non-trivial samples, about 20.9% of typical data scales, while providing int...

  7. CraterBench-R: Instance-Level Crater Retrieval for Planetary Scale

    cs.CV 2026-04 unverdicted novelty 6.0

    CraterBench-R is a new retrieval benchmark where self-supervised ViTs with a training-free instance-token aggregation method achieve high accuracy for identifying individual craters while reducing storage needs.

  8. Beyond Pedestrians: Caption-Guided CLIP Framework for High-Difficulty Video-based Person Re-Identification

    cs.CV 2026-04 unverdicted novelty 5.0

    CG-CLIP adds caption-guided memory refinement and token-based spatiotemporal aggregation to CLIP for video person ReID, outperforming SOTA on MARS, iLIDS-VID, SportsVReID and DanceVReID.

  9. On the Properties of Feature Attribution for Supervised Contrastive Learning

    cs.LG 2026-04 unverdicted novelty 4.0

    Neural networks trained via supervised contrastive learning yield feature attributions that are more faithful, less complex, and more continuous than those from cross-entropy trained networks.

  10. Identity-Aware U-Net: Fine-grained Cell Segmentation via Identity-Aware Representation Learning

    cs.CV 2026-04 unverdicted novelty 4.0

    Identity-Aware U-Net augments a U-Net backbone with an auxiliary embedding branch and triplet metric learning to discriminate among cells with near-identical shapes and textures.