pith. machine review for the scientific record.

arxiv: 2506.20588 · v2 · submitted 2025-06-25 · 💻 cs.CV

Recognition: unknown

TRIM: A Self-Supervised Video Summarization Framework Maximizing Temporal Relative Information and Representativeness

Authors on Pith: no claims yet
classification: 💻 cs.CV
keywords: video summarization, supervised, architectures, datasets, efficient, framework, information
read the original abstract

The increasing ubiquity of video content, and the corresponding demand for efficient access to meaningful information, have made video summarization and video highlighting a vital research area. However, many state-of-the-art methods depend heavily either on supervised annotations or on attention-based models, which are computationally expensive and brittle under the distribution shifts that hinder cross-dataset applicability. We introduce a self-supervised video summarization model that captures both spatial and temporal dependencies without the overhead of attention, RNNs, or transformers. Our framework integrates a novel set of Markov-process-driven loss metrics and a two-stage self-supervised learning paradigm that ensures both performance and efficiency. Our approach achieves state-of-the-art performance on the SumMe and TVSum datasets, outperforming all existing unsupervised methods, and rivals the best supervised models, demonstrating the potential of efficient, annotation-free architectures. This paves the way for more generalizable video summarization techniques and challenges the prevailing reliance on complex architectures.
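The abstract does not spell out the loss formulas, but the two properties it names, representativeness and temporal information, can be illustrated with a toy annotation-free objective. The sketch below is purely hypothetical: the function `summary_losses` and both terms are invented for illustration and are not the paper's actual losses. It scores each frame's importance and computes (a) a representativeness term pulling selected frames toward the video's weighted centroid and (b) a temporal term penalizing redundant consecutive high-score frames.

```python
import numpy as np

def summary_losses(features, scores):
    """Toy annotation-free summarization losses (illustrative only).

    features: (T, D) array of per-frame embeddings.
    scores:   (T,) per-frame importance weights in [0, 1].
    """
    # Representativeness: the score-weighted centroid should cover
    # the whole video, so frames far from it raise this loss.
    centroid = (scores[:, None] * features).sum(0) / (scores.sum() + 1e-8)
    repr_loss = np.mean(np.linalg.norm(features - centroid, axis=1))

    # Temporal term: penalize pairs of consecutive frames that are both
    # highly scored and highly similar, discouraging redundant summaries.
    sim = np.sum(features[1:] * features[:-1], axis=1)  # cosine (unit vecs)
    temporal_loss = np.mean(scores[1:] * scores[:-1] * sim)

    return repr_loss, temporal_loss

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 4))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)  # unit-normalize rows
scores = rng.uniform(size=8)
r, t = summary_losses(feats, scores)
```

In a real pipeline the scores would come from a learned model and the two terms would be weighted and minimized jointly; here they simply show how a summarizer can be trained without any human labels.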

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. TRIMMER: A New Paradigm for Video Summarization through Self-Supervised Reinforcement Learning

    cs.CV 2026-05 unverdicted novelty 5.0

    TRIMMER proposes a self-supervised RL method for video summarization that uses entropy-based rewards to capture temporal dynamics and semantic diversity, claiming SOTA results among unsupervised approaches.