TRIM: A Self-Supervised Video Summarization Framework Maximizing Temporal Relative Information and Representativeness
The increasing ubiquity of video content and the corresponding demand for efficient access to meaningful information have made video summarization and video highlight detection a vital research area. However, many state-of-the-art methods depend heavily either on supervised annotations or on attention-based models, which are computationally expensive and brittle under distribution shifts, hindering cross-domain applicability across datasets. We introduce a pioneering self-supervised video summarization model that captures both spatial and temporal dependencies without the overhead of attention, RNNs, or transformers. Our framework integrates a novel set of Markov-process-driven loss metrics and a two-stage self-supervised learning paradigm that ensures both performance and efficiency. Our approach achieves state-of-the-art performance on the SumMe and TVSum datasets, outperforming all existing unsupervised methods. It also rivals the best supervised models, demonstrating the potential of efficient, annotation-free architectures. This paves the way for more generalizable video summarization techniques and challenges the prevailing reliance on complex architectures.
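The abstract describes the framework only at a high level, but the "representativeness" term in the title follows a common pattern in unsupervised summarization: frames chosen for the summary should stay close, in feature space, to every frame of the video. The sketch below is a hypothetical illustration of such an objective in PyTorch, not TRIM's actual loss; the function name, tensor shapes, and selection strategy are all assumptions introduced here for illustration.

```python
# Hypothetical sketch of a representativeness-style objective for
# self-supervised keyframe scoring (not TRIM's published loss).
import torch

def representativeness_loss(features: torch.Tensor, selected: torch.Tensor) -> torch.Tensor:
    """
    features: (T, D) per-frame embeddings, e.g. from a frozen CNN backbone.
    selected: (K, D) embeddings of the frames currently chosen for the summary.

    Every frame should lie near at least one selected frame, so the summary
    "represents" the whole video. We average, over all frames, the distance
    to the nearest selected frame; minimizing this encourages coverage.
    """
    dists = torch.cdist(features, selected)    # (T, K) pairwise Euclidean distances
    nearest = dists.min(dim=1).values          # distance from each frame to its closest keyframe
    return nearest.mean()

# Usage sketch (names are assumptions):
# feats = backbone(video_frames)               # (T, D)
# scores = scorer(feats).squeeze(-1)           # (T,) frame importance scores
# topk = scores.topk(k=int(0.15 * len(scores))).indices
# loss = representativeness_loss(feats, feats[topk])
```

Note that hard top-k selection is not differentiable; an end-to-end trainable variant would replace it with score-weighted distances or a similar relaxation.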
Forward citations
Cited by 1 Pith paper
TRIMMER: A New Paradigm for Video Summarization through Self-Supervised Reinforcement Learning
TRIMMER proposes a self-supervised RL method for video summarization that uses entropy-based rewards to capture temporal dynamics and semantic diversity, claiming SOTA results among unsupervised approaches.