Don't forget, there is more than forgetting: new metrics for Continual Learning
Original abstract
Continual learning consists of algorithms that learn from a stream of data/tasks continuously and adaptively through time, enabling the incremental development of ever more complex knowledge and skills. The lack of consensus in evaluating continual learning algorithms and the almost exclusive focus on forgetting motivate us to propose a more comprehensive set of implementation-independent metrics accounting for several factors we believe have practical implications worth considering in the deployment of real AI systems that learn continually: accuracy or performance over time, backward and forward knowledge transfer, memory overhead, as well as computational efficiency. Drawing inspiration from the standard Multi-Attribute Value Theory (MAVT), we further propose to fuse these metrics into a single score for ranking purposes, and we evaluate our proposal with five continual learning strategies on the iCIFAR-100 continual learning benchmark.
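The fusion step described in the abstract can be illustrated with a minimal sketch. This assumes the MAVT-inspired aggregation reduces to a weighted average of criteria that have each been normalized to [0, 1] (higher is better), with weights summing to 1; the metric names, values, and equal weights below are hypothetical illustrations, not figures from the paper.

```python
def cl_score(criteria: dict[str, float], weights: dict[str, float]) -> float:
    """Fuse per-criterion scores (each assumed normalized to [0, 1],
    higher is better) into one number via a weighted average, MAVT-style."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[k] * criteria[k] for k in weights)

# Hypothetical, illustrative values for the five factors named in the abstract.
criteria = {
    "accuracy": 0.65,           # accuracy/performance over time
    "backward_transfer": 0.48,  # effect of new tasks on previously learned ones
    "forward_transfer": 0.55,   # effect of previous tasks on new ones
    "memory": 0.80,             # lower memory overhead maps to a higher score
    "compute": 0.70,            # lower computational cost maps to a higher score
}
weights = {k: 0.2 for k in criteria}  # equal importance, purely for illustration

print(f"CL score: {cl_score(criteria, weights):.3f}")
```

Because the score is a single scalar, it can be used directly to rank continual learning strategies; how each raw metric is normalized and which weights are chosen are the paper's contribution and are not reproduced here.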
This paper has not been read by Pith yet.
Forward citations
Cited by 5 Pith papers
- LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning
  LIBERO is a new benchmark for lifelong robot learning that evaluates transfer of declarative, procedural, and mixed knowledge across 130 manipulation tasks with provided demonstration data.
- Unlocking Positive Transfer in Incrementally Learning Surgical Instruments: A Self-reflection Hierarchical Prompt Framework
  A hierarchical prompt tree with self-reflection graph propagation enables positive forward and backward knowledge transfer in incremental surgical instrument segmentation, improving over baselines by more than 5% and ...
- FreeMOCA: Memory-Free Continual Learning for Malicious Code Analysis
  FreeMOCA enables memory-free continual learning for malicious code analysis by adaptive layer-wise parameter interpolation between task updates, outperforming baselines on EMBER and AZ malware benchmarks with up to 42...
- FreeMOCA: Memory-Free Continual Learning for Malicious Code Analysis
  FreeMOCA enables memory-free continual learning for malicious code analysis via adaptive layer-wise interpolation between warm-started task optima, outperforming baselines on EMBER and AZ benchmarks with up to 42% acc...
- Temporal Taskification in Streaming Continual Learning: A Source of Evaluation Instability
  Different valid temporal partitions of the same streaming dataset can produce materially different rankings and performance numbers for continual learning methods.
discussion (0)