
A Comprehensive Empirical Evaluation on Online Continual Learning

Vincenzo Lomonaco
2023

Abstract

Online continual learning aims to get closer to a live learning experience by learning directly on a stream of data with a temporally shifting distribution and by storing a minimal amount of data from that stream. In this empirical evaluation, we evaluate various methods from the literature that tackle online continual learning. More specifically, we focus on the class-incremental setting in the context of image classification, where the learner must learn new classes incrementally from a stream of data. We compare these methods on the Split-CIFAR100 and Split-TinyImagenet benchmarks and measure their average accuracy, forgetting, stability, and quality of the learned representations, evaluating various aspects of the algorithms both at the end of training and throughout the whole training period. We find that most methods suffer from stability and underfitting issues. However, the learned representations are comparable to those obtained by i.i.d. training under the same computational budget. No clear winner emerges from the results, and basic experience replay, when properly tuned and implemented, is a very strong baseline. We release our modular and extensible codebase at https://github.com/AlbinSou/ocl_survey, based on the Avalanche framework, to reproduce our results and encourage future research.
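The abstract points to experience replay as a strong, properly tuned baseline and to a codebase built on the Avalanche framework. As a minimal sketch of what such a baseline looks like, the snippet below runs class-incremental experience replay on Split-CIFAR100 with Avalanche's public API; the class names (SplitCIFAR100, ReplayPlugin, Naive) are standard Avalanche components, while the model choice and hyperparameters (buffer size, mini-batch size, learning rate, number of experiences) are illustrative assumptions, not the exact configuration of the paper or of the ocl_survey codebase.

# Minimal sketch of an experience-replay baseline on Split-CIFAR100 using
# the Avalanche library (class names as in Avalanche 0.4). Hyperparameters
# are illustrative, not the configuration used in the paper.
import torch
from torch.nn import CrossEntropyLoss
from torch.optim import SGD
from torchvision.models import resnet18

from avalanche.benchmarks.classic import SplitCIFAR100
from avalanche.training.plugins import ReplayPlugin
from avalanche.training.supervised import Naive

# Class-incremental split: 20 experiences of 5 new classes each.
benchmark = SplitCIFAR100(n_experiences=20, return_task_id=False)

model = resnet18(num_classes=100)
optimizer = SGD(model.parameters(), lr=0.1, momentum=0.9)

# Experience replay: a fixed-size buffer of past samples is mixed into
# every training mini-batch alongside the incoming stream data.
replay = ReplayPlugin(mem_size=2000)

strategy = Naive(
    model,
    optimizer,
    criterion=CrossEntropyLoss(),
    train_mb_size=10,   # small mini-batches approximate the online regime
    train_epochs=1,     # a single pass over each experience
    eval_mb_size=128,
    plugins=[replay],
    device="cuda" if torch.cuda.is_available() else "cpu",
)

# Train on the stream experience by experience, evaluating after each one.
for experience in benchmark.train_stream:
    strategy.train(experience)
    strategy.eval(benchmark.test_stream)

In the online setting studied in the paper, each experience is additionally consumed in small streaming mini-batches rather than as a whole; the plain experience-by-experience loop above is kept only to keep the sketch short.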
ISBN: 979-8-3503-0744-3
Keywords: Incremental Learning; Empirical Evaluation; Online Continual Learning; Benchmark; Average Accuracy; Data Streams; Quality Of Representations; Experience Replay; Algorithmic Aspects; Proper Tuning; Validation Set; Recent Survey; Batch Size; Cross-entropy; Implementation Of Method; Online Learning; Reference Method; Learning Settings; Linear Classifier; Labeling Task; Stability Metrics; Contrastive Loss; Hyperparameter Selection; Memory Size; Replay Buffer; Strength Of Representation; Amount Of Memory; Subsequent Task; Backpropagation.
Soutif-Cormerais, Albin; Carta, Antonio; Cossu, Andrea; Hurtado, Julio; Lomonaco, Vincenzo; Van De Weijer, Joost; Hemati, Hamed. (2023). A Comprehensive Empirical Evaluation on Online Continual Learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops (pp. 3518-3528). ISBN: 979-8-3503-0744-3. DOI: 10.1109/ICCVW60793.2023.00378. https://ieeexplore.ieee.org/document/10351009.
Files in this product:

A_Comprehensive_Empirical_Evaluation_on_Online_Continual_Learning.pdf
Access: repository administrators only
Type: publisher's version
License: all rights reserved
Size: 4.65 MB
Format: Adobe PDF

2308.10328v3.pdf
Access: repository administrators only
Type: pre-print
License: all rights reserved
Size: 5.38 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11385/253570
Citations:
  • Scopus: 10
  • Web of Science: 8
  • OpenAlex: not available