Continual Learning in the Frequency Domain
Authors: RuiQi Liu, Boyu Diao, Libo Huang, Zijia An, Zhulin An, Yongjun Xu
NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted in both cloud and edge environments demonstrate that CLFD consistently improves the performance of state-of-the-art (SOTA) methods in both precision and training efficiency. |
| Researcher Affiliation | Academia | Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China |
| Pseudocode | Yes | Algorithm 1 outlines the procedure for the CFFS. |
| Open Source Code | Yes | Code is available at https://github.com/EMLS-ICTCAS/CLFD.git |
| Open Datasets | Yes | We conduct comprehensive experimental analyses on extensively used public datasets, including Split CIFAR-10 (S-CIFAR-10) [6] and Split Tiny ImageNet (S-Tiny-ImageNet) [9]. |
| Dataset Splits | No | The paper describes training and test sets but does not explicitly mention or detail a validation dataset split. |
| Hardware Specification | Yes | We conduct comprehensive experiments utilizing the NVIDIA RTX 2080Ti GPU paired with the Intel Xeon Gold 5217 CPU, as well as the NVIDIA Jetson Orin NX 16GB, boasting an NVIDIA Ampere architecture GPU and an Octa-core Arm CPU. |
| Software Dependencies | No | We expand the Mammoth CL repository in PyTorch [6]. |
| Experiment Setup | Yes | All models are trained using the Stochastic Gradient Descent optimizer with a fixed batch size of 32. Additional details regarding other hyperparameters are detailed in Appendix D and E. For the S-Tiny-ImageNet dataset, models undergo training for 100 epochs, whereas for the S-CIFAR-10 dataset, training lasts for 50 epochs per task. |
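
The reported experiment setup maps onto a standard PyTorch training configuration. The sketch below illustrates it under stated assumptions: the learning rate and dataset identifiers are illustrative placeholders (not from the paper), and CLFD's frequency-domain components are not reproduced here.

```python
import torch
from torch.utils.data import DataLoader

def build_training_setup(model, train_set, dataset_name):
    """Sketch of the reported setup: SGD optimizer, batch size 32, epochs per task chosen by dataset."""
    # SGD as reported; the learning rate is an assumption, not taken from the paper.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.03)
    # Fixed batch size of 32, as reported.
    loader = DataLoader(train_set, batch_size=32, shuffle=True)
    # 100 epochs per task for S-Tiny-ImageNet, 50 for S-CIFAR-10 (dataset name strings are hypothetical).
    epochs_per_task = 100 if dataset_name == "s-tiny-imagenet" else 50
    return optimizer, loader, epochs_per_task
```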