Neuro-Symbolic Continual Learning: Knowledge, Reasoning Shortcuts and Concept Rehearsal

Authors: Emanuele Marconato, Gianpaolo Bontempo, Elisa Ficarra, Simone Calderara, Andrea Passerini, Stefano Teso

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments on three novel benchmarks highlight how COOL attains sustained high performance on neuro-symbolic continual learning tasks in which other strategies fail.
Researcher Affiliation | Academia | (1) University of Pisa, Italy; (2) DISI, University of Trento, Italy; (3) University of Modena and Reggio Emilia, Italy; (4) CIMeC, University of Trento, Italy.
Pseudocode | No | The paper describes methods in prose and equations but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | The data and code are available at https://github.com/emamarconato/NeSy-CL
Open Datasets | Yes | The data and code are available at https://github.com/emamarconato/NeSy-CL
Dataset Splits | Yes | MNAdd-Seq... 42k training examples, of which we used 8.4k for validation, and 6k test examples. ... MNAdd-Shortcut... 13.8k examples, 2.8k of which are reserved for validation and 2k for testing. ... CLE4EVR... almost 5.5k training data, 500 data for validation and 2.5k data for test. (The split sizes are summarized in the first sketch after the table.)
Hardware Specification | Yes | All experiments were implemented using Python 3 and PyTorch (Paszke et al., 2019) and run on a server with 128 CPUs, 1 TiB RAM, and 8 A100 GPUs.
Software Dependencies | No | The paper mentions 'Python 3' and 'PyTorch (Paszke et al., 2019)' but does not provide specific version numbers for Python or PyTorch, nor for the other frameworks it mentions, such as 'mammoth' or 'VAEL'. A reproducible description requires specific version numbers.
Experiment Setup | Yes | All continual strategies have been trained with the same number of epochs and buffer dimension. The actual values depend on the specific benchmark: 25 epochs per task and a buffer size of 1000 examples for MNAdd-Seq, 100 epochs and 1000 examples for MNAdd-Shortcut, and 50 epochs per task and 250 examples for CLE4EVR. In all experiments, we employed the Adam optimizer (Kingma & Ba, 2015) combined with exponential decay (γ = 0.95). The initial learning rate is restored at the beginning of a new task. (A hedged training-loop sketch follows the table.)
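
For reference, the quoted split sizes can be gathered into a single mapping. This is an illustrative summary only: the overlap reading (e.g., MNAdd-Seq validation carved out of the 42k training pool) follows the quoted wording, and none of this is the authors' data-loading code.

```python
# Split sizes as quoted in the Dataset Splits row, gathered for reference.
DATASET_SPLITS = {
    "MNAdd-Seq":      {"train_pool": 42_000, "val_from_pool": 8_400, "test": 6_000},
    "MNAdd-Shortcut": {"total": 13_800, "val": 2_800, "test": 2_000},
    "CLE4EVR":        {"train": 5_500, "val": 500, "test": 2_500},  # "almost 5.5k" train
}
```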
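To make the quoted schedule concrete, below is a minimal PyTorch sketch of the per-task training loop: Adam with exponential learning-rate decay (γ = 0.95), with the optimizer and scheduler re-created at each task boundary so the initial learning rate is restored. The model, task loaders, initial learning rate, and per-epoch decay step are assumptions for illustration, and the rehearsal-buffer logic central to COOL is omitted.

```python
# Minimal sketch of the quoted training schedule, assuming a generic
# PyTorch classifier and one DataLoader per task. The initial learning
# rate (1e-3) and per-epoch decay step are illustrative assumptions;
# rehearsal-buffer handling is omitted.
import torch.nn.functional as F
from torch.optim import Adam
from torch.optim.lr_scheduler import ExponentialLR

# Per-benchmark settings quoted in the Experiment Setup row.
BENCHMARKS = {
    "MNAdd-Seq":      {"epochs_per_task": 25,  "buffer_size": 1000},
    "MNAdd-Shortcut": {"epochs_per_task": 100, "buffer_size": 1000},
    "CLE4EVR":        {"epochs_per_task": 50,  "buffer_size": 250},
}

def train_continual(model, task_loaders, benchmark, initial_lr=1e-3):
    cfg = BENCHMARKS[benchmark]
    for loader in task_loaders:
        # Re-create optimizer and scheduler at each task boundary so the
        # initial learning rate is restored, as the paper describes.
        optimizer = Adam(model.parameters(), lr=initial_lr)
        scheduler = ExponentialLR(optimizer, gamma=0.95)
        for _ in range(cfg["epochs_per_task"]):
            for x, y in loader:
                optimizer.zero_grad()
                loss = F.cross_entropy(model(x), y)
                loss.backward()
                optimizer.step()
            scheduler.step()  # exponential decay, assumed once per epoch
```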