Learning with Selective Forgetting
Authors: Takashi Shibata, Go Irie, Daiki Ikami, Yu Mitsuzumi
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on common benchmark datasets demonstrate the remarkable superiority of the proposed method over several existing methods. |
| Researcher Affiliation | Industry | Takashi Shibata , Go Irie , Daiki Ikami and Yu Mitsuzumi NTT Communication Science Laboratories, NTT Corporation, Japan |
| Pseudocode | No | The paper describes the method textually but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | Yes | Datasets: We use three widely used benchmark datasets for lifelong learning, i.e., CIFAR-100, CUB200-2011 [Wah et al., 2011], and Stanford Cars [Krause et al., 2013]. |
| Dataset Splits | No | CUB200-2011 has 200 classes with 5,994 training images and 5,794 test images. CIFAR-100 contains 50,000 training images and 10,000 test images overall. Stanford Cars comprises 196 car classes with 8,144 images for training and 8,041 for testing. The paper does not explicitly state validation set sizes or splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions using ResNet-18 and SGD for optimization, but does not provide specific software dependencies with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x). |
| Experiment Setup | Yes | We trained the network for 200 epochs for each task. Minibatch sizes were set to 128 for new tasks and 32 for past tasks in CIFAR-100, and 32 for new tasks and 8 for previous tasks in CUB-200-2011 and Stanford Cars. The weight decay was 5.0 × 10⁻⁴. We used SGD for optimization. We employed a standard data augmentation strategy: random crop, horizontal flip, and rotation. |
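
The setup row above reports SGD with a weight decay of 5.0 × 10⁻⁴. As a minimal sketch of what that hyperparameter means in a plain SGD update (the paper does not give the update code; the learning rate, parameter, and gradient values below are illustrative assumptions):

```python
# Sketch of one SGD step with L2 weight decay, using the paper's
# reported decay coefficient 5.0e-4. lr, param, and grad values are
# hypothetical; only the weight_decay default comes from the paper.

def sgd_step(param, grad, lr=0.1, weight_decay=5.0e-4):
    """Vanilla SGD with decoupled-from-loss L2 penalty:
    g' = g + weight_decay * w;  w <- w - lr * g'."""
    grad_with_decay = grad + weight_decay * param
    return param - lr * grad_with_decay

w, g = 1.0, 0.5
w_new = sgd_step(w, g)
# grad_with_decay = 0.5 + 0.0005 * 1.0 = 0.5005
# w_new = 1.0 - 0.1 * 0.5005 = 0.94995
```

This illustrates that at this decay strength the regularization term (0.0005) is small relative to a typical gradient, which is consistent with its use as a mild regularizer in the reported training runs.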