Efficient Heterogeneous Meta-Learning via Channel Shuffling Modulation
Authors: Minh Hoang, Carl Kingsford
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our proposed methods on two public datasets, namely Mini-ImageNet and Tiered-ImageNet [35, 34]. [...] We follow the standard few-shot classification setting for all experiments. In particular, we evaluate our method on the 5-way 1-shot/5-shot settings. For a fair comparison, we repeat each experiment 3 times using different random seeds and report the average performance. |
| Researcher Affiliation | Academia | Hanyang University, Korea |
| Pseudocode | No | The paper describes the method algorithmically but does not provide a formal pseudocode or algorithm block. |
| Open Source Code | No | The paper does not provide a statement or link indicating the availability of open-source code for the described methodology. |
| Open Datasets | Yes | We evaluate our proposed methods on two public datasets, namely Mini-ImageNet and Tiered-ImageNet [35, 34]. [...] |
| Dataset Splits | Yes | For Mini-ImageNet, each image is of size 84 × 84. It contains 100 classes with 600 images per class. Following the common setting [3, 23], we divide Mini-ImageNet into 64, 16, and 20 classes for meta-training, meta-validation, and meta-testing, respectively. For Tiered-ImageNet, each image is of size 84 × 84. It contains 608 classes which are divided into 34, 10, and 16 classes for meta-training, meta-validation, and meta-testing, respectively. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., PyTorch 1.9, Python 3.8). |
| Experiment Setup | Yes | For all few-shot classification experiments, we use ResNet-18 [10] as the backbone. In the meta-training stage, we use stochastic gradient descent (SGD) as the optimizer with a learning rate of 0.001. We train our model for 100 epochs, and the learning rate is decayed by a factor of 0.1 at 80 epochs. We use a batch size of 256. For data augmentation, we use random horizontal flip, random crop, and color jittering. In the meta-testing stage, for fair comparison, we use the same evaluation protocol as [23, 24]. In particular, we randomly sample 600 episodes for each setting and report the average accuracy and 95% confidence interval. (Hedged sketches of this training setup and of the episodic evaluation protocol follow the table.) |
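To make the quoted training setup concrete, below is a minimal PyTorch sketch assembled from the hyperparameters in the "Experiment Setup" row (ResNet-18, SGD at a learning rate of 0.001, 100 epochs with a 0.1 decay at epoch 80, batch size 256, and the listed augmentations). It is a sketch under stated assumptions, not the authors' implementation: the crop padding, color-jitter strengths, output dimension, and the `train_loader` placeholder are assumptions not given in the quoted text.

```python
from torch import nn, optim
from torchvision import models, transforms

# Augmentations named in the "Experiment Setup" row; the crop padding and
# jitter strengths are assumptions (the quoted text does not specify them).
# train_tf would be attached to a Mini-ImageNet training Dataset.
train_tf = transforms.Compose([
    transforms.RandomCrop(84, padding=8),   # images are 84 x 84
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4),
    transforms.ToTensor(),
])

model = models.resnet18(num_classes=64)     # 64 meta-training classes (assumed head)
optimizer = optim.SGD(model.parameters(), lr=0.001)
# Learning rate decayed by a factor of 0.1 at epoch 80, per the quoted setup.
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[80], gamma=0.1)
criterion = nn.CrossEntropyLoss()

# Placeholder: substitute a DataLoader over Mini-ImageNet with batch_size=256.
train_loader = []

for epoch in range(100):                    # 100 training epochs total
    for images, targets in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
    scheduler.step()
```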
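The meta-testing protocol quoted in the same row (5-way 1-shot/5-shot, 600 randomly sampled episodes, mean accuracy with a 95% confidence interval) can likewise be sketched. The sampler below is a generic N-way K-shot episode sampler, assuming a `labels_to_indices` mapping from class label to example indices; `evaluate_episode` and the 15-queries-per-class default are hypothetical, as the quoted text does not specify them.

```python
import random
import numpy as np

def sample_episode(labels_to_indices, n_way=5, k_shot=1, n_query=15):
    """Draw one N-way K-shot episode: disjoint support and query index lists."""
    classes = random.sample(list(labels_to_indices), n_way)
    support, query = [], []
    for cls in classes:
        idx = random.sample(labels_to_indices[cls], k_shot + n_query)
        support.extend(idx[:k_shot])
        query.extend(idx[k_shot:])
    return support, query

def mean_ci95(accuracies):
    """Mean episode accuracy with a 95% confidence interval (normal approx.)."""
    accs = np.asarray(accuracies)
    return accs.mean(), 1.96 * accs.std(ddof=1) / np.sqrt(len(accs))

# Protocol from the table: 600 episodes per setting, mean accuracy +/- 95% CI.
# `evaluate_episode` (adapt on support, score on query) is a placeholder.
# accs = [evaluate_episode(sample_episode(data, 5, 1)) for _ in range(600)]
# mean, ci = mean_ci95(accs)
# print(f"{mean:.2%} +/- {ci:.2%}")
```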