Transductive Relation-Propagation Network for Few-shot Learning
Authors: Yuqing Ma, Shihao Bai, Shan An, Wei Liu, Aishan Liu, Xiantong Zhen, Xianglong Liu
IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on several benchmark datasets demonstrate that our method can significantly outperform a variety of state-of-the-art few-shot learning methods. (Section 3: Experiments) |
| Researcher Affiliation | Collaboration | 1 State Key Lab of Software Development Environment, Beihang University, China; 2 Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, China; 3 Department of Augmented Reality and Virtual Reality, JD; 4 Inception Institute of Artificial Intelligence |
| Pseudocode | No | The paper describes the methods using equations and prose, but does not include any explicit pseudocode or algorithm blocks labeled as such. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We employ the widely used datasets in prior studies, including the miniImageNet dataset [Vinyals et al., 2016] and the tieredImageNet dataset [Ren et al., 2018]. |
| Dataset Splits | Yes | The classes of tieredImageNet are grouped into 34 higher-level nodes based on the WordNet hierarchy [Deng et al., 2009], and further partitioned into disjoint sets of training, testing, and validation nodes, ensuring a distinct distance between training and testing classes thus making the classification more challenging. We use the validation set to select the training episodes with the best accuracy. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'Adam optimizer' but does not specify its version or other software dependencies with version numbers (e.g., programming languages, libraries, frameworks). |
| Experiment Setup | Yes | Standard data augmentation including random crop, left-right flip, and color jitter are applied in the training stage. The number of training iterations on miniImageNet and tieredImageNet are 100K and 200K, respectively. We use the Adam optimizer [Kingma and Ba, 2014] with an initial learning rate of 0.001, and reduce the learning rate by half every 15K and 30K iterations, respectively, on miniImageNet and tieredImageNet. The weight decay is set to 1e-6. The mini-batch size for all experiments is 20. |
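
To make the reported hyper-parameters concrete, below is a minimal sketch of the training configuration described in the Experiment Setup row, written in PyTorch. The paper does not name its framework, so the library choice, the placeholder model, the 84x84 crop size, and the color-jitter strengths are assumptions; only the optimizer, initial learning rate, halving schedule, weight decay, iteration counts, and batch size come from the paper.

```python
from torch import nn, optim
from torchvision import transforms

# Augmentations reported in the paper: random crop, left-right flip, color jitter.
# The 84x84 crop size and jitter strengths are assumptions, not from the paper.
train_transform = transforms.Compose([
    transforms.RandomCrop(84, padding=8),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.ToTensor(),
])

# Placeholder model standing in for the TRPN backbone and relation-propagation head.
model = nn.Linear(64, 5)

# Adam with an initial learning rate of 0.001 and weight decay 1e-6, as reported.
optimizer = optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-6)

# Halve the learning rate every 15K iterations on miniImageNet
# (the paper uses 30K on tieredImageNet).
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=15_000, gamma=0.5)

num_iterations = 100_000  # 100K for miniImageNet, 200K for tieredImageNet
batch_size = 20           # mini-batch size reported for all experiments
```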