PTN: A Poisson Transfer Network for Semi-supervised Few-shot Learning

Authors: Huaxi Huang, Junjie Zhang, Jian Zhang, Qiang Wu, Chang Xu

AAAI 2021, pp. 1602-1609

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments indicate that PTN outperforms the state-of-the-art few-shot and SSFSL models on the miniImageNet and tieredImageNet benchmark datasets. The paper includes an 'Experiments' section with subsections 'Datasets', 'Implementation Details', and 'Experimental Results', along with tables of accuracy comparisons and ablation studies.
Researcher Affiliation | Academia | University of Technology Sydney, Sydney NSW 2007, Australia; Shanghai University, Shanghai, China; The University of Sydney, Sydney NSW 2006, Australia
Pseudocode | Yes | Algorithm 1: PTN for SSFSL
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the described methodology.
Open Datasets | Yes | The miniImageNet dataset (Vinyals et al. 2016)... The tieredImageNet dataset (Ren et al. 2018) is another subset...
Dataset Splits | Yes | We follow the standard split of 64 base, 16 validation, and 20 test classes (Vinyals et al. 2016; Tian et al. 2020). We follow the standard split of 351 base, 97 validation, and 160 test classes for the experiments (Ren et al. 2018; Liu et al. 2018). (These split counts are summarized in the first sketch below the table.)
Hardware Specification | No | The paper mentions using WRN-28-10 as the backbone and reports inference time, but it does not specify the hardware (e.g., GPU model, CPU, memory) used for experiments.
Software Dependencies | No | The paper mentions components such as the SGD optimizer, a cosine learning rate scheduler, and WRN-28-10, but it does not provide specific version numbers for any software dependencies or libraries.
Experiment Setup | Yes | We set the batch size to 64 with SGD learning rate as 0.05 and weight decay as 5e-4. We reduce the learning rate by 0.1 after 60 and 80 epochs. The base model is trained for 100 epochs. ... The learning rate is initialized as 1e-3, and the cosine learning rate scheduler is used for 10 epochs. We set the batch size to 80 with λ = 1 in Eq. (2). ... We set K = 30 ... We set the max tp = 100 ... We set hyper-parameters µ = 1.5, M1 = 20, M2 = 40 and M3 = 100 empirically. Moreover, we set ϕ = 10, υα = 0.5, υσ = 1.0. (The optimizer and scheduler settings are sketched in the second code sketch below the table.)
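
The split counts in the "Dataset Splits" row pin down the class partitions of both benchmarks. A minimal Python sketch under stated assumptions: the dictionary name is hypothetical, and the concrete class membership lists come from the cited split files (Vinyals et al. 2016; Ren et al. 2018; Liu et al. 2018), not from this paper.

    # Class-count summary of the standard splits quoted above. Only the counts
    # are known from the paper's text; the actual class lists live in the
    # cited split files.
    DATASET_SPLITS = {
        "miniImageNet":   {"base": 64,  "validation": 16, "test": 20},   # 100 classes
        "tieredImageNet": {"base": 351, "validation": 97, "test": 160},  # 608 classes
    }

    for name, counts in DATASET_SPLITS.items():
        print(f"{name}: {counts} ({sum(counts.values())} classes in total)")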
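
The "Experiment Setup" row maps directly onto a standard optimizer/scheduler configuration. Below is a minimal PyTorch sketch, not the authors' implementation: the Linear module is a stand-in for the WRN-28-10 backbone, the momentum value and the fine-tuning optimizer type are assumptions not stated in the quote, and the remaining hyper-parameters (K, tp, µ, M1-M3, ϕ, υα, υσ) are omitted because the row does not say how they enter the training loop.

    import torch
    from torch.optim import SGD
    from torch.optim.lr_scheduler import MultiStepLR, CosineAnnealingLR

    # Stand-in for the WRN-28-10 backbone named in the paper; swap in any
    # WRN-28-10 implementation.
    model = torch.nn.Linear(640, 64)

    # Base training: batch size 64, SGD with lr 0.05 and weight decay 5e-4,
    # lr scaled by 0.1 after epochs 60 and 80, 100 epochs in total.
    # momentum=0.9 is an assumption; the quoted setup does not state it.
    optimizer = SGD(model.parameters(), lr=0.05, momentum=0.9, weight_decay=5e-4)
    scheduler = MultiStepLR(optimizer, milestones=[60, 80], gamma=0.1)
    for epoch in range(100):
        # ... one epoch over the base classes would run here ...
        optimizer.step()   # placeholder step so the scheduler advances cleanly
        scheduler.step()

    # Fine-tuning: lr initialized to 1e-3 with a cosine schedule over 10 epochs,
    # batch size 80, and loss weight lambda = 1 in the paper's Eq. (2).
    ft_optimizer = SGD(model.parameters(), lr=1e-3, momentum=0.9)  # optimizer type assumed
    ft_scheduler = CosineAnnealingLR(ft_optimizer, T_max=10)
    lam = 1.0  # lambda in Eq. (2)
    for epoch in range(10):
        # ... one fine-tuning epoch using the weighted loss would run here ...
        ft_optimizer.step()
        ft_scheduler.step()

The two loops mirror the quoted two-stage schedule: 100 base-training epochs with step decay, then 10 fine-tuning epochs with cosine decay.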