Image Block Augmentation for One-Shot Learning

Authors: Zitian Chen, Yanwei Fu, Kaiyu Chen, Yu-Gang Jiang

AAAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments and an ablation study not only evaluate the efficacy but also reveal the insights of the proposed Self-Jig method. Experiments are conducted on the miniImageNet and ImageNet1k challenge datasets, with results reported for the ImageNet1k challenge dataset (Table 1: Top-1 / Top-5 results with ResNet-10; Figure 3: Top-5 accuracy (%) with ResNet-10), an ablation study, and miniImageNet (Table 3). Conclusion: this work proposes a self-training Jigsaw data augmentation method for one-shot learning, and extensive experiments show the efficacy of the framework in synthesizing new instances to boost recognition performance. A rough illustrative sketch of this block-swap idea is given below the table.
Researcher Affiliation | Academia | 1. School of Computer Science, Fudan University; 2. School of Data Science, Fudan University; 3. Shanghai Key Lab of Intelligent Information Processing, Fudan University
Pseudocode | No | Explanation: The paper describes the method textually and with illustrations (Figure 1), but does not provide structured pseudocode or algorithm blocks.
Open Source Code | No | "The codes and models will be released upon acceptance."
Open Datasets | Yes | Extensive experiments are conducted on the miniImageNet and ImageNet1k challenge datasets. miniImageNet was proposed in (Vinyals et al. 2016), and the same Cbase and Cnovel split as proposed in (Hariharan and Girshick 2017) is used.
Dataset Splits | Yes | The data split setup of (Ravi and Larochelle 2017) is used, with 64, 16, and 20 classes as the training, validation, and testing sets respectively.
Hardware Specification | No | Explanation: The paper specifies network architectures (ResNet-10, ResNet-50, ResNet-18) and training parameters but does not provide any specific details about the hardware (e.g., GPU model, CPU type, memory) used for the experiments.
Software Dependencies | No | Explanation: The paper mentions using SGD, ResNet (He et al. 2015), SVM, and Logistic Regression, but does not provide specific version numbers for any software libraries, frameworks, or programming languages used.
Experiment Setup | Yes | For both networks, SGD is used and the networks converge in 300 epochs: the learning rate is set to 1 × 10^-1 and decayed by a factor of 10 every 30 epochs, with a batch size of 128. Each training instance is augmented into up to five instances and trained for 10 epochs. In fine-tuning, each probe image I_i is used to produce 10 synthesized images Ĩ_i, trained for 10 epochs; the learning rates of the last layer and of the other layers are set to 1 × 10^-1 and 1 × 10^-2 respectively.
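
To make the reported optimization settings concrete, here is a minimal sketch of that schedule, assuming PyTorch and using torchvision's ResNet-18 as a stand-in for the paper's ResNet-10 (which torchvision does not ship); momentum, weight decay, and the choice of framework are not stated in the paper and are left unset here.

```python
# Hedged sketch of the reported schedule: SGD, lr 1e-1 decayed by 10x every
# 30 epochs, batch size 128, 300 epochs; fine-tuning with the last layer at
# 1e-1 and all other layers at 1e-2. Framework and model choice are assumptions.
import torch
from torchvision.models import resnet18

model = resnet18(num_classes=64)  # 64 base classes, as in the miniImageNet split

# Base training optimizer and step-wise learning-rate decay.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

# Fine-tuning: the final classifier layer uses lr 1e-1, every other layer 1e-2.
finetune_optimizer = torch.optim.SGD(
    [
        {"params": model.fc.parameters(), "lr": 1e-1},
        {"params": [p for name, p in model.named_parameters()
                    if not name.startswith("fc.")], "lr": 1e-2},
    ]
)
```

In this sketch, `scheduler.step()` would be called once per epoch of the 300-epoch base training loop; the paper does not report momentum or weight decay, so none are set here.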
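
The Research Type row above notes that Self-Jig synthesizes new training instances via jigsaw-style data augmentation; since the paper gives no pseudocode (see the Pseudocode row), the following is only a minimal sketch of the general block-swapping idea it describes. The 3×3 grid, the random choice of which blocks to replace, the 84×84 image size, and the function name `jigsaw_mix` are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch of block-wise image mixing in the spirit of the paper's
# jigsaw augmentation. All specifics (grid size, how donor blocks are chosen)
# are illustrative assumptions rather than the published method.
import numpy as np

def jigsaw_mix(probe, gallery, grid=3, swap_prob=0.5, rng=None):
    """Replace a random subset of grid cells in `probe` with the matching
    cells from `gallery`. Both images are H x W x C numpy arrays of the same
    shape, with H and W divisible by `grid`."""
    rng = np.random.default_rng() if rng is None else rng
    out = probe.copy()
    h, w = probe.shape[0] // grid, probe.shape[1] // grid
    for i in range(grid):
        for j in range(grid):
            if rng.random() < swap_prob:
                out[i*h:(i+1)*h, j*w:(j+1)*w] = gallery[i*h:(i+1)*h, j*w:(j+1)*w]
    return out

# Example: synthesize one augmented image from a probe/gallery pair (84x84,
# a common miniImageNet resolution, used here only for illustration).
probe = np.zeros((84, 84, 3), dtype=np.uint8)
gallery = np.full((84, 84, 3), 255, dtype=np.uint8)
synthesized = jigsaw_mix(probe, gallery)
```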