LiftPool: Bidirectional ConvNet Pooling

Authors: Jiaojiao Zhao, Cees G. M. Snoek

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show the proposed methods achieve better results on image classification and semantic segmentation, using various backbones.
Researcher Affiliation | Academia | Jiaojiao Zhao & Cees G. M. Snoek, Video & Image Sense Lab, University of Amsterdam, {jzhao3,cgmsnoek}@uva.nl
Pseudocode | No | The paper contains diagrams illustrating the process (Figure 2) and mathematical equations, but no structured pseudocode or explicitly labeled algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/jiaozizhao/LiftPool/.
Open Datasets | Yes | Image classification: We first verify the proposed LiftDownPool for image classification on CIFAR-100 (Krizhevsky & Hinton, 2009)... We also report results on ImageNet (Deng et al., 2009)... Semantic segmentation: We also test LiftDownPool and LiftUpPool for semantic segmentation on PASCAL-VOC12 (Everingham et al., 2010)...
Dataset Splits | Yes | We also report results on ImageNet (Deng et al., 2009) with 1.2M training and 5000 validation images for 1000 classes. ... An augmented version with 10582 training images and 1449 validation images is used [for PASCAL-VOC12].
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | We train ResNets for 100 epochs and MobileNet for 150 epochs on ImageNet, following the standard training recipe from the public PyTorch (Paszke et al., 2017) repository.
Experiment Setup | Yes | The VGG13 (Simonyan & Zisserman, 2015) network trained on CIFAR-100 is optimized by SGD with a batch size of 100, weight decay of 0.0005, and momentum of 0.9. The learning rate starts at 0.1 and is multiplied by 0.1 after epochs 80 and 120, for a total of 160 epochs. (A minimal sketch of this schedule follows below.)
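
For concreteness, the quoted VGG13/CIFAR-100 setup maps onto a standard PyTorch training loop. The sketch below is illustrative only, not the authors' code: the stock torchvision vgg13 stands in for the paper's LiftDownPool-equipped variant, the data pipeline is a bare minimum without the usual augmentation, and only the hyperparameters (batch size 100, SGD with learning rate 0.1, momentum 0.9, weight decay 0.0005, learning rate x0.1 at epochs 80 and 120, 160 epochs total) come from the paper.

```python
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

# Bare-minimum CIFAR-100 pipeline; the standard crop/flip augmentation is omitted here.
transform = T.ToTensor()
train_set = torchvision.datasets.CIFAR100(
    root="./data", train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=100, shuffle=True, num_workers=4)

# Stand-in model: stock torchvision VGG13, NOT the paper's LiftDownPool variant.
model = torchvision.models.vgg13(num_classes=100)
criterion = nn.CrossEntropyLoss()

# Hyperparameters as quoted in the paper.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=0.0005)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[80, 120], gamma=0.1)  # lr x0.1 at epochs 80 and 120

for epoch in range(160):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()  # step once per epoch, matching the epoch-based schedule
```

Swapping in the LiftPool layers from the linked repository, plus the standard augmentation, would be needed to actually reproduce the paper's numbers.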