Unified Robust Semi-Supervised Variational Autoencoder

Authors: Xu Chen

ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results highlight the superiority of the proposed framework by evaluating it on image classification tasks and comparing with state-of-the-art approaches. The evaluation covers five benchmark datasets, including MNIST, CIFAR-10, and CIFAR-100 for image classification.
Researcher Affiliation | Academia | Cary, NC, USA. Correspondence to: Xu Chen <steven.xu.chen@gmail.com>. Proceedings of the 38th International Conference on Machine Learning, PMLR 139, 2021. The paper gives only correspondence details for its single author (a city and a personal email address), not an institutional affiliation. Given publication in the proceedings of the International Conference on Machine Learning (ICML), an academic venue, the classification leans towards Academia.
Pseudocode | No | The paper does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks or figures.
Open Source Code | No | The paper does not contain any explicit statements about releasing source code or provide links to a code repository.
Open Datasets | Yes | The performance is evaluated on five benchmark datasets, including MNIST, CIFAR-10, and CIFAR-100, for image classification. The proposed algorithm is also evaluated on two real-world, large-scale image datasets, Clothing1M and WebVision 1.0.
Dataset Splits | No | The paper states: 'Each dataset is randomly sampled and divided into three disjointed subsets including the labeled set (5% samples), unlabeled set (75% samples) and test set (20% samples).' While this describes the training and test data, it does not specify a separate validation split (as a percentage or count) for hyperparameter tuning or model selection, distinct from the labeled/unlabeled training data. A hedged sketch of the reported 5%/75%/20% split appears after this table.
Hardware Specification | Yes | The training time of URSVAE on CIFAR-10 evaluated on a single Nvidia V100 GPU is 4.9 hours.
Software Dependencies | No | The paper describes the deep neural network architecture and optimization methods (e.g., 'SGD', 'Resnet-18') but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, or specific library versions).
Experiment Setup | Yes | The network is trained with SGD using a batch size of 128, a momentum of 0.9, and a weight decay of 0.0005, for 300 epochs. The initial learning rate is 0.02 and is reduced by a factor of 10 after 150 epochs. The warm-up period is 10 epochs for CIFAR-10 and 30 epochs for CIFAR-100. A hedged sketch of this schedule follows the table.
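
The Experiment Setup row fixes the optimizer and learning-rate schedule but not the software framework. The following is a minimal sketch of that schedule, assuming PyTorch; the placeholder model, the linear form of the warm-up, and the use of the CIFAR-10 warm-up length are assumptions, since the paper reports only the hyperparameter values.

```python
# Minimal sketch of the reported training schedule, assuming PyTorch
# (the paper does not state its framework). The linear warm-up shape is an
# assumption; the paper gives only the warm-up length (10 epochs for
# CIFAR-10, 30 for CIFAR-100).
import torch

model = torch.nn.Linear(10, 10)  # placeholder for the actual network (e.g. a ResNet-18 backbone)

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.02,            # initial learning rate reported in the paper
    momentum=0.9,       # momentum reported in the paper
    weight_decay=5e-4,  # weight decay reported in the paper
)

EPOCHS = 300            # total training epochs
WARMUP_EPOCHS = 10      # 10 for CIFAR-10, 30 for CIFAR-100

def lr_factor(epoch):
    """Multiplicative factor applied to the base learning rate of 0.02."""
    if epoch < WARMUP_EPOCHS:
        return (epoch + 1) / WARMUP_EPOCHS  # assumed linear warm-up
    if epoch < 150:
        return 1.0                          # full rate until epoch 150
    return 0.1                              # reduced by a factor of 10 afterwards

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_factor)

for epoch in range(EPOCHS):
    # ... one pass over the training data with mini-batches of size 128 ...
    scheduler.step()
```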
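
For the Dataset Splits row, the paper describes a 5% labeled / 75% unlabeled / 20% test partition but no validation set. A minimal sketch of such a split follows, assuming a simple index-shuffling scheme with NumPy; the function name, random seed, and example dataset size are illustrative, not taken from the paper.

```python
# Sketch of the 5% / 75% / 20% labeled / unlabeled / test split described in
# the paper. The shuffling scheme and fixed seed are assumptions; the paper
# only says each dataset is randomly divided into three disjoint subsets.
import numpy as np

def split_indices(n_samples, seed=0):
    """Return disjoint index arrays for the labeled, unlabeled, and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)

    n_labeled = int(0.05 * n_samples)    # 5% labeled
    n_unlabeled = int(0.75 * n_samples)  # 75% unlabeled; the remainder (~20%) is the test set

    labeled = idx[:n_labeled]
    unlabeled = idx[n_labeled:n_labeled + n_unlabeled]
    test = idx[n_labeled + n_unlabeled:]
    return labeled, unlabeled, test

labeled_idx, unlabeled_idx, test_idx = split_indices(50_000)  # e.g. the CIFAR-10 training-set size
```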