Semi-Supervised Learning with Variational Bayesian Inference and Maximum Uncertainty Regularization

Authors: Kien Do, Truyen Tran, Svetha Venkatesh (pp. 7236-7244)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments show clear improvements in the classification errors of various CR-based methods when they are combined with VBI, MUR, or both.
Researcher Affiliation | Academia | Applied Artificial Intelligence Institute (A2I2), Deakin University, Geelong, Australia. {k.do, truyen.tran, svetha.venkatesh}@deakin.edu.au
Pseudocode | No | No explicit pseudocode or algorithm blocks are provided; the methodology is described through mathematical equations and textual explanations.
Open Source Code | No | The paper does not state that source code is released, nor does it link to a code repository for the described methodology.
Open Datasets | Yes | We evaluate our approaches on three standard benchmark datasets: SVHN, CIFAR-10 and CIFAR-100.
Dataset Splits | No | The paper mentions labeled and unlabeled training sets (D_l, D_u) and varies the number of labeled samples (e.g., 250, 500, 1000 on SVHN; 1000, 2000, 4000, 10000 on CIFAR-10/100). However, the main text gives no explicit train/validation/test split percentages or absolute counts, and it does not cite standard splits that would supply this detail. (A hedged sketch of such a labeled/unlabeled split follows this table.)
Hardware Specification | No | The main text does not specify the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not list software dependencies with version numbers (e.g., PyTorch, TensorFlow, or a specific Python version) used in the experiments.
Experiment Setup | Yes | The paper reports concrete setup details such as the coefficient of D_KL(q_φ(w) || p(w)) in VBI, the radius r in MUR, and the learning rate α and number of steps s for the iterative approximation of the maximum-uncertainty point x*. For example: 'Fig. 2c shows the error of MT+MUR on CIFAR-10 with 1000 labels as a function of r (r ∈ {4, 7, 10, 20, 40}).' and 'We try both projected gradient ascent (PGA) and vanilla gradient ascent (GA) updates with the learning rate α varying in {0.1, 1.0, 10.0} and the number of steps s varying in {2, 5, 8}.' (Hedged sketches of the KL regularizer and the PGA update follow this table.)
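
The labeled/unlabeled split discussed in the Dataset Splits row can be made concrete with a short sketch. This is a minimal illustration, not the authors' code: the class-balanced selection, the function name labeled_unlabeled_split, and the fixed seed are all assumptions.

```python
import numpy as np

def labeled_unlabeled_split(labels, n_labeled, n_classes=10, seed=0):
    """Return (labeled_idx, unlabeled_idx) with a class-balanced labeled
    subset, a common SSL-benchmark convention (assumed here; the paper
    does not state its balancing scheme)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    per_class = n_labeled // n_classes
    labeled_idx = []
    for c in range(n_classes):
        class_idx = np.flatnonzero(labels == c)
        labeled_idx.extend(rng.choice(class_idx, per_class, replace=False))
    labeled_idx = np.asarray(labeled_idx)
    unlabeled_idx = np.setdiff1d(np.arange(len(labels)), labeled_idx)
    return labeled_idx, unlabeled_idx

# Example: 1000 labeled CIFAR-10 samples (100 per class), rest unlabeled.
# labeled, unlabeled = labeled_unlabeled_split(train_labels, n_labeled=1000)
```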
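
The D_KL(q_φ(w) || p(w)) term quoted under Experiment Setup has a closed form under the common mean-field assumption. The sketch below assumes a diagonal Gaussian posterior q_φ(w) and a standard normal prior p(w); the paper's exact choices may differ.

```python
import torch

def gaussian_kl(mu, log_var):
    """Closed-form KL(N(mu, exp(log_var)) || N(0, 1)), summed over weights."""
    return 0.5 * torch.sum(log_var.exp() + mu.pow(2) - 1.0 - log_var)

# The KL coefficient mirrors the tuned hyperparameter the paper mentions;
# the name kl_coeff and the additive form of the loss are assumptions.
# loss = task_loss + kl_coeff * gaussian_kl(w_mu, w_log_var)
```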
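
Similarly, the quoted PGA/GA search for the maximum-uncertainty point x* can be sketched as below. This is one plausible reading of the description: the predictive-entropy objective, the L2-ball projection, and the name mur_pga are assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def mur_pga(model, x, r=10.0, alpha=1.0, steps=5):
    """Approximate x* = argmax entropy(model(x')) over the L2 ball of
    radius r around x, via projected gradient ascent (PGA)."""
    x_star = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        probs = F.softmax(model(x_star), dim=1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
        grad, = torch.autograd.grad(entropy, x_star)
        with torch.no_grad():
            x_star = x_star + alpha * grad             # vanilla GA step
            delta = x_star - x
            norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12)
            scale = (r / norm).clamp(max=1.0)          # project into the r-ball
            x_star = x + delta * scale.view(-1, *([1] * (x.dim() - 1)))
        x_star.requires_grad_(True)
    return x_star.detach()

# Dropping the projection (the `scale` step) gives the vanilla GA variant;
# the paper sweeps alpha over {0.1, 1.0, 10.0} and steps over {2, 5, 8}.
```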