Multifaceted Uncertainty Estimation for Label-Efficient Deep Learning

Authors: Weishi Shi, Xujiang Zhao, Feng Chen, Qi Yu

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments conducted over both synthetic and real data, together with comparisons against competitive AL methods, demonstrate the effectiveness of the proposed ADL model.
Researcher Affiliation | Academia | Rochester Institute of Technology ({ws7586, qi.yu}@rit.edu) and University of Texas at Dallas ({xujiang.zhao, feng.chen}@utdallas.edu).
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an unambiguous statement about releasing source code or a direct link to a code repository.
Open Datasets | Yes | The real-world experiments are conducted on three datasets, MNIST, notMNIST, and CIFAR-10, all of which have ten classes (see the loading sketch after the table).
Dataset Splits | No | The paper does not explicitly describe a validation dataset split. It mentions 'leaving 2-5 classes out for initial training' as part of the AL scenario setup (sketched after the table), but not a distinct validation set.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper mentions using a '3-layer MLP with tanh' and 'LeNet with ReLU' but does not specify software dependencies with version numbers (e.g., Python, PyTorch versions).
Experiment Setup | Yes | For synthetic data, we adopt a 3-layer MLP with tanh for activation. For real data, we use LeNet with ReLU for activation. ... d is a fixed decay rate (set to 1/100K in our experiments). (A hedged sketch of both backbones follows the table.)
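As a point of reference for the Open Datasets row, a minimal loading sketch for the two datasets bundled with torchvision is below. The paper does not name torchvision as a dependency, so this is an assumption; notMNIST has no standard loader and must be obtained separately.

```python
# Minimal sketch, assuming torchvision (not stated in the paper) to load the
# two datasets it bundles; notMNIST must be downloaded and wrapped separately.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
mnist = datasets.MNIST(root="data", train=True, download=True, transform=to_tensor)
cifar10 = datasets.CIFAR10(root="data", train=True, download=True, transform=to_tensor)
# notMNIST: no torchvision loader exists; a custom Dataset over the raw image
# folders (e.g., via datasets.ImageFolder) would be needed.
```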
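The 'leaving 2-5 classes out for initial training' scenario from the Dataset Splits row can be made concrete with a small helper. This is a sketch under assumed conventions; the function name, defaults, and pool terminology below are hypothetical, not taken from the paper.

```python
# Hedged sketch of the AL scenario described in the paper: hold 2-5 classes
# out of the initial labeled pool so the learner starts without having seen
# them. Names and defaults here are hypothetical, not the authors'.
import numpy as np

def initial_al_split(labels, n_holdout_classes=3, seed=0):
    """Split example indices into an initial labeled pool and an unlabeled
    pool, leaving `n_holdout_classes` classes out of the initial pool."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    held_out = rng.choice(classes, size=n_holdout_classes, replace=False)
    mask = np.isin(labels, held_out)
    unlabeled_idx = np.flatnonzero(mask)   # examples from the unseen classes
    labeled_idx = np.flatnonzero(~mask)    # initial training pool
    return labeled_idx, unlabeled_idx, held_out

# Usage on a toy 10-class label vector:
labels = np.repeat(np.arange(10), 100)
labeled_idx, unlabeled_idx, held_out = initial_al_split(labels, n_holdout_classes=3)
```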
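The quoted setup names the two backbones and a fixed decay rate d = 1/100K. A minimal PyTorch sketch of both is below; the hidden widths, the exact LeNet variant, and the input sizes are assumptions, since the excerpt does not specify them.

```python
# Hedged sketch of the two backbones named in the paper: a 3-layer MLP with
# tanh activations for synthetic data, and a LeNet-style CNN with ReLU for
# the real image datasets. Layer sizes below are assumptions.
import torch
import torch.nn as nn

DECAY_RATE = 1.0 / 100_000  # fixed decay rate d quoted in the paper (1/100K)

class MLP3(nn.Module):
    """3-layer MLP with tanh activations (hidden width of 64 is assumed)."""
    def __init__(self, in_dim=2, hidden=64, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

class LeNet(nn.Module):
    """LeNet-5-style CNN with ReLU activations for 1x28x28 inputs (MNIST-like);
    CIFAR-10 would need in_channels=3 and an adjusted flatten size."""
    def __init__(self, in_channels=1, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 6, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```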