Credal Self-Supervised Learning

Authors: Julian Lienen, Eyke Hüllermeier

NeurIPS 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | In an exhaustive empirical evaluation, we compare our methodology to state-of-the-art self-supervision approaches, showing competitive to superior performance, especially in low-label scenarios incorporating a high degree of uncertainty. |
| Researcher Affiliation | Academia | Julian Lienen, Department of Computer Science, Paderborn University, Paderborn 33098, Germany (julian.lienen@upb.de); Eyke Hüllermeier, Institute of Informatics, University of Munich (LMU), Munich 80538, Germany (eyke@ifi.lmu.de) |
| Pseudocode | Yes | The pseudo-code of the complete algorithm can be found in the appendix. |
| Open Source Code | No | The paper does not include an explicit statement about releasing source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | More precisely, we follow the semi-supervised learning evaluation setup as described in [41] and perform experiments on CIFAR-10/-100 [24], SVHN [34], and STL-10 [8] with varying fractions of labeled instances sampled from the original data sets, also considering label-scarce settings with only a few labels per class. |
| Dataset Splits | No | The paper performs experiments on standard datasets such as CIFAR-10 and SVHN using "varying fractions of labeled instances" and refers to an external paper for the semi-supervised learning evaluation setup, but it does not explicitly state the percentages or sample counts for the training, validation, or test splits. |
| Hardware Specification | No | The paper acknowledges "computing time provided by the Paderborn Center for Parallel Computing (PC2)" but does not specify hardware details such as GPU models, CPU types, or memory used for the experiments. |
| Software Dependencies | No | The paper mentions software components such as CTAugment, SGD with Nesterov momentum, and cosine annealing, but does not provide version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | To guarantee a fair comparison to existing methods related to FixMatch, we keep the hyperparameters the same as in the original experiments. ... we set the learning rate to η · cos(7πk / 16K), where η is the initial learning rate, k the current training step, and K the total number of steps (2^20 by default). |
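The cosine learning-rate schedule quoted above (the FixMatch-style decay η · cos(7πk / 16K)) can be sketched as a short Python function. This is a minimal illustration of the quoted formula, not code from the paper; the default K = 2^20 follows the stated default step count.

```python
import math


def cosine_lr(eta: float, k: int, total_steps: int = 2**20) -> float:
    """Cosine-decayed learning rate at step k, as in the quoted schedule.

    eta: initial learning rate (η)
    k: current training step
    total_steps: total number of training steps (K), 2^20 by default
    """
    return eta * math.cos(7 * math.pi * k / (16 * total_steps))


# At step 0 the schedule returns the initial learning rate;
# it then decays smoothly toward eta * cos(7π/16) ≈ 0.195 * eta at step K.
```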