Confidence Estimation Using Unlabeled Data

Authors: Chen Li, Xiaoling Hu, Chao Chen

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type: Experimental
LLM Response: "On both image classification and segmentation tasks, our method achieves state-of-the-art performances in confidence estimation. Furthermore, we show the benefit of the proposed method through a downstream active learning task." "The experimental results are shown in Tab. 2 (The results of full setting are included in Append. A.5). All the experiments are repeated for five times, and we report the means and standard deviations."
Researcher Affiliation: Academia
LLM Response: "Chen Li, Stony Brook University; Xiaoling Hu, Stony Brook University; Chao Chen, Stony Brook University. Email: Chen Li (li.chen.8@stonybrook.edu)."
Pseudocode: Yes
LLM Response: "Algorithm 1: Consistency ranking loss training. Input: Dataloaders for labeled and unlabeled training samples. Output: Trained deep model. Definition: uLoader and sLoader denote the dataloaders for unlabeled and labeled samples; Dcorr is the dictionary storing the count of correctness for each labeled sample; Dcon is the dictionary storing the count of consistency for each sample."
Open Source Code: No
LLM Response: "The paper does not contain any explicit statement about releasing source code for the methodology described, nor does it provide a link to a code repository."
Open Datasets: Yes
LLM Response: "We evaluate our method on benchmark datasets, CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009)." "Here we use a publicly available dataset to conduct the experiment: the international skin imaging collaboration (ISIC) lesion segmentation dataset 2017 (Codella et al., 2018)."
Dataset Splits: Yes
LLM Response: "The international skin imaging collaboration (ISIC) lesion segmentation dataset 2017 (Codella et al., 2018), which consists of 2000 training, 150 validation and 600 testing annotated images."
Hardware Specification: No
LLM Response: "The paper describes various training parameters and software settings (e.g., 'trained by SGD with a momentum of 0.9', 'mini-batch size of 192', 'Adam with a learning rate 0.0001'), but does not provide any specific hardware details such as GPU/CPU models or types of computing resources used."
Software Dependencies: No
LLM Response: "The paper mentions optimizers like SGD and Adam, and specific network architectures like PreAct-ResNet110, DenseNet-BC, and UNet with a ResNet34 backbone, but does not specify any software libraries or frameworks with their version numbers."
Experiment Setup: Yes
LLM Response: "All methods are trained by SGD with a momentum of 0.9 and a weight decay of 0.0001. We train our method for 300 epochs with a mini-batch size of 192, in which 64 are labeled, and use an initial learning rate of 0.1 with a reduction by a factor of 10 at 150 and 250 epochs. A standard data augmentation scheme for image classification is used, including random horizontal flip, 4-pixel padding and 32×32 random crop."
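The Algorithm 1 description above gives only the bookkeeping in prose: Dcorr counts how often each labeled sample is classified correctly across epochs, and Dcon counts how often each sample's prediction agrees with its prediction from the previous epoch. A minimal Python sketch of that counter maintenance follows; all function and variable names here are assumptions for illustration, not the authors' released code (the paper releases none).

```python
from collections import defaultdict

def update_counters(d_corr, d_con, preds, prev_preds, labels=None):
    """Update per-sample counters after one epoch.

    d_corr[i]: number of epochs in which labeled sample i was
               predicted correctly (only updated when labels given).
    d_con[i]:  number of epochs in which sample i's prediction
               matched its prediction from the previous epoch.
    """
    for i, p in preds.items():
        # Correctness count: only defined for labeled samples.
        if labels is not None and i in labels and p == labels[i]:
            d_corr[i] += 1
        # Consistency count: defined for labeled and unlabeled samples.
        if i in prev_preds and p == prev_preds[i]:
            d_con[i] += 1
    return d_corr, d_con

# Example: sample 0 is labeled (class 1), sample 1 is unlabeled.
d_corr, d_con = defaultdict(int), defaultdict(int)
preds = {0: 1, 1: 2}
prev_preds = {0: 1, 1: 0}
update_counters(d_corr, d_con, preds, prev_preds, labels={0: 1})
```

After this epoch, sample 0 gains one correctness count and one consistency count, while sample 1 gains neither (its prediction changed from 0 to 2).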
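The learning-rate schedule quoted in the Experiment Setup row (initial rate 0.1, divided by 10 at epochs 150 and 250) is a standard step decay. A generic, framework-free sketch of that schedule, written for illustration rather than taken from the paper:

```python
def step_lr(epoch, base_lr=0.1, milestones=(150, 250), gamma=0.1):
    """Step-decay schedule: multiply base_lr by gamma at each
    milestone epoch that has been reached."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Epochs 0-149 train at 0.1, epochs 150-249 at 0.01,
# and epochs 250-299 at 0.001.
schedule = [step_lr(e) for e in range(300)]
```

In a PyTorch training loop this would typically be handled by `torch.optim.lr_scheduler.MultiStepLR` with `milestones=[150, 250]` and `gamma=0.1`; the function above just makes the arithmetic explicit.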