On the Calibration of Multiclass Classification with Rejection

Authors: Chenri Ni, Nontawat Charoenphakdee, Junya Honda, Masashi Sugiyama

NeurIPS 2019

Reproducibility Variable — Result — LLM Response
Research Type — Experimental — "Finally, we conduct experiments to validate the relevance of our theoretical findings. In this section, we report the results of two experiments based on synthetic and benchmark datasets."
Researcher Affiliation — Academia — (1) The University of Tokyo, Japan; (2) RIKEN Center for Advanced Intelligence Project, Japan
Pseudocode — No — The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code — No — The paper does not provide any statement about releasing open-source code or a link to a code repository.
Open Datasets — Yes — M. Lichman et al. UCI machine learning repository, 2013. URL: http://archive.ics.uci.edu/ml
Dataset Splits — No — The paper mentions 'training data' and 'test data' but does not specify explicit train/validation/test splits as percentages or absolute sample counts. It states that the 'training data size is 10,000 per class' without defining how the splits were constructed.
Hardware Specification — No — The paper does not provide specific hardware details such as GPU or CPU models, processor types, or memory amounts used for running experiments.
Software Dependencies — No — The paper notes that 'AMSGRAD [21] was used for optimization' but does not give version numbers for any software dependencies or libraries.
Experiment Setup — Yes — "For all methods, we used one-hidden-layer neural networks with the rectified linear units (ReLU) as activation functions, where the number of hidden units is 3 for synthetic datasets, and 50 for benchmark datasets. We added weight decay with candidates {10^-7, 10^-4, 10^-1}."
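For concreteness, the architecture quoted above (a one-hidden-layer ReLU network with 3 hidden units for synthetic data) can be sketched as below. This is an illustrative NumPy sketch only, since the paper's actual code is not released; the input/output dimensions and random initialization are assumptions, not values from the paper.

```python
import numpy as np

def one_hidden_layer_forward(x, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer ReLU network.

    x:  (batch, d_in) inputs
    W1: (d_in, d_hidden), b1: (d_hidden,)
    W2: (d_hidden, d_out), b2: (d_out,)
    """
    h = np.maximum(x @ W1 + b1, 0.0)  # ReLU activation
    return h @ W2 + b2                # class scores (logits)

def l2_penalty(weights, lam):
    """Weight-decay term lam * sum ||W||^2 added to the training loss;
    lam would be selected from the paper's candidates {1e-7, 1e-4, 1e-1}."""
    return lam * sum(np.sum(W ** 2) for W in weights)

# Assumed shapes for illustration: 2-D inputs, 3 classes,
# and 3 hidden units as reported for the synthetic datasets.
rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 2, 3, 3
W1 = rng.normal(size=(d_in, d_hidden)); b1 = np.zeros(d_hidden)
W2 = rng.normal(size=(d_hidden, d_out)); b2 = np.zeros(d_out)
logits = one_hidden_layer_forward(rng.normal(size=(5, d_in)), W1, b1, W2, b2)
```

For the benchmark datasets, the same sketch would use 50 hidden units; the weight-decay coefficient would be tuned over the three candidate values quoted above.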