Pointwise Binary Classification with Pairwise Confidence Comparisons

Authors: Lei Feng, Senlin Shu, Nan Lu, Bo Han, Miao Xu, Gang Niu, Bo An, Masashi Sugiyama

ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we demonstrate by experiments the effectiveness of our methods, which suggests Pcomp is a valuable and practically useful type of pairwise supervision besides the pairwise label.
Researcher Affiliation | Academia | 1 College of Computer Science, Chongqing University, China; 2 College of Computer and Information Science, Southwest University, China; 3 The University of Tokyo, Japan; 4 RIKEN Center for Advanced Intelligence Project, Japan; 5 Department of Computer Science, Hong Kong Baptist University, China; 6 School of Information Technology and Electrical Engineering, The University of Queensland, Australia; 7 School of Computer Science and Engineering, Nanyang Technological University, Singapore.
Pseudocode | No | No pseudocode or algorithm blocks are present in the paper.
Open Source Code | Yes | Code is available in the supplementary materials.
Open Datasets | Yes | We use four popular benchmark datasets, including MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al., 2017), Kuzushiji-MNIST (Clanuwat et al., 2018), and CIFAR-10 (Krizhevsky et al., 2009).
Dataset Splits | No | The paper uses standard benchmark datasets but does not explicitly state train/validation/test split percentages or sample counts in the main text. It notes that hyper-parameter settings appear in Appendix H, but there is no guarantee that split details are given there.
Hardware Specification | Yes | All the experiments are conducted on GeForce GTX 1080 Ti GPUs.
Software Dependencies | No | The paper states "We implement our methods using PyTorch (Paszke et al., 2019)" but does not provide a version number for PyTorch or any other software dependency.
Experiment Setup | Yes | We implement our methods using PyTorch (Paszke et al., 2019) and use the Adam (Kingma & Ba, 2015) optimizer with the mini-batch size set to 256 and the number of training epochs set to 200 for the four large-scale datasets and 100 for the other four datasets. (Hedged sketches of the data loading and training configuration follow this table.)
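
The four benchmark datasets quoted above are all distributed through torchvision's standard dataset classes. Below is a minimal loading sketch, not taken from the paper: the "data" root directory, the plain ToTensor transform, and the dictionary keys are illustrative assumptions.

```python
# Minimal sketch (not from the paper) of loading the four quoted benchmark
# datasets with torchvision. Root path and transform are assumptions.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

train_sets = {
    "mnist": datasets.MNIST("data", train=True, download=True, transform=to_tensor),
    "fashion-mnist": datasets.FashionMNIST("data", train=True, download=True, transform=to_tensor),
    "kuzushiji-mnist": datasets.KMNIST("data", train=True, download=True, transform=to_tensor),
    "cifar-10": datasets.CIFAR10("data", train=True, download=True, transform=to_tensor),
}

# Mini-batch size 256 matches the experiment setup quoted in the table.
loaders = {
    name: DataLoader(ds, batch_size=256, shuffle=True)
    for name, ds in train_sets.items()
}
```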
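
And a minimal sketch of the quoted training configuration: PyTorch with the Adam optimizer, mini-batch size 256, and 200 (or 100) epochs. The model architecture, learning rate, loss, and even/odd label binarization are placeholders; the paper's Pcomp risk estimators and the hyper-parameters in Appendix H are not reproduced here.

```python
# Sketch of the quoted setup: PyTorch + Adam, batch size 256, 200 epochs.
# Everything not quoted in the table (model, lr, loss) is an assumption.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_loader = DataLoader(
    datasets.MNIST("data", train=True, download=True, transform=transforms.ToTensor()),
    batch_size=256,  # mini-batch size quoted in the experiment setup
    shuffle=True,
)

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 300), nn.ReLU(), nn.Linear(300, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr is an assumed value
criterion = nn.BCEWithLogitsLoss()  # placeholder; the paper derives Pcomp-specific losses

for epoch in range(200):  # 200 epochs for the large-scale datasets, 100 for the others
    for x, y in train_loader:
        optimizer.zero_grad()
        logits = model(x).squeeze(1)
        target = (y % 2).float()  # illustrative binarization, not the paper's construction
        loss = criterion(logits, target)
        loss.backward()
        optimizer.step()
```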