Ambiguity-Induced Contrastive Learning for Instance-Dependent Partial Label Learning
Authors: Shi-Yu Xia, Jiaqi Lv, Ning Xu, Xin Geng
IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on benchmark datasets demonstrate its substantial improvements over state-of-the-art methods for learning from the instance-dependent partially labeled data. |
| Researcher Affiliation | Academia | Shi-Yu Xia (1,2), Jiaqi Lv (3), Ning Xu (1,2) and Xin Geng (1,2); (1) School of Computer Science and Engineering, Southeast University, Nanjing 211189, China; (2) Key Laboratory of Computer Network and Information Integration (Ministry of Education), Southeast University, Nanjing 211189, China; (3) RIKEN Center for Advanced Intelligence Project |
| Pseudocode | Yes | Algorithm 1 Pseudo code of ABLE (one epoch) |
| Open Source Code | Yes | Source code is available at https://github.com/Alpha_Xia/ABLE. |
| Open Datasets | Yes | We adopt four widely used benchmark datasets including MNIST [LeCun et al., 1998], Fashion-MNIST [Xiao et al., 2017], Kuzushiji-MNIST [Clanuwat et al., 2018], and CIFAR-10 (see the dataset-loading sketch after this table). |
| Dataset Splits | No | The paper mentions using "training dataset" and "test set" and a "validation" process, but does not provide explicit details on the training/validation/test dataset splits (e.g., percentages or sample counts) for reproducibility. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments are mentioned in the paper. |
| Software Dependencies | No | The paper mentions implementation with "PyTorch" but does not specify its version number or other software dependencies with specific versions. |
| Experiment Setup | Yes | The trade-off parameter is set as α = 1.0. The temperature parameter is set as τ = 0.1. The mini-batch size, the number of training epochs, the initial learning rate and the weight decay are set to 64, 500, 0.01 and 1e-3, respectively. We adopt cosine learning rate scheduling. The optimizer is stochastic gradient descent (SGD) with momentum 0.9. (A hedged configuration sketch follows this table.) |
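
For context on the Open Datasets row, the following is a minimal sketch of how the four named benchmarks could be loaded with torchvision. The `./data` root, the `ToTensor` transform, and the use of torchvision's default train/test splits are assumptions; as noted in the Dataset Splits row, the paper does not report explicit split details.

```python
# Hedged sketch: loading the four benchmarks via torchvision (assumed setup, not from the paper).
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()  # assumed preprocessing; the paper's transforms are not specified

train_sets = {
    "MNIST": datasets.MNIST("./data", train=True, download=True, transform=to_tensor),
    "Fashion-MNIST": datasets.FashionMNIST("./data", train=True, download=True, transform=to_tensor),
    "Kuzushiji-MNIST": datasets.KMNIST("./data", train=True, download=True, transform=to_tensor),
    "CIFAR-10": datasets.CIFAR10("./data", train=True, download=True, transform=to_tensor),
}
test_sets = {
    "MNIST": datasets.MNIST("./data", train=False, download=True, transform=to_tensor),
    "Fashion-MNIST": datasets.FashionMNIST("./data", train=False, download=True, transform=to_tensor),
    "Kuzushiji-MNIST": datasets.KMNIST("./data", train=False, download=True, transform=to_tensor),
    "CIFAR-10": datasets.CIFAR10("./data", train=False, download=True, transform=to_tensor),
}
```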
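
The Experiment Setup row gives enough detail to reconstruct the optimizer and schedule. Below is a minimal, hedged PyTorch sketch of that configuration: only the numeric settings come from the paper, while the backbone, the dummy data, and the plain cross-entropy objective are placeholders (the actual ABLE loss operates on candidate label sets and adds an ambiguity-induced contrastive term weighted by α with temperature τ).

```python
# Hedged sketch of the reported training configuration; the model and loss are placeholders.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

ALPHA = 1.0       # trade-off parameter alpha (reported in the paper)
TAU = 0.1         # contrastive temperature tau (reported; unused by this placeholder loss)
BATCH_SIZE = 64
EPOCHS = 500

# Placeholder backbone and dummy data; in practice one of the benchmark datasets would be used.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
dummy = TensorDataset(torch.randn(256, 1, 28, 28), torch.randint(0, 10, (256,)))
loader = DataLoader(dummy, batch_size=BATCH_SIZE, shuffle=True)

# Reported settings: SGD with momentum 0.9, lr 0.01, weight decay 1e-3, cosine schedule.
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-3)
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=EPOCHS)

for epoch in range(EPOCHS):
    for images, targets in loader:
        optimizer.zero_grad()
        logits = model(images)
        # Placeholder objective: ABLE's actual loss uses candidate label sets rather than
        # ground-truth targets and adds a contrastive term weighted by ALPHA.
        loss = nn.functional.cross_entropy(logits, targets)
        loss.backward()
        optimizer.step()
    scheduler.step()
```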