Unbiased Risk Estimator to Multi-Labeled Complementary Label Learning
Authors: Yi Gao, Miao Xu, Min-Ling Zhang
IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results demonstrate the effectiveness of the proposed approach on various datasets. |
| Researcher Affiliation | Academia | 1School of Cyber Science and Engineering, Southeast University, China 2School of Computer Science and Engineering, Southeast University, China 3University of Queensland, Australia 4Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, China |
| Pseudocode | Yes | Algorithm 1 MLCLL with the GDF loss |
| Open Source Code | Yes | We use PyTorch [Paszke et al., 2019] and NVIDIA TITAN RTX to implement our experiments, where the code is available at https://github.com/GaoYi439/GDF. |
| Open Datasets | Yes | We use eight widely-used MLL datasets in our experiments, where we adopt two preprocessing ways to process the datasets to verify the performance of the proposed approach. (Publicly available at https://mulan.sourceforge.net/datasetsmlc.html.) |
| Dataset Splits | Yes | Ten-fold cross-validation is used to evaluate the performance of all approaches. |
| Hardware Specification | Yes | We use PyTorch [Paszke et al., 2019] and NVIDIA TITAN RTX to implement our experiments |
| Software Dependencies | No | No, the paper mentions 'PyTorch' but does not specify a version number for it or any other software dependencies crucial for replication. |
| Experiment Setup | Yes | We set batch-size and training epoch as 256 and 200 respectively. Weight decay is set as 10^-3 and the learning rate is selected from {10^-1, 10^-2, 10^-3}, where the learning rate is multiplied by 0.1 at 100 and 150 epochs [Wu et al., 2018]. |
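
The learning-rate schedule quoted in the Experiment Setup row (decay by 0.1 at epochs 100 and 150 over 200 epochs) can be sketched as follows. The milestone epochs and decay factor come from the quoted text; the function name and pure-Python form are illustrative — in PyTorch this would typically be expressed with `torch.optim.lr_scheduler.MultiStepLR`.

```python
def lr_at_epoch(base_lr: float, epoch: int,
                milestones=(100, 150), gamma=0.1) -> float:
    """Learning rate in effect at a given (0-indexed) epoch.

    Mirrors a step schedule: the base LR is multiplied by `gamma`
    once for each milestone the epoch has passed.
    """
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

# Example: base LR 10^-1, the largest value in the paper's search grid.
schedule = [lr_at_epoch(0.1, e) for e in range(200)]
```

Under this schedule the LR is 0.1 for epochs 0-99, 0.01 for epochs 100-149, and 0.001 for the final 50 epochs.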