Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Mining Multi-Label Samples from Single Positive Labels
Authors: Youngin Cho, Daejin Kim, Mohammad Azam Khan, Jaegul Choo
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on real image datasets verify the effectiveness and correctness of our method, even when compared to a model trained with fully annotated datasets. |
| Researcher Affiliation | Collaboration | Youngin Cho*1, Daejin Kim*1,2, Mohammad Azam Khan1, Jaegul Choo1 (1KAIST AI, 2NAVER WEBTOON AI) |
| Pseudocode | Yes | A summary of our S2M sampling algorithm is provided in Appendix B.1. |
| Open Source Code | Yes | We provide the codes in the supplemental material. |
| Open Datasets | Yes | Extensive experiments on real image datasets verify the effectiveness and correctness of our method, even when compared to a model trained with fully annotated datasets. |
| Dataset Splits | Yes | Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] See Appendix. |
| Hardware Specification | Yes | Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] See Appendix. |
| Software Dependencies | No | The paper mentions 'Detailed experimental settings, such as architectures and hyperparameters, are provided in Appendix D.' but does not explicitly list specific software dependencies with version numbers in the provided text. |
| Experiment Setup | Yes | Detailed experimental settings, such as architectures and hyperparameters, are provided in Appendix D. |
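The notice above states that the LLM-assigned labels were validated against a manually labeled dataset. A minimal sketch of that kind of validation step, computing per-variable agreement between LLM and manual labels, might look like the following (the function name, data layout, and example labels are illustrative assumptions, not the actual pipeline from [1]):

```python
from collections import Counter

def validation_accuracy(llm_labels, manual_labels):
    """Per-variable agreement between LLM-assigned and manually assigned labels.

    llm_labels / manual_labels: dicts mapping paper id -> {variable: label}.
    Returns a dict mapping each variable to its accuracy over the manual set.
    """
    correct = Counter()
    total = Counter()
    for paper_id, manual in manual_labels.items():
        llm = llm_labels.get(paper_id, {})
        for variable, label in manual.items():
            total[variable] += 1
            if llm.get(variable) == label:
                correct[variable] += 1
    return {v: correct[v] / total[v] for v in total}

# Hypothetical labels for two papers and two reproducibility variables.
llm = {"paperA": {"Open Source Code": "Yes", "Dataset Splits": "Yes"},
       "paperB": {"Open Source Code": "No", "Dataset Splits": "Yes"}}
manual = {"paperA": {"Open Source Code": "Yes", "Dataset Splits": "No"},
          "paperB": {"Open Source Code": "No", "Dataset Splits": "Yes"}}
print(validation_accuracy(llm, manual))  # {'Open Source Code': 1.0, 'Dataset Splits': 0.5}
```

Per-variable accuracy (rather than one pooled score) matters here because some variables, such as Software Dependencies, are typically harder for an automated classifier than binary checklist items like Open Source Code.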