Learning With Incomplete Labels

Authors: Yingming Li, Zenglin Xu, Zhongfei Zhang

AAAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive evaluations on three benchmark datasets demonstrate that ICVL and ICVL-OD stand out with superior performance in comparison with the competing methods.
Researcher Affiliation | Academia | College of Information Science & Electronic Engineering, Zhejiang University, China; School of Computer Science and Engineering, University of Electronic Science and Technology of China
Pseudocode | Yes | Algorithm 1: ICVL Algorithm
Open Source Code | No | The paper does not provide an explicit statement about releasing the source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | All datasets are obtained from http://mulan.sourceforge.net/datasets-mlc.html.
Dataset Splits | Yes | On the Enron and Birds datasets, we follow the experimental setup used in Mulan. Since there is no fixed split in the Bookmarks dataset in Mulan, we use a fixed training set of 80% of the data, and evaluate the performance of our predictions on the fixed test set of 20% of the data (see the sketch after this table).
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper describes algorithms and optimization methods but does not list specific software dependencies with version numbers (e.g., programming languages, libraries, or solvers).
Experiment Setup | No | The paper mentions regularization parameters and a label dropout probability but does not provide specific values for these or other hyperparameters (e.g., learning rate, batch size, epochs) or detailed training configurations in the main text.
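
Since the dataset source and splits are the most concretely specified parts of the setup, here is a minimal sketch of how the described data handling could be reproduced. It assumes the Mulan ARFF files have been downloaded from http://mulan.sourceforge.net/datasets-mlc.html and uses scikit-multilearn's `load_from_arff`; the file names and the random seed are illustrative assumptions (the paper does not state a seed for the Bookmarks split), and this is not the authors' code.

```python
# Hedged sketch of the dataset handling described in the table above.
# Assumes the Mulan ARFF files from http://mulan.sourceforge.net/datasets-mlc.html
# sit in the working directory; file names and the seed are placeholders.
from skmultilearn.dataset import load_from_arff
from sklearn.model_selection import train_test_split

# Enron and Birds ship with fixed Mulan train/test splits
# (Enron has 53 labels; Birds would be loaded the same way with label_count=19).
X_train, Y_train = load_from_arff("enron-train.arff", label_count=53)
X_test, Y_test = load_from_arff("enron-test.arff", label_count=53)

# Bookmarks (208 labels) has no fixed split in Mulan, so the paper fixes its
# own 80%/20% split once; random_state=0 is an assumed placeholder seed.
X, Y = load_from_arff("bookmarks.arff", label_count=208)
X_tr, X_te, Y_tr, Y_te = train_test_split(
    X, Y, train_size=0.8, test_size=0.2, random_state=0
)
```

Fixing the split with a recorded seed (rather than re-sampling per run) matches the paper's description of a "fixed" 80/20 Bookmarks split and keeps results comparable across methods.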