Confidence-Rated Discriminative Partial Label Learning

Authors: Cai-Zhi Tang, Min-Ling Zhang

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on artificial as well as real-world partial label data sets validate the effectiveness of the confidence-rated discriminative modeling.
Researcher Affiliation | Academia | ¹School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; ²Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, China; ³Collaborative Innovation Center of Wireless Communications Technology, China. 220141515@seu.edu.cn, zhangml@seu.edu.cn (corresponding author)
Pseudocode | Yes | Table 1: The pseudo-code of CORD.
Open Source Code | Yes | Code package for CORD is publicly-available at: http://cse.seu.edu.cn/PersonalPage/zhangml/Resources.htm#aaai17
Open Datasets | Yes | Two series of comparative studies are conducted among the comparing algorithms, with one series on controlled UCI data sets (Bache and Lichman 2013) and another series on real-world partial label data sets. These data sets are publicly-available at: http://cse.seu.edu.cn/PersonalPage/zhangml/Resources.htm#partial_data
Dataset Splits | Yes | Ten-fold cross-validation is performed on each data set, and the mean predictive accuracies (as well as the standard deviations) of all comparing algorithms are reported in the rest of this section. (A minimal cross-validation sketch follows the table.)
Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, memory) are given for running the experiments; the paper does not describe its computational environment.
Software Dependencies | No | The paper states that a 'maximum entropy model ... is employed to serve as the base classifier', but it names no software libraries, frameworks, or version numbers used for the implementation or experiments.
Experiment Setup | Yes | In this paper, the confidence updating parameter β is set to be 0.5 and the maximum boosting rounds T is set to be 10. Furthermore, maximum entropy model (Jin and Ghahramani 2003; Della Pietra, Della Pietra, and Lafferty 1997) is employed to serve as the base classifier which is trained with gradient-based optimization (Table 1, Step 4). (A hedged sketch of this training loop follows the table.)
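
For concreteness, the ten-fold protocol quoted under Dataset Splits can be sketched as follows. This is a minimal illustration, not the authors' released package: `make_model` is a hypothetical factory for any partial-label learner such as CORD, and scikit-learn's `KFold` is assumed in place of whatever splitting routine the original code uses.

```python
# Hedged sketch of the ten-fold cross-validation protocol quoted above.
# Assumptions: features X, ground-truth labels y_true (used only for
# scoring, never for training), and an (n, q) 0/1 candidate-label mask S.
import numpy as np
from sklearn.model_selection import KFold

def ten_fold_accuracy(X, y_true, S, make_model, seed=0):
    kf = KFold(n_splits=10, shuffle=True, random_state=seed)
    accs = []
    for train_idx, test_idx in kf.split(X):
        model = make_model()
        # Training sees only the candidate label sets, never y_true.
        model.fit(X[train_idx], S[train_idx])
        pred = model.predict(X[test_idx])
        accs.append(np.mean(pred == y_true[test_idx]))
    # Report mean predictive accuracy and standard deviation, as in the paper.
    return float(np.mean(accs)), float(np.std(accs))
```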
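One way to read the Experiment Setup row in code: a maximum entropy model (multinomial logistic regression) as the base classifier, T = 10 rounds, and a β = 0.5 update that mixes the previous labeling confidence with the model's posterior restricted to each instance's candidate set. The sketch below is an illustrative reconstruction under those assumptions, not the pseudo-code of Table 1; the hard-disambiguation fit, the name `cord_like_train`, and the renormalization details are ours.

```python
# Illustrative reconstruction, NOT the paper's Table 1: maximum entropy
# base classifier (multinomial logistic regression), T = 10 rounds, and a
# beta = 0.5 confidence update over each instance's candidate set S.
import numpy as np
from sklearn.linear_model import LogisticRegression

def cord_like_train(X, S, T=10, beta=0.5):
    n, q = S.shape
    # Initialize confidences uniformly over each candidate set.
    F = S / S.sum(axis=1, keepdims=True)
    model = None
    for _ in range(T):
        # Fit the base classifier on the current hard disambiguation
        # (the actual CORD trains a confidence-weighted maximum entropy
        # model with gradient-based optimization).
        y_hat = F.argmax(axis=1)
        model = LogisticRegression(max_iter=1000).fit(X, y_hat)
        # Model posterior, expanded to all q labels, masked to candidates.
        P = np.zeros((n, q))
        P[:, model.classes_] = model.predict_proba(X)
        P *= S
        P /= np.clip(P.sum(axis=1, keepdims=True), 1e-12, None)
        # Mix the old confidence with the masked posterior via beta.
        F = beta * F + (1.0 - beta) * P
    return model, F
```

Keeping the update inside the candidate mask ensures confidence never leaks onto labels the partial-label annotation has already ruled out.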