Risk Minimization in the Presence of Label Noise

Authors: Wei Gao, Lu Wang, Yu-Feng Li, Zhi-Hua Zhou

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The effectiveness of the LICS algorithm is justified both theoretically and empirically. ... We evaluate the performance of the LICS algorithm on six UCI datasets: australian, breast, diabetes, german, heart and splice. ... The performance is evaluated by five trials of 5-fold cross validation, and the test accuracies are obtained by averaging over these 25 runs, as summarized in Table 1.
Researcher Affiliation | Academia | Wei Gao, Lu Wang, Yu-Feng Li, Zhi-Hua Zhou; National Key Laboratory for Novel Software Technology, Nanjing University; Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing 210023, China; {gaow, wangl, liyf, zhouzh}@lamda.nju.edu.cn
Pseudocode | Yes | Algorithm 1: Median-of-means estimator of label mean. Algorithm 2: The Labeled Instance Centroid Smooth (LICS) algorithm. (A hedged median-of-means sketch follows the table.)
Open Source Code | No | The paper does not provide any concrete access information (e.g., a specific repository link or an explicit code release statement) for the source code of the methodology described.
Open Datasets | Yes | We evaluate the performance of the LICS algorithm on six UCI datasets: australian, breast, diabetes, german, heart and splice. The datasets are available in LIBSVM format at http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ (a data-loading sketch follows the table.)
Dataset Splits | Yes | The performance is evaluated by five trials of 5-fold cross validation, and the test accuracies are obtained by averaging over these 25 runs, as summarized in Table 1. (An evaluation-protocol sketch follows the table.)
Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers needed to replicate the experiments.
Experiment Setup | Yes | In the proposed LICS algorithm, five-fold cross-validation is executed to select the regularization parameter nλ ∈ {2^-12, 2^-11, ..., 2^2} (n is the size of the training data), the approximation parameter nβ ∈ {2^-12, ..., 2^12}, and the noise rate η ∈ {0.1, 0.2, 0.3, 0.4}, and the group number is set to k = 3 in Algorithm 1. The parameters in all compared methods are chosen by cross-validation in a similar manner. (A parameter-selection sketch follows the table.)
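
The Pseudocode row names a median-of-means estimator of the label mean (Algorithm 1). The snippet below is a minimal generic sketch of a median-of-means estimator, not a transcription of the paper's Algorithm 1; the group number k = 3 matches the value reported in the Experiment Setup row, while the random partitioning and scalar-mean formulation are assumptions made here for illustration.

```python
import numpy as np

def median_of_means(values, k=3, seed=None):
    """Generic median-of-means estimator (sketch, not the paper's Algorithm 1).

    Split the sample into k disjoint groups, average each group, and return
    the median of the k group means. k = 3 mirrors the group number quoted
    in the Experiment Setup row.
    """
    values = np.asarray(values, dtype=float)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(values))        # random partition of the indices
    groups = np.array_split(idx, k)           # k (nearly) equal-sized groups
    group_means = [values[g].mean() for g in groups]
    return float(np.median(group_means))

# Example: estimate the mean of noisy {-1, +1} labels.
# labels = np.array([...]); label_mean = median_of_means(labels, k=3)
```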
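
The Open Datasets row points to the LIBSVM dataset collection. Below is a minimal loading sketch assuming the files have been downloaded locally in LIBSVM (svmlight) format; the file name is a placeholder, the exact names on the LIBSVM page may differ, and the feature standardization is an added preprocessing assumption rather than something the paper reports.

```python
from sklearn.datasets import load_svmlight_file
from sklearn.preprocessing import scale

# "australian.txt" is a placeholder path: download the six datasets
# (australian, breast, diabetes, german, heart, splice) from the LIBSVM
# page cited in the table and point this at the local copy.
X_sparse, y = load_svmlight_file("australian.txt")
X = scale(X_sparse.toarray())    # dense, zero-mean / unit-variance features (assumption)
print(X.shape, sorted(set(y)))   # (num_samples, num_features) and the label values
```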
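
The Dataset Splits row describes five trials of 5-fold cross validation with test accuracies averaged over the 25 runs. The sketch below reproduces that evaluation loop with scikit-learn; since no LICS implementation is released, a linear SVM stands in for the classifier, so only the split-and-average protocol reflects the row.

```python
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

def evaluate_5x5cv(X, y, make_model=lambda: LinearSVC(), seed=0):
    """Five trials of 5-fold cross validation; return the mean and standard
    deviation of test accuracy over the 25 train/test runs. The model is a
    stand-in, not the LICS algorithm."""
    cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=seed)
    accs = []
    for train_idx, test_idx in cv.split(X, y):
        model = make_model()
        model.fit(X[train_idx], y[train_idx])
        accs.append(accuracy_score(y[test_idx], model.predict(X[test_idx])))
    return float(np.mean(accs)), float(np.std(accs))
```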
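
The Experiment Setup row lists the model-selection grids: nλ over {2^-12, ..., 2^2}, nβ over {2^-12, ..., 2^12}, η over {0.1, 0.2, 0.3, 0.4}, with group number k = 3. The sketch below only reproduces the grid construction and a generic 5-fold CV selection loop; dividing by the training-set size n to recover λ and β is one reading of the quoted grids, and `make_model` is a hypothetical factory for a LICS-style estimator, since no public implementation is referenced.

```python
import numpy as np
from itertools import product
from sklearn.model_selection import cross_val_score

def lics_param_grid(n):
    """Grids quoted in the Experiment Setup row; dividing n*lambda and
    n*beta by the training-set size n is an interpretation, not quoted."""
    lam_grid  = [2.0**p / n for p in range(-12, 3)]    # n*lambda in {2^-12, ..., 2^2}
    beta_grid = [2.0**p / n for p in range(-12, 13)]   # n*beta   in {2^-12, ..., 2^12}
    eta_grid  = [0.1, 0.2, 0.3, 0.4]                   # assumed noise rate grid
    return list(product(lam_grid, beta_grid, eta_grid))

def select_params(make_model, X, y):
    """Pick (lambda, beta, eta) by mean 5-fold CV accuracy. `make_model` is a
    hypothetical factory returning a scikit-learn-style estimator for a given
    parameter triple; the group number k = 3 is assumed fixed inside it."""
    best, best_acc = None, -np.inf
    for lam, beta, eta in lics_param_grid(n=len(y)):
        acc = cross_val_score(make_model(lam, beta, eta), X, y, cv=5).mean()
        if acc > best_acc:
            best, best_acc = (lam, beta, eta), acc
    return best, best_acc
```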