Robust Softmax Regression for Multi-class Classification with Self-Paced Learning

Authors: Yazhou Ren, Peng Zhao, Yongpan Sheng, Dezhong Yao, Zenglin Xu

IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 5 Experiments
Researcher Affiliation | Academia | (1) SMILE Lab & Big Data Research Center, School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China. (2) Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 611731, China.
Pseudocode | No | The paper describes its methods verbally and with mathematical formulations but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | No | No statement or link regarding the public availability of source code for the described methodology was found.
Open Datasets | Yes | Thirteen real data sets from different sources are tested in our experiments. The characteristics of these data sets are shown in Table 1. ... IJCNN1, Seismic, and Vowel are available at https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/. Prostate is a gene expression data set [4]. Balance (Balance Scale), Biodeg (QSAR biodegradation), Haberman (Haberman's Survival), Letter (Letter Recognition), Musk1 (Musk Version 1), Skin (Skin Segmentation), and Spambase are all UCI data sets [5]. ... [3] www.cs.ucr.edu/~eamonn/time_series_data/ [4] http://stat.ethz.ch/~dettling/bagboost.html [5] http://archive.ics.uci.edu/ml/index.html
Dataset Splits | Yes | For each method, we perform a 10-fold cross validation on each data set ten times and report the average results. (This protocol is sketched below the table.)
Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, memory, or cloud computing resources) used to run the experiments are mentioned in the paper.
Software Dependencies | No | The paper mentions using the L-BFGS method and provides a URL to a software page, but does not specify version numbers for any software dependencies or frameworks (e.g., Python or specific machine learning libraries).
Experiment Setup | Yes | The iteration stops when it converges or reaches the maximum number of iterations, which is set to 250 in our experiments. ... In the model, we set λk (k = 1, ..., K) such that half the instances (whose weights are bigger than 0) from the k-th class are selected in the first iteration. Then, in every following iteration, we increase λk to add 10% more instances from the k-th class. ... We tune µ for each method on each data set, where µ ∈ {1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3, 1e-2, 3e-2, 0.1, 0.3, 1}. (This setup is sketched directly below.)
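
To make the "Experiment Setup" row concrete, here is a minimal Python sketch of a self-paced training loop around a softmax-regression fit. It assumes a hard 0/1 self-paced regularizer, treats µ as an L2 regularization weight, and uses SciPy's L-BFGS-B optimizer as a stand-in for the L-BFGS solver the paper mentions; names such as `self_paced_fit` are hypothetical, and the code illustrates the quoted schedule rather than reproducing the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the schedule quoted in the "Experiment Setup" row: each class
# starts with its easiest half of instances selected, 10% more of the class
# is admitted per outer iteration, and the loop runs for at most 250 rounds.

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)          # numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def per_sample_nll(W, X, Y):
    """Negative log-likelihood of each sample; W is (d, K), Y holds int labels."""
    P = softmax(X @ W)
    return -np.log(P[np.arange(len(Y)), Y] + 1e-12)

def fit_weighted_softmax(X, Y, v, K, mu=1e-3, W0=None):
    """Fit softmax regression on instance weights v using L-BFGS-B."""
    n, d = X.shape
    def obj(w_flat):
        W = w_flat.reshape(d, K)
        # mu used here as an assumed L2 regularization weight
        return np.dot(v, per_sample_nll(W, X, Y)) + mu * np.sum(W ** 2)
    w0 = np.zeros(d * K) if W0 is None else W0.ravel()
    res = minimize(obj, w0, method="L-BFGS-B")
    return res.x.reshape(d, K)

def self_paced_fit(X, Y, K, mu=1e-3, max_iter=250, tol=1e-6):
    n, d = X.shape
    W = np.zeros((d, K))
    frac = 0.5                                    # half of each class at first
    prev = np.inf
    for _ in range(max_iter):
        losses = per_sample_nll(W, X, Y)
        v = np.zeros(n)
        for k in range(K):
            idx = np.where(Y == k)[0]
            m = min(len(idx), max(1, int(np.ceil(frac * len(idx)))))
            easiest = idx[np.argsort(losses[idx])[:m]]
            v[easiest] = 1.0                      # hard selection of "easy" samples
        W = fit_weighted_softmax(X, Y, v, K, mu=mu, W0=W)
        cur = np.dot(v, per_sample_nll(W, X, Y))
        if abs(prev - cur) < tol and frac >= 1.0:
            break                                 # converged with all data admitted
        prev = cur
        frac = min(1.0, frac + 0.1)               # admit 10% more of each class
    return W
```

Rather than tracking λk thresholds explicitly, the sketch selects the easiest fraction of each class directly; choosing λk so that half of a class is admitted first and 10% more per iteration is equivalent in effect for a hard 0/1 weighting.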
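The "Dataset Splits" row describes the evaluation protocol: 10-fold cross validation repeated ten times with averaged results, plus a per-data-set search over the quoted µ grid. The sketch below uses scikit-learn's RepeatedStratifiedKFold and a plain multinomial logistic regression as a placeholder classifier; the file name `vowel.scale` and the mapping of µ to the `C` parameter are assumptions, so this illustrates the protocol rather than the paper's actual harness.

```python
import numpy as np
from sklearn.datasets import load_svmlight_file          # reads LIBSVM-format files
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import RepeatedStratifiedKFold

# mu grid quoted in the "Experiment Setup" row
MU_GRID = [1e-5, 3e-5, 1e-4, 3e-4, 1e-3, 3e-3, 1e-2, 3e-2, 0.1, 0.3, 1]

def evaluate(X, y, mu, n_splits=10, n_repeats=10, seed=0):
    """10-fold CV repeated ten times; returns mean test accuracy."""
    cv = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats,
                                 random_state=seed)
    scores = []
    for train_idx, test_idx in cv.split(X, y):
        # Placeholder classifier: multinomial (softmax) logistic regression,
        # with mu mapped onto the inverse regularization strength C (assumed).
        clf = LogisticRegression(C=1.0 / mu, max_iter=1000)
        clf.fit(X[train_idx], y[train_idx])
        scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
    return float(np.mean(scores))

if __name__ == "__main__":
    # Example: a LIBSVM-format file such as those hosted at the
    # csie.ntu.edu.tw URL listed under "Open Datasets" (path is hypothetical).
    X, y = load_svmlight_file("vowel.scale")
    X = X.toarray()                                       # densify; fine for small data sets
    best_mu = max(MU_GRID, key=lambda mu: evaluate(X, y, mu))
    print("best mu:", best_mu)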