Generalized Majorization-Minimization for Non-Convex Optimization

Authors: Hu Zhang, Pan Zhou, Yi Yang, Jiashi Feng

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive empirical studies on non-convex logistic regression and sparse PCA demonstrate the advantageous efficiency of the proposed algorithm and validate our theoretical results. We conduct two groups of experiments on non-convex problems to evaluate our proposed results."
Researcher Affiliation | Collaboration | Hu Zhang (1), Pan Zhou (2), Yi Yang (1, 3), Jiashi Feng (2); (1) School of Computer Science, University of Technology Sydney, Australia; (2) Dept. ECE, National University of Singapore, Singapore; (3) Baidu Research
Pseudocode | Yes | Algorithm 1 (Classic MM Algorithm) and Algorithm 2 (SPI-MM Algorithm)
Open Source Code | No | The paper does not provide a specific link or explicit statement indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | "We choose four algorithms as baselines, i.e. classic MM in Algorithm 1, MISO [Mairal, 2015], MISO1 [Mairal, 2015] and SMM [Mairal, 2013b], for performance comparison with ours on four datasets including ijcnn, splice, covtype, phishing [Chang and Lin, 2011] and two larger datasets including alpha and gamma." Dataset sources: https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ and ftp://largescale.ml.tu-berlin.de/largescale/
Dataset Splits | No | The paper uses several datasets but does not specify training, validation, or test splits (e.g. percentages, sample counts, or a cross-validation protocol).
Hardware Specification | No | The paper does not describe the hardware (e.g. CPU or GPU models, memory, or cloud instance types) used to run the experiments.
Software Dependencies | No | The paper does not give version numbers for the software dependencies or libraries used in the implementation of the proposed algorithm or experiments.
Experiment Setup | No | The paper gives the mathematical formulation of the algorithm and the surrogate construction but omits experimental setup details such as hyperparameter values (e.g. learning rate, batch size, number of epochs, or optimizer settings).
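For context on the "Classic MM" loop named in Algorithm 1 above: majorization-minimization repeatedly builds a surrogate that upper-bounds the objective and touches it at the current iterate, then minimizes the surrogate exactly. The following is a minimal sketch for plain (convex) logistic regression using the textbook quadratic majorizer (the logistic Hessian is bounded by X^T X / 4); it illustrates the generic MM template only and is not the authors' non-convex setup or code.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mm_logistic(X, y, n_iters=100):
    """Classic MM for logistic regression (y in {0, 1}).

    Each iteration minimizes a quadratic surrogate of the negative
    log-likelihood that majorizes it and is tight at the current iterate.
    """
    n, d = X.shape
    w = np.zeros(d)
    # Fixed curvature bound: the logistic Hessian X^T D X satisfies D <= I/4,
    # so H = X^T X / 4 majorizes it everywhere (small ridge for invertibility).
    H = X.T @ X / 4.0 + 1e-8 * np.eye(d)
    H_inv = np.linalg.inv(H)
    for _ in range(n_iters):
        grad = X.T @ (sigmoid(X @ w) - y)  # gradient at the current iterate
        w = w - H_inv @ grad               # exact minimizer of the surrogate
    return w
```

Because each update exactly minimizes a surrogate that upper-bounds the objective, the loss decreases monotonically, which is the defining descent property of MM.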
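The LIBSVM-repository datasets cited above (ijcnn, splice, covtype, phishing) are distributed as plain text in the svmlight/LIBSVM format, one example per line: a label followed by sparse `index:value` pairs. A minimal stdlib-only reader is sketched below for readers who want to inspect the files; the function name and return layout are illustrative choices, not part of the paper, and the sketch ignores optional comments and query-id fields that some files include.

```python
def read_libsvm(path):
    """Parse a file in svmlight/LIBSVM format.

    Each line looks like: '<label> <index>:<value> <index>:<value> ...'.
    Returns a list of labels and a list of {index: value} sparse rows.
    """
    labels, rows = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue  # skip blank lines
            labels.append(float(parts[0]))
            rows.append({int(i): float(v)
                         for i, v in (p.split(":") for p in parts[1:])})
    return labels, rows
```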