Relaxed Majorization-Minimization for Non-Smooth and Non-Convex Optimization

Authors: Chen Xu, Zhouchen Lin, Zhenyu Zhao, Hongbin Zha

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We apply our relaxed MM methods to the robust matrix factorization (RMF) problem with different regularizations, where our locally majorant algorithm shows great advantages over the state-of-the-art approaches for RMF. This is the first algorithm for RMF ensuring, without extra assumptions, that any limit point of the iterates is a stationary point. ... Experimental results testify to the robustness and effectiveness of our locally majorant algorithm over the state-of-the-art algorithms for RMF."
Researcher Affiliation | Academia | 1 Key Laboratory of Machine Perception (MOE), School of EECS, Peking University, P. R. China; 2 Cooperative Medianet Innovation Center, Shanghai Jiaotong University, P. R. China; 3 Department of Mathematics, School of Science, National University of Defense Technology, P. R. China
Pseudocode | Yes | Algorithm 1 (Sketch of MM). Input: x0 ∈ C. 1: while not converged do. 2: Construct a surrogate function gk(x) of f(x) at the current iterate xk. 3: Minimize the surrogate to get the next iterate: xk+1 = argmin_{x ∈ C} gk(x). 4: k ← k + 1. 5: end while. Output: the solution xk.
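The quoted Algorithm 1 can be sketched concretely. The sketch below is not the paper's RMF solver; it is a minimal MM loop for a scalar function, assuming an L-smooth f so that the quadratic upper bound g_k(x) = f(x_k) + f'(x_k)(x − x_k) + (L/2)(x − x_k)² is a valid majorant surrogate (its minimizer is a gradient step). The function names and the relative-change stopping test are illustrative choices, not taken from the paper.

```python
def mm_minimize(f, grad_f, L, x0, tol=1e-4, max_iter=1000):
    """Sketch of Algorithm 1: majorization-minimization for an L-smooth f.

    Step 2 constructs the quadratic majorant surrogate at x_k; step 3
    minimizes it in closed form, which reduces to x_{k+1} = x_k - f'(x_k)/L.
    Stops when the relative change in the objective drops below tol.
    """
    x = x0
    fx = f(x)
    for _ in range(max_iter):
        x_next = x - grad_f(x) / L  # argmin of the quadratic surrogate
        f_next = f(x_next)
        if abs(fx - f_next) <= tol * max(1.0, abs(fx)):
            x = x_next
            break
        x, fx = x_next, f_next
    return x

# Toy usage: f(x) = (x - 3)^2 with Lipschitz constant L = 2.
x_star = mm_minimize(lambda x: (x - 3.0) ** 2,
                     lambda x: 2.0 * (x - 3.0),
                     L=2.0, x0=0.0)
```

With L exactly matching the curvature, the surrogate minimizer lands on the optimum x* = 3 in a single step; in general the loop descends monotonically because each surrogate majorizes f.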
Open Source Code | No | The paper mentions implementing code for a baseline ("We implemented the code of ℓ1-NMF (Pan et al., 2014) ourselves.") and that another baseline's code was provided ("The code of UNu Bi (Cabral et al., 2013) was kindly provided by the authors."), but it does not state that the code for the authors' own proposed methodology is open-source or provide a link to it.
Open Datasets | Yes | Low-rank matrix recovery: "We generate a data matrix M = U0 V0^T, where U0 ∈ R^(500×10) and V0 ∈ R^(500×10) are sampled i.i.d. from a Gaussian distribution N(0, 1)." ... Non-negative matrix factorization: "We generate a data matrix M = U0 V0^T, where U0 ∈ R^(500×10) and V0 ∈ R^(500×10) are sampled i.i.d. from a uniform distribution U(0, 1)." ... "Here we use the famous Oxford Dinosaur sequence [4], which consists of 36 images with a resolution of 720 × 576 pixels." ... [4] http://www.robots.ox.ac.uk/~vgg/data1.html ... "We test the performance of robust NMF by clustering... The experiments are conducted on four benchmark datasets of face images, which includes: AT&T, UMIST, a subset of PIE [5] and a subset of AR [6]." ... [5] http://www.zjucadcg.cn/dengcai/Data/data.html [6] http://www2.ece.ohio-state.edu/~aleix/ARdatabase.html
Dataset Splits | No | The paper does not explicitly state specific training, validation, or test dataset splits. For synthetic data, it describes generation, and for real data, it uses existing datasets without specifying how they were partitioned into these subsets.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., GPU models, CPU types, or memory specifications).
Software Dependencies | No | The paper mentions implementing code for baselines but does not provide specific software dependencies or version numbers (e.g., programming languages, libraries, or frameworks with their versions).
Experiment Setup | Yes | "Here we set the regularization parameters λu = λv = 20/(m + n) and stop our relaxed MM algorithms when the relative change in the objective function is less than 10^-4." ... "We initialize all compared algorithms with the rank-r truncation of the singular value decomposition of W ⊙ M." ... "We change the regularization parameter λu to 2000/(m + n) and maintain λv as 20/(m + n)." ... "We empirically terminate ℓ1-NMF, RMF-GMMM, and RMF-LMMM after 5000, 500, and 20 iterations, respectively."
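Two pieces of the quoted setup have standard implementations: rank-r truncated-SVD initialization and a relative-change stopping test. The sketch below is a plausible reading, not the paper's code; in particular, splitting the singular values evenly between the two factors (U = U_r √S_r, V = V_r √S_r) is a common convention the quote does not specify.

```python
import numpy as np

def svd_rank_r_init(A, r):
    """Rank-r truncated SVD of A, split into factors (U, V) with A_r = U V^T.

    Each factor absorbs sqrt of the top-r singular values (an assumed, common
    convention); A would be W * M (observed-entry mask times data) in RMF.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    sqrt_s = np.sqrt(s[:r])
    return U[:, :r] * sqrt_s, Vt[:r, :].T * sqrt_s

def converged(f_prev, f_curr, tol=1e-4):
    """Stop when the relative change in the objective falls below tol."""
    return abs(f_prev - f_curr) / max(abs(f_prev), 1e-12) < tol
```

A quick check: `U @ V.T` from `svd_rank_r_init(A, r)` equals the best rank-r approximation of A, and `converged` fires exactly when the relative decrease drops under 10^-4.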