Dropping Symmetry for Fast Symmetric Nonnegative Matrix Factorization

Authors: Zhihui Zhu, Xiao Li, Kai Liu, Qiuwei Li

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments on both synthetic data and image clustering to support our result.
Researcher Affiliation | Academia | Zhihui Zhu, Mathematical Institute for Data Science, Johns Hopkins University, Baltimore, MD, USA (zzhu29@jhu.edu); Xiao Li, Department of Electronic Engineering, The Chinese University of Hong Kong, Shatin, NT, Hong Kong (xli@ee.cuhk.edu.hk); Kai Liu, Department of Computer Science, Colorado School of Mines, Golden, CO, USA (kaliu@mines.edu); Qiuwei Li, Department of Electrical Engineering, Colorado School of Mines, Golden, CO, USA (qiuli@mines.edu).
Pseudocode | Yes | Algorithm 1 (SymANLS). Initialization: $k = 1$ and $U^0 = V^0$. 1: while stopping criterion not met do 2: $U^k = \arg\min_{U \ge 0} \frac{1}{2}\|X - U (V^{k-1})^T\|_F^2 + \frac{\lambda}{2}\|U - V^{k-1}\|_F^2$; 3: $V^k = \arg\min_{V \ge 0} \frac{1}{2}\|X - U^k V^T\|_F^2 + \frac{\lambda}{2}\|U^k - V\|_F^2$; 4: $k = k + 1$. 5: end while. Output: factorization $(U^k, V^k)$. A NumPy sketch of these updates is given after the table.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described, nor does it include a specific repository link or explicit code release statement by the authors.
Open Datasets | Yes | We also test on the real-world dataset CBCL [5], which contains 2429 face images of dimension 19 × 19. Footnote 5: http://cbcl.mit.edu/software-datasets/FaceData2.html. ORL: 400 facial images from 40 different persons, each with 10 images taken from different angles and with different expressions [6]. Footnote 6: http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html. COIL-20: 1440 images of 20 objects [7]. Footnote 7: http://www.cs.columbia.edu/CAVE/software/softlib/coil-20.php. TDT2: 10,212 news articles from 30 categories [8]. Footnote 8: https://www.ldc.upenn.edu/collaborations/past-projects. MNIST: classical handwritten digits dataset [9], with 60,000 images for training (denoted MNISTtrain) and 10,000 for testing (denoted MNISTtest). Footnote 9: http://yann.lecun.com/exdb/mnist/
Dataset Splits | Yes | MNIST: classical handwritten digits dataset [9], with 60,000 images for training (denoted MNISTtrain) and 10,000 for testing (denoted MNISTtest).
Hardware Specification | No | The paper does not specify the hardware used for running the experiments. It lacks details such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiments.
Experiment Setup | No | The paper mentions initializing the algorithms with uniformly distributed entries and tuning the regularization parameter ("we tune the best parameter λ for each experiment"), but it does not provide specific hyperparameter values, training configurations, or system-level settings typically found in an experimental setup description (e.g., learning rates, batch sizes, optimizers).
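
For readers who want to experiment with Algorithm 1, below is a minimal NumPy/SciPy sketch of the SymANLS updates. It is not the authors' code: the function name symanls, the default λ, the fixed iteration count, and the row-wise NNLS solver are illustrative assumptions; only the update rules themselves and the uniform random initialization follow the paper. Each regularized nonnegative least-squares subproblem decouples across rows and is solved with scipy.optimize.nnls by stacking the √λ penalty under the design matrix.

```python
# Minimal sketch of the SymANLS updates (Algorithm 1); not the authors' released code.
# Assumes X is a symmetric nonnegative n x n matrix and r is the target rank.
import numpy as np
from scipy.optimize import nnls


def symanls(X, r, lam=1.0, n_iter=50, seed=0):
    """Return nonnegative U, V with X ~ U V^T; the lam-penalty pulls U toward V."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.uniform(size=(n, r))  # uniform random initialization, as in the paper
    V = rng.uniform(size=(n, r))

    def nls_update(G):
        # Solve argmin_{F >= 0} 0.5*||X - F G^T||_F^2 + (lam/2)*||F - G||_F^2.
        # Each row of F is an independent NNLS problem once the sqrt(lam)
        # regularizer is stacked under the design matrix G.
        A = np.vstack([G, np.sqrt(lam) * np.eye(r)])   # (n + r) x r
        B = np.vstack([X, np.sqrt(lam) * G.T])         # (n + r) x n; column i targets row i of F
        return np.vstack([nnls(A, B[:, i])[0] for i in range(n)])

    for _ in range(n_iter):
        U = nls_update(V)  # step 2: U-update with V fixed
        V = nls_update(U)  # step 3: V-update with U fixed (same form since X = X^T)
    return U, V
```

For instance, symanls(W, r=5) on a small symmetric nonnegative similarity matrix W returns factors whose gap ||U - V||_F shrinks as λ grows, which is the mechanism by which the nonsymmetric formulation recovers a symmetric factorization.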