Multi-View Clustering via Deep Matrix Factorization

Authors: Handong Zhao, Zhengming Ding, Yun Fu

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The superior experimental results on three face benchmarks show the effectiveness of the proposed deep matrix factorization model. We choose three face image/video benchmarks in our experiments, as face contains good structural information, which is beneficial to manifesting the strengths of deep NMF structure.
Researcher Affiliation | Academia | Handong Zhao, Zhengming Ding, Yun Fu; Department of Electrical and Computer Engineering, Northeastern University, Boston, USA, 02115; College of Computer and Information Science, Northeastern University, Boston, USA, 02115; {hdzhao,allanding,yunfu}@ece.neu.edu
Pseudocode | Yes | Algorithm 1: Optimization of Problem (3)
Open Source Code | No | The paper does not provide a repository link or any explicit statement about releasing the source code for the described methodology.
Open Datasets | Yes | We choose three face image/video benchmarks in our experiments... Yale consists of 165 images of 15 subjects... Extended Yale B consists of 38 subjects of face images... Notting-Hill is a well-known video face benchmark (Zhang et al. 2009)...
Dataset Splits | No | The paper mentions "train" in the context of pre-training and optimization and reports "test" performance in its tables, but it does not give specific train/validation/test split information (percentages, counts, or predefined splits) needed for reproducibility.
Hardware Specification | No | The paper does not specify the hardware (e.g., CPU or GPU models) used to run the experiments.
Software Dependencies | No | The paper does not list software dependencies or version numbers for any libraries, frameworks, or programming languages used in the experiments.
Experiment Setup | Yes | The corresponding parameters γ, β and layer size are set as 0.5, 0.1 and [100, 50], respectively. Parameter β is set as 0.1. γ is evaluated in the grid of {5×10⁻³, 5×10⁻², 5×10⁻¹, 5×10⁰, 5×10¹, 5×10²}. We fix parameter γ = 0.5 as default in our experiments. In practice, we choose β = 0.01 as default. For the layer size analysis, from Figure 3 and Figure 4, we observe that the setting [100, 50] always performs best. Empirically, we find that the last-layer dimension usually plays a more important role than the other layer sizes (blue curves are always close to red ones).
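To make the "layer size [100, 50]" setting concrete, below is a minimal, hypothetical sketch of greedy layer-wise deep matrix factorization, where a data matrix X is successively factored as X ≈ Z1 Z2 H2 with per-layer dimensions 100 and 50. This is NOT the authors' Algorithm 1 (which uses semi-NMF-style updates plus graph regularization weighted by γ and view weights controlled by β); each layer here is simply factored with a truncated SVD, and the function name `deep_mf` is our own.

```python
import numpy as np

def deep_mf(X, layer_sizes=(100, 50)):
    """Greedy layer-wise factorization X ≈ Z1 @ Z2 @ ... @ Zm @ Hm.

    Illustrative pre-training sketch only; the paper's Algorithm 1
    additionally applies graph regularization (weight gamma) and
    per-view weighting (exponent beta), which are omitted here.
    """
    Zs, H = [], X
    for k in layer_sizes:
        # Factor the current representation H ≈ Z @ H_next,
        # keeping the top-k singular directions.
        U, S, Vt = np.linalg.svd(H, full_matrices=False)
        Zs.append(U[:, :k])              # basis (loading) matrix Z_i
        H = S[:k, None] * Vt[:k, :]      # next-layer representation H_i
    return Zs, H

# Example: a rank-30 data matrix is recovered to numerical precision,
# since its rank does not exceed the last-layer size of 50.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 30)) @ rng.standard_normal((30, 200))
Zs, H = deep_mf(X, layer_sizes=(100, 50))
recon = Zs[0] @ Zs[1] @ H
```

The last-layer representation H (here 50 × n) is what a multi-view variant would push toward a shared consensus across views, which is consistent with the report's observation that the last-layer dimension matters most.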