Discriminative Unsupervised Dimensionality Reduction

Authors: Xiaoqian Wang, Yun Liu, Feiping Nie, Heng Huang

IJCAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Extensive empirical results on dimensionality reduction as well as clustering are presented to corroborate the performance of our method." and "6 Experimental Results" |
| Researcher Affiliation | Academia | University of Texas at Arlington, Arlington, Texas 76019, USA |
| Pseudocode | Yes | "Algorithm 1 Algorithm to solve Problem (7)." |
| Open Source Code | No | The paper does not provide a link to open-source code or explicitly state that code for the method is available. |
| Open Datasets | Yes | Pathbased, Compound, Spiral, Movements [Asuncion and Newman, 2007], Jaffe [Lyons et al., 1998], AR Im Data [Martínez and Benavente, 1998], XM2VTS [Messer et al., 1999], and Coil20 [Nene et al., 1996]. Downloaded from http://cs.joensuu.fi/sipu/datasets/ (a loading sketch follows the table). |
| Dataset Splits | No | The paper mentions cross validation for tuning a parameter ("we tuned the γ value via cross validation") but gives no training/validation/test split details (e.g., percentages, sample counts, or explicit partitions). |
| Hardware Specification | No | The paper does not report any hardware details, such as GPU or CPU models or memory, used to run the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies, such as libraries or solvers with version numbers, used in the experiments. |
| Experiment Setup | Yes | "For each method we repeated K-Means for 100 times with the same initialization and recorded the best result w.r.t. the K-Means objective function value in these 100 runs." Because LPP requires an affinity matrix constructed beforehand, the graph was built with the self-tuned Gaussian method [Chen et al., 2011], using 5 neighbors and a self-tuned σ to guarantee input graph quality. For DUDR, γ was tuned via cross validation, the number of clusters was set to the ground-truth k for each dataset, and the projected dimension was set to k-1 (see the sketch after the table). |
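For completeness, here is a minimal, hypothetical sketch of fetching and loading one of the SIPU sets listed above. The filename (spiral.txt) and the whitespace-separated x, y, label column layout are assumptions about the site's conventions, not details given in the paper.

```python
# Hypothetical loading sketch for a SIPU shape set (e.g., Spiral).
# The filename "spiral.txt" and the x, y, label column layout are
# assumptions about http://cs.joensuu.fi/sipu/datasets/, not from the paper.
import urllib.request

import numpy as np

url = "http://cs.joensuu.fi/sipu/datasets/spiral.txt"  # assumed path
urllib.request.urlretrieve(url, "spiral.txt")

data = np.loadtxt("spiral.txt")
X, y = data[:, :-1], data[:, -1].astype(int)  # features, class labels
print(X.shape, np.unique(y))
```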
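The evaluation protocol in the last row can be made concrete with a short sketch. This is not the authors' code: the use of scikit-learn, the function names, and the Zelnik-Manor-and-Perona-style per-point scale (σ_i set to the distance to the k-th neighbor) standing in for the self-tuned Gaussian graph of [Chen et al., 2011] are all assumptions.

```python
# Hypothetical sketch of the evaluation protocol described above.
# Library choice (scikit-learn), function names, and the per-point scale
# sigma_i = distance to the k-th neighbor (Zelnik-Manor/Perona-style
# self-tuning) are assumptions, not details taken from the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def self_tuned_affinity(X, n_neighbors=5):
    """Self-tuned Gaussian affinity on a 5-NN graph (LPP's input)."""
    nn = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(X)
    dist, idx = nn.kneighbors(X)      # column 0 is each point itself
    sigma = dist[:, -1]               # per-point scale: k-th neighbor distance
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        for pos in range(1, n_neighbors + 1):
            j = idx[i, pos]
            w = np.exp(-dist[i, pos] ** 2 / (sigma[i] * sigma[j]))
            W[i, j] = W[j, i] = max(W[i, j], w)   # keep the graph symmetric
    return W

def best_of_runs_kmeans(Z, k, seeds=range(100)):
    """Repeat K-Means over shared seeds; keep the lowest-objective run."""
    best, best_obj = None, np.inf
    for s in seeds:
        km = KMeans(n_clusters=k, n_init=1, random_state=s).fit(Z)
        if km.inertia_ < best_obj:    # K-Means objective: within-cluster SSE
            best_obj, best = km.inertia_, km
    return best

# Usage: with Z an (n_samples, k-1) projection and k the ground-truth
# cluster count, labels = best_of_runs_kmeans(Z, k).labels_
```

Selecting the best of 100 runs by the K-Means objective, rather than by clustering accuracy, keeps the selection criterion independent of the ground-truth labels; sharing the same seeds across methods mirrors the paper's "same initialization" condition.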