A Critique of Self-Expressive Deep Subspace Clustering

Authors: Benjamin David Haeffele, Chong You, René Vidal

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We validate our theoretical results experimentally and also repeat prior experiments reported in the literature, where we conclude that a significant portion of the previously claimed performance benefits can be attributed to an ad-hoc post processing step rather than the deep subspace clustering model."
Researcher Affiliation | Academia | Benjamin D. Haeffele, Mathematical Institute for Data Science, Johns Hopkins University, Baltimore, MD, USA (bhaeffele@jhu.edu); Chong You, Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA (cyou@berkeley.edu); René Vidal, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
Pseudocode | No | The paper does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper links only to the code of a *prior work* (Ji et al., 2017), which the authors re-ran for their experiments: "We use the code provided by the authors of Ji et al. (2017)3." Footnote 3 links to https://github.com/panji1990/Deep-subspace-clustering-networks. This is the code of the system being critiqued, not open-source code released by the authors for their own methodology.
Open Datasets | Yes | "We first evaluate the Autoencoder Regularization form given in (3) by repeating all of the experiments from Ji et al. (2017)... on the Extended Yale-B (38 faces), ORL, and COIL100 datasets."
Dataset Splits | No | The paper does not explicitly describe train/validation/test splits for the datasets used; it defers to the training procedure described in Ji et al. (2017).
Hardware Specification | No | The paper does not provide any details about the hardware (e.g., CPU or GPU models, memory, cloud platform) used to run the experiments.
Software Dependencies | No | The paper mentions using the Adam optimizer and refers to the code of Ji et al. (2017), which implements the specific loss functions, but it does not give version numbers for any software libraries, frameworks (such as PyTorch or TensorFlow), or other dependencies needed for replication.
Experiment Setup | Yes | "We use regularization hyper-parameters (λ, γ) = (10^-4, 2) for all cases and γ2 = 10^-4 for the Instance Normalization case... The training procedure, as described in Ji et al. (2017), involves pre-training an autoencoder network without the F(Z, C) term... Then, the encoder and decoder networks of SEDSC are initialized by the pre-trained networks and all model parameters are trained via the Adam optimizer... To be consistent with Ji et al. (2017), we report the results at 1000/120/700 iterations for Yale B / COIL100 / ORL, respectively."
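The two-stage procedure quoted in the Experiment Setup row (pre-train a plain autoencoder without the F(Z, C) term, then fine-tune all parameters, including the self-expressive coefficients C, with Adam) can be sketched as below. This is a hedged illustration, not the authors' code: the encoder/decoder architectures, the exact form of the loss, and the names `SEDSC`, `sedsc_loss`, and `train` are assumptions for the sketch; only the hyper-parameters λ = 10^-4 and γ = 2 and the use of the Adam optimizer come from the paper's text.

```python
import torch
import torch.nn as nn

class SEDSC(nn.Module):
    """Sketch of a self-expressive deep subspace clustering model.

    Architecture details are assumptions; only the overall structure
    (encoder -> self-expressive layer C -> decoder) follows the text.
    """

    def __init__(self, encoder, decoder, n_samples):
        super().__init__()
        self.encoder, self.decoder = encoder, decoder
        # Self-expressive coefficient matrix C (one row/column per sample),
        # initialized near zero.
        self.C = nn.Parameter(1e-4 * torch.randn(n_samples, n_samples))

    def forward(self, X):
        Z = self.encoder(X)       # latent codes, shape (n_samples, d)
        Z_se = self.C @ Z         # self-expressive reconstruction Z ≈ C Z
        X_hat = self.decoder(Z_se)
        return Z, Z_se, X_hat

def sedsc_loss(X, Z, Z_se, X_hat, C, lam=1e-4, gamma=2.0):
    # Reconstruction + self-expression term F(Z, C) + coefficient
    # regularization; lam and gamma default to the paper's (λ, γ).
    recon = 0.5 * (X - X_hat).pow(2).sum()
    self_expr = 0.5 * gamma * (Z - Z_se).pow(2).sum()
    reg = lam * C.pow(2).sum()
    return recon + self_expr + reg

def train(model, X, pretrain_steps=100, finetune_steps=100):
    # Stage 1: pre-train encoder/decoder as a plain autoencoder,
    # without the F(Z, C) term.
    ae_params = list(model.encoder.parameters()) + list(model.decoder.parameters())
    opt = torch.optim.Adam(ae_params, lr=1e-3)
    for _ in range(pretrain_steps):
        opt.zero_grad()
        X_hat = model.decoder(model.encoder(X))
        loss = 0.5 * (X - X_hat).pow(2).sum()
        loss.backward()
        opt.step()
    # Stage 2: fine-tune all parameters, including C, with the full loss.
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(finetune_steps):
        opt.zero_grad()
        Z, Z_se, X_hat = model(X)
        loss = sedsc_loss(X, Z, Z_se, X_hat, model.C)
        loss.backward()
        opt.step()
    return model
```

In the actual experiments the learned C is post-processed before spectral clustering; the report's central claim is that much of the reported benefit comes from that post-processing step, which is deliberately omitted from this sketch.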