Deep Subspace Clustering with Data Augmentation

Authors: Mahdi Abavisani, Alireza Naghizadeh, Dimitris Metaxas, Vishal Patel

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our method against state-of-the-art subspace clustering algorithms on three standard datasets.
Researcher Affiliation | Academia | Mahdi Abavisani, Rutgers University, New Brunswick, NJ, mahdi.abavisani@rutgers.edu; Alireza Naghizadeh, Rutgers University, New Brunswick, NJ, ar.naghizadeh@rutgers.edu; Dimitris N. Metaxas, Rutgers University, New Brunswick, NJ, dnm@cs.rutgers.edu; Vishal M. Patel, Johns Hopkins University, Baltimore, MD, vpatel36@jhu.edu
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | The source code is available at: https://github.com/mahdiabavisani/DSCwithDA.git
Open Datasets | Yes | The Extended Yale-B dataset [39] contains 2432 facial images of 38 individuals under 9 poses and 64 illumination settings. The ORL dataset [42] includes 400 facial images from 40 individuals. The COIL-100 [40] and COIL-20 [41] datasets consist of images of 100 and 20 objects, respectively, placed on a motorized turntable.
Dataset Splits | No | Note that in subspace clustering tasks the datasets are not split into training and testing sets. Instead, all available samples are used both in the learning stage and in the performance evaluation stage (see the evaluation sketch after this table).
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments (e.g., GPU/CPU models, memory).
Software Dependencies | No | The paper mentions 'We implemented our method with PyTorch' but does not specify the version number of PyTorch or any other software dependencies.
Experiment Setup | Yes | We use the same training settings as described in [17]. We set the EMA decay to α = 0.999 in all experiments (selected by cross-validation, using the mean silhouette coefficient as the evaluation metric). We use the adaptive momentum-based gradient descent method (ADAM) [43] with a learning rate of 10^-3 to minimize the loss functions.
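
The optimizer and EMA settings quoted in the Experiment Setup row map directly onto a short PyTorch sketch. The model, data, and reconstruction loss below are placeholders, not the paper's autoencoder with a self-expressive layer; only the Adam learning rate of 1e-3 and the EMA decay of 0.999 come from the reported setup.

```python
import copy
import torch

# Toy data and model as stand-ins; the paper's actual architecture is not reproduced here.
data = torch.randn(256, 1024)
loader = torch.utils.data.DataLoader(torch.utils.data.TensorDataset(data), batch_size=32)

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 128),
    torch.nn.ReLU(),
    torch.nn.Linear(128, 1024),
)
ema_model = copy.deepcopy(model)            # exponential-moving-average copy of the parameters
for p in ema_model.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # learning rate reported in the paper
alpha = 0.999                                               # EMA decay reported in the paper

def ema_update(student, teacher, decay):
    # theta_teacher <- decay * theta_teacher + (1 - decay) * theta_student
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(decay).add_(p_s, alpha=1.0 - decay)

for (batch,) in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(batch), batch)  # stand-in reconstruction loss
    loss.backward()
    optimizer.step()
    ema_update(model, ema_model, alpha)
```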
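
Because no train/test split is used (Dataset Splits row above), subspace clustering is typically evaluated by clustering all samples and scoring the assignment against the ground-truth labels under the best label permutation. The sketch below shows that standard clustering-accuracy computation; using scipy's Hungarian matching is a common convention assumed here, not a detail stated in the report.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    """Best-match accuracy between predicted cluster ids and ground-truth labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    n = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1                       # count co-occurrences of (cluster, label)
    row, col = linear_sum_assignment(-cost)   # maximize matched counts
    return cost[row, col].sum() / y_true.size

# All samples are clustered and scored together; no held-out set is involved.
labels_true = np.array([0, 0, 1, 1, 2, 2])
labels_pred = np.array([1, 1, 0, 0, 2, 2])
print(clustering_accuracy(labels_true, labels_pred))   # 1.0 up to label permutation
```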