Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Dimensionality Reduction with Subspace Structure Preservation
Authors: Devansh Arpit, Ifeoma Nwogu, Venu Govindaraju
NeurIPS 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We support our theoretical analysis with empirical results on both synthetic and real world data achieving state-of-the-art results compared to popular dimensionality reduction techniques. |
| Researcher Affiliation | Academia | Devansh Arpit, Department of Computer Science, SUNY Buffalo, Buffalo, NY 14260, EMAIL; Ifeoma Nwogu, Department of Computer Science, SUNY Buffalo, Buffalo, NY 14260, EMAIL; Venu Govindaraju, Department of Computer Science, SUNY Buffalo, Buffalo, NY 14260, EMAIL |
| Pseudocode | Yes | Algorithm 1 Computation of projection matrix P. INPUT: X, K, λ, itermax |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the described methodology. |
| Open Datasets | Yes | For real world data, we use the following datasets: 1. Extended Yale dataset B [3]: It consists of 2414 frontal face images of 38 individuals (K = 38) with 64 images per person. [...] 2. AR dataset [10]: This dataset consists of more than 4000 frontal face images of 126 individuals with 26 images per person. [...] 3. PIE dataset [12]: The pose, illumination, and expression (PIE) database is a subset of CMU PIE dataset consisting of 11,554 images of 68 people (K = 68). |
| Dataset Splits | Yes | For Extended Yale dataset B, we use all 38 classes for evaluation with a 50%/50% train-test split (split 1) and a 70%/30% train-test split (split 2). |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., libraries, frameworks, or programming languages with versions). |
| Experiment Setup | Yes | Algorithm 1 lists INPUT: X, K, λ, itermax; λ and itermax are the algorithm's hyperparameters. Section 3.3 states: 'Assume that we run this while loop for T iterations and that we use conjugate gradient descent to solve the quadratic program in each iteration.' |
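The experiment-setup row notes that each iteration of Algorithm 1 solves a quadratic program via conjugate gradient descent. As a hedged illustration of that inner step only (the objective, matrix, and variable names below are hypothetical, not the paper's actual formulation), a symmetric positive-definite quadratic program min 0.5·wᵀAw − bᵀw can be minimized by running conjugate gradient on the linear system Aw = b:

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)

# Hypothetical quadratic program: minimize 0.5 * w^T A w - b^T w.
# Its minimizer satisfies A w = b, which conjugate gradient solves
# iteratively without ever forming A^{-1}.
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)   # symmetric positive definite by construction
b = rng.standard_normal(50)

w, info = cg(A, b)                # info == 0 means CG converged
residual = np.linalg.norm(A @ w - b)
print(info, residual)
```

This is only a sketch of the generic technique the paper names; the actual per-iteration quadratic program for the projection matrix P is defined in the paper itself.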