Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
LDLE: Low Distortion Local Eigenmaps
Authors: Dhruv Kohli, Alexander Cloninger, Gal Mishne
JMLR 2021 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results will show that LDLE largely preserved distances up to a constant scale while other techniques produced higher distortion. We also demonstrate that LDLE produces high quality embeddings even when the data is noisy or sparse. |
| Researcher Affiliation | Academia | Dhruv Kohli EMAIL Department of Mathematics University of California San Diego CA 92093, USA Alexander Cloninger EMAIL Department of Mathematics University of California San Diego CA 92093, USA Gal Mishne EMAIL Halıcıoğlu Data Science Institute University of California San Diego CA 92093, USA |
| Pseudocode | Yes | Algorithm 1: Sparse Unnormalized Graph Laplacian based on (Zelnik-Manor and Perona, 2005) ... Algorithm 2: Bi-Lipschitz-Local-Parameterization ... Algorithm 3: Postprocess-Local-Parameterization ... Algorithm 4: Clustering ... Algorithm 5: Calculate-Global-Embedding |
| Open Source Code | Yes | The python code is available at https://github.com/chiggum/pyLDLE |
| Open Datasets | Yes | 6.5.2 Face Image Data In Figure 22, we show the embedding obtained by applying LDLE on the face image data (Tenenbaum et al., 2000)... 6.5.3 Rotating Yoda-Bulldog Data set In Figure 23, we show the 2d embeddings of the rotating figures data set presented in (Lederman and Talmon, 2018). |
| Dataset Splits | No | The paper describes various datasets, such as synthetic sensor data, face image data, and rotating Yoda-Bulldog data, and uses synthetic manifolds like Swiss Roll. However, it does not explicitly state how these datasets were split into training, validation, or testing sets (e.g., percentages or sample counts for each split). |
| Hardware Specification | Yes | Machine specification: Mac OS version 11.4, Apple M1 Chip, 16GB RAM. |
| Software Dependencies | No | The paper mentions 'The python code is available at https://github.com/chiggum/pyLDLE', indicating Python is used. However, it does not specify specific Python versions or any other software libraries (e.g., NumPy, SciPy, PyTorch, TensorFlow) along with their version numbers that are critical for reproducibility. |
| Experiment Setup | Yes | To embed using LDLE, we use the Euclidean metric and the default values of the hyperparameters and their description are provided in Table 1. Only the value of ηmin is tuned across all the examples in Sections 6.2, 6.3 and 6.4 (except for Section 6.2.3), and is provided in Appendix G. For high dimensional data sets in Section 6.5, values of the hyperparameters which differ from the default values are again provided in Appendix G. |
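The pseudocode row notes that LDLE's Algorithm 1 builds a sparse unnormalized graph Laplacian based on the self-tuning kernel of Zelnik-Manor and Perona (2005). As a point of reference, the sketch below shows a minimal, standard version of that construction: a per-point bandwidth set to the distance to the k-th nearest neighbor, a symmetrized k-NN affinity matrix, and the unnormalized Laplacian L = D - W. The function name, the default k, and the dense-distance implementation are illustrative assumptions, not the pyLDLE code.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.spatial.distance import cdist

def sparse_unnormalized_laplacian(X, k=7):
    """Self-tuning-kernel graph Laplacian (Zelnik-Manor & Perona, 2005 style).

    Illustrative sketch only: a dense O(n^2) distance computation,
    not the paper's implementation.
    """
    n = X.shape[0]
    dist = cdist(X, X)
    # Self-tuning bandwidth: distance to each point's k-th nearest neighbor
    # (row 0 of the sort is the point itself at distance 0).
    sigma = np.sort(dist, axis=1)[:, k]
    W = np.exp(-dist**2 / (sigma[:, None] * sigma[None, :]))
    # Sparsify: keep only k-nearest-neighbor edges, symmetrized.
    nbr_idx = np.argsort(dist, axis=1)[:, 1:k + 1]
    mask = np.zeros_like(W, dtype=bool)
    mask[np.repeat(np.arange(n), k), nbr_idx.ravel()] = True
    mask |= mask.T
    W = np.where(mask, W, 0.0)
    np.fill_diagonal(W, 0.0)
    # Unnormalized Laplacian: L = D - W, with D the degree matrix.
    L = np.diag(W.sum(axis=1)) - W
    return csr_matrix(L)
```

By construction each row of L sums to zero and L is symmetric positive semi-definite, which is what the subsequent eigenmap steps of LDLE rely on.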