Multidimensional Scaling: Approximation and Complexity

Authors: Erik Demaine, Adam Hesterberg, Frederic Koehler, Jayson Lynch, John Urschel

ICML 2021

Reproducibility variables, results, and the supporting LLM responses:
Research Type: Experimental
  "We implemented the GREEDYCSP-based algorithm described above as well as a standard gradient descent approach to minimizing the Kamada-Kawai objective. In this section we compare the behavior of these algorithms in a few interesting instances." (A hedged sketch of such a gradient descent appears after this list.)
Researcher Affiliation: Academia
  "(1) Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA; (2) John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA; (3) Department of Mathematics, MIT, Cambridge, MA, USA; (4) Cheriton School of Computer Science, University of Waterloo, Waterloo, ON, Canada."
Pseudocode: Yes
  "Algorithm 1: Greedy Algorithm for Dense CSPs (Mathieu & Schudy, 2008; Schudy, 2012)" and "Algorithm 2: Approximation Algorithm KKSCHEME". (An illustrative greedy-placement sketch follows this list.)
Open Source Code: No
  The paper does not provide a statement about, or a link to, released source code for the described methodology.
Open Datasets: Yes
  "In Figure 2 we show an embedding of the 3elt graph from (Diekmann & Preis); in Figure 3, we show the result of embedding a well-known social network graph, the Davis Southern Women Network (Davis et al., 2009)."
Dataset Splits: No
  The paper does not provide the split information (exact percentages, sample counts, citations to predefined splits, or a splitting methodology) needed to reproduce a train/validation/test partition.
Hardware Specification: Yes
  "All experiments were performed on a standard Kaggle GPU Kernel with a V80 GPU."
Software Dependencies: No
  The paper mentions running experiments on a "Kaggle GPU Kernel" but does not list software dependencies with version numbers.
Experiment Setup: Yes
  "Gradient descent was run with learning rate 0.005 for 4000 steps on all instances. We seeded the RNG with zero before each simulation for reproducibility. For the greedy method... the parameter R was set to 2.5 for the Davis experiment, and set to 4 for all others." (These hyperparameters are echoed in the sketches below.)
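
For concreteness, the following is a minimal sketch of full-batch gradient descent on the Kamada-Kawai stress, using the learning rate, step count, and zero seed quoted above. It is not the authors' code (none is released); the normalized stress form sum over i<j of (1 - |x_i - x_j| / d_ij)^2, the random initialization, and all function names are assumptions.

    import numpy as np

    def kk_stress(X, D):
        # Kamada-Kawai stress: sum over i<j of (1 - |x_i - x_j| / d_ij)^2.
        diff = X[:, None, :] - X[None, :, :]          # pairwise differences
        dist = np.linalg.norm(diff, axis=-1)          # pairwise embedded distances
        iu = np.triu_indices(len(D), k=1)
        return np.sum((1.0 - dist[iu] / D[iu]) ** 2)

    def kk_gradient(X, D):
        # Analytic gradient of kk_stress with respect to the embedding X.
        diff = X[:, None, :] - X[None, :, :]
        dist = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(dist, 1.0)                   # dummy diagonal, zeroed below
        Ds = D.copy()
        np.fill_diagonal(Ds, 1.0)
        coef = 2.0 * (dist / Ds - 1.0) / (Ds * dist)
        np.fill_diagonal(coef, 0.0)                   # no self-interaction term
        return (coef[:, :, None] * diff).sum(axis=1)

    def gradient_descent_mds(D, dim=2, lr=0.005, steps=4000, seed=0):
        # lr, steps, and seed mirror the quoted setup; everything else is guessed.
        rng = np.random.default_rng(seed)
        X = rng.standard_normal((len(D), dim))
        for _ in range(steps):
            X -= lr * kk_gradient(X, D)
        return X

Here D would be the matrix of pairwise graph distances, for example BFS shortest paths on the 3elt graph; the paper may differ in initialization and optimizer details.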
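The greedy side can be illustrated in the same spirit. The sketch below is only loosely modeled on Algorithm 2 (KKSCHEME), which reduces embedding to a dense CSP solved by greedy assignment: vertices are placed one at a time on a discrete grid of radius R, each at the candidate point minimizing the stress against the vertices already placed. The grid resolution, the random placement order, and all names here are assumptions rather than the paper's specification.

    from itertools import product
    import numpy as np

    def greedy_kk_placement(D, R=4.0, grid_steps=17, dim=2, seed=0):
        # Greedily assign each vertex the grid position of least partial stress.
        rng = np.random.default_rng(seed)
        n = len(D)
        axis = np.linspace(-R, R, grid_steps)
        grid = np.array(list(product(axis, repeat=dim)))   # candidate positions
        pos = np.zeros((n, dim))
        placed = []
        for v in rng.permutation(n):
            if placed:
                P = pos[placed]                            # already-fixed positions
                d = D[v, placed]                           # target graph distances
                dist = np.linalg.norm(grid[:, None, :] - P[None, :, :], axis=-1)
                cost = ((1.0 - dist / d) ** 2).sum(axis=1)
                pos[v] = grid[np.argmin(cost)]
            else:
                pos[v] = grid[rng.integers(len(grid))]     # first vertex: anywhere
            placed.append(v)
        return pos

Under the quoted setup, R would be 2.5 for the Davis experiment and 4 for the others.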
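Finally, the per-instance settings quoted under Experiment Setup can be restated as a single configuration table. The keys and instance names below are hypothetical, cover only the two instances named in the evidence quotes, and deliberately leave the elided greedy details unfilled.

    # Hypothetical configuration; values restate only what the quotes above say.
    EXPERIMENTS = {
        "davis": {"lr": 0.005, "steps": 4000, "seed": 0, "R": 2.5},
        "3elt":  {"lr": 0.005, "steps": 4000, "seed": 0, "R": 4.0},
    }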