Out-of-sample extension of graph adjacency spectral embedding
Authors: Keith Levin, Fred Roosta, Michael Mahoney, Carey Priebe
ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we briefly explore our results through simulations. We leave a more thorough experimental examination of our results, particularly as they apply to real-world data, for future work. We first give a brief exploration of how quickly the asymptotic distribution in Theorem 3 becomes a good approximation. |
| Researcher Affiliation | Academia | (1) Department of Statistics, University of Michigan, USA; (2) School of Mathematics and Physics, University of Queensland, Australia; (3) International Computer Science Institute, Berkeley, USA; (4) Department of Statistics, University of California at Berkeley, USA; (5) Department of Applied Mathematics and Statistics, Johns Hopkins University, USA. |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. It refers to arXiv preprints but not code repositories. |
| Open Datasets | No | The paper describes generating synthetic data based on a defined distribution (e.g., "For each trial, we draw n + 1 independent latent positions from F, and generate a binary adjacency matrix from these latent positions."), rather than using a publicly available dataset. |
| Dataset Splits | No | The paper does not explicitly provide training/validation/test dataset splits. It describes an "in-sample" and "out-of-sample" setup for its analysis but not traditional data splits for model training and validation. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments are mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers are mentioned in the paper. |
| Experiment Setup | Yes | For each trial, we draw n + 1 independent latent positions from F, and generate a binary adjacency matrix from these latent positions. We let the (n+1)-th vertex be the OOS vertex. Retaining the subgraph induced by the first n vertices, we obtain an estimate X̂ ∈ ℝ^(n×2) via ASE, from which we obtain an estimate for the OOS vertex via the LS OOS extension as defined in (4). We remind the reader that for each RDPG draw, we initially recover the latent positions only up to a rotation. Thus, for each trial, we compute a Procrustes alignment (Gower & Dijksterhuis, 2004) of the in-sample estimates X̂ to their true latent positions. This yields a rotation matrix R, which we apply to the OOS estimate. |
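
The simulation procedure quoted above can be sketched in a few lines. The Python snippet below is an illustrative reconstruction, not the authors' code: the latent position distribution F (here uniform on [0.1, 0.6]^2 so that all edge probabilities are valid), the in-sample size n, and the embedding dimension d = 2 are assumptions, and the LS OOS extension is implemented as the ordinary least-squares regression of the OOS adjacency vector on the in-sample ASE, which is our reading of Eq. (4).

```python
# Illustrative sketch of one simulation trial (assumed parameters; not the authors' code).
import numpy as np
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(0)
n, d = 500, 2  # assumed in-sample size and embedding dimension

# Draw n + 1 independent latent positions from F (assumed: uniform on [0.1, 0.6]^2);
# the (n+1)-th vertex is the out-of-sample (OOS) vertex.
X = rng.uniform(0.1, 0.6, size=(n + 1, d))
X_in, x_oos = X[:n], X[n]

# Generate a symmetric binary adjacency matrix with edge probabilities <X_i, X_j>.
P = X @ X.T
U = rng.uniform(size=(n + 1, n + 1))
A_full = np.triu((U < P).astype(float), k=1)
A_full = A_full + A_full.T

A = A_full[:n, :n]   # subgraph induced by the first n (in-sample) vertices
a = A_full[:n, n]    # edges between the OOS vertex and the in-sample vertices

# Adjacency spectral embedding (ASE): top-d eigenpairs of A, scaled by sqrt of eigenvalues.
vals, vecs = np.linalg.eigh(A)
top = np.argsort(np.abs(vals))[::-1][:d]
X_hat = vecs[:, top] * np.sqrt(np.abs(vals[top]))

# LS OOS extension (assumed reading of Eq. (4)): least-squares regression of the
# OOS adjacency vector on the in-sample embedding.
w_hat, *_ = np.linalg.lstsq(X_hat, a, rcond=None)

# Procrustes alignment of the in-sample estimates to the true latent positions;
# the resulting rotation R is then applied to the OOS estimate.
R, _ = orthogonal_procrustes(X_hat, X_in)
w_aligned = w_hat @ R

print("OOS estimation error:", np.linalg.norm(w_aligned - x_oos))
```

Repeating this trial many times and inspecting the distribution of the rotated OOS errors would mirror the paper's check of how quickly the asymptotic distribution in Theorem 3 becomes a good approximation.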