Compressive spectral embedding: sidestepping the SVD
Authors: Dinesh Ramasamy, Upamanyu Madhow
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our numerical results on network datasets demonstrate the efficacy of the proposed method, and motivate further exploration of its application to large-scale inference tasks. |
| Researcher Affiliation | Academia | Dinesh Ramasamy dineshr@ece.ucsb.edu ECE Department, UC Santa Barbara Upamanyu Madhow madhow@ece.ucsb.edu ECE Department, UC Santa Barbara |
| Pseudocode | Yes | Algorithm 1 Proposed algorithm to compute approximate d-dimensional eigenvector embedding of an n × n symmetric matrix S (such that ‖S‖ ≤ 1) using the n × d random projection matrix Ω. |
| Open Source Code | Yes | A freely downloadable Python implementation of the proposed algorithm that exploits this inherent parallelism can be found in [9]. [9] Python implementation of Fast Embed. [Online]. Available: https://bitbucket.org/dineshkr/fastembed/src/NIPS2015 |
| Open Datasets | Yes | We consider two real world undirected graphs in [27] for our evaluation, and compute embeddings for the normalized adjacency matrix Ã... [27] J. Yang and J. Leskovec, "Defining and evaluating network communities based on ground-truth," in 2012 IEEE 12th International Conference on Data Mining (ICDM), Dec. 2012. |
| Dataset Splits | No | The paper does not provide specific training/validation/test dataset splits (e.g., percentages, sample counts, or citations to predefined splits) for reproducibility of data partitioning, especially for the K-means clustering application. |
| Hardware Specification | No | The paper mentions running the algorithm "on a standard workstation" but does not provide any specific hardware details such as GPU/CPU models, memory, or cloud instance types. |
| Software Dependencies | No | The paper states the implementation was "in Python using Scipy's sparse matrix-multiplication routines" but does not provide specific version numbers for Python, Scipy, or any other libraries. |
| Experiment Setup | Yes | For Figure (1a), we use L = 180 matrix-vector products for each randomly picked starting vector and set cascading parameter b = 2 for the algorithm in Section 4. [...] we set f(λ) = I(λ ≥ 0.98) and S = Ã in Algorithm 1. [...] We consider 25 instances of K-means clustering with K = 200 throughout. |
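To make the core idea behind Algorithm 1 concrete, the following is a minimal sketch of an SVD-free embedding of the kind the paper describes: instead of computing eigenvectors of a symmetric matrix S (with ‖S‖ ≤ 1), one applies a polynomial approximation of a spectral filter f to a random projection matrix Ω, using only matrix-vector products. The function name, the fixed random seed, and the representation of f as plain monomial coefficients are illustrative assumptions, not the paper's actual implementation (which is available in the linked FastEmbed repository and uses more refined constructions).

```python
import numpy as np

def compressive_embedding(S, d, L, f_coeffs, seed=0):
    """Hypothetical sketch: approximate the embedding f(S) @ Omega,
    where f is given as monomial coefficients f_coeffs[k] for S^k,
    k = 0..L, using only L block matrix-vector products with S.

    S        : (n, n) symmetric matrix (dense or scipy.sparse), ||S|| <= 1
    d        : embedding dimension
    L        : polynomial degree (number of matrix-vector product rounds)
    f_coeffs : length-(L+1) coefficient list approximating the filter f
    """
    n = S.shape[0]
    rng = np.random.default_rng(seed)
    # n x d random projection matrix (Omega in the paper's notation)
    Omega = rng.standard_normal((n, d)) / np.sqrt(d)
    V = Omega.copy()
    E = f_coeffs[0] * V
    for k in range(1, L + 1):
        V = S @ V              # one block matrix-vector product per power of S
        E += f_coeffs[k] * V   # accumulate the k-th polynomial term
    return E
```

Because the cost is L sparse matrix-vector products on d columns, this avoids any explicit eigendecomposition; the paper's algorithm additionally chooses the polynomial (and a cascading parameter b) to approximate indicator filters such as f(λ) = I(λ ≥ 0.98) well.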