Node Embeddings and Exact Low-Rank Representations of Complex Networks
Authors: Sudhanshu Chanpuriya, Cameron Musco, Konstantinos Sotiropoulos, Charalampos Tsourakakis
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we perform a large number of experiments that verify the ability of very low-dimensional embeddings to capture local structure in real-world networks. |
| Researcher Affiliation | Academia | Sudhanshu Chanpuriya, University of Massachusetts Amherst, schanpuriya@umass.edu; Cameron Musco, University of Massachusetts Amherst, cmusco@cs.umass.edu; Charalampos E. Tsourakakis, Boston University & ISI Foundation, tsourolampis@gmail.com; Konstantinos Sotiropoulos, Boston University, ksotirop@bu.edu |
| Pseudocode | No | The paper describes algorithms in text but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/schariya/exact-embeddings. |
| Open Datasets | Yes | Our evaluations are based on 11 popular real-network datasets, detailed below. Table 2 lists and shows some statistics of these datasets. For all networks, we ignore weights (setting non-zero weights to 1) and remove self-loops where applicable. PROTEIN-PROTEIN INTERACTION (PPI) [SBCA+10]... WIKIPEDIA [GL16]... BLOGCATALOG [ALM+09]... FACEBOOK [LM12]... CA-HEPPH and CA-GRQC [LKF07]... PUBMED [NLG+12]... P2P-GNUTELLA04 [LKF07]... WIKI-VOTE [LHK10]... CITESEER [SNB+08]... CORA [SNB+08] |
| Dataset Splits | No | The paper evaluates the ability to reconstruct network structure but does not explicitly describe training, validation, and test dataset splits in the context of model training for a predictive task. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions SciPy but does not provide a specific version number. No other software dependencies are listed with their version numbers. |
| Experiment Setup | Yes | We initialize elements of the factors X, Y independently and uniformly at random on [−1, +1]. We find factors that approximately minimize the loss using the SciPy [JOP+] implementation of the L-BFGS [LN89, ZBLN97] algorithm with default hyper-parameters and up to a maximum of 2000 iterations. |
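To make the Experiment Setup row concrete, below is a minimal sketch of fitting low-rank factors with SciPy's L-BFGS under the stated settings (uniform initialization on [−1, +1], default hyper-parameters, at most 2000 iterations). The logistic reconstruction loss, the `fit_lpca_factors` name, and the `rank`/`seed` parameters are illustrative assumptions, not the authors' exact objective or code; the paper's own implementation is in the repository linked in the Open Source Code row.

```python
import numpy as np
from scipy.optimize import minimize

def fit_lpca_factors(A, rank, max_iter=2000, seed=0):
    """Fit rank-`rank` factors X, Y whose product approximates the binary
    adjacency matrix A, using SciPy's L-BFGS-B with default hyper-parameters.
    The logistic loss below is an assumed stand-in for the paper's objective."""
    n = A.shape[0]
    S = 2.0 * A - 1.0                      # map {0, 1} entries to {-1, +1}
    rng = np.random.default_rng(seed)
    # Initialize factor entries i.i.d. uniform on [-1, +1], as in the setup row.
    theta0 = rng.uniform(-1.0, 1.0, size=2 * n * rank)

    def loss_and_grad(theta):
        X = theta[: n * rank].reshape(n, rank)
        Y = theta[n * rank:].reshape(n, rank)
        M = X @ Y.T
        # Entrywise logistic loss: log(1 + exp(-S * M)), computed stably.
        loss = np.logaddexp(0.0, -S * M).sum()
        # d(loss)/dM = -S * sigmoid(-S * M)
        G = -S / (1.0 + np.exp(S * M))
        grad = np.concatenate([(G @ Y).ravel(), (G.T @ X).ravel()])
        return loss, grad

    res = minimize(loss_and_grad, theta0, jac=True, method="L-BFGS-B",
                   options={"maxiter": max_iter})
    X = res.x[: n * rank].reshape(n, rank)
    Y = res.x[n * rank:].reshape(n, rank)
    return X, Y

# Example usage, with preprocessing matching the Open Datasets row
# (binarize edge weights, remove self-loops). `A_sparse` is a hypothetical
# SciPy sparse adjacency matrix loaded from one of the listed datasets.
# A = (A_sparse.toarray() != 0).astype(float)
# np.fill_diagonal(A, 0.0)
# X, Y = fit_lpca_factors(A, rank=16)
```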