Convergence Guarantees for the DeepWalk Embedding on Block Models
Authors: Christopher Harker, Aditya Bhaskara
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | On the experimental side, we validate our results: we show a clear separation between the embeddings of vertices across clusters for different choices of the embedding dimension. |
| Researcher Affiliation | Academia | 1Kahlert School of Computing, University of Utah, Salt Lake City, UT, USA. |
| Pseudocode | Yes | Algorithm 1 DeepWalk Gradient Descent |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper uses graphs drawn from a "stochastic block model (SBM)", which is a generative model for graphs. It does not provide access information (link, DOI, specific citation) for a publicly available, pre-existing dataset used for training or evaluation. (A hedged SBM sampling sketch follows the table.) |
| Dataset Splits | No | The paper describes generating graphs from a stochastic block model for experiments. It does not mention explicit training, validation, or test splits of a fixed dataset, as the data is generated rather than partitioned from an existing source. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware (e.g., CPU, GPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software libraries, frameworks, or environments used in the experiments. |
| Experiment Setup | Yes | We run the algorithm for T = 100 iterations and use a learning rate of η = 0.01. The embeddings are initialized randomly so that ∥x(0)∥ ≤ 0.01 and ∥y(0)∥ ≤ 0.01. ... The learning rate was set to η = 1/n and training was run for T = 75 iterations. (A hedged sketch of this setup follows the table.) |
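
Since the experiments run on SBM-generated graphs rather than a fixed dataset, a minimal sketch of the generative step may help clarify what "Open Datasets: No" means here. This is an illustrative reconstruction, not the paper's code: the function name `sample_sbm` and the community count and edge probabilities in the usage example are assumptions, not values taken from the paper.

```python
import numpy as np

def sample_sbm(n, k, p, q, seed=0):
    """Sample an undirected graph from a k-community stochastic block model.

    Vertices are split evenly into k communities (assumes k divides n);
    an edge appears with probability p within a community and q across
    communities. Returns the adjacency matrix and the community labels.
    """
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(k), n // k)
    same = labels[:, None] == labels[None, :]          # same-community mask
    probs = np.where(same, p, q)                       # per-pair edge probability
    upper = np.triu(rng.random((n, n)) < probs, k=1)   # strict upper triangle
    adj = upper | upper.T                              # symmetrize; no self-loops
    return adj.astype(np.int8), labels
```

For example, `A, labels = sample_sbm(n=200, k=2, p=0.5, q=0.1)` draws a two-community graph with dense intra-community and sparse inter-community edges, the regime in which embedding-based cluster separation is expected.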
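The setup cell above summarizes a gradient-descent loop in the spirit of the paper's Algorithm 1 (DeepWalk Gradient Descent). The sketch below is a hedged reconstruction under stated assumptions: it minimizes a standard skip-gram-style softmax objective, with the graph's random-walk transition matrix standing in for expected walk co-occurrences, which may differ from the paper's exact objective. The hyperparameters T = 100 and η = 0.01 and the small-norm random initialization are taken from the quoted setup.

```python
import numpy as np

def deepwalk_gd(adj, dim, T=100, eta=0.01, seed=0):
    """Gradient descent on a skip-gram-style DeepWalk objective.

    Minimizes L = -sum_{u,v} W[u,v] * log softmax(x_u . y_v), where W is
    the random-walk transition matrix of the graph (an assumed stand-in
    for the walk co-occurrence weights in the paper's objective).
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    # Row-normalized adjacency; assumes no isolated vertices.
    W = adj / adj.sum(axis=1, keepdims=True)
    # Random initialization with small norm, matching ||x(0)||, ||y(0)|| <= 0.01.
    X = rng.normal(size=(n, dim))
    X *= 0.01 / np.linalg.norm(X, axis=1, keepdims=True)
    Y = rng.normal(size=(n, dim))
    Y *= 0.01 / np.linalg.norm(Y, axis=1, keepdims=True)
    for _ in range(T):
        scores = X @ Y.T                                       # (n, n) inner products
        scores -= scores.max(axis=1, keepdims=True)            # numerical stability
        P = np.exp(scores)
        P /= P.sum(axis=1, keepdims=True)                      # row-wise softmax
        G = P - W                                              # dL/dscores (W rows sum to 1)
        X, Y = X - eta * (G @ Y), Y - eta * (G.T @ X)          # simultaneous update
    return X, Y
```

Running `X, _ = deepwalk_gd(A, dim=2)` on a well-separated SBM graph and plotting the rows of X would be the natural way to look for the cross-cluster separation of embeddings that the paper reports for different embedding dimensions.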