Unsupervised Joint k-node Graph Representations with Compositional Energy-Based Models
Authors: Leonardo Cotta, Carlos H. C. Teixeira, Ananthram Swami, Bruno Ribeiro
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that the unsupervised joint k-node representations of MHM-GNN produce better unsupervised representations than existing approaches from the literature. In this section, we evaluate the quality of the unsupervised motif representations learned by MHM-GNN over six datasets using two joint k-node transfer learning tasks. |
| Researcher Affiliation | Academia | Leonardo Cotta, Purdue University (cotta@purdue.edu); Carlos H. C. Teixeira, Universidade Federal de Minas Gerais, Brazil (carlos@dcc.ufmg.br); Ananthram Swami, United States Army Research Laboratory (ananthram.swami.civ@mail.mil); Bruno Ribeiro, Purdue University (ribeiro@cs.purdue.edu) |
| Pseudocode | No | The information is insufficient. The paper includes Figure 2 which illustrates a graph and its k-CNHON with an RWT example, but this is an illustrative diagram and not a structured pseudocode or algorithm block. |
| Open Source Code | Yes | Our code is available at https://github.com/PurdueMINDS/minds-mhm-gnn. |
| Open Datasets | Yes | Datasets. We use the Cora, Citeseer and Pubmed [40] citation networks, the DBLP coauthorship network [49], the Steam [32] and the Rent the Runway [26] product networks (more details about the datasets are in the supplement). |
| Dataset Splits | No | The information is insufficient. The paper describes how it divides data into training and test sets but does not explicitly mention a distinct validation set with specific proportions or sample counts in the main text. While it mentions hyperparameter tuning in the supplement, the details for a validation split are not provided in the main paper. |
| Hardware Specification | No | The information is insufficient. The paper does not provide any specific hardware details such as GPU/CPU models, memory, or cloud computing instance types used for running the experiments. |
| Software Dependencies | No | The information is insufficient. While the paper mentions using Graph SAGE-mean as the GNN, it does not specify any software versions for programming languages, libraries, or frameworks (e.g., Python version, PyTorch/TensorFlow version, CUDA version). |
| Experiment Setup | Yes | MHM-GNN architecture. The energy function of MHM-GNN is as described in Equation (2), where we use a one-hidden-layer feedforward network with LeakyReLU activations as ρ, a row-wise sum followed by another one-hidden-layer feedforward network with LeakyReLU activations as the READOUT function, and a single-layer GraphSAGE-mean (Hamilton et al. [15]) as the GNN. To this end, we construct D_true by subsampling the original graph with Forest Fire [24]. As for the noise distribution, we turn to the one used by Veličković et al. [46], where for each positive example we generate M negative samples by keeping the adjacency matrix and shuffling the feature matrix. |
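For readers reproducing this setup, below is a minimal PyTorch sketch of how the pieces named in the quoted setup could fit together: a single GraphSAGE-mean layer, a one-hidden-layer MLP as ρ, a row-wise sum followed by another one-hidden-layer MLP as READOUT, and the feature-shuffling corruption for negative samples. The class name `MHMGNNEnergy`, the dense-adjacency mean aggregation, the hidden dimensions, and the exact composition order are assumptions made for illustration; Equation (2) itself is not reproduced in this report.

```python
import torch
import torch.nn as nn


class MHMGNNEnergy(nn.Module):
    """Hypothetical sketch of the energy function described in the
    Experiment Setup row. The wiring (GNN -> rho -> row-wise sum ->
    READOUT) is an assumption; Equation (2) is not quoted here."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        # Single GraphSAGE-mean layer: combine a node's own features
        # with the mean of its neighbors' features (a common SAGE-mean
        # formulation; the paper's exact variant may differ).
        self.sage = nn.Linear(2 * in_dim, hid_dim)
        # rho: one-hidden-layer feedforward network with LeakyReLU.
        self.rho = nn.Sequential(
            nn.Linear(hid_dim, hid_dim), nn.LeakyReLU(), nn.Linear(hid_dim, hid_dim)
        )
        # READOUT tail: one-hidden-layer feedforward network with
        # LeakyReLU, applied after a row-wise sum; scalar energy out.
        self.readout = nn.Sequential(
            nn.Linear(hid_dim, hid_dim), nn.LeakyReLU(), nn.Linear(hid_dim, 1)
        )

    def forward(self, x, adj, node_set):
        # x: (n, in_dim) node features; adj: (n, n) dense adjacency;
        # node_set: LongTensor with the k node indices of the motif.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh_mean = (adj @ x) / deg                # mean over neighbors
        h = torch.relu(self.sage(torch.cat([x, neigh_mean], dim=1)))
        h_k = self.rho(h[node_set])                 # (k, hid_dim)
        return self.readout(h_k.sum(dim=0))         # row-wise sum -> energy


def shuffle_features(x):
    """Corruption in the style of Velickovic et al. [46]: keep the
    adjacency matrix fixed and permute the rows of the feature matrix
    to draw a negative sample."""
    return x[torch.randperm(x.size(0))]


# Toy usage: score one positive motif and one corrupted negative.
x = torch.randn(10, 8)
adj = (torch.rand(10, 10) < 0.3).float()
model = MHMGNNEnergy(in_dim=8, hid_dim=16)
energy_pos = model(x, adj, torch.tensor([0, 2, 5]))
energy_neg = model(shuffle_features(x), adj, torch.tensor([0, 2, 5]))
```

The corruption keeps the graph structure intact and only permutes which node carries which feature vector, matching the negative-sample recipe quoted above; how the M negative energies enter the training loss (e.g., a noise-contrastive objective) is not specified in the quoted text and is left out of this sketch.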