MagNet: A Neural Network for Directed Graphs

Authors: Xitong Zhang, Yixuan He, Nathan Brugnone, Michael Perlmutter, Matthew Hirn

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We apply our network to a variety of directed graph node classification and link prediction tasks showing that MagNet performs well on all tasks and that its performance exceeds all other methods on a majority of such tasks. The underlying principles of MagNet are such that it can be adapted to other GNN architectures. In Section 5, we apply our network to node classification and link prediction tasks. We compare against several spectral and spatial methods as well as networks designed for directed graphs.
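For reference, the core object behind MagNet is the normalized magnetic Laplacian. Below is a minimal NumPy sketch of its construction as defined in the paper; the function name and the toy graph are our own, not the authors' code.

    import numpy as np

    def magnetic_laplacian(A, q=0.25):
        """Normalized magnetic Laplacian: L_q = I - D^{-1/2} H_q D^{-1/2}."""
        A_s = 0.5 * (A + A.T)                  # symmetrized adjacency
        theta = 2.0 * np.pi * q * (A - A.T)    # phase matrix encoding edge direction
        H = A_s * np.exp(1j * theta)           # Hermitian adjacency matrix
        d = A_s.sum(axis=1)
        d_inv_sqrt = np.zeros_like(d)
        d_inv_sqrt[d > 0] = d[d > 0] ** -0.5
        D = np.diag(d_inv_sqrt)
        return np.eye(A.shape[0]) - D @ H @ D

    # Toy directed path 0 -> 1 -> 2; the result is Hermitian by construction.
    A = np.array([[0., 1., 0.], [0., 0., 1.], [0., 0., 0.]])
    L = magnetic_laplacian(A, q=0.25)
    assert np.allclose(L, L.conj().T)

Here q is the charge parameter the paper tunes by cross-validation; setting q = 0 removes the phase and recovers the usual symmetrized graph Laplacian.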
Researcher Affiliation | Academia | (1) Michigan State University, Department of Computational Mathematics, Science & Engineering, East Lansing, Michigan, United States; (2) University of Oxford, Department of Statistics, Oxford, England, United Kingdom; (3) Michigan State University, Department of Community Sustainability, East Lansing, Michigan, United States; (4) University of California, Los Angeles, Department of Mathematics, Los Angeles, California, United States; (5) Michigan State University, Department of Mathematics, East Lansing, Michigan, United States; (6) Michigan State University, Center for Quantum Computing, Science & Engineering, East Lansing, Michigan, United States
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code. It mentions "We also provide a supplementary document with full implementation details, theoretical results concerning the magnetic Laplacian, extended examples, and further numerical details," but does not include a link or an explicit statement of code release.
Open Datasets | Yes | Texas, Wisconsin, and Cornell are WebKB datasets modeling links between websites at different universities [36]. Telegram [5] is a pairwise influence network... The datasets Chameleon and Squirrel [37] represent links between Wikipedia pages... WikiCS [31] is a collection of Computer Science articles... Cora-ML and CiteSeer are popular citation networks with node labels corresponding to scientific subareas. We use the versions of these datasets provided in [4].
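Several of these datasets ship with PyTorch Geometric. The loader calls below are a hypothetical sketch for readers who want to retrieve them; the paper uses the versions from [4] and does not name a loader, so the PyG copies may differ.

    from torch_geometric.datasets import WebKB, WikipediaNetwork

    cornell = WebKB(root="data/webkb", name="Cornell")                # also: Texas, Wisconsin
    chameleon = WikipediaNetwork(root="data/wiki", name="chameleon")  # also: squirrel
    print(cornell[0].num_nodes, chameleon[0].num_nodes)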
Dataset Splits | Yes | For the datasets Cornell, Texas, Wisconsin, and Telegram we use a 60%/20%/20% training/validation/test split... For Cora-ML and CiteSeer, we use the same split as [41]. For all of these datasets we use 10 random data splits. For the DSBM datasets... We use 20% of the nodes for validation and we vary the proportion of training samples based on the classification difficulty, using 2%, 10%, and 60% of nodes per class for the ordered, cyclic, and noisy cyclic DSBM graphs, respectively, during training, and the rest for testing.
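A minimal sketch of how such 60%/20%/20% splits over 10 random seeds might be generated; the exact split-generation code is our assumption, not the authors'.

    import numpy as np

    def random_split(n_nodes, seed):
        rng = np.random.default_rng(seed)
        perm = rng.permutation(n_nodes)
        n_train, n_val = int(0.6 * n_nodes), int(0.2 * n_nodes)
        return (perm[:n_train],                   # train
                perm[n_train:n_train + n_val],    # validation
                perm[n_train + n_val:])           # test

    # Ten independent train/validation/test splits, e.g. for Cornell (183 nodes).
    splits = [random_split(183, seed) for seed in range(10)]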
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory) used for running the experiments.
Software Dependencies | No | The paper mentions "Since currently complex tensors are still in beta in PyTorch, we did not use them..." but does not give version numbers for PyTorch or any other software dependencies.
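MagNet's features are complex-valued, and the quoted remark explains why the authors avoided native complex tensors. A common workaround, sketched here under our own assumptions (the paper does not show its implementation), is to carry real and imaginary parts as a pair of real tensors:

    import torch

    def complex_matmul(a_re, a_im, b_re, b_im):
        """(a_re + i*a_im) @ (b_re + i*b_im) using only real tensors."""
        return a_re @ b_re - a_im @ b_im, a_re @ b_im + a_im @ b_re

    a_re, a_im = torch.randn(4, 4), torch.randn(4, 4)
    b_re, b_im = torch.randn(4, 3), torch.randn(4, 3)
    out_re, out_im = complex_matmul(a_re, a_im, b_re, b_im)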
Experiment Setup | No | The paper mentions that "The setting of the hyperparameter q and other network hyperparameters is obtained by cross-validation." It also specifies architectural choices like "In most cases we set K = 1" and "In our experiments, we set L = 2 or 3." However, it does not provide specific values for common training hyperparameters such as learning rate, batch size, or optimizer settings, nor system-level configurations.
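As context for the quoted choices of K and L: MagNet's layers apply order-K Chebyshev polynomial filters of the magnetic Laplacian. A self-contained sketch of the filter basis, where the spectral shift convention (assuming lambda_max is approximately 2) is our assumption:

    import numpy as np

    def cheb_basis(L, X, K=1):
        """[T_0(L~)X, ..., T_K(L~)X], with L~ = L - I (assuming lambda_max ~ 2)."""
        L_t = L - np.eye(L.shape[0])               # shift spectrum from [0, 2] to [-1, 1]
        Ts = [X.astype(complex), L_t @ X]
        for _ in range(2, K + 1):
            Ts.append(2 * L_t @ Ts[-1] - Ts[-2])   # Chebyshev recurrence
        return Ts[:K + 1]

    L = np.eye(5, dtype=complex)                   # placeholder Hermitian Laplacian
    X = np.random.default_rng(0).standard_normal((5, 3))
    basis = cheb_basis(L, X, K=1)                  # K = 1, as the paper reports

Each of the L layers would then mix these basis terms with learned complex weight matrices; with K = 1 the basis is just X and L~X.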