SigMaNet: One Laplacian to Rule Them All
Authors: Stefano Fiorini, Stefano Coniglio, Michele Ciavotta, Enza Messina
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we report on a set of computational experiments carried out on four tasks: link sign prediction, link existence prediction, link direction prediction, and node classification. |
| Researcher Affiliation | Academia | 1 University of Milano-Bicocca, Milan, Italy, 2 University of Bergamo, Bergamo, Italy |
| Pseudocode | No | The paper describes the model architecture and equations but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available on GitHub: https://github.com/Stefa1994/SigMaNet. |
| Open Datasets | Yes | We test SigMaNet on six real-world datasets from the literature: Bitcoin-OTC and Bitcoin Alpha (Kumar et al. 2016); Slashdot and Epinions (Leskovec, Huttenlocher, and Kleinberg 2010b); WikiRfa (West et al. 2014); and Telegram (Bovet and Grindrod 2020). |
| Dataset Splits | Yes | The experiments are run with k-cross validation with k = 5, reporting the average score obtained across the k splits. Connectivity is maintained when building each training set by guaranteeing that the graph used for training in each fold contains a spanning tree. Following (Huang et al. 2021), we adopt an 80%-20% training-testing split. ... Following Zhang et al. (2021b), in each task we reserve 15% of the edges for testing, 5% for validation, and use the remaining ones for training. The experiments are run with k-cross validation with k = 10. ... We rely on the standard 60%/20%/20% split for training/validation/testing across all datasets. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with their version numbers. |
| Experiment Setup | Yes | As such, it can easily be applied to a variety of tasks in an almost task-agnostic way (provided that one defines a suitable loss function), while architectures such as MagNet are suitable only for tasks whose graph has nonnegative edge weights. As Lσ is entirely parameter-free, SigMaNet does not require any fine-tuning to optimize the propagation of topological information through the network, differently from, e.g., DiGraph (Tong et al. 2020a) and MagNet. ... in this work, the number of filters is chosen from {16, 32, 64} via hyperparameter optimization. ... we follow Zhang et al. (2021b) and rely on a complex version of the ReLU activation function, which is defined for a given z ∈ ℂ as ϕ(z) = z if ℜ(z) ≥ 0 and ϕ(z) = 0 otherwise. As the output of the convolutional layer Zσ is complex-valued, to coerce it into the reals without information loss we apply an unwind operation by which Zσ(X) ∈ ℂ^(n×f) is transformed into [ℜ(Zσ(X)); ℑ(Zσ(X))] ∈ ℝ^(n×2f). To obtain the final result based on the task at hand, we apply either a linear layer with weights W or a 1D convolution. |
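The complex ReLU and unwind operations quoted in the Experiment Setup row can be sketched in a few lines of NumPy. This is a minimal illustration of the two definitions as stated in the paper, not the authors' implementation; the function names `complex_relu` and `unwind` are ours.

```python
import numpy as np

def complex_relu(z: np.ndarray) -> np.ndarray:
    """phi(z) = z if Re(z) >= 0, else 0, applied elementwise to a
    complex-valued array (the activation described in the paper)."""
    return np.where(z.real >= 0, z, 0)

def unwind(z: np.ndarray) -> np.ndarray:
    """Map a complex (n, f) feature matrix to a real (n, 2f) matrix by
    concatenating real and imaginary parts, so no information is lost."""
    return np.concatenate([z.real, z.imag], axis=-1)

# Toy 2-node, 2-feature complex layer output
z = np.array([[1 + 2j, -1 + 0.5j],
              [-0.5 - 1j, 3 - 4j]])
a = complex_relu(z)   # entries with negative real part are zeroed
x = unwind(a)         # real-valued, shape (2, 4)
```

A downstream linear layer or 1D convolution would then operate on the real-valued `x`, as the quoted passage describes.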