Shift Aggregate Extract Networks

Authors: Francesco Orsini, Daniele Baracchi, Paolo Frasconi

ICLR 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our method is empirically evaluated on real-world social network datasets, outperforming the current state of the art.
Researcher Affiliation | Academia | Francesco Orsini 1,2, Daniele Baracchi 2 and Paolo Frasconi 2; 1 Department of Computer Science, Katholieke Universiteit Leuven; 2 Department of Information Engineering, Università degli Studi di Firenze
Pseudocode | Yes |
DOMAIN-COMPRESSION(X, R)
1  C_0, D_0 = COMPUTE-CD(X)
2  X_comp = C_0 X                                        // compress the X matrix
3  R_comp = {}                                           // initialize an empty container for compressed matrices
4  for l = 1 to L
5      R^col_comp = [R_{l,π} D_{l-1}, π = 1, ..., |Π_l|] // column compression
6      C_l, D_l = COMPUTE-CD(R^col_comp)
7      for π = 1 to |Π_l|
8          R_comp_{l,π} = C_l (R^col_comp)_π             // row compression
9  return X_comp, R_comp
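The pseudocode above can be sketched in NumPy, assuming (as in the paper's symmetry argument) that COMPUTE-CD merges identical rows: C selects one representative per unique row and D maps each original row back to its representative. Function names and data layout here are illustrative reconstructions, not the authors' implementation.

```python
import numpy as np

def compute_cd(X):
    """Compression/decompression pair for duplicate rows of X.
    C (k x n) picks one representative per unique row; D (n x k)
    maps every original row back to it, so D @ C @ X == X."""
    _, first_idx, inverse = np.unique(
        X, axis=0, return_index=True, return_inverse=True)
    k, n = len(first_idx), X.shape[0]
    C = np.zeros((k, n))
    C[np.arange(k), first_idx] = 1.0
    D = np.zeros((n, k))
    D[np.arange(n), inverse] = 1.0
    return C, D

def domain_compression(X, R):
    """X: attribute matrix; R[l] is the list of relation matrices
    R_{l,pi} at level l of the part-of decomposition (assumed layout)."""
    C0, D_prev = compute_cd(X)
    X_comp = C0 @ X                      # compress the X matrix
    R_comp = []                          # container for compressed matrices
    for R_l in R:
        # column compression: absorb the previous level's decompression
        cols = [R_pi @ D_prev for R_pi in R_l]
        C_l, D_prev = compute_cd(np.hstack(cols))
        # row compression: collapse duplicate rows at this level
        R_comp.append([C_l @ col for col in cols])
    return X_comp, R_comp
```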
Open Source Code No The paper does not provide any explicit statement or link indicating that the source code for the methodology is openly available.
Open Datasets | Yes | In order to answer the experimental questions we tested our method on six publicly available datasets first proposed by Yanardag & Vishwanathan (2015). COLLAB is a dataset where each graph represents the ego-network of a researcher... IMDB-BINARY, IMDB-MULTI are datasets derived from IMDB... REDDIT-BINARY, REDDIT-MULTI5K, REDDIT-MULTI12K are datasets where each graph is derived from a discussion thread from Reddit.
Dataset Splits | Yes | The classification accuracy of SAEN was measured with 10-times 10-fold cross-validation.
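The 10-times 10-fold protocol (100 train/test runs per dataset) can be reproduced with scikit-learn's `RepeatedStratifiedKFold`; the classifier and synthetic data below are placeholders standing in for SAEN and the graph datasets.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# placeholder data and model; in the paper these are graph datasets and SAEN
X, y = make_classification(n_samples=200, random_state=0)

# 10 folds repeated 10 times => 100 accuracy scores
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```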
Hardware Specification | Yes | For the purpose of this experiment, all tests were run on a computer with two 8-core Intel Xeon E5-2665 processors and 94 GB RAM.
Software Dependencies | No | The paper mentions using the 'Adam algorithm' for optimization and 'Leaky ReLU' as an activation function, but it does not specify version numbers for any software libraries or frameworks (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | We manually chose the number of layers and units for each level of the part-of decomposition; the number of epochs was chosen manually for each dataset and we kept the same value for all the 100 runs of the 10-times 10-fold cross-validation. We used the Leaky ReLU (Maas et al.) activation function on all the units. We report the chosen parameters in Table A1 of the appendix. In all our experiments we trained the neural networks by using the Adam algorithm to minimize a cross entropy loss.
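The three ingredients of the training recipe (Leaky ReLU units, cross-entropy loss, one Adam update step) follow standard definitions; this NumPy sketch restates those definitions and is not the authors' code.

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU: a small negative slope instead of zero for x < 0
    return np.where(x > 0, x, alpha * x)

def softmax_cross_entropy(logits, y):
    # numerically stable softmax cross-entropy; y holds integer labels
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(y)), y].mean()

def adam_step(param, grad, state, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # one Adam update with bias-corrected first/second moment estimates
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad
    state["v"] = b2 * state["v"] + (1 - b2) * grad ** 2
    m_hat = state["m"] / (1 - b1 ** state["t"])
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return param - lr * m_hat / (np.sqrt(v_hat) + eps)
```

A full training loop would apply `adam_step` to every weight tensor after backpropagating the gradient of `softmax_cross_entropy` through `leaky_relu` layers.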