Janossy Pooling: Learning Deep Permutation-Invariant Functions for Variable-Size Inputs

Authors: Ryan L. Murphy, Balasubramaniam Srinivasan, Vinayak Rao, Bruno Ribeiro

ICLR 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We explore a few in our experiments, which demonstrate improved performance over current state-of-the-art methods.
Researcher Affiliation | Academia | Ryan L. Murphy, Department of Statistics, Purdue University, murph213@purdue.edu; Balasubramaniam Srinivasan, Department of Computer Science, Purdue University, bsriniv@purdue.edu; Vinayak Rao, Department of Statistics, Purdue University, varao@purdue.edu; Bruno Ribeiro, Department of Computer Science, Purdue University, ribeiro@cs.purdue.edu
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code used to generate the results in this section are available on GitHub: https://github.com/PurdueMINDS/JanossyPooling
Open Datasets | Yes | We consider the three graph datasets considered in Hamilton et al. (2017): Cora and Pubmed (Sen et al., 2008) and the larger Protein-Protein Interaction (PPI) (Zitnik & Leskovec, 2017).
Dataset Splits | No | The paper mentions tuning on a 'validation set' but does not provide specific details on its size, composition, or how it was split from the main dataset.
Hardware Specification | Yes | Training was performed on GeForce GTX 1080 Ti GPUs.
Software Dependencies | No | We extended the code from Zaheer et al. (2017), which was written in Keras (Chollet et al., 2015), and subsequently ported to PyTorch. (Explanation: while software is mentioned, specific version numbers are missing for reproducibility.)
Experiment Setup | Yes | The MLPs in f have 30 neurons whereas the MLPs in ρ have 100 neurons, the LSTMs have 50 neurons, and the GRUs have 80 hidden neurons. All activations are tanh except for the output layer, which is linear. ... Optimization is done with Adam (Kingma & Ba, 2015) with the learning rate tuned over {0.01, 0.001, 0.0001, 0.00001}. ... For every dataset, the embedding dimension was set to q = 256 at both conv layers. For Pubmed and PPI, the learning rate is set at 0.01 while for Cora it is set at 0.005.
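To make the quoted sizes concrete, below is a minimal PyTorch sketch of one sequence-model variant with those hyperparameters. The module names, per-element input dimension, and the exact wiring of f and ρ are assumptions made for illustration only; this is not the authors' released code, which is at the GitHub link above.

```python
import torch
import torch.nn as nn

# Hedged sketch of the reported hyperparameters. INPUT_DIM and the wiring of
# f and rho are assumptions, not taken from the released implementation.
INPUT_DIM = 1                                      # assumed per-element feature size
LEARNING_RATE_GRID = [0.01, 0.001, 0.0001, 0.00001]  # grid searched in the paper

class JanossyLSTMVariant(nn.Module):
    """One sequence-model variant: f is an LSTM with 50 hidden units, and rho
    is an MLP with 100 tanh neurons followed by a linear output layer.
    (The GRU variant would use hidden_size=80; the MLP-f variant 30 neurons.)"""
    def __init__(self, input_dim=INPUT_DIM):
        super().__init__()
        self.f = nn.LSTM(input_size=input_dim, hidden_size=50, batch_first=True)
        self.rho = nn.Sequential(
            nn.Linear(50, 100),
            nn.Tanh(),
            nn.Linear(100, 1),   # output layer is linear
        )

    def forward(self, x):          # x: (batch, sequence_length, input_dim)
        _, (h_n, _) = self.f(x)    # final hidden state summarizes the sequence
        return self.rho(h_n[-1])   # (batch, 1)

model = JanossyLSTMVariant()
# The learning rate was tuned over LEARNING_RATE_GRID; 0.001 shown for illustration.
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE_GRID[1])

# Tiny usage example on random data (shapes are illustrative only).
x = torch.randn(4, 10, INPUT_DIM)  # batch of 4 sequences of length 10
y = model(x)                       # (4, 1)
```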