Unveiling the sampling density in non-uniform geometric graphs

Authors: Raffaele Paolino, Aleksandar Bojchevski, Stephan Günnemann, Gitta Kutyniok, Ron Levie

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental findings support our theory and provide strong evidence for our model. From Section 3 (Experiments): "In the following, we validate the NuG model experimentally. Moreover, we verify the validity of our method first on synthetic datasets, then on real-world graphs in a transductive (node classification) and inductive (graph classification) setting. Finally, we propose proof-of-concept applications in explainability, learning GSOs, and differentiable pooling."
Researcher Affiliation | Academia | Raffaele Paolino, Department of Mathematics & Munich Center for Machine Learning (MCML), Ludwig-Maximilians-Universität München (paolino@math.lmu.de); Aleksandar Bojchevski, CISPA Helmholtz Center for Information Security (bojchevski@cispa.de); Stephan Günnemann, Department of Computer Science & Munich Data Science Institute, Technical University of Munich (s.guennemann@tum.de); Gitta Kutyniok, Department of Mathematics & Munich Center for Machine Learning (MCML), Ludwig-Maximilians-Universität München (kutyniok@math.lmu.de); Ron Levie, Faculty of Mathematics, Technion – Israel Institute of Technology (levieron@technion.ac.il)
Pseudocode | No | The paper describes its methods textually, for example: "Hence, we implement the density estimator as a message-passing graph neural network (MPNN) Θ", and details its components, but it does not include structured pseudocode or algorithm blocks (a hedged sketch of such an estimator is given below).
Open Source Code | No | The paper contains no explicit statement about releasing source code for the described methodology, nor does it provide a link to a code repository.
Open Datasets | Yes | "In Section 3.2, we considered G to be one of the graphs reported in Tab. 1." The datasets mentioned include Citeseer, Cora, and Pubmed (Yang et al., 2016), Amazon Computers and Amazon Photo (Shchur et al., 2019), Facebook Page-Page (Rozemberczki et al., 2021), and the AIDS dataset (Riesen & Bunke, 2008); a loading sketch is given below.
Dataset Splits | Yes | Node classification: "We split the nodes in training (85%), validation (5%), and test (10%) in a stratified fashion, and apply early stopping." Graph classification: "We perform a stratified splitting of the graphs in training (85%), validation (5%), and test (10%), and applied early stopping." A splitting sketch is given below.
Hardware Specification | No | The paper does not provide hardware details such as GPU/CPU models, memory capacity, or other machine specifications used to run its experiments.
Software Dependencies | No | The paper mentions software components such as ADAM, EdgeConv, GCN, ChebNet, and CayleyNet, but does not provide version numbers for these or any other key software dependencies.
Experiment Setup | Yes | Reported configurations include: "The number of hidden layers, hidden channels, and output channels is 3, 32, and 1, respectively. The optimizer is ADAM (Kingma & Ba, 2015) with learning rate 10^-2."; "The number of hidden channels, hidden layers, and output channels is respectively 32, 2, and 2."; "The order of the polynomial spectral filters is 1, the number of hidden channels 32, and the number of hidden layers 2; the chosen batch size is 64." Training sketches are given below.
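
The density estimator Θ is described only in prose. Below is a minimal PyTorch Geometric sketch of an MPNN with the shape reported in the Experiment Setup row (3 hidden layers, 32 hidden channels, 1 output channel); the class name, the choice of GCNConv as the message-passing layer, and the ReLU/softplus activations are assumptions, not the authors' implementation:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class DensityEstimator(torch.nn.Module):
    """Hypothetical MPNN density estimator: node features -> scalar density.

    Matches only the reported shape (3 hidden layers, 32 hidden channels,
    1 output channel); the layer type used in the paper is unspecified.
    """
    def __init__(self, in_channels: int, hidden_channels: int = 32,
                 num_layers: int = 3):
        super().__init__()
        self.convs = torch.nn.ModuleList([GCNConv(in_channels, hidden_channels)])
        for _ in range(num_layers - 1):
            self.convs.append(GCNConv(hidden_channels, hidden_channels))
        self.out = GCNConv(hidden_channels, 1)  # one output channel

    def forward(self, x, edge_index):
        for conv in self.convs:
            x = F.relu(conv(x, edge_index))
        # Softplus keeps the per-node density estimate positive (an assumption).
        return F.softplus(self.out(x, edge_index)).squeeze(-1)
```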
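All datasets listed in the Open Datasets row are publicly available; the following sketch shows how they can be fetched through PyTorch Geometric's dataset loaders (the root directories are placeholders, and the paper does not state which loaders it used):

```python
from torch_geometric.datasets import Planetoid, Amazon, FacebookPagePage, TUDataset

# Citation networks (Yang et al., 2016)
cora = Planetoid(root='data/Planetoid', name='Cora')
citeseer = Planetoid(root='data/Planetoid', name='CiteSeer')
pubmed = Planetoid(root='data/Planetoid', name='PubMed')

# Co-purchase graphs (Shchur et al., 2019)
computers = Amazon(root='data/Amazon', name='Computers')
photo = Amazon(root='data/Amazon', name='Photo')

# Page-page network (Rozemberczki et al., 2021)
facebook = FacebookPagePage(root='data/FacebookPagePage')

# Graph-classification dataset (Riesen & Bunke, 2008)
aids = TUDataset(root='data/TUDataset', name='AIDS')
```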
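The stratified 85/5/10 split can be reproduced with scikit-learn; a sketch assuming the labels are given as a NumPy array y (the paper does not report its splitting tool or random seed, so stratified_split and seed are hypothetical):

```python
import numpy as np
from sklearn.model_selection import train_test_split

def stratified_split(y, train=0.85, val=0.05, test=0.10, seed=0):
    """Split indices 0..n-1 into train/val/test sets, stratified by label y."""
    idx = np.arange(len(y))
    # First carve out the test set (10% of all samples).
    idx_rest, idx_test = train_test_split(
        idx, test_size=test, stratify=y, random_state=seed)
    # Then split the remainder so validation is 5% of the *total*.
    val_frac = val / (train + val)
    idx_train, idx_val = train_test_split(
        idx_rest, test_size=val_frac, stratify=y[idx_rest], random_state=seed)
    return idx_train, idx_val, idx_test

# Usage: idx_train, idx_val, idx_test = stratified_split(y)
```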
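The reported optimizer (ADAM, learning rate 10^-2) and early stopping fit a standard training loop; in the sketch below, the patience value and the loss_fn interface are assumptions, since the paper reports neither:

```python
import copy
import torch

def train(model, loss_fn, data, max_epochs=1000, patience=50):
    """ADAM with lr 1e-2 as reported; patience=50 is an assumed value."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    best_val, best_state, wait = float('inf'), None, 0
    for epoch in range(max_epochs):
        model.train()
        optimizer.zero_grad()
        loss = loss_fn(model, data, split='train')  # hypothetical interface
        loss.backward()
        optimizer.step()
        model.eval()
        with torch.no_grad():
            val_loss = loss_fn(model, data, split='val').item()
        if val_loss < best_val:
            # Keep the best weights seen so far on the validation split.
            best_val, best_state, wait = val_loss, copy.deepcopy(model.state_dict()), 0
        else:
            wait += 1
            if wait >= patience:  # early stopping
                break
    model.load_state_dict(best_state)
    return model
```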