Diffusion Scattering Transforms on Graphs

Authors: Fernando Gama, Alejandro Ribeiro, Joan Bruna

ICLR 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | In this section, we first show empirically the dependence of the stability result with respect to the spectral gap, and then we illustrate the discriminative power of the diffusion scattering transform in two different classification tasks; namely, the problems of authorship attribution and source localization. |
| Researcher Affiliation | Academia | Fernando Gama, Dept. of Electrical and Systems Engineering, University of Pennsylvania; Alejandro Ribeiro, Dept. of Electrical and Systems Engineering, University of Pennsylvania; Joan Bruna, Courant Institute of Mathematical Sciences, New York University |
| Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks, nor does it present structured, code-like steps for a method or procedure. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing code, or a link to a source code repository for the described methodology. |
| Open Datasets | Yes | For the second task, let G be a 234-node graph modeling real-world Facebook interactions (McAuley & Leskovec, 2012). |
| Dataset Splits | No | The paper mentions a 'training set' and 'test set' for its experiments, but it does not specify a validation set or explicit train/validation/test splits. |
| Hardware Specification | No | The paper does not provide any details about the hardware used to run the experiments, such as GPU/CPU models, memory, or the computing environment. |
| Software Dependencies | No | The paper mentions training an 'SVM linear model' but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, or scikit-learn versions). |
| Experiment Setup | No | The paper describes some procedural details of the experiments, such as training an SVM linear model and using 'tmax = 20' for diffusion, but it does not provide specific hyperparameter values (e.g., learning rate, batch size, number of epochs, optimizer settings) or a detailed experimental setup. |
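Since no source code was released, a minimal NumPy sketch may help illustrate the kind of pipeline the paper describes: compute diffusion scattering features with the lazy diffusion operator T = ½(I + D^{-1/2} W D^{-1/2}) and dyadic wavelets ψ_0 = I − T, ψ_j = T^{2^{j−1}}(I − T^{2^{j−1}}), then feed the features to a linear classifier (the paper trains a linear SVM, omitted here). The function name, the pooling by plain averaging, and the default depths J and L are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def diffusion_scattering(W, x, J=3, L=2):
    """Hedged sketch of a graph diffusion scattering transform.

    W : symmetric adjacency matrix, shape (n, n), no isolated nodes.
    x : graph signal, shape (n,).
    Returns a feature vector of low-pass averages of |wavelet| responses
    over all scattering paths up to depth L.
    """
    n = W.shape[0]
    d = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    # Lazy diffusion operator T = (I + D^{-1/2} W D^{-1/2}) / 2.
    T = 0.5 * (np.eye(n) + d_inv_sqrt @ W @ d_inv_sqrt)

    # Dyadic diffusion wavelets: psi_0 = I - T,
    # psi_j = T^{2^{j-1}} - T^{2^j} = T^{2^{j-1}}(I - T^{2^{j-1}}).
    psis = [np.eye(n) - T]
    T_pow = T  # holds T^{2^{j-1}}
    for _ in range(1, J):
        psis.append(T_pow - T_pow @ T_pow)
        T_pow = T_pow @ T_pow

    feats = []
    layer = [x]  # signals at the current scattering depth
    for _ in range(L + 1):
        # Low-pass pooling: average each path's signal over the graph.
        feats.extend(u.mean() for u in layer)
        # Next layer: modulus of every wavelet applied to every path.
        layer = [np.abs(psi @ u) for u in layer for psi in psis]
    return np.array(feats)
```

With the defaults J=3, L=2, the feature vector has 1 + J + J² = 13 entries (one per scattering path of depth at most 2); these fixed-size features could then be classified with, e.g., a linear SVM, as in the paper's experiments.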