Pruned Graph Scattering Transforms
Authors: Vassilis N. Ioannidis, Siheng Chen, Georgios B. Giannakis
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments showcase that i) pGST performs comparably to the baseline GST that uses all scattering features, while achieving significant computational savings; ii) pGST achieves comparable performance to state-of-the-art GCNs; and iii) graph data from various domains lead to different scattering patterns, suggesting domain-adaptive pGST network architectures. |
| Researcher Affiliation | Collaboration | Vassilis N. Ioannidis, Dept. of Electrical and Computer Engineering, Univ. of Minnesota, Minneapolis, MN, USA (ioann006@umn.edu); Siheng Chen, Mitsubishi Electric Research Laboratories, Cambridge, MA, USA (schen@merl.com); Georgios B. Giannakis, Dept. of Electrical and Computer Engineering, Univ. of Minnesota, Minneapolis, MN, USA (georgios@umn.edu) |
| Pseudocode | No | The paper describes the pruning algorithm conceptually and provides a mathematical formula (5) for the optimal pruning assignment variables, but it does not include a formally structured pseudocode or algorithm block. A hedged code sketch of the pruning rule is given after this table. |
| Open Source Code | No | The paper does not provide any statements about open-sourcing code or links to a code repository for the described methodology. |
| Open Datasets | Yes | To address RQ1, we reproduce the experiments of two tasks in (Gama et al., 2019a): authorship attribution and source localization. [...] Authorship attribution amounts to determining if a certain text was written by a specific author. Each text is represented by a graph with N = 244 nodes, where words (nodes) are connected based on their relative positions in the text, and x is a bag-of-words representation of the text; see also (Gama et al., 2019b). [...] Source localization amounts to recovering the source of a rumor given a diffused signal over a Facebook subnetwork with N = 234; see the detailed settings in (Gama et al., 2019b). [...] The proposed pGST is compared with the following state-of-the-art approaches: the kernel methods shortest-path (Borgwardt & Kriegel, 2005) and Weisfeiler-Lehman optimal assignment (WL-OA) (Kriege et al., 2016); the deep learning approaches PATCHY-SAN (Niepert et al., 2016), GraphSAGE (Hamilton et al., 2017), edge-conditioned filters in CNNs (ECC) (Simonovsky & Komodakis, 2017), Set2Set (Vinyals et al., 2015), SortPool (Zhang et al., 2018), and DiffPool (Ying et al., 2018); and the geometric scattering classifier (GSC) (Gao et al., 2019). Results are presented with the protein datasets D&D, ENZYMES, and PROTEINS, and the scientific collaboration dataset COLLAB. A detailed description of the datasets is included in the Appendix. [...] We further test pGST in classifying 3D point clouds. [...] for the ModelNet40 dataset (Wu et al., 2015). |
| Dataset Splits | Yes | The parameter τ is selected via cross-validation. [...] We perform 10-fold cross-validation and report the classification accuracy averaged over the 10 folds. [...] In Fig. 4 (a), 9,843 clouds are used for training and 2,468 for testing using the gradient boosting classifier; whereas in Fig. 4 (b), only 615 clouds are used for training and the rest for testing using a fully connected neural network classifier with 3 layers. |
| Hardware Specification | No | The paper does not specify any hardware details such as GPU models, CPU types, or memory used for the experiments. |
| Software Dependencies | No | The paper mentions using a 'linear support vector machine (SVM) classifier', 'gradient boosting classifier', and 'fully connected neural network classifier', but does not provide specific version numbers for any software libraries or frameworks used. |
| Experiment Setup | Yes | During training, the structure of the pGST tree T is determined, which is then kept fixed during validation and testing. The parameter τ is selected via cross-validation. Our goal is to provide tangible answers to the following research questions. [...] For the scattering transforms, we consider three implementations of graph filter banks: the diffusion wavelets (DS) in (Gama et al., 2019b), the monic cubic wavelets (MCS) in (Hammond et al., 2011), and the tight Hann wavelets (THS) in (Shuman et al., 2015). The scattering transforms use J = 5 filters, L = 5 layers, and τ = 0.01. The extracted features from GSTs are subsequently utilized by a linear support vector machine (SVM) classifier. [...] The gradient boosting classifier is employed for pGST and GST with parameters chosen based on the performance on the validation set. The graph scattering transforms use the MC wavelet with L = 5, J = 5, and τ = 0.01. [...] The scattering transforms use an MC wavelet with J = 5 for Fig. 4 (a) and J = 9 for Fig. 4 (b). A hedged sketch of this feature-extraction and classification pipeline follows the table. |
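
Since the paper gives no pseudocode, the following is a minimal sketch of how a pruned scattering tree might be computed, under stated assumptions: the pruning test keeps a branch only when its scattered energy is at least τ times its parent's energy (our paraphrase of the optimal assignment in the paper's equation (5)); `graph_wavelets` is a hypothetical stand-in for the DS/MCS/THS filter banks cited above; and the per-node mean aggregation is likewise an assumption, not the paper's exact feature map.

```python
import numpy as np

def graph_wavelets(S, J):
    """Hypothetical filter bank: differences of dyadic powers of the graph
    shift S, a stand-in for the DS/MCS/THS banks cited in the paper."""
    P = [np.linalg.matrix_power(S, 2 ** j) for j in range(J + 1)]
    return [P[j] - P[j + 1] for j in range(J)]

def pgst(x, filters, L, tau):
    """Sketch of a pruned graph scattering transform (pGST).

    Each tree node z is scattered through every wavelet H_j; the child
    sigma(H_j z) survives only if its energy is at least tau times the
    parent's energy (our reading of the pruning criterion in eq. (5)).
    One aggregate feature (here: the mean) is emitted per retained node.
    """
    sigma = np.abs                        # modulus nonlinearity
    frontier, features = [x], [x.mean()]
    for _ in range(L):
        nxt = []
        for z in frontier:
            e_parent = np.sum(z ** 2)
            for H in filters:
                child = sigma(H @ z)
                if np.sum(child ** 2) >= tau * e_parent:   # pruning test
                    nxt.append(child)
                    features.append(child.mean())
        frontier = nxt
    return np.asarray(features)

# Toy usage: random symmetric graph, J = 5 filters, L = 5 layers, tau = 0.01.
rng = np.random.default_rng(0)
A = rng.random((20, 20)); A = (A + A.T) / 2
S = A / np.linalg.norm(A, 2)              # spectral-norm-normalized shift
phi = pgst(rng.standard_normal(20), graph_wavelets(S, J=5), L=5, tau=0.01)
```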
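To make the reported protocol concrete, here is a minimal sketch of the downstream classification step. It reuses the hypothetical `pgst` and `graph_wavelets` helpers above, selects τ from an illustrative grid via cross-validation, and scores a linear SVM with 10-fold cross-validation, as the quoted setup describes. Note one simplification: the paper fixes the pruned tree after training, whereas this sketch zero-pads per-graph feature vectors to a common length.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import LinearSVC

def pad_rows(rows):
    """Zero-pad per-graph feature vectors to a common length (a
    simplification; the paper instead fixes the tree during training)."""
    m = max(len(r) for r in rows)
    return np.stack([np.pad(r, (0, m - len(r))) for r in rows])

def evaluate(shifts, signals, labels, taus=(1e-3, 1e-2, 1e-1)):
    """Pick tau by 10-fold CV accuracy of a linear SVM on pGST features
    and return the best (tau, mean accuracy). The tau grid is illustrative."""
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    best_tau, best_acc = None, -np.inf
    for tau in taus:
        X = pad_rows([pgst(x, graph_wavelets(S, J=5), L=5, tau=tau)
                      for S, x in zip(shifts, signals)])
        acc = cross_val_score(LinearSVC(), X, labels, cv=cv).mean()
        if acc > best_acc:
            best_tau, best_acc = tau, acc
    return best_tau, best_acc
```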