Deep Homomorphism Networks

Authors: Takanori Maehara, Hoang NT

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conducted experiments and observed that the DHN solved difficult benchmark problems (CSL, EXP, and SR25) with fewer parameters than the existing models. For real-world datasets, the proposed model showed promising results, but was still not competitive with the state-of-the-art models that involve a lot of engineering (see Section 6 for discussion).
Researcher Affiliation | Collaboration | Takanori Maehara* (Roku, Inc., Cambridge, UK; tmaehara@roku.com) and Hoang NT (University of Tokyo, Tokyo, Japan; hoangnt@g.ecc.u-tokyo.ac.jp)
Pseudocode | Yes | Algorithm 1: Algorithm for tree pattern P.
procedure RECURSION(P, p)
    dp_p[u] ← µ_p(x_u) for all u ∈ V(G)
    for q ∈ children(p) do
        dp_q ← RECURSION(P, q)
        dp_p[u] ← dp_p[u] · Σ_{v ∈ N(u)} dp_q[v] for all u ∈ V(G)
    end for
    return dp_p
end procedure
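The tree-pattern recursion above can be sketched in plain Python. This is a hedged reconstruction, not the paper's code: the adjacency-list host graph, the child-map representation of the rooted pattern, the function name `tree_hom_counts`, and the default uniform weight µ_p(x_u) = 1 (which reduces the recursion to plain homomorphism counting) are all illustrative assumptions.

```python
def tree_hom_counts(pattern_children, p, G_adj, mu=None):
    """dp_p[u] = mu_p(x_u) * prod over children q of sum_{v in N(u)} dp_q[v].

    pattern_children: dict mapping a pattern node to its list of children.
    p: root of the (sub)pattern currently being processed.
    G_adj: host graph G as an adjacency list (G_adj[u] = neighbours of u).
    mu: optional weight function mu(pattern_node, host_node); defaults to 1,
        which makes dp_p[u] the number of homomorphisms mapping p to u.
    """
    n = len(G_adj)
    if mu is None:
        mu = lambda node, u: 1.0          # unweighted homomorphism counting
    dp = [mu(p, u) for u in range(n)]     # initialise with the root weight
    for q in pattern_children.get(p, []):
        dp_q = tree_hom_counts(pattern_children, q, G_adj, mu)
        # multiply in the message aggregated over each neighbour of u
        dp = [dp[u] * sum(dp_q[v] for v in G_adj[u]) for u in range(n)]
    return dp

# Example: a single-edge pattern (root 0 with child 1) mapped into the
# triangle K3; each vertex has degree 2, so dp = [2, 2, 2] and the total
# homomorphism count is 6 (each edge in both orientations).
triangle = [[1, 2], [0, 2], [0, 1]]
print(tree_hom_counts({0: [1]}, 0, triangle))
```

Because each pattern node is visited once and each visit scans every host edge, the sketch runs in O(|V(P)| · |E(G)|) time, which is the point of restricting DHN patterns to trees (and tree-like decompositions).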
Open Source Code | Yes | The source code for DHN is provided at https://github.com/gear/dhn
Open Datasets | Yes | The Circular Skip Links (CSL) dataset consists of 150 undirected regular graphs of degree four [54]. EXP [1] and SR25 [2, 56] are datasets not distinguishable by 1-WL (EXP) and 3-WL (SR25). The ENZYMES [66, 8] and PROTEINS [8, 22] datasets represent the protein function prediction task formulated as a graph classification problem. These datasets are part of the TUDataset collection.
Dataset Splits | Yes | We report the stratified 10-fold cross-validation accuracies for the ENZYMES and PROTEINS datasets in Table 1.
Hardware Specification | Yes | The reported results are obtained on a single-GPU machine that houses an RTX 4090 with 24 GB of GPU memory.
Software Dependencies | No | The paper mentions "PyTorch Geometric's API" but does not specify its version or other software dependencies with version numbers.
Experiment Setup | Yes | For our DHN, we use two sets of patterns as the building blocks. C_{i:j} = {C_i, ..., C_j} denotes the set of cycles of lengths i to j. Similarly, K_{i:j} = {K_i, ..., K_j} denotes the set of cliques of sizes i to j. We use 3-layer MLPs for both ρ and µ_p in the homomorphism layer (Eq. (4)). In Table 1, we present the model configurations inside the single brackets. ... All DHN models in Table 1 have MLP layers with 20 hidden units; these MLP blocks (3 layers) correspond to the functions µ in Equation 3. Each homomorphism kernel is embedded in 10 dimensions. The DHN models are trained using the Adam optimizer with an initial learning rate of 0.001.
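The MLP blocks described in the setup (3 layers, 20 hidden units, 10-dimensional pattern embedding) can be sketched in a few lines of NumPy. This is a minimal sketch under stated assumptions: the ReLU nonlinearity, the He-style initialisation, the input feature dimension of 7, and the helper names `mlp_params`/`mlp_forward` are illustrative choices, not details taken from the paper, which fixes only the depth and widths.

```python
import numpy as np

def mlp_params(sizes, rng):
    """He-style random initialisation for a stack of dense layers."""
    return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_forward(params, x):
    """Dense -> ReLU on hidden layers, linear final layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)        # ReLU on hidden layers only
    return x

rng = np.random.default_rng(0)
# One such MLP per pattern kernel: assumed node-feature dim 7,
# two hidden layers of 20 units, 10-dimensional pattern embedding.
params = mlp_params([7, 20, 20, 10], rng)
h = mlp_forward(params, rng.standard_normal((32, 7)))  # batch of 32 nodes
print(h.shape)  # one 10-dimensional embedding per node
```

In the full model, one embedding like `h` would be produced per pattern in C_{i:j} or K_{i:j} and combined by ρ, with the whole stack trained end to end by Adam at learning rate 0.001.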