Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Invariant and Equivariant Reynolds Networks

Authors: Akiyoshi Sannai, Makoto Kawano, Wataru Kumagai

JMLR 2024 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Our experiments on benchmark data indicate that our approach is more efficient than existing methods. 8. Experiments We evaluated the performance of ReyNets on equivariant and invariant tasks using multiple data sets. First, we created synthetic data sets for equivariant and invariant tasks and compared ReyNets with fully-connected neural networks (FNNs) and invariant and equivariant graph networks (IEGN) (Maron et al., 2018). Then, to verify the performance on real data, we conducted experiments using eight types of graph benchmark data sets. Please refer to the appendix for the details of each experiment. Our code is publicly available at: https://github.com/makora9143/ReyNet.
Researcher Affiliation Academia Akiyoshi Sannai EMAIL Department of Physics Kyoto University, RIKEN Kitashirakawa, Sakyo, Kyoto 606-8502 Japan Makoto Kawano EMAIL Graduate School of Engineering The University of Tokyo 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8654 Japan Wataru Kumagai EMAIL Graduate School of Engineering The University of Tokyo, RIKEN 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8654 Japan
Pseudocode Yes Algorithm 1 Naïve implementation of the d-reduced equivariant ReyNet (Definition 11) Algorithm 2 mapping1(x, N) Algorithm 3 Efficient implementation of the d-reduced equivariant ReyNet (Definition 11)
Open Source Code Yes Our code is publicly available at: https://github.com/makora9143/ReyNet.
Open Datasets Yes As an example of real-world data, we selected eight benchmark data sets from the TU Dortmund data collection (Kersting et al., 2016): five from bioinformatics and three from social networks. To further validate the effectiveness of ReyNet on real data, we chose and performed the Mol-HIV task from the Open Graph Benchmark (OGB). The Mol-HIV task is a classification task to predict a property of a given graph, and is one of the largest in the MoleculeNet data sets. Each graph represents a molecule, where nodes are atoms and edges are chemical bonds. Input node features are nine-dimensional. For more details, please refer to Hu et al. (2020).
Dataset Splits Yes In our experiments, we set n ∈ {3, 5, 10, 20}, and the training and test data sets each contained 1000 samples. Due to the small size of these data sets, we followed an evaluation protocol that included 10-fold cross-validation for the data sets of Yanardag and Vishwanathan (2015).
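The 10-fold protocol mentioned in this row can be sketched as follows. This is a generic illustration of 10-fold index splitting, not the authors' actual split code; the function name, seed, and shuffling scheme are assumptions:

```python
import random

def ten_fold_indices(n_samples, seed=0):
    """Split sample indices into 10 folds for cross-validation.

    Each fold serves once as the test set while the remaining nine
    folds form the training set. The seed and shuffle are illustrative,
    not the protocol of Yanardag and Vishwanathan (2015).
    """
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    # Deal indices round-robin into 10 folds of (near-)equal size.
    folds = [idx[i::10] for i in range(10)]
    splits = []
    for k in range(10):
        test = folds[k]
        train = [i for j in range(10) if j != k for i in folds[j]]
        splits.append((train, test))
    return splits

splits = ten_fold_indices(1000)  # 10 (train, test) index pairs
```

Each of the 10 pairs covers all 1000 indices, with a 900/100 train/test division per fold.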
Hardware Specification Yes We conducted the experiments using an NVIDIA Titan X or an NVIDIA V100.
Software Dependencies No The paper mentions Python implicitly through "Listing 1: Python Code of ReyNet using PyTorch" and explicitly mentions torch.tensor/numpy.ndarray as data structures. However, it does not specify version numbers for Python, PyTorch, or NumPy, which are required for a reproducible description of software dependencies.
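One lightweight way to make such dependencies explicit, which this row notes the paper does not do, is to record exact versions at run time. A minimal sketch, assuming only that torch and numpy may or may not be installed:

```python
import sys

def runtime_versions():
    """Collect interpreter and key library versions for a reproducibility log."""
    versions = {"python": sys.version.split()[0]}
    for name in ("torch", "numpy"):
        try:
            module = __import__(name)
            versions[name] = module.__version__
        except ImportError:
            versions[name] = "not installed"
    return versions

print(runtime_versions())
```

Emitting this dictionary alongside experiment outputs (or pinning the same versions in a requirements file) would satisfy the dependency criterion this variable checks.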
Experiment Setup Yes The squared error function with the ground truth output was used as the objective function for training. We used the Adam optimizer, with the learning rate set to 1e-3 and weight decay to 1e-5. Batch size was 100. The batch size was set to 5 according to Maron et al. (2018), except for GIN, which had a batch size of 32. For ReyNet, we used a learning rate of 1e-4, except for MUTAG, for which the learning rate was 1e-3. We adopted the ReLU function as the activation.
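The reported configuration (Adam, learning rate 1e-3, weight decay 1e-5, squared error loss, ReLU activations, batch size 100) can be sketched in PyTorch as below. The model and data here are hypothetical placeholders, not the paper's ReyNet architecture:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in model with ReLU activations; not the paper's ReyNet.
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))

# Reported settings: Adam, learning rate 1e-3, weight decay 1e-5.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
# Squared error against the ground-truth output.
loss_fn = nn.MSELoss()

# One training step on a random batch of the reported size (100).
x = torch.randn(100, 10)
y = torch.randn(100, 1)
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

Per the row above, the benchmark runs vary these defaults (batch size 5 or 32, learning rate 1e-4 or 1e-3 depending on the data set).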