Extending Test-Time Augmentation with Metamorphic Relations for Combinatorial Problems

Authors: Siwei Wei, Xudong Zhang, Zhiyang Zhou, Yan Cai

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the proposed MAgg method on three mainstream machine learning tasks for combinatorial problems, namely Boolean Satisfiability Prediction (SAT), Decision Traveling Salesman Problem Satisfiability Prediction (Decision TSP), and Graph Edit Distance Estimation (GED). The evaluation results show significant improvements over base models in all three tasks, corroborating the effectiveness and versatility of the proposed method.
Researcher Affiliation | Academia | Key Laboratory of System Software (Chinese Academy of Sciences) and State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, and University of Chinese Academy of Sciences, Beijing, China
Pseudocode | No | The paper describes mathematical update rules for the MPNN model but does not provide formal pseudocode or an algorithm block.
Open Source Code | Yes | The code of the experiments is available at https://github.com/MetamorphicAgg/MAgg.
Open Datasets | Yes | The dataset used in the SAT task is SR(n), proposed in (Selsam et al., 2019). ... The dataset used in the Decision TSP task is TSP(dev), proposed in (Prates et al., 2019). ... Three datasets are used in the GED task, namely AIDS, IMDB and LINUX, which all consist of real-world graphs. ... For the GED task, graph pairs are constructed using graphs in each dataset, for training, validation, and testing. The statistics are shown in Table 8.
Dataset Splits | Yes | For the GED task, graph pairs are constructed using graphs in each dataset, for training, validation, and testing. The statistics are shown in Table 8.
Hardware Specification | Yes | All experiments are conducted on a Ubuntu 22.04.2 LTS machine with Intel Xeon Gold 6338 processor and NVIDIA A800 80GB PCIe GPU.
Software Dependencies | No | The paper mentions the operating system (Ubuntu 22.04.2 LTS) but does not provide specific version numbers for other key software components, such as programming languages or libraries (e.g., Python, PyTorch).
Experiment Setup | Yes | The dimension of literal embeddings is d = 128. For each MLP, 3 hidden layers are used. The message passing is performed for 26 iterations. The model is trained using the ADAM optimizer, with a 2 × 10⁻⁵ learning rate. The model is trained with batches, where each batch contains 12000 nodes. ... The hidden dimensions are set to dt = 2... trained on the corresponding set for 100 epochs with a 0.001 learning rate. ... trained for 100 epochs, with a 0.0001 learning rate. ... trained for 500 epochs with a 0.0001 learning rate and a 0.5 dropout.
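As a reading aid for the Experiment Setup row above, the snippet below collects the reported SAT-task hyperparameters into a single configuration and builds the corresponding optimizer. It is a minimal sketch, assuming a PyTorch implementation; only the numeric values are taken from the paper, and the model class and node-budget batching are not shown.

```python
# Minimal sketch of the reported SAT-task training configuration.
# Only the numeric values come from the paper; the surrounding code is illustrative.
import torch

SAT_CONFIG = {
    "embedding_dim": 128,         # dimension of literal embeddings, d = 128
    "mlp_hidden_layers": 3,       # 3 hidden layers per MLP
    "message_passing_iters": 26,  # message passing runs for 26 iterations
    "learning_rate": 2e-5,        # ADAM learning rate, 2 x 10^-5
    "nodes_per_batch": 12000,     # each training batch contains 12000 nodes
}

def build_optimizer(model: torch.nn.Module) -> torch.optim.Adam:
    """ADAM optimizer with the learning rate reported for the SAT task."""
    return torch.optim.Adam(model.parameters(), lr=SAT_CONFIG["learning_rate"])
```

Batching by node count (rather than by number of formulas) would additionally require a custom sampler, which the quoted setup does not specify.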
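For the Open Datasets row, public copies of the AIDS, IMDB, and LINUX graphs are also distributed through PyTorch Geometric's GEDDataset class; the sketch below shows one way to obtain them. This is an assumption for illustration only: the paper constructs its own train/validation/test graph pairs (Table 8), so these built-in splits are not necessarily the ones used in the experiments.

```python
# Hedged sketch: fetching public GED benchmark graphs with PyTorch Geometric.
# These are comparable public copies, not necessarily the authors' exact splits.
from torch_geometric.datasets import GEDDataset

# PyG dataset names: 'AIDS700nef' (AIDS), 'IMDBMulti' (IMDB), 'LINUX' (LINUX).
train_graphs = GEDDataset(root="data/AIDS700nef", name="AIDS700nef", train=True)
test_graphs = GEDDataset(root="data/AIDS700nef", name="AIDS700nef", train=False)

# Ground-truth normalized GED values between graph pairs come with the dataset.
print(len(train_graphs), len(test_graphs), train_graphs.norm_ged.shape)
```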