A Context-Enhanced Framework for Sequential Graph Reasoning
Authors: Shuo Shi, Chao Peng, Chenyang Xu, Zhengfeng Yang
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical evaluations are conducted on the challenging CLRS Reasoning Benchmark, and the results demonstrate that the proposed framework significantly improves the performance of existing architectures, yielding state-of-the-art results across the majority of the datasets within the benchmark. |
| Researcher Affiliation | Academia | Shuo Shi, Chao Peng, Chenyang Xu and Zhengfeng Yang, Shanghai Key Laboratory of Trustworthy Computing, Software Engineering Institute, East China Normal University, Shanghai, China. 51255902122@stu.ecnu.edu.cn, {cpeng, cyxu, zfyang}@sei.ecnu.edu.cn |
| Pseudocode | No | The paper provides mathematical formulations and model structures but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/Ghost-st/CEF. |
| Open Datasets | Yes | We use the CLRS Algorithmic Reasoning Benchmark [Velickovic et al., 2022], a proven benchmark that offers a unified evaluation (micro-F1 score) for assessing the (seq-graph) reasoning capabilities of neural networks. (A micro-F1 sketch follows the table.) |
| Dataset Splits | No | The paper relies on the CLRS Algorithmic Reasoning Benchmark but does not itself report how the data are divided into training, validation, and test sets (e.g., percentages or sample counts per split). |
| Hardware Specification | Yes | The experiments are conducted on a machine equipped with an i7-13700K CPU and an RTX 4090 GPU. |
| Software Dependencies | No | The paper mentions using the Adam optimizer but does not specify version numbers for any software libraries, frameworks, or programming languages used (e.g., Python, PyTorch versions). |
| Experiment Setup | Yes | Specifically, for CEF-GMPNN, the batch size is set to 32 and the network is trained for 10,000 steps with the Adam optimizer at a learning rate of 0.001; for CEF-RT, the batch size is set to 4 and the network is trained for 10,000 steps with the Adam optimizer at a learning rate of 0.00025. (See the configuration sketch after the table.) |
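
As a reference for the benchmark's unified micro-F1 evaluation noted in the Open Datasets row, here is a minimal sketch of how such a score can be computed. Using scikit-learn's `f1_score` is an assumption made for illustration; the CLRS benchmark ships its own evaluation code, and the labels below are hypothetical.

```python
# Minimal micro-F1 sketch. scikit-learn is an assumption: the CLRS
# benchmark implements its own scoring; this only illustrates the metric.
from sklearn.metrics import f1_score

# Hypothetical ground-truth and predicted labels for one task instance.
y_true = [0, 1, 1, 2, 0, 2]
y_pred = [0, 1, 2, 2, 0, 1]

# "micro" averaging pools true/false positives and false negatives across
# all classes before computing F1, yielding a single benchmark-wide score.
print(f1_score(y_true, y_pred, average="micro"))  # -> 0.666...
```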
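
The hyperparameters quoted in the Experiment Setup row translate directly into optimizer configuration. Below is a minimal sketch using `optax.adam`; the choice of optax is an assumption about the training stack (the paper names only Adam, the batch sizes, step counts, and learning rates), and the `TRAIN_CONFIGS` and `make_optimizer` names are hypothetical.

```python
# Hedged sketch of the two reported training configurations. optax is an
# assumption about the stack; only the numbers below come from the paper.
import optax

TRAIN_CONFIGS = {
    "CEF-GMPNN": {"batch_size": 32, "train_steps": 10_000, "learning_rate": 1e-3},
    "CEF-RT":    {"batch_size": 4,  "train_steps": 10_000, "learning_rate": 2.5e-4},
}

def make_optimizer(model_name: str) -> optax.GradientTransformation:
    """Return an Adam optimizer with the learning rate reported for `model_name`."""
    return optax.adam(learning_rate=TRAIN_CONFIGS[model_name]["learning_rate"])
```

For example, `make_optimizer("CEF-RT")` yields Adam with a learning rate of 0.00025, matching the quoted CEF-RT setup.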