Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Understanding and Simplifying Architecture Search in Spatio-Temporal Graph Neural Networks

Authors: Zhen Xu, Quanming Yao, Yong Li, Qiang Yang

TMLR 2023 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental All experiments in this section are run on two datasets PeMS04 and PeMS08... In this part, we compare more thoroughly our models with related works. From Table 3, incorporating architecture priors, the proposed simple baseline could find novel STGNN models better than hand-designed and NAS-based methods. The generalizability of the distilled principles is demonstrated in two ways. First, in Section 4, we show at the same time empirical results on two datasets and similar principles are observed. Second, in Table 3, note that we search only one model and evaluate this model on more datasets... We show further the comparison on a new and different dataset NE-BJ... We also provide evaluation on another non-traffic dataset and the configuration of searched model in Appx E.
Researcher Affiliation Collaboration Zhen Xu (4Paradigm); Quanming Yao (Department of Electronic Engineering, Tsinghua University); Yong Li (Department of Electronic Engineering, Tsinghua University); Qiang Yang (Department of Computer Science and Engineering, Hong Kong University of Science and Technology)
Pseudocode No The paper describes methods and processes using descriptive text and figures but does not include any explicitly labeled pseudocode or algorithm blocks with structured steps.
Open Source Code Yes Our code is available at https://github.com/AutoML-Research/SimpleSTG.
Open Datasets Yes To evaluate quantitatively the performance of different models, we experiment on four public real-world datasets: PeMS03, PeMS04, PeMS07 and PeMS08 (Guo et al., 2019; Song et al., 2020; Bai et al., 2020; Fang et al., 2021). These datasets can be accessed on GitHub: https://github.com/Davidham3/ASTGCN/tree/master/data ...We show further the comparison on a new and different dataset NE-BJ released by (Li et al., 2021a). ...we use another non-traffic dataset Electricity (Wu et al., 2021)
Dataset Splits Yes Datasets are split in a 6:2:2 manner.
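The reported 6:2:2 split can be sketched as a chronological train/validation/test partition, which is the common convention for these traffic-forecasting benchmarks. This is an illustrative helper, not the authors' code; the paper states only the ratio, not the implementation.

```python
import numpy as np

def split_622(data):
    """Split a time-ordered array into 60% train, 20% validation, 20% test.

    Hypothetical sketch of the paper's stated 6:2:2 split; chronological
    ordering is assumed but not confirmed by the report.
    """
    n = len(data)
    n_train = int(n * 0.6)
    n_val = int(n * 0.2)
    train = data[:n_train]
    val = data[n_train:n_train + n_val]
    test = data[n_train + n_val:]
    return train, val, test

# Example on a dummy series of 100 time steps
series = np.arange(100)
train, val, test = split_622(series)
# lengths: 60, 20, 20
```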
Hardware Specification No The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running the experiments. It only mentions 'GPU hours' in the context of search efficiency but not the hardware itself.
Software Dependencies No The paper does not provide specific software dependencies with version numbers (e.g., library or framework versions like PyTorch 1.9, Python 3.8, etc.) needed to replicate the experiment.
Experiment Setup Yes We consider training hyperparameters (HP) that are common from STGNN literature as in Table 2... The found best model on PeMS08 uses the following configuration. Learning rate 1e-3; Batch size 128; Optimizer AdamW; Weight decay 0; Gradient clip 5; Dropout 0.3; Curriculum learning 3 or 5; Temporal channels 64; Dilation 1; Kernel set 7,8; GCN channels 64; Graph convolution MixHop GCN; Embedding dim 20; Node degree 40; Mixture coefficient 0.75.
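The reported best configuration on PeMS08 can be transcribed as a plain dictionary for reference. The key names below are illustrative, not the authors' actual config schema, and the values are copied from the quoted setup.

```python
# Hypothetical transcription of the reported best PeMS08 configuration.
# Key names are assumptions; values come from the quoted experiment setup.
best_config_pems08 = {
    "learning_rate": 1e-3,
    "batch_size": 128,
    "optimizer": "AdamW",
    "weight_decay": 0,
    "gradient_clip": 5,
    "dropout": 0.3,
    "curriculum_learning": (3, 5),   # paper reports "3 or 5"
    "temporal_channels": 64,
    "dilation": 1,
    "kernel_set": (7, 8),
    "gcn_channels": 64,
    "graph_convolution": "MixHop GCN",
    "embedding_dim": 20,
    "node_degree": 40,
    "mixture_coefficient": 0.75,
}
```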