HAGEN: Homophily-Aware Graph Convolutional Recurrent Network for Crime Forecasting
Authors: Chenyu Wang, Zongyu Lin, Xiaochen Yang, Jiao Sun, Mingxuan Yue, Cyrus Shahabi
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical studies on two real-world crime datasets show that HAGEN outperformed both state-of-the-art crime forecasting methods (CrimeForecaster (Sun et al. 2020) and MiST (Huang et al. 2019)) and generic spatiotemporal GNN methods (MTGNN (Wu et al. 2020) and Graph WaveNet (Wu et al. 2019)). |
| Researcher Affiliation | Academia | Tsinghua University, Beijing, China; University of Southern California, Los Angeles, CA, USA |
| Pseudocode | No | The paper describes the model architecture and components through text and mathematical equations, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | The code is released at https://github.com/Rafa-zy/HAGEN. |
| Open Datasets | Yes | We evaluated HAGEN on two real-world benchmarks in Chicago and Los Angeles released by CrimeForecaster (Sun et al. 2020). |
| Dataset Splits | Yes | We chronologically split the dataset as 6.5 months for training, 0.5 months for validation, and 1 month for testing. (A minimal split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper mentions various models and optimizers (e.g., Node2Vec, ARIMA, Adam optimizer) but does not provide specific version numbers for any software dependencies used in the experiments. |
| Experiment Setup | Yes | For the vital hyperparameters in HAGEN, we use two stacked layers of RNNs. Within each RNN layer, we set 64 as the size of the hidden dimension. Moreover, we set the subgraph size of the sparsity operation as 50 and the saturation rate as 3. For the learning objective, we fix the trade-off parameter λ as 0.01, similar to the common practice of other regularizers. (A configuration sketch follows the table.) |
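For concreteness, the chronological split quoted under Dataset Splits can be expressed in a few lines of Python. This is a minimal sketch assuming an 8-month span of daily observations stored as a NumPy array; the day count, region count, and variable names are illustrative assumptions, not taken from the released code.

```python
import numpy as np

# Minimal sketch of the chronological split described in the paper;
# shapes and names are assumptions, not HAGEN's actual data loading.
num_days = 243  # roughly 8 months of daily observations (assumption)
num_regions, num_crime_types = 77, 8  # e.g. Chicago community areas (assumption)
data = np.random.rand(num_days, num_regions, num_crime_types)

# Chronological split (no shuffling): 6.5 / 0.5 / 1 months of the 8-month span.
train_end = int(num_days * 6.5 / 8)  # first ~6.5 months for training
val_end = int(num_days * 7.0 / 8)    # next ~0.5 months for validation
train, val, test = data[:train_end], data[train_end:val_end], data[val_end:]

print(train.shape, val.shape, test.shape)  # final ~1 month left for testing
```

Because the split is chronological, the test month strictly follows the training and validation periods, which avoids temporal leakage in the forecasting evaluation.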
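Likewise, the hyperparameters quoted under Experiment Setup translate into a small configuration. The sketch below uses hypothetical key names and a stand-in GRU module so the optimizer line actually runs; HAGEN's real model class and argument names live in the repository linked above.

```python
import torch
import torch.nn as nn

# Hypothetical configuration mirroring the hyperparameters quoted above;
# the key names are assumptions and may differ from the released code.
config = {
    "num_rnn_layers": 2,       # two stacked RNN layers
    "rnn_hidden_dim": 64,      # hidden dimension within each RNN layer
    "subgraph_size": 50,       # subgraph size of the sparsity operation
    "saturation_rate": 3,      # saturation rate of the learned graph
    "lambda_reg": 0.01,        # trade-off weight λ for the regularizer
}

# Stand-in recurrent module so the optimizer line runs; the real model is HAGEN.
model = nn.GRU(
    input_size=8,  # e.g. number of crime types (assumption)
    hidden_size=config["rnn_hidden_dim"],
    num_layers=config["num_rnn_layers"],
)
optimizer = torch.optim.Adam(model.parameters())

# The training objective would then combine the forecasting loss with the
# homophily-aware regularizer: loss = forecast_loss + lambda_reg * reg_term.
```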