Cross-Interaction Hierarchical Attention Networks for Urban Anomaly Prediction
Authors: Chao Huang, Chuxu Zhang, Peng Dai, Liefeng Bo
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on the real-world urban anomaly datasets to show that our developed CHAT framework consistently outperforms state-of-the-art methods. |
| Researcher Affiliation | Collaboration | Chao Huang¹, Chuxu Zhang², Peng Dai¹, Liefeng Bo¹ (¹JD Finance America Corporation, ²Brandeis University) |
| Pseudocode | No | The paper includes a model architecture diagram and mathematical equations, but no structured pseudocode or algorithm blocks are provided. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing the source code for its methodology, nor does it include a link to a code repository. |
| Open Datasets | Yes | We carry out experiments on the real-world urban anomaly dataset which is collected from New York City (NYC) (https://data.cityofnewyork.us/). This data contains different categories of urban anomaly reports from the 311 online platforms (https://portal.311.nyc.gov/). |
| Dataset Splits | Yes | In our experiments, the evaluation dataset is divided into training, validation and test sets with periods of 5.5 months, 0.5 month and 0.5 month, respectively. (See the split sketch after the table.) |
| Hardware Specification | Yes | The methods are trained from scratch without any pre-training on a single NVIDIA GeForce GTX 1080 Ti GPU with a learning rate and batch size of 1e-3 and 64. |
| Software Dependencies | No | The paper mentions 'Adam' as the optimizer but does not specify any software libraries, frameworks, or their version numbers (e.g., Python, PyTorch, TensorFlow, CUDA versions). |
| Experiment Setup | Yes | In our experiments, we set the hidden state dimensionality d and embedding dimension e as 32. Furthermore, the sequence length T in our recurrent neural architecture is set to 10. In the prediction phase of CHAT, we set the number of hidden layers as 3. The representation dimensionality of our attention mechanism is set to 32. The methods are trained from scratch without any pre-training on a single NVIDIA GeForce GTX 1080 Ti GPU with a learning rate and batch size of 1e-3 and 64. (A hedged configuration sketch follows the table.) |
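The paper specifies a chronological 5.5/0.5/0.5-month split but releases no code. Below is a minimal sketch of such a split, assuming a pandas DataFrame of anomaly reports; the `timestamp` column name, the 30-day month approximation, and the `chronological_split` helper are illustrative assumptions, not details from the paper.

```python
import pandas as pd

def chronological_split(df: pd.DataFrame, start: str, time_col: str = "timestamp"):
    """Split anomaly reports chronologically: ~5.5 months train,
    ~0.5 month validation, ~0.5 month test (per the paper's description).
    Column name and month length are assumptions."""
    t0 = pd.Timestamp(start)
    train_end = t0 + pd.Timedelta(days=int(5.5 * 30))  # ~5.5 months
    val_end = train_end + pd.Timedelta(days=15)        # ~0.5 month
    test_end = val_end + pd.Timedelta(days=15)         # ~0.5 month

    ts = pd.to_datetime(df[time_col])
    train = df[(ts >= t0) & (ts < train_end)]
    val = df[(ts >= train_end) & (ts < val_end)]
    test = df[(ts >= val_end) & (ts < test_end)]
    return train, val, test
```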
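The reported hyperparameters can also be collected into a runnable configuration. The sketch below assumes a PyTorch implementation (the paper does not name a framework); the `AnomalyPredictor` module is a generic GRU-plus-attention stand-in for illustration, not the authors' CHAT architecture, while the dimensionalities, sequence length, layer count, optimizer, learning rate, and batch size follow the paper.

```python
import torch
import torch.nn as nn

# Hyperparameters as reported in the paper.
HIDDEN_DIM = 32    # hidden state dimensionality d
EMBED_DIM = 32     # embedding dimension e
SEQ_LEN = 10       # sequence length T
NUM_LAYERS = 3     # hidden layers in the prediction phase
ATTN_DIM = 32      # attention representation dimensionality
LR = 1e-3          # learning rate
BATCH_SIZE = 64    # batch size

class AnomalyPredictor(nn.Module):
    """Generic recurrent-plus-attention stand-in, NOT the CHAT model."""
    def __init__(self, num_categories: int):
        super().__init__()
        self.embed = nn.Embedding(num_categories, EMBED_DIM)
        self.rnn = nn.GRU(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.attn = nn.Linear(HIDDEN_DIM, ATTN_DIM)
        self.score = nn.Linear(ATTN_DIM, 1)
        layers = []
        for _ in range(NUM_LAYERS):
            layers += [nn.Linear(HIDDEN_DIM, HIDDEN_DIM), nn.ReLU()]
        self.mlp = nn.Sequential(*layers, nn.Linear(HIDDEN_DIM, num_categories))

    def forward(self, x):                       # x: (batch, T) category ids
        h, _ = self.rnn(self.embed(x))          # (batch, T, HIDDEN_DIM)
        w = torch.softmax(self.score(torch.tanh(self.attn(h))), dim=1)
        ctx = (w * h).sum(dim=1)                # attention-weighted summary
        return self.mlp(ctx)

model = AnomalyPredictor(num_categories=10)     # category count is an assumption
optimizer = torch.optim.Adam(model.parameters(), lr=LR)  # Adam, as stated
```

Since no software dependencies or versions are reported, any reproduction attempt will have to fix the framework and library versions itself, as done above.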