Fine-Grained Air Quality Inference via Multi-Channel Attention Model

Authors: Qilong Han, Dan Lu, Rui Chen

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our extensive experiments on real-world benchmark datasets demonstrate that MCAM significantly outperforms the state-of-the-art solutions.
Researcher Affiliation | Academia | Harbin Engineering University, Harbin, China. {hanqilong, ludan, ruichen}@hrbeu.edu.cn
Pseudocode | No | No explicit pseudocode block or algorithm section labeled 'Pseudocode' or 'Algorithm' was found in the paper.
Open Source Code | No | The paper does not provide an explicit statement that the source code for the described methodology is being released, nor does it provide a direct link to a code repository.
Open Datasets | Yes | We utilize the Beijing dataset, the only dataset used in the state-of-the-art solution [Cheng et al., 2018]. In addition, we use another London dataset to draw more convincing conclusions. Both datasets are widely used in extensive literature. ... We collect air quality data, including air quality index (AQI), PM2.5, PM10, O3, NO2, CO, SO2, from all 35 ground-based air quality monitoring stations in Beijing (footnote 1: http://beijingair.sinaapp.com) and PM2.5, PM10, and NO2 from all 13 ground-based monitoring stations in London (footnote 2: https://github.com/for-competition/KDD_CUP_2018). Meteorological data: ...Global Data Assimilation System (GDAS) [Zhang et al., 2019] (footnote 3: https://www.ncdc.noaa.gov/data-access/model-data/model-datasets/global-data-assimilation-system-gdas). ...POIs from Amap of Beijing and London (footnote 4: https://lbs.amap.com/api/webservice/download). Road networks from OpenStreetMap (OSM) (footnote 5: https://www.openstreetmap.org/).
Dataset Splits | Yes | The portions of training, validation, and test data are split by the ratio 8:1:1.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as CPU or GPU models, or cloud computing instance types.
Software Dependencies | No | The paper mentions that the model is implemented in PyTorch but does not specify the PyTorch version or any other software dependencies.
Experiment Setup | Yes | We use 300 hidden units in an LSTM cell, and optimize the objective function using the Adam optimizer with learning rate 0.01. All fully connected neural networks have a single hidden layer with 200 neurons. We initialize all the model parameters from the uniform distribution between -0.1 and 0.1, and implement the model in PyTorch.
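The experiment setup described above can be sketched in PyTorch. This is a minimal illustration, not the authors' MCAM model: the input dimension, sequence length, output dimension, and dataset size are assumptions for demonstration; only the LSTM width (300), the 200-neuron hidden layers, the Adam learning rate (0.01), the uniform initialization range, and the 8:1:1 split come from the paper.

```python
import torch
import torch.nn as nn

class AirQualitySketch(nn.Module):
    """Illustrative model: 300-unit LSTM + a fully connected head
    with a single 200-neuron hidden layer, as described in the setup.
    input_dim and output_dim are assumed values."""
    def __init__(self, input_dim=16, hidden_dim=300, fc_hidden=200, output_dim=1):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden_dim, fc_hidden),  # single hidden layer, 200 neurons
            nn.ReLU(),
            nn.Linear(fc_hidden, output_dim),
        )
        # Initialize all parameters from a uniform distribution in [-0.1, 0.1]
        for p in self.parameters():
            nn.init.uniform_(p, -0.1, 0.1)

    def forward(self, x):
        out, _ = self.lstm(x)            # (batch, seq_len, hidden_dim)
        return self.head(out[:, -1, :])  # predict from the last time step

model = AirQualitySketch()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

# 8:1:1 train/validation/test split over n samples (n is an assumed size)
n = 1000
train_end, val_end = int(n * 0.8), int(n * 0.9)
```

A forward pass on a batch of shape `(batch, seq_len, input_dim)` returns one prediction per sequence; a real reproduction would replace the assumed dimensions with those of the Beijing or London feature sets.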