Refining Coarse-Grained Spatial Data Using Auxiliary Spatial Data Sets with Various Granularities

Authors: Yusuke Tanaka, Tomoharu Iwata, Toshiyuki Tanaka, Takeshi Kurashima, Maya Okawa, Hiroyuki Toda (pp. 5091-5099)

AAAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper states: "Our experiments on real-world spatial data sets demonstrate the effectiveness of the proposed model." Section 6 (Experiments, data description) adds: "We evaluated the proposed model using real-world spatial data sets from NYC Open Data." |
| Researcher Affiliation | Collaboration | Affiliations: (1) NTT Service Evolution Laboratories, NTT Corporation; (2) NTT Communication Science Laboratories, NTT Corporation; (3) Graduate School of Informatics, Kyoto University. |
| Pseudocode | Yes | Algorithm 1: Bayesian inference procedure of the fine-grained target data z. |
| Open Source Code | No | The paper refers to NYC Open Data as the source of its datasets but does not state that the authors' own implementation is open source or otherwise available. |
| Open Datasets | Yes | "We evaluated the proposed model using real-world spatial data sets from NYC Open Data" (footnote 1: https://opendata.cityofnewyork.us). |
| Dataset Splits | No | The paper describes the data and how it was processed (e.g., aggregation to produce the coarse-grained data), but it does not specify explicit training, validation, or test splits (percentages, counts, or a cross-validation setup). |
| Hardware Specification | No | The paper provides no hardware details such as GPU or CPU models, processor types, or memory sizes used to run the experiments. |
| Software Dependencies | No | The paper lists no software dependencies, such as library names with version numbers (e.g., Python, PyTorch, or other packages). |
| Experiment Setup | No | The paper mentions using the BFGS method for optimization and specific kernel functions for the Gaussian processes, but it gives no concrete setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or specific optimizer settings (see the illustrative sketch below the table). |
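
Because the paper releases no code and reports no setup values, the following is a minimal generic sketch, not the authors' implementation: it illustrates the kind of pipeline the table rows describe, with a Gaussian-process prior over fine-grained cells, coarse observations formed by a known averaging (aggregation) matrix, kernel hyperparameters fit by BFGS via SciPy's `minimize`, and Bayesian inference of the fine-grained values. The RBF kernel choice, the toy data, and all variable names are assumptions made for illustration only.

```python
# Generic sketch (NOT the authors' code): GP-based refinement of
# coarse-grained spatial data, with hyperparameters fit by BFGS.
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X1, X2, lengthscale, variance):
    """Squared-exponential kernel, a standard GP kernel (assumed here)."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def neg_log_marginal_likelihood(log_params, X, A, y):
    """-log p(y) for y = A f + noise, f ~ GP(0, K); params on log scale."""
    lengthscale, variance, noise = np.exp(log_params)
    K = rbf_kernel(X, X, lengthscale, variance)
    C = A @ K @ A.T + noise * np.eye(len(y))      # coarse-level covariance
    L = np.linalg.cholesky(C + 1e-8 * np.eye(len(y)))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha + np.sum(np.log(np.diag(L)))
            + 0.5 * len(y) * np.log(2 * np.pi))

# Toy data: 20 fine cells on a line, aggregated into 4 coarse regions.
rng = np.random.default_rng(0)
X = np.linspace(0.0, 1.0, 20)[:, None]            # fine-grained locations
A = np.kron(np.eye(4), np.full((1, 5), 0.2))      # each region averages 5 cells
f_true = np.sin(2 * np.pi * X[:, 0])
y = A @ f_true + 0.05 * rng.standard_normal(4)    # coarse-grained observations

# Fit hyperparameters with BFGS (the optimizer the paper reports using).
res = minimize(neg_log_marginal_likelihood, x0=np.zeros(3),
               args=(X, A, y), method='BFGS')
lengthscale, variance, noise = np.exp(res.x)

# Posterior mean of the fine-grained field given the coarse observations:
# E[f | y] = K A^T (A K A^T + noise * I)^{-1} y  (linear-Gaussian inference).
K = rbf_kernel(X, X, lengthscale, variance)
C = A @ K @ A.T + noise * np.eye(len(y))
f_mean = K @ A.T @ np.linalg.solve(C, y)
print(f_mean.round(2))
```

The posterior-mean formula is the standard linear-Gaussian result; the paper's actual model and its Algorithm 1 handle multiple auxiliary data sets at various granularities and are more elaborate than this single-source sketch.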