Predicting Livelihood Indicators from Community-Generated Street-Level Imagery

Authors: Jihyeon Lee, Dylan Grosz, Burak Uzkent, Sicheng Zeng, Marshall Burke, David Lobell, Stefano Ermon

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the performance of our approach in accurately predicting indicators of poverty, population, and health and its scalability by testing in two different countries, India and Kenya. Our code is available at https://github.com/sustainlab-group/mapillarygcn."
Researcher Affiliation | Academia | "Jihyeon Lee,¹ Dylan Grosz,¹ Burak Uzkent,¹ Sicheng Zeng,¹ Marshall Burke,² David Lobell,² Stefano Ermon¹ — ¹Department of Computer Science, Stanford University; ²Department of Earth Science, Stanford University. {jihyeon,dgrosz,buzkent}@cs.stanford.edu"
Pseudocode | No | The paper contains no sections or figures labeled 'Pseudocode' or 'Algorithm', and no structured code blocks.
Open Source Code | Yes | "Our code is available at https://github.com/sustainlab-group/mapillarygcn."
Open Datasets | Yes | "We obtained wealth index values from the most recently completed surveys of the Demographic and Health Survey (DHS), 2015-16 for India and 2014 for Kenya. DHS data is clustered; households within a 5km-radius contribute datapoints individually but share the same geographic coordinates to preserve privacy."
Dataset Splits | Yes | "For each country, we randomly sample 80% of the clusters as the training set and the remainder is the validation set."
Hardware Specification | Yes | "We train with batch size 128 and learning rate 0.001 (after trying 0.1, 0.01, and 0.0001) for 50 epochs on a NVIDIA 1080TI GPU with 40G of memory."
Software Dependencies | No | The paper mentions several models (e.g., ResNet34, Mask-RCNN) and the Adam optimizer, but it does not specify any software dependencies with version numbers (e.g., 'PyTorch 1.9', 'TensorFlow 2.0', 'scikit-learn 0.24').
Experiment Setup | Yes | "We train with batch size 128 and learning rate 0.001 (after trying 0.1, 0.01, and 0.0001) for 50 epochs on a NVIDIA 1080TI GPU with 40G of memory. We use Adam optimizer (Kingma and Ba 2014) for all the experiments in this study."
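The dataset-split row reports an 80/20 random split done at the DHS cluster level, so households sharing a cluster's (privacy-preserving) coordinates never land on both sides of the split. A minimal sketch of that procedure, using hypothetical household records and field names (`cluster_id`, `wealth_index`) that are assumptions, not taken from the paper's code:

```python
import random

# Hypothetical DHS-style records: each cluster groups several households
# that share one geographic coordinate; labels attach per household.
rng_data = random.Random(42)
households = [
    {"cluster_id": c, "wealth_index": rng_data.gauss(0.0, 1.0)}
    for c in range(100)
    for _ in range(rng_data.randint(5, 20))
]

def split_by_cluster(records, train_frac=0.8, seed=0):
    """Randomly sample train_frac of the clusters for training, as in the
    paper, so one cluster never appears in both train and validation."""
    clusters = sorted({r["cluster_id"] for r in records})
    rng = random.Random(seed)
    rng.shuffle(clusters)
    n_train = int(train_frac * len(clusters))
    train_ids = set(clusters[:n_train])
    train = [r for r in records if r["cluster_id"] in train_ids]
    val = [r for r in records if r["cluster_id"] not in train_ids]
    return train, val

train, val = split_by_cluster(households)
```

Splitting on clusters rather than individual households avoids leakage, since households in one cluster share coordinates and hence the same imagery.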
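The experiment-setup row quotes the Adam optimizer with learning rate 0.001. As a reference point, the Adam update rule (Kingma and Ba 2014) can be sketched in a few lines of plain Python; the quadratic objective below is a toy stand-in, not the paper's loss:

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # Adam keeps exponential moving averages of the gradient (m) and the
    # squared gradient (v), corrects their initialization bias, and scales
    # the step by the bias-corrected second moment.
    m = beta1 * m + (1.0 - beta1) * grad
    v = beta2 * v + (1.0 - beta2) * grad * grad
    m_hat = m / (1.0 - beta1 ** t)
    v_hat = v / (1.0 - beta2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Toy objective f(theta) = (theta - 3)^2, minimized with the paper's
# learning rate of 0.001; the objective itself is an assumption.
theta, m, v = 0.0, 0.0, 0.0
for t in range(1, 10001):
    grad = 2.0 * (theta - 3.0)
    theta, m, v = adam_step(theta, grad, m, v, t)
```

Note how small the effective step is (roughly the learning rate while gradients point consistently one way), which is why the paper pairs lr 0.001 with 50 full epochs.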