Human-Machine Collaboration for Fast Land Cover Mapping

Authors: Caleb Robinson, Anthony Ortiz, Kolya Malkin, Blake Elias, Andi Peng, Dan Morris, Bistra Dilkina, Nebojsa Jojic (pp. 2509-2517)

AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We implement this framework for fine-tuning high-resolution land cover segmentation models and compare human-selected points to points selected using standard active learning methods." (A sketch of one such active-learning baseline appears below the table.)
Researcher Affiliation | Collaboration | Caleb Robinson (1), Anthony Ortiz (2), Kolya Malkin (3), Blake Elias (6), Andi Peng (6), Dan Morris (4), Bistra Dilkina (5), Nebojsa Jojic (6); 1: Georgia Institute of Technology, 2: University of Texas at El Paso, 3: Yale University, 4: Microsoft AI for Earth, 5: University of Southern California, 6: Microsoft Research
Pseudocode | No | The paper describes the architecture and various methods in detail within the text, but it does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The supplemental material for this paper can be found at https://aka.ms/human-machine-2020-si.
Open Datasets | Yes | The default training label datasets are from (Chesapeake Conservancy 2017).
Dataset Splits | No | The paper mentions training on "90000 randomly selected image patches" and evaluating performance on "the entirety of the target areas" in New York, but it does not provide specific training/validation/test splits (percentages or counts) for its main experimental dataset.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions optimizers (Adam) and network architectures (U-Net) but does not provide specific software dependencies with version numbers (e.g., Python, TensorFlow, PyTorch versions).
Experiment Setup | Yes | "We trained the network for 100 epochs on 90000 randomly selected image patches of size 240 x 240 sampled from the state of Maryland. We used the Adam optimizer (Kingma and Ba 2014) with cross-entropy as segmentation loss and an initial learning rate of 0.001 decaying to 0.0001 after 60 epochs." (A sketch of this configuration appears below the table.)