Detecting and Tracking Communal Bird Roosts in Weather Radar Data

Authors: Zezhou Cheng, Saadia Gabriel, Pankaj Bhambhani, Daniel Sheldon, Subhransu Maji, Andrew Laughlin, David Winkler

AAAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This paper describes a machine learning system to detect and track roost signatures in weather radar data. We contribute a latent-variable model and EM algorithm to learn a detection model together with models of labeling styles for individual annotators. We divided the 88972 radar scans from the manually labeled dataset (Sec. 2) into training, validation, and test sets. Tab. 1 shows the performance of various detectors across radar stations.
Researcher Affiliation | Academia | Zezhou Cheng (UMass Amherst, zezhoucheng@cs.umass.edu); Saadia Gabriel (University of Washington, skgabrie@cs.washington.edu); Pankaj Bhambhani (UMass Amherst, pankaj@cs.umass.edu); Daniel Sheldon (UMass Amherst, sheldon@cs.umass.edu); Subhransu Maji (UMass Amherst, smaji@cs.umass.edu); Andrew Laughlin (UNC Asheville, alaughli@unca.edu); David Winkler (Cornell University, dww4@cornell.edu)
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions "See the supplementary material on the project page for details on these baseline detection models." and includes a footnote ("See: https://people.cs.umass.edu/~zezhoucheng/roosts"). This is a personal homepage, not an explicit statement that code for the described methodology was released to a repository.
Open Datasets | Yes | We obtained a data set of manually annotated roosts collected for prior ecological research (Laughlin et al. 2016).
Dataset Splits | Yes | We divided the 88972 radar scans from the manually labeled dataset (Sec. 2) into training, validation, and test sets. The validation set (not shown) is roughly half the size of the test set and was used to set the hyper-parameters of the detector and the tracker.
Hardware Specification | No | The paper mentions "This research was supported in part by NSF #1749833, #1749854, #1661259 and the Mass Tech Collaborative for funding the UMass GPU cluster." This indicates GPU usage but gives no specific models or other hardware details.
Software Dependencies | No | The paper mentions using "Faster R-CNN" and a "VGG-M network" but does not provide version numbers for software dependencies such as Python, PyTorch, TensorFlow, or other libraries.
Experiment Setup | Yes | We initialize the Faster R-CNN parameters θ by training for 50K iterations, starting from the ImageNet-pretrained VGG-M model with the original uncorrected labels. The optimization is performed separately for each user to determine the reverse scaling factor φ_u, using Brent's method with search boundary [0.1, 2] and black-box access to L_cnn. We then update θ by training Faster R-CNN for 50K iterations using the resampled annotations.
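The alternating procedure quoted above (train the detector, then fit each annotator's reverse scaling factor φ_u by bounded 1-D minimization of the detection loss, then retrain on resampled annotations) can be sketched as follows. This is a hypothetical illustration, not the authors' code: a golden-section search stands in for the bounded Brent's method, and `mock_loss` stands in for black-box access to L_cnn.

```python
import math

def golden_section_min(f, lo, hi, tol=1e-6):
    """Bounded 1-D minimizer (golden-section search), a simple
    stand-in for the bounded Brent's method used in the paper."""
    invphi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - invphi * (b - a)
        else:
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

def mock_loss(phi, true_phi=0.7):
    # Hypothetical stand-in for black-box access to L_cnn: pretend the
    # detector loss is minimized when phi matches the annotator's true
    # (unknown) box-scaling factor.
    return (phi - true_phi) ** 2

# Fit one annotator's reverse scaling factor over the paper's stated
# search boundary [0.1, 2].
phi_u = golden_section_min(mock_loss, 0.1, 2.0)
```

In the paper's pipeline, this 1-D fit would be repeated per annotator between the two 50K-iteration Faster R-CNN training phases, with the recovered φ_u used to resample (rescale) that annotator's boxes before retraining.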