Deep Multi-species Embedding

Authors: Di Chen, Yexiang Xue, Daniel Fink, Shuo Chen, Carla P. Gomes

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Applied to bird observational data from the citizen science project eBird, we demonstrate how the DMSE model discovers inter-species relationships to outperform single-species distribution models (random forests and SVMs) as well as competing multi-label models." and, from Section 4 (Experiments), "We work with crowd-sourced bird observation data collected from the successful citizen science project eBird [Munson et al., 2012]."
Researcher Affiliation | Academia | Di Chen¹, Yexiang Xue², Daniel Fink³, Shuo Chen⁴, Carla P. Gomes⁵ (¹ ² ⁴ ⁵ Department of Computer Science, Cornell University, Ithaca, NY, USA; ³ Cornell Lab of Ornithology, Ithaca, NY, USA)
Pseudocode | No | The paper describes computational methods and algorithms (e.g., stochastic gradient descent, Genz's adaptive algorithm for integration, Markov chain Monte Carlo sampling for derivatives) but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not mention releasing source code for the methodology, provide a repository link, or state that code is available in supplementary material.
Open Datasets | Yes | "Applied to eBird bird observational data [Munson et al., 2012], we demonstrate how the DMSE model discovers inter-species relationships to outperform the predictions of single-species distribution models (random forests and SVMs)." and "We work with crowd-sourced bird observation data collected from the successful citizen science project eBird [Munson et al., 2012]."
Dataset Splits | Yes | "In the experiments, we use a 5-fold cross validation to validate the multiple choices of hyperparameters as well as evaluate the stability of models" and "we observe no overfitting between the loss on the validation vs. test set during cross-validation."
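For concreteness, the 5-fold protocol quoted above can be sketched as follows. This is a generic illustration, not the authors' code: the sample count, seed, and splitting helper (`kfold_indices`) are assumptions introduced here.

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Yield (train_idx, val_idx) pairs for k roughly equal, disjoint folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)          # shuffle once, then partition
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

# Each of the 5 rounds trains on 4 folds and validates on the held-out fold;
# hyperparameters are then chosen by aggregate validation performance.
n = 100
splits = list(kfold_indices(n, 5))
print(len(splits))                        # 5
print(sorted(len(v) for _, v in splits))  # [20, 20, 20, 20, 20]
```

Every sample appears in exactly one validation fold, which is what makes the per-fold validation losses comparable across hyperparameter choices.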
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, or cloud instances) used for running the experiments.
Software Dependencies | No | The paper mentions Python's sklearn for implementing baselines but does not provide version numbers for any software dependencies, libraries, or frameworks used.
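Since the paper cites scikit-learn for its single-species baselines (random forests and SVMs) without versions, a minimal sketch of what such baselines look like is shown below. The synthetic covariates, labels, split, and hyperparameters are all assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                   # synthetic environmental covariates
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic presence/absence labels

X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

# One independent classifier per species, per baseline family.
accs = {}
for model in (RandomForestClassifier(n_estimators=100, random_state=0),
              SVC(kernel="rbf")):
    name = type(model).__name__
    accs[name] = model.fit(X_train, y_train).score(X_test, y_test)
    print(name, round(accs[name], 2))
```

Such models treat each species in isolation, which is exactly the limitation the paper's multi-species embedding is designed to address.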
Experiment Setup | Yes | "In our experiment, we empirically found that a 3-hidden-layer fully connected neural network using tanh as the activation function worked the best. The number of neurons in each hidden layer was 256, 256, 64."
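The quoted architecture (three fully connected hidden layers of 256, 256, and 64 units with tanh activations) can be sketched as a forward pass in plain NumPy. The input and output dimensions are not stated in the excerpt and are assumed here, as is the Xavier-style initialisation; this is an illustration of the described shape, not the authors' implementation.

```python
import numpy as np

def init_layer(rng, n_in, n_out):
    """Xavier-style initialisation for one dense layer (assumed scheme)."""
    scale = np.sqrt(2.0 / (n_in + n_out))
    return rng.normal(0.0, scale, (n_in, n_out)), np.zeros(n_out)

def forward(params, x):
    """tanh on the three hidden layers, linear final projection."""
    *hidden, last = params
    for w, b in hidden:
        x = np.tanh(x @ w + b)
    w, b = last
    return x @ w + b

rng = np.random.default_rng(0)
n_features, out_dim = 38, 10          # assumed sizes, not from the paper
sizes = [n_features, 256, 256, 64, out_dim]
params = [init_layer(rng, a, b) for a, b in zip(sizes[:-1], sizes[1:])]

x = rng.normal(size=(5, n_features))  # a batch of 5 covariate vectors
out = forward(params, x)
print(out.shape)                      # (5, 10)
```

tanh keeps hidden activations bounded in (-1, 1), which is one common reason it can train more stably than unbounded activations on tabular covariates of this scale.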