Multi-Entity Dependence Learning With Rich Context via Conditional Variational Auto-Encoder

Authors: Luming Tang, Yexiang Xue, Di Chen, Carla Gomes

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our method on two computational sustainability related datasets. The first one is a crowd-sourced bird observation dataset collected from the eBird citizen science project (Munson et al. 2012). The experiment results on the eBird and Amazon datasets are shown in Tables 2 and 3, respectively."
Researcher Affiliation | Academia | Luming Tang, Tsinghua University, China (tlm14@mails.tsinghua.edu.cn); Yexiang Xue, Cornell University, USA (yexiang@cs.cornell.edu); Di Chen, Cornell University, USA (dc874@cornell.edu); Carla P. Gomes, Cornell University, USA (gomes@cs.cornell.edu)
Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statement about releasing source code, nor a direct link to a code repository.
Open Datasets | Yes | "The first one is a crowd-sourced bird observation dataset collected from the eBird citizen science project (Munson et al. 2012). Crossed with the National Land Cover Dataset (NLCD) (Homer et al. 2015), we get a 15-dimensional feature vector for each location. Our second application is the Amazon rainforest landscape analysis, in which we tag satellite images with a few landscape categories. Raw satellite images were derived from Planet's full-frame analytic scene products using 4-band satellites in the sun-synchronous orbit and the International Space Station orbit. The organizers at Kaggle used Planet's visual product processor to transform raw images to 3-band jpg format. Each satellite image sample in this dataset contains an image chip covering a ground area of 0.9 km2. The chips were analyzed using the CrowdFlower platform to obtain ground-truth landscape composition."
Dataset Splits | Yes | "We randomly choose 34,431 samples for training, validation and testing. The details of the two datasets are listed in Table 1." The paper does not quote explicit split proportions.
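Since only the total sample count is stated, a reproduction would have to choose its own split. A minimal sketch of a seeded random split over the 34,431 samples; the 80/10/10 ratio and the seed are assumptions, not values from the paper:

```python
import random

def random_split(n_samples, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle sample indices and split into train/val/test index lists."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    rng = random.Random(seed)
    indices = list(range(n_samples))
    rng.shuffle(indices)
    n_train = int(n_samples * ratios[0])
    n_val = int(n_samples * ratios[1])
    train = indices[:n_train]
    val = indices[n_train:n_train + n_val]
    test = indices[n_train + n_val:]
    return train, val, test

train, val, test = random_split(34431)
print(len(train), len(val), len(test))  # 27544 3443 3444
```

Fixing the seed keeps the split reproducible across runs, which is exactly the detail a reproduction report would need the authors to disclose.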
Hardware Specification | Yes | "All the training and testing process of our proposed MEDL-CVAE are performed on one NVIDIA Quadro 4000 GPU with 8GB memory."
Software Dependencies | No | The paper mentions the "Adam optimizer (Kingma and Ba 2014) with learning rate of 10^-4" and "batch normalization (Ioffe and Szegedy 2015)", but does not provide version numbers for software libraries or dependencies (e.g., Python, TensorFlow, or PyTorch versions).
Experiment Setup | Yes | "The whole training process lasts 300 epochs, using batch size of 512, Adam optimizer (Kingma and Ba 2014) with learning rate of 10^-4 and utilizing batch normalization (Ioffe and Szegedy 2015), 0.8 dropout rate (Srivastava et al. 2014) and early stopping to accelerate the training process and to prevent overfitting."
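The quoted setup pins down most hyperparameters but not the early-stopping criterion. A framework-free sketch of the stated configuration plus a typical patience-based early-stopping rule; the patience value and the stopping logic itself are assumptions, since the paper does not specify them:

```python
# Hyperparameters quoted from the paper; early_stopping_patience is assumed.
CONFIG = {
    "epochs": 300,
    "batch_size": 512,
    "optimizer": "adam",
    "learning_rate": 1e-4,
    "dropout_rate": 0.8,
    "batch_norm": True,
    "early_stopping_patience": 10,  # assumption: not stated in the paper
}

class EarlyStopping:
    """Stop training when validation loss has not improved for `patience` epochs."""
    def __init__(self, patience):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True if training should stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

# Usage inside a training loop (validation losses shown here are illustrative):
stopper = EarlyStopping(CONFIG["early_stopping_patience"])
for epoch, val_loss in enumerate([0.9, 0.8] + [0.85] * 20):
    if stopper.step(val_loss):
        print(f"early stop at epoch {epoch}")  # stops once patience is exhausted
        break
```

Without the patience value (or whatever criterion the authors actually used), two reproductions of this setup could terminate at different epochs, which is why the checklist still flags the setup as only partially specified.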