Exploiting Time-Series Image-to-Image Translation to Expand the Range of Wildlife Habitat Analysis
Authors: Ruobing Zheng, Ze Luo, Baoping Yan (pp. 825-832)
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compare our model against several baselines and achieve promising results. From the "Quantitative Evaluation" section: We quantitatively compare our model and baselines in three test strategies. We compare the overall performance by randomly selecting test samples from all image pairs; the remaining pairs are used as the training and validation set. |
| Researcher Affiliation | Academia | (1) University of Chinese Academy of Sciences, Beijing 100049, China; (2) e-Science Technology and Application Laboratory, Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China. {zhengruobing, luoze, ybp}@cnic.cn |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. Figures 1, 2, and 3 illustrate network architectures, but these are not pseudocode. |
| Open Source Code | No | The paper does not provide any explicit statement or link for the release of its source code. |
| Open Datasets | Yes | MODIS (Justice et al. 1998) Land Products (MOD09Q1 and MOD09A1 8-days L3) provide the basic reflectance bands in our experiments. |
| Dataset Splits | Yes | We compare the overall performance by randomly selecting test samples from all image pairs; the remaining pairs are used as the training and validation set. |
| Hardware Specification | No | The paper mentions "We appreciate the computing resources provided by HTCondor Team in UW-Madison," but this refers to a workload management system and does not specify any hardware details like CPU, GPU models, or memory. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software components, libraries, or solvers used in the experiments. |
| Experiment Setup | No | The paper describes general aspects of the model architecture (e.g., a deep convolutional encoder-decoder with U-Net skip connections, built from Convolution (Transpose)-BatchNorm-ReLU (Leaky) layers) and data augmentation techniques (mirroring, rotation, and random jitter on input image pairs), but it does not specify concrete hyperparameters such as learning rate, batch size, number of epochs, or optimizer details (e.g., Adam parameters). |
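The split strategy quoted above (randomly hold out test samples from all image pairs; the rest become training and validation data) can be sketched as follows. This is a minimal illustration, not the authors' code; the function name `split_pairs`, the split fractions, and the seed are all assumptions, since the paper reports no such details.

```python
import random

def split_pairs(pairs, test_frac=0.1, val_frac=0.1, seed=0):
    """Randomly select test samples from all image pairs; the
    remaining pairs form the training and validation sets.
    Fractions and seed are illustrative assumptions."""
    rng = random.Random(seed)
    pairs = list(pairs)
    rng.shuffle(pairs)
    n_test = int(len(pairs) * test_frac)
    n_val = int(len(pairs) * val_frac)
    test = pairs[:n_test]
    val = pairs[n_test:n_test + n_val]
    train = pairs[n_test + n_val:]
    return train, val, test

# Usage: split 100 image-pair indices into the three sets.
train, val, test = split_pairs(range(100))
```

A fixed seed makes the split reproducible across runs, which is exactly the kind of detail the paper leaves unstated.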
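The augmentation the paper names (mirroring, rotation, and random jitter on input image pairs) implies that the same random transform must be applied to both images of an aligned pair. A minimal sketch of that idea, assuming 2-D list images, 90-degree rotations, and omitting the random-jitter step (typically resize-then-random-crop) for brevity; the function name and all details are assumptions:

```python
import random

def augment_pair(src, tgt, seed=None):
    """Apply identical mirroring and 90-degree rotation to an
    aligned (input, target) image pair, keeping them registered.
    Random jitter from the paper's description is omitted here."""
    rng = random.Random(seed)
    flip = rng.random() < 0.5   # mirror left-right with prob. 0.5
    turns = rng.randrange(4)    # rotate by a random multiple of 90°

    def apply(img):
        if flip:
            img = [row[::-1] for row in img]
        for _ in range(turns):
            # rotate 90° clockwise: reverse rows, then transpose
            img = [list(r) for r in zip(*img[::-1])]
        return img

    return apply(src), apply(tgt)
```

Because one `Random` instance draws both the flip and rotation decisions before either image is transformed, the source and target images always receive the same geometric transform and remain pixel-aligned.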