Deep Gaussian Markov Random Fields
Authors: Per Sidén, Fredrik Lindsten
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the behaviour of our model for an inpainting problem on the two toy datasets in Figure 2... We compare our method against some popular methods for large data sets in spatial statistics, by considering the satellite data of daytime land surface temperatures, used in the competition by Heaton et al. (2018). Table 1 compares different instances of our model with the methods in the competition... |
| Researcher Affiliation | Academia | Division of Statistics and Machine Learning, Department of Computer and Information Science, Linköping University, Linköping, Sweden. |
| Pseudocode | Yes | Algorithm 1 Inference algorithm |
| Open Source Code | Yes | Code for our methods and experiments is available at https://bitbucket.org/psiden/deepgmrf. |
| Open Datasets | Yes | We compare our method against some popular methods for large data sets in spatial statistics, by considering the satellite data of daytime land surface temperatures, used in the competition by Heaton et al. (2018). The data and code for some of the methods can be found at https://github.com/finnlindgren/heatoncomparison. |
| Dataset Splits | No | The data are on a 500 x 300 grid, with 105,569 non-missing observations as the training set. The test set consists of 42,740 observations, selected as the pixels that were missing due to cloud cover on a different date. No explicit validation set or split information is provided. |
| Hardware Specification | Yes | our method takes roughly 2.5h for the seq5x5,L=5 model using a Tesla K40 GPU |
| Software Dependencies | No | We have implemented DGMRF in TensorFlow (Abadi et al., 2016), taking advantage of autodiff and GPU computations. No specific version number for TensorFlow or other software dependencies is provided. |
| Experiment Setup | No | The paper mentions using Adam for optimization, the reparameterization trick, and 'same' convolution padding, but specific hyperparameter values such as the learning rate, batch size, or number of epochs are not given in the main text. A hedged sketch of such a setup follows this table. |
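
To make the Pseudocode and Experiment Setup rows concrete, below is a minimal TensorFlow 2 sketch of the kind of variational training loop those rows describe: a stack of 5x5 convolutions with 'same' padding (suggestive of the paper's seq5x5, L=5 model) trained with Adam and the reparameterization trick. This is an illustration under stated assumptions, not the authors' implementation: the layer configuration, learning rate, variable names, and the simplified ELBO (the prior log-determinant term is omitted) are all assumptions. The authors' actual code is at https://bitbucket.org/psiden/deepgmrf.

```python
# Hypothetical sketch of a DGMRF-style variational training loop.
# Grid size is taken from the paper's satellite data; everything else
# (learning rate, initialization, ELBO simplifications) is an assumption.
import tensorflow as tf

H, W, L = 300, 500, 5  # 500 x 300 grid; L = 5 layers

# g(x): a sequence of 5x5 convolutions with 'same' padding, one filter each.
convs = [tf.keras.layers.Conv2D(1, 5, padding="same", use_bias=True)
         for _ in range(L)]

def g(x):
    z = x
    for conv in convs:
        z = conv(z)
    return z

# Mean-field Gaussian variational posterior q(x) = N(mu, diag(exp(2*log_std))).
mu = tf.Variable(tf.zeros([1, H, W, 1]))
log_std = tf.Variable(tf.zeros([1, H, W, 1]))
log_noise_std = tf.Variable(0.0)  # observation noise scale, assumed learned

_ = g(mu)  # build the conv layers so their variables exist

opt = tf.keras.optimizers.Adam(1e-2)  # learning rate is an assumption

def elbo(y, mask):
    # Reparameterization trick: x = mu + sigma * eps, eps ~ N(0, I).
    eps = tf.random.normal(tf.shape(mu))
    x = mu + tf.exp(log_std) * eps
    # Prior term: the DGMRF prior places N(0, I) on z = g(x). The full ELBO
    # also needs log|det dg/dx|, omitted here for brevity (computing it
    # cheaply for convolutional layers is a key point of the paper).
    z = g(x)
    log_prior = -0.5 * tf.reduce_sum(z ** 2)
    # Gaussian likelihood on the observed pixels only (mask is 0/1).
    resid = (y - x) * mask
    log_lik = (-0.5 * tf.reduce_sum((resid / tf.exp(log_noise_std)) ** 2)
               - tf.reduce_sum(mask) * log_noise_std)
    # Entropy of q, up to an additive constant.
    entropy = tf.reduce_sum(log_std)
    return log_prior + log_lik + entropy

@tf.function
def train_step(y, mask):
    params = ([mu, log_std, log_noise_std]
              + [v for c in convs for v in c.trainable_variables])
    with tf.GradientTape() as tape:
        loss = -elbo(y, mask)
    opt.apply_gradients(zip(tape.gradient(loss, params), params))
    return loss

# Usage (y is the observed image, mask its observation indicator):
# for step in range(10_000):  # step count is an assumption
#     loss = train_step(y, mask)
```

The sketch only illustrates why the table flags Experiment Setup as "No": every numeric choice above (learning rate, step count, initialization) had to be invented, since the main text does not state them.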