Deconditional Downscaling with Gaussian Processes
Authors: Siu Lun Chau, Shahine Bouabid, Dino Sejdinovic
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Lastly, we demonstrate its proficiency in a synthetic and a real-world atmospheric field downscaling problem, showing substantial improvements over existing methods." and "We demonstrate and evaluate our CMP-based downscaling approaches on both synthetic experiments and a challenging atmospheric temperature field downscaling problem with unmatched multi-resolution data." |
| Researcher Affiliation | Academia | Siu Lun Chau University of Oxford Shahine Bouabid University of Oxford Dino Sejdinovic University of Oxford |
| Pseudocode | No | The paper describes methods and processes in text and mathematical formulas but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | "Experiments are implemented in PyTorch [37, 38], all code and datasets are made available" at https://github.com/shahineb/deconditional-downscaling |
| Open Datasets | Yes | "We collect monthly mean 2D atmospheric fields simulation from CMIP6 data [42, 43]", citing: [42] Malcolm Roberts. MOHC HadGEM3-GC31-HM model output prepared for CMIP6 HighResMIP hist-1950. Earth System Grid Federation, 2018, v20180730. doi: 10.22033/ESGF/CMIP6.6040. [43] Aurore Voldoire. CNRM-CERFACS CNRM-CM6-1-HR model output prepared for CMIP6 HighResMIP hist-1950. Earth System Grid Federation, 2019, v20190221. doi: 10.22033/ESGF/CMIP6.4040. |
| Dataset Splits | No | The paper describes how data is split for direct and indirect matching experiments (e.g., 'We randomly select N = B/2 bags...') and uses 'unobserved groundtruth' for evaluation, but does not provide explicit training, validation, and test dataset splits or percentages. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions PyTorch, scikit-learn, and the Adam optimizer, but it does not provide specific version numbers for these software components. |
| Experiment Setup | No | The paper provides general experimental setup details, such as the use of K-means++ to initialize inducing points and the Adam optimizer, and mentions that kernel hyperparameters and the noise variance are learned. However, it does not report concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or other specific system-level training settings; a sketch of such a setup follows the table. |
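
For reference, the Experiment Setup row describes a standard sparse variational GP training loop. Below is a minimal sketch, not the authors' code, of what that setup might look like in GPyTorch: inducing points initialized with scikit-learn's K-means++ and all parameters (kernel hyperparameters, noise variance, variational parameters) trained jointly with Adam. The library choices, the toy data, and every numeric value (number of inducing points `M`, learning rate, epoch count) are assumptions, since the paper does not report them.

```python
# Hedged sketch of a sparse variational GP setup; values are illustrative only.
import torch
import gpytorch
from sklearn.cluster import KMeans


class SparseGP(gpytorch.models.ApproximateGP):
    def __init__(self, inducing_points):
        variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
            inducing_points.size(0)
        )
        variational_strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, variational_distribution,
            learn_inducing_locations=True,
        )
        super().__init__(variational_strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )


# Toy inputs/targets standing in for the atmospheric fields (hypothetical shapes).
X = torch.randn(1000, 2)
y = torch.sin(X.sum(dim=1)) + 0.1 * torch.randn(1000)

# K-means++ initialization of M inducing points (scikit-learn's default init).
M = 64
centers = KMeans(n_clusters=M, init="k-means++", n_init=10).fit(X.numpy()).cluster_centers_
model = SparseGP(torch.as_tensor(centers, dtype=torch.float32))
likelihood = gpytorch.likelihoods.GaussianLikelihood()  # noise variance is learned

# Adam jointly optimizes kernel hyperparameters, the noise variance, and the
# variational parameters; the learning rate and epoch count are assumed values.
optimizer = torch.optim.Adam(
    list(model.parameters()) + list(likelihood.parameters()), lr=0.01
)
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=y.size(0))

model.train()
likelihood.train()
for _ in range(200):
    optimizer.zero_grad()
    loss = -mll(model(X), y)
    loss.backward()
    optimizer.step()
```

Reporting exactly these choices (learning rate, number of inducing points, epoch count) is what the "Experiment Setup" criterion asks for; the repository linked in the Open Source Code row is where such values would need to be recovered from.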