Gaussian Process Priors for View-Aware Inference

Authors: Yuxin Hou, Ari Heljakka, Arno Solin (pp. 7762-7770)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In the experiments, we show examples of real-world applications of the view kernel in probabilistic view synthesis. Our quantitative experiments showed that the view prior can encode authentic movement and provide a soft prior for view synthesis.
Researcher Affiliation | Collaboration | Yuxin Hou 1, Ari Heljakka 1,2,*, Arno Solin 1; 1 Aalto University, Espoo, Finland; 2 GenMind Ltd., Finland
Pseudocode | No | No pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | Code and material related to this paper is available at https://aaltoml.github.io/view-aware-inference.
Open Datasets | Yes | We carried out an experiment with ShapeNet (Chang et al. 2015) 3D chair models at 128×128 resolution.
Dataset Splits | Yes | We randomly selected 80% of the images for training (81,312 images), 10% for validation (10,164 images), and 10% for testing (10,164 images).
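The reported 80/10/10 split can be reproduced in spirit with a simple shuffle-and-slice routine. This is an illustrative sketch only: the paper does not publish its split indices or random seed, so the `seed` value and the use of integer IDs below are assumptions.

```python
import random

def split_dataset(items, seed=0, train_frac=0.8, val_frac=0.1):
    """Shuffle items and cut them into train/val/test partitions.

    Hypothetical helper; the paper's actual split indices and seed
    are not published, so this only mirrors the reported proportions.
    """
    rng = random.Random(seed)  # fixed seed for a repeatable split
    items = list(items)
    rng.shuffle(items)
    n = len(items)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# 101,640 total images -> 81,312 / 10,164 / 10,164, matching the
# counts reported in the review above.
train, val, test = split_dataset(range(101640))
```

With 101,640 images total, the 80%/10%/10% fractions land exactly on the reported 81,312 / 10,164 / 10,164 counts.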
Hardware Specification | No | The paper mentions 'computational resources provided by the Aalto Science-IT project' but does not specify any particular hardware details such as GPU models, CPU types, or memory amounts used for the experiments.
Software Dependencies | No | The paper mentions software libraries like GPflow (Matthews et al. 2017), GPyTorch (Gardner et al. 2018), and StyleGAN (Karras, Laine, and Aila 2019), but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | The values for the three hyperparameters were chosen as σ² = 0.1, ℓ = 1.098, and σ_n² = 0.0001 (pre-trained on an independent task w.r.t. marginal likelihood).
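To show where those three hyperparameters enter, the sketch below plugs them into standard GP regression with a squared-exponential kernel. This is an assumption-laden stand-in, not the paper's method: the paper's view kernel operates on camera poses, whereas here scalar inputs and NumPy linear algebra are used purely for illustration.

```python
import numpy as np

# Hyperparameter values as reported in the paper; the scalar-input
# squared-exponential kernel below is an illustrative substitute for
# the paper's view kernel over camera poses.
SIGMA2 = 0.1      # kernel magnitude sigma^2
ELL = 1.098       # length-scale ell
SIGMA2_N = 1e-4   # observation-noise variance sigma_n^2

def rbf_kernel(x1, x2):
    """k(x, x') = sigma^2 * exp(-(x - x')^2 / (2 * ell^2))."""
    sq = (x1[:, None] - x2[None, :]) ** 2
    return SIGMA2 * np.exp(-0.5 * sq / ELL ** 2)

def gp_posterior_mean(x_train, y_train, x_test):
    """Standard GP regression posterior mean with noisy observations."""
    K = rbf_kernel(x_train, x_train) + SIGMA2_N * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train)
    return K_s @ np.linalg.solve(K, y_train)

x = np.linspace(0.0, 5.0, 6)
y = np.sin(x)
# With sigma_n^2 this small, the posterior mean closely tracks the
# observations at the training inputs.
mu = gp_posterior_mean(x, y, x)
```

The small noise variance σ_n² = 0.0001 relative to the kernel magnitude σ² = 0.1 makes the prior act as a soft constraint: observations are followed almost exactly, while the length-scale ℓ controls how quickly predictions revert to the prior between views.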