Geometry-Aware Neural Rendering
Authors: Joshua Tobin, Wojciech Zaremba, Pieter Abbeel
NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our approach on datasets from the original GQN paper and three new datasets designed to test the ability to render systems with many degrees of freedom and a wide variety of objects. We find significant improvements in a lower bound on the log likelihood (the ELBO), per-pixel mean absolute error, and qualitative performance on most of these datasets. (A per-pixel MAE sketch follows the table.) |
| Researcher Affiliation | Collaboration | Josh Tobin (OpenAI & UC Berkeley) josh@openai.com; Pieter Abbeel (Covariant.AI & UC Berkeley) pabbeel@cs.berkeley.edu |
| Pseudocode | No | The paper provides figures illustrating the model architecture and attention mechanism, but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper states 'Our datasets are available here: https://github.com/josh-tobin/egqn-datasets', which provides access to the datasets, but it does not provide a statement or link for an open-source implementation of the described method. |
| Open Datasets | Yes | To evaluate our proposed attention mechanism, we trained GQN with Epipolar Cross-Attention (E-GQN) on four datasets from the GQN paper: Rooms-Ring-Camera (RRC), Rooms-Free-Camera (RFC), Jaco, and Shepard-Metzler-7-Parts (SM7) [7, 35]... To address these limitations, we created three new datasets: OpenAI Block (OAB), Disco Humanoid (Disco), and Rooms-Random-Objects (RRO). Our datasets are available here: https://github.com/josh-tobin/egqn-datasets |
| Dataset Splits | No | The paper mentions training on '25M examples' and evaluating 'on the test set', but does not specify the explicit percentages or counts for training, validation, and test dataset splits needed for reproducibility. |
| Hardware Specification | Yes | We train our models on 25M examples on 4 Tesla V-100s (GQN datasets) or 8 Tesla V-100s (our datasets). |
| Software Dependencies | No | The paper mentions 'Tensorflow [1]' and 'Adam optimizer [19]', but does not specify version numbers for any software dependencies. |
| Experiment Setup | Yes | We train our models using the Adam optimizer [19]. We ran a small hyperparameter sweep to choose the learning rate schedule and found that a learning rate of 1e-4 or 2e-4 linearly ramped up from 2e-5 over 25,000 optimizer steps and then linearly decayed by a factor of 10 over 1.6M optimizer steps performs best in our experiments. We use a batch size of 36 in experiments on the GQN datasets and 32 on our datasets. We train our models on 25M examples on 4 Tesla V-100s (GQN datasets) or 8 Tesla V-100s (our datasets). |
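
The learning-rate schedule quoted in the Experiment Setup row (linear warm-up from 2e-5 to a peak of 1e-4 or 2e-4 over 25,000 optimizer steps, then linear decay by a factor of 10 over 1.6M steps) is concrete enough to sketch. The function below is a minimal plain-Python illustration under that reading; it is not the authors' code, the name `egqn_lr` is hypothetical, and interpreting "decayed by a factor of 10" as a linear ramp down to one tenth of the peak rate is an assumption.

```python
def egqn_lr(step, peak=1e-4, warmup_start=2e-5,
            warmup_steps=25_000, decay_steps=1_600_000):
    """Hypothetical sketch of the schedule described in the paper:
    linear warm-up from `warmup_start` to `peak` over `warmup_steps`,
    then linear decay to `peak / 10` over `decay_steps`."""
    if step < warmup_steps:
        # Linear ramp-up phase.
        frac = step / warmup_steps
        return warmup_start + frac * (peak - warmup_start)
    # Linear decay phase, clamped at peak / 10 once decay_steps have elapsed.
    frac = min((step - warmup_steps) / decay_steps, 1.0)
    return peak - frac * (peak - peak / 10)

# Spot-check a few points of the schedule.
for s in (0, 25_000, 825_000, 1_625_000, 2_000_000):
    print(s, round(egqn_lr(s), 8))
```

In the paper this schedule drives the Adam optimizer with a batch size of 36 on the GQN datasets and 32 on the new datasets.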
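
The per-pixel mean absolute error cited in the Research Type row is, under the usual reading, the absolute difference between predicted and ground-truth pixel values averaged over all pixels and channels. A minimal NumPy sketch under that assumption (the helper name `per_pixel_mae` and the image-shape convention are not from the paper):

```python
import numpy as np

def per_pixel_mae(pred, target):
    """Mean absolute error averaged over all pixels and channels.

    `pred` and `target` are arrays of shape (batch, height, width, channels)
    with values in [0, 1]; this shape convention is an assumption, not a
    detail taken from the paper.
    """
    return float(np.mean(np.abs(pred - target)))

# Example: two random "images" just to exercise the function.
rng = np.random.default_rng(0)
a, b = rng.random((2, 4, 64, 64, 3))
print(per_pixel_mae(a, b))
```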