Deep Convolutional Inverse Graphics Network

Authors: Tejas D. Kulkarni, William F. Whitney, Pushmeet Kohli, Josh Tenenbaum

NeurIPS 2015

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We present qualitative and quantitative tests of the model's efficacy at learning a 3D rendering engine for varied object classes including faces and chairs." |
| Researcher Affiliation | Collaboration | Massachusetts Institute of Technology, Cambridge, USA (affiliations 1, 2, 4); Microsoft Research, Cambridge, UK (affiliation 3) |
| Pseudocode | No | The paper describes the training procedure in numbered steps in Section 3.1 and references Figure 3, but it does not present a formal pseudocode or algorithm block labeled as such. |
| Open Source Code | No | The paper does not include any statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | "We trained our model on about 12,000 batches of faces generated from a 3D face model obtained from Paysan et al. [17]... images of widely varied 3D chairs from many perspectives derived from the Pascal Visual Object Classes dataset as extracted by Aubry et al. [16, 1]." |
| Dataset Splits | Yes | "We used approximately 1200 of these chairs in the training set and the remaining 150 in the test set." |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions using the "rmsprop [22] learning algorithm" but does not specify version numbers for any software libraries or dependencies (e.g., deep learning frameworks or Python versions). |
| Experiment Setup | Yes | "We used the rmsprop [22] learning algorithm during training and set the meta learning rate equal to 0.0005, the momentum decay to 0.1 and weight decay to 0.01." |
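For readers reproducing the setup, the reported rmsprop hyperparameters can be sketched as a single parameter update in plain Python. This is a minimal sketch, not the authors' implementation: the learning rate (0.0005) and weight decay (0.01) come from the paper, while the squared-gradient decay constant of 0.9 is a common rmsprop default assumed here, since the paper's "momentum decay 0.1" does not map unambiguously onto modern rmsprop variants.

```python
def rmsprop_step(param, grad, sq_avg, lr=0.0005, decay=0.9,
                 weight_decay=0.01, eps=1e-8):
    """One rmsprop update for a scalar parameter.

    lr and weight_decay follow the paper's reported values;
    decay=0.9 and eps=1e-8 are assumed defaults.
    Returns the updated (param, sq_avg) pair.
    """
    grad = grad + weight_decay * param                # L2 weight decay folded into the gradient
    sq_avg = decay * sq_avg + (1 - decay) * grad ** 2  # running average of squared gradients
    param = param - lr * grad / (sq_avg ** 0.5 + eps)  # scale step by RMS of recent gradients
    return param, sq_avg

# Toy usage on a single scalar parameter with a fixed gradient.
p, s = 1.0, 0.0
p, s = rmsprop_step(p, grad=0.5, sq_avg=s)
```

The per-parameter RMS scaling is what distinguishes rmsprop from plain SGD: parameters with consistently large gradients take proportionally smaller steps.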