General Neural Gauge Fields

Authors: Fangneng Zhan, Lingjie Liu, Adam Kortylewski, Christian Theobalt

ICLR 2023

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | 4 EXPERIMENTS; 4.3 EVALUATION RESULTS
Researcher Affiliation | Academia | Max Planck Institute for Informatics, 66123, Germany. {fzhan,lliu,akortyle,theobalt}@mpi-inf.mpg.de
Pseudocode | Yes | The pseudo code of the forward & backward propagation of discrete cases is given in Algorithm 1. Algorithm 1: Pseudo code of forward & backward propagation in learning discrete gauge transformation. Algorithm 2: Pseudo code of differentiable top-k operation.
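The paper's Algorithm 2 covers a differentiable top-k operation, whose exact form is not reproduced here. A common way to make a hard selection differentiable is a straight-through relaxation: the forward pass uses a hard one-hot mask while gradients flow through soft softmax weights. The sketch below illustrates this generic pattern for the top-1 case; the temperature `tau` and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, tau=1.0):
    # Numerically stable softmax with temperature tau.
    z = (x - x.max()) / tau
    e = np.exp(z)
    return e / e.sum()

def differentiable_top1(scores, tau=0.1):
    """Straight-through top-1 selection: hard one-hot mask in the
    forward pass, soft softmax weights available for the backward pass.
    A generic relaxation sketch, not the paper's exact Algorithm 2."""
    soft = softmax(scores, tau)   # soft weights (differentiable)
    hard = np.zeros_like(soft)
    hard[np.argmax(soft)] = 1.0   # hard one-hot selection
    # In an autodiff framework one would return
    # hard + soft - stop_gradient(soft), so the forward value is `hard`
    # while gradients follow `soft`.
    return hard, soft

mask, weights = differentiable_top1(np.array([0.2, 1.5, 0.3]))
```

Extending this to k > 1 typically replaces the single argmax with the k largest entries, or with an iterative/optimal-transport relaxation.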
Open Source Code | Yes | We attach the source code of neural gauge fields in the supplementary material.
Open Datasets | Yes | All datasets used in our experiments are publicly accessible.
Dataset Splits | No | The paper does not explicitly provide specific train/validation/test dataset split percentages or sample counts.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions) that would be needed to replicate the experiment environment.
Experiment Setup | Yes | All models are optimized for 150k steps with a batch size of 1024 pixel rays. By default, the codebook has two layers and each layer contains 256 vectors with 128 dimensions. In line with Instant-NGP (Müller et al., 2022), the 3D space is also divided into two-level 3D grids of size 16×16×16 and 32×32×32 for discrete gauge transformation. The gauge network M for learning gauge transformations can be an MLP network or a transformation matrix, depending on the specific downstream application. For the mapping from 3D space to a 2D plane to obtain (view-dependent) textures, the neural field is modeled by an MLP-based network which takes the predicted 2D coordinates, i.e., the output of the gauge network, together with a given view, to predict color and density. For the mapping from 3D space to discrete codebooks, the neural field is modeled by looking up features from the codebook, followed by a small MLP of two layers to predict color and density.
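The discrete-codebook variant described above can be sketched end to end: a gauge mapping selects one vector per codebook layer for a 3D coordinate, and a small two-layer MLP head turns the looked-up features into color and density. The sizes below match the reported setup (two layers, 256 vectors of dimension 128), but the scoring function, weight shapes, and all names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reported setup: two codebook layers, each with 256 vectors of dim 128.
NUM_LAYERS, NUM_VECS, DIM = 2, 256, 128
codebook = rng.normal(size=(NUM_LAYERS, NUM_VECS, DIM))

def gauge_lookup(xyz, codebook):
    """Toy stand-in for the gauge network: score every codebook vector
    from the 3D coordinate and take a hard lookup per layer."""
    feats = []
    for layer in codebook:
        # Hypothetical scoring: project the tiled coordinate onto each vector.
        query = np.tile(xyz, DIM // 3 + 1)[:DIM]
        scores = layer @ query
        feats.append(layer[np.argmax(scores)])  # hard per-layer selection
    return np.concatenate(feats)  # (NUM_LAYERS * DIM,) feature vector

def small_mlp(feat, w1, w2):
    # Two-layer MLP head predicting RGB color (3) and density (1).
    h = np.maximum(feat @ w1, 0.0)  # ReLU hidden layer
    return h @ w2                   # (4,) = (r, g, b, sigma)

w1 = rng.normal(size=(NUM_LAYERS * DIM, 64)) * 0.01
w2 = rng.normal(size=(64, 4)) * 0.01
out = small_mlp(gauge_lookup(np.array([0.1, 0.5, -0.2]), codebook), w1, w2)
```

In training, the hard argmax lookup would be replaced by the paper's differentiable top-k operation so that gradients can reach the gauge mapping and the codebook entries.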