Neural Fields with Hard Constraints of Arbitrary Differential Order

Authors: Fangcheng Zhong, Kyle Fogarty, Param Hanji, Tianhao Wu, Alejandro Sztrajman, Andrew Spielberg, Andrea Tagliasacchi, Petra Bosilj, Cengiz Oztireli

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our approaches are demonstrated in a wide range of real-world applications. Our framework is effective across a wide range of real-world problems; in particular, CNF achieves state-of-the-art performance in learning material appearance representations.
Researcher Affiliation | Academia | Fangcheng Zhong (University of Cambridge), Kyle Fogarty (University of Cambridge), Param Hanji (University of Cambridge), Tianhao Wu (University of Cambridge), Alejandro Sztrajman (University of Cambridge), Andrew Spielberg (Harvard University), Andrea Tagliasacchi (Simon Fraser University), Petra Bosilj (University of Lincoln), Cengiz Oztireli (University of Cambridge)
Pseudocode | Yes | Algorithm 1 (Training) and Algorithm 2 (Inference)
Open Source Code | Yes | Source code is publicly available at https://zfc946.github.io/CNF.github.io/.
Open Datasets | Yes | We perform this evaluation on highly specular materials from the MERL dataset [21]
Dataset Splits | No | The paper mentions training on '640k (ωi, ωo) samples' but does not provide explicit training, validation, or test dataset splits or their sizes.
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper mentions a 'PyTorch framework' but does not specify its version or any other software dependencies with version numbers.
Experiment Setup | Yes | In the experiments on material appearance fitting, all neural networks have a single hidden layer, and we controlled the network width so that all methods have roughly 200k parameters for a fair comparison. Both baseline FFN and kernel FFN use a frequency of 16, meaning that the inputs are mapped to encodings of size 38 before being fed to the MLP. The kernel FFN model uses FFN as an encoder, which has a width of 248 and outputs a latent vector of dimension 512. The baseline FFN MLP has a width of 718. The kernel SIREN model has a width of 256 and also outputs a latent vector of size 512. The baseline SIREN and NBRDF have a width of 442. We use the Adam optimizer with a learning rate of 5 × 10⁻⁴ for all methods except for the SIREN baseline and kernel SIREN, which use a learning rate of 1 × 10⁻⁴ for more stable performance. For constraint points, we sample the angles θh, θd, and φd as in Rusinkiewicz's parameterization [33]. A figure of those angles is shown in Fig. 6. We sample θd and φd uniformly at random within the ranges [0, π/2] and [0, 2π], respectively. Half of the θh are also sampled uniformly at random within [0, π/2], whereas the other half are sampled from a Gaussian distribution with a mean of 0 and a standard deviation of 0.1. These angles are then converted to 6D in-going and out-going directions as inputs to the networks. We train Φθ on 640k (ωi, ωo) samples, minimizing the L1 loss in the logarithmic domain to account for the large variation of BRDF values due to specular highlights, while enforcing the aforementioned hard constraints on (ω̂i, ω̂o):
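The sampling procedure quoted above can be made concrete with a short sketch. The following is a minimal PyTorch illustration, not the authors' released code: θd and φd are drawn uniformly in [0, π/2] and [0, 2π], half of the θh uniformly in [0, π/2] and half from a zero-mean Gaussian with standard deviation 0.1, and the Rusinkiewicz angles are converted to concatenated 6D (ωi, ωo) directions. Function names are illustrative; fixing φh = 0 (isotropic BRDF), folding the Gaussian θh samples into the valid range, and the epsilon in the log-domain L1 loss are assumptions not stated in the quoted setup.

import torch

def sample_constraint_angles(n: int) -> torch.Tensor:
    """Return an (n, 3) tensor of (theta_h, theta_d, phi_d) samples."""
    theta_d = torch.rand(n) * (torch.pi / 2)          # uniform in [0, pi/2]
    phi_d = torch.rand(n) * (2 * torch.pi)            # uniform in [0, 2*pi]
    # Half of theta_h uniform in [0, pi/2]; the other half Gaussian around 0
    # (std 0.1), folded/clamped into the valid range (assumption) so the
    # specular peak near theta_h = 0 is densely covered.
    uniform_part = torch.rand(n // 2) * (torch.pi / 2)
    gaussian_part = (torch.randn(n - n // 2) * 0.1).abs().clamp(max=torch.pi / 2)
    theta_h = torch.cat([uniform_part, gaussian_part])
    return torch.stack([theta_h, theta_d, phi_d], dim=-1)

def rusinkiewicz_to_directions(angles: torch.Tensor) -> torch.Tensor:
    """Convert (theta_h, theta_d, phi_d) to concatenated 6D (omega_i, omega_o)."""
    theta_h, theta_d, phi_d = angles.unbind(-1)
    # Difference vector in the frame where the half vector is the pole.
    d = torch.stack([torch.sin(theta_d) * torch.cos(phi_d),
                     torch.sin(theta_d) * torch.sin(phi_d),
                     torch.cos(theta_d)], dim=-1)
    # Rotate d by theta_h about the binormal (y-axis); phi_h = 0 for isotropy.
    cos_h, sin_h = torch.cos(theta_h), torch.sin(theta_h)
    omega_i = torch.stack([cos_h * d[..., 0] + sin_h * d[..., 2],
                           d[..., 1],
                           -sin_h * d[..., 0] + cos_h * d[..., 2]], dim=-1)
    # Half vector h = (sin theta_h, 0, cos theta_h); reflect omega_i about h.
    h = torch.stack([sin_h, torch.zeros_like(sin_h), cos_h], dim=-1)
    omega_o = 2.0 * (omega_i * h).sum(-1, keepdim=True) * h - omega_i
    return torch.cat([omega_i, omega_o], dim=-1)

def log_l1_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-2) -> torch.Tensor:
    """L1 loss in the logarithmic domain; the log1p mapping and eps are assumptions."""
    return (torch.log1p(pred / eps) - torch.log1p(target / eps)).abs().mean()

Under these assumptions, a training step would sample a batch of constraint angles, map them to 6D directions with rusinkiewicz_to_directions, and minimize log_l1_loss between the network output and the measured MERL BRDF values with Adam at the learning rates quoted above.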