Fiber Monte Carlo
Authors: Nick Richardson, Deniz Oktay, Yaniv Ovadia, James C. Bowden, Ryan P. Adams
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We introduce a differentiable variant of the simple Monte Carlo estimator which samples line segments rather than points from the domain. We justify our estimator analytically as conditional Monte Carlo and demonstrate the diverse functionality of the method as applied to image stylization, topology optimization, and computational geometry. (A minimal sketch of this estimator appears after the table.) |
| Researcher Affiliation | Academia | Nick Richardson, Department of Computer Science, Princeton University, Princeton, NJ 08544, njkrichardson@princeton.edu; Deniz Oktay, Department of Computer Science, Princeton University, Princeton, NJ 08544, doktay@princeton.edu; Yaniv Ovadia, Department of Computer Science, Princeton University, Princeton, NJ 08544, ovadia@princeton.edu; James C. Bowden, EECS Department, UC Berkeley, Berkeley, CA 94720, jcbowden@berkeley.edu; Ryan P. Adams, Department of Computer Science, Princeton University, Princeton, NJ 08544, rpa@princeton.edu |
| Pseudocode | No | The paper does not contain any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | We will release our generic utilities via a public Python package. In the effort of minimizing the difficulty of reproducing the results detailed in this work, we will also publish an associated Dockerfile that can be used to build a cross-platform container image to reproduce any of the figures displayed in this paper. |
| Open Datasets | Yes | We evaluate our method on a subset of the ModelNet40 dataset (Wu et al., 2015), which consists of several thousand CAD models partitioned into 40 object categories. |
| Dataset Splits | No | The paper describes training on 'approximately 200 point cloud/convex hull pairs' and evaluation on '40 out-of-sample point clouds', but does not specify a separate validation dataset split from the test set. |
| Hardware Specification | Yes | We used a single Nvidia RTX 3080Ti GPU for differentiable rendering, with the wall clock time to optimize against a single image ranging from 5-15 minutes, depending on the number of rendering primitives used. [...] We trained on a single Nvidia RTX 3080Ti GPU using Adam with step size .001: training took 2 hours wall clock time. |
| Software Dependencies | No | The paper mentions using JAX and SIREN neural networks, but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | For all rendering experiments we optimized using Adam with a step size of .001. [...] We sampled m = 40 halfspace directions at initialization, and then use Fiber Monte Carlo to locally optimize the objective in (19) with respect to the parameters of the hypernetwork. After 200 stochastic gradient updates, we achieve 97.8% accuracy on the out of sample validation set: several examples are displayed in fig. 4. The hypernetwork used a pointwise MLP with three hidden layers of size 200, dimension-wise maximum as the symmetric aggregation, and an output MLP with three hidden layers of size 200. |
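
The estimator quoted in the Research Type row replaces point samples with line segments ("fibers") whose intersection with the integrand can be computed in closed form, which is what makes the estimate differentiable. Below is a minimal sketch of that idea in JAX (the framework the paper reports using), not the authors' implementation: it estimates the area of a disk in the unit square by averaging the lengths of its intersections with random horizontal fibers, then differentiates the estimate with respect to the radius. The disk test function and fiber geometry are illustrative assumptions.

```python
# A minimal sketch of the fiber Monte Carlo idea (not the authors' code).
# Estimate the area of a disk inside the unit square by averaging the
# lengths of its intersections with random horizontal segments ("fibers").
# Each intersection length is a closed-form, differentiable function of
# the radius, so gradients flow where point samples of the indicator
# would be zero almost everywhere.
import jax
import jax.numpy as jnp

def chord_length(y, radius, center_y=0.5):
    # Length of the intersection of the horizontal fiber at height y with
    # a disk of the given radius centered at (0.5, center_y). Assumes
    # radius < 0.5 so every chord stays inside the unit square.
    h2 = radius**2 - (y - center_y) ** 2
    inside = h2 > 0.0
    safe_h2 = jnp.where(inside, h2, 1.0)  # avoid NaN gradients from sqrt at 0
    return jnp.where(inside, 2.0 * jnp.sqrt(safe_h2), 0.0)

def fiber_mc_area(radius, key, num_fibers=10_000):
    # Fibers span the unit square horizontally, so the mean chord length
    # is an unbiased area estimate: conditional Monte Carlo over the
    # fiber's vertical position.
    ys = jax.random.uniform(key, (num_fibers,))
    return jnp.mean(chord_length(ys, radius))

key = jax.random.PRNGKey(0)
radius = 0.3
area, d_area = jax.value_and_grad(fiber_mc_area)(radius, key)
print(area, jnp.pi * radius**2)       # estimate vs. exact area
print(d_area, 2.0 * jnp.pi * radius)  # gradient vs. exact d(area)/d(radius)
```

Point sampling of the same indicator would give a correct area estimate but a zero gradient almost everywhere; averaging chord lengths keeps the estimate unbiased while exposing a useful gradient.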
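
The Experiment Setup row describes a permutation-invariant hypernetwork: a pointwise MLP with three hidden layers of size 200, a dimension-wise maximum as the symmetric aggregation, and an output MLP with three hidden layers of size 200. The plain-JAX sketch below follows those layer sizes; the input dimension, feature width, initialization, and the reading of the 40 outputs as one offset per sampled halfspace direction are our assumptions, not details confirmed by the quoted text.

```python
# A minimal sketch of the permutation-invariant hypernetwork described in
# the Experiment Setup row (pointwise MLP -> dimension-wise max -> output
# MLP). Layer sizes follow the paper; everything else is illustrative.
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    # He-style initialization for a stack of dense layers.
    keys = jax.random.split(key, len(sizes) - 1)
    return [
        (jax.random.normal(k, (m, n)) * jnp.sqrt(2.0 / m), jnp.zeros(n))
        for k, m, n in zip(keys, sizes[:-1], sizes[1:])
    ]

def mlp(params, x):
    for W, b in params[:-1]:
        x = jax.nn.relu(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def init_hypernet(key, point_dim=3, hidden=200, feat=200, out_dim=40):
    # out_dim=40 matches the m = 40 sampled halfspace directions; mapping
    # pooled features to one offset per direction is an assumption here.
    k1, k2 = jax.random.split(key)
    return {
        "pointwise": init_mlp(k1, [point_dim, hidden, hidden, hidden, feat]),
        "output": init_mlp(k2, [feat, hidden, hidden, hidden, out_dim]),
    }

def hypernet(params, points):
    # points: (num_points, point_dim). Apply the same MLP to every point,
    # then take a dimension-wise max so the result is permutation invariant.
    features = jax.vmap(lambda p: mlp(params["pointwise"], p))(points)
    pooled = jnp.max(features, axis=0)
    return mlp(params["output"], pooled)

params = init_hypernet(jax.random.PRNGKey(0))
cloud = jax.random.normal(jax.random.PRNGKey(1), (500, 3))
offsets = hypernet(params, cloud)  # one offset per halfspace direction
print(offsets.shape)               # (40,)
```

The dimension-wise max makes the output invariant to the ordering of points in the cloud, which is exactly the property the symmetric aggregation is there to provide.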
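
Finally, the quoted setup trains with Adam at step size .001 for 200 stochastic gradient updates. Continuing the sketch above, here is a hedged optax training loop; the squared-error loss against hypothetical target offsets is a placeholder only, since the paper's actual objective (its eq. 19) is a fiber Monte Carlo estimate that is not reproduced here.

```python
# Training-loop sketch matching the quoted setup: Adam, step size .001,
# 200 stochastic gradient updates. Reuses params, hypernet, and cloud
# from the sketch above; the loss is a placeholder, not the paper's
# objective.
import optax

optimizer = optax.adam(learning_rate=1e-3)
opt_state = optimizer.init(params)

@jax.jit
def train_step(params, opt_state, cloud, target_offsets):
    def loss_fn(p):
        pred = hypernet(p, cloud)
        return jnp.mean((pred - target_offsets) ** 2)  # placeholder loss
    loss, grads = jax.value_and_grad(loss_fn)(params)
    updates, opt_state = optimizer.update(grads, opt_state)
    params = optax.apply_updates(params, updates)
    return params, opt_state, loss

target = jnp.zeros(40)   # hypothetical targets, for illustration only
for step in range(200):  # "200 stochastic gradient updates"
    params, opt_state, loss = train_step(params, opt_state, cloud, target)
```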