Implicit Neural Representations with Periodic Activation Functions

Authors: Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, Gordon Wetzstein

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate that these networks, dubbed sinusoidal representation networks or SIRENs, are ideally suited for representing complex natural signals and their derivatives. We analyze SIREN activation statistics to propose a principled initialization scheme and demonstrate the representation of images, wavefields, video, sound, three-dimensional shapes, and their derivatives. Further, we show how SIRENs can be leveraged to solve challenging boundary value problems..." (Abstract) and "In this section, we leverage SIRENs to solve challenging boundary value problems using different types of supervision of the derivatives of Φ. We first solve the Poisson equation via direct supervision of its derivatives." (Section 4, Experiments). A sketch of the architecture and its initialization appears after this table.
Researcher Affiliation | Academia | Vincent Sitzmann (sitzmann@cs.stanford.edu), Julien N. P. Martel (jnmartel@stanford.edu), Alexander W. Bergman (awb@stanford.edu), David B. Lindell (lindell@stanford.edu), Gordon Wetzstein (gordon.wetzstein@stanford.edu), Stanford University
Pseudocode | No | The paper describes the architecture and its mathematical formulation but does not present pseudocode or algorithm blocks.
Open Source Code | Yes | "All code and data is publicly available on the project webpage." (footnote: https://vsitzmann.github.io/siren/)
Open Datasets | Yes | "We replicated the experiment from [57] on the CelebA dataset [56] using a set encoder."
Dataset Splits | No | The paper mentions a 'test set' for CelebA but gives no specifics on the train/validation splits (percentages, counts, or an explicit reference to predefined splits).
Hardware Specification | No | The paper mentions fitting images on a 'modern GPU' but does not name a specific model (e.g., NVIDIA A100, RTX 3090) or give other hardware details such as CPU or memory.
Software Dependencies | No | The paper mentions the Adam optimizer and implicitly uses PyTorch (via citation), but it does not list any software components with version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA x.x) needed to replicate the experiments.
Experiment Setup | No | Some training details are given (the Adam optimizer, the initialization parameter ω₀ = 30, and a minibatch sampling strategy), but the paper does not report a complete setup: learning rate, batch size, number of epochs, and other standard hyperparameters are missing. A hedged training sketch follows this table.
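
To make the architecture and initialization rows concrete, the following is a minimal PyTorch sketch of a SIREN as described in the paper: each layer computes sin(ω₀ · (Wx + b)), the first layer's weights are drawn from U(-1/n, 1/n), and later layers from U(-√(6/n)/ω₀, √(6/n)/ω₀) with ω₀ = 30. The hidden width, depth, and linear output head below are illustrative assumptions, not values confirmed by the excerpts above.

```python
import math
import torch
from torch import nn

class SineLayer(nn.Module):
    """One SIREN layer: x -> sin(omega_0 * (W x + b)),
    initialized per the paper's principled scheme."""

    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                # First layer: U(-1/n, 1/n) spans the input over several periods of sin.
                bound = 1.0 / in_features
            else:
                # Later layers: U(-sqrt(6/n)/omega_0, sqrt(6/n)/omega_0) keeps
                # pre-activation statistics stable through depth.
                bound = math.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))


class Siren(nn.Module):
    """Small coordinate MLP; width and depth here are illustrative choices."""

    def __init__(self, in_features=2, hidden=256, depth=3, out_features=1, omega_0=30.0):
        super().__init__()
        layers = [SineLayer(in_features, hidden, omega_0, is_first=True)]
        for _ in range(depth - 1):
            layers.append(SineLayer(hidden, hidden, omega_0))
        layers.append(nn.Linear(hidden, out_features))  # linear output head (assumption)
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)
```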
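
The Experiment Setup row confirms only the optimizer (Adam), so the fitting loop below is a hedged sketch rather than the authors' training script: the learning rate, step count, and full-batch sampling of a toy 1D signal are placeholder assumptions standing in for the paper's image, audio, and shape experiments.

```python
import torch

def fit_signal(model, coords, targets, lr=1e-4, steps=2000):
    """Fit an implicit representation by MSE regression on (coordinate, value)
    pairs. Adam is confirmed by the paper; lr and steps are assumed."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        loss = ((model(coords) - targets) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

# Toy usage with the Siren class sketched above: fit y = sin(10x) on [-1, 1].
coords = torch.linspace(-1.0, 1.0, 512).unsqueeze(-1)
targets = torch.sin(10.0 * coords)
model = fit_signal(Siren(in_features=1, out_features=1), coords, targets)
```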