Generalised Implicit Neural Representations

Authors: Daniele Grattarola, Pierre Vandergheynst

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show experiments with our method on various real-world signals on non-Euclidean domains." "Results: In the experiments section, we show concrete examples of learning INRs for signals on graphs and manifolds, using real-world data from biology and social networks."
Researcher Affiliation | Academia | Daniele Grattarola, EPFL, Lausanne, Switzerland (daniele.grattarola@epfl.ch); Pierre Vandergheynst, EPFL, Lausanne, Switzerland (pierre.vandergheynst@epfl.ch)
Pseudocode | No | The paper describes its method in prose and equations, but does not contain a structured pseudocode or algorithm block.
Open Source Code | Yes | "The code to reproduce our results and the high-resolution version of all figures are available at https://github.com/danielegrattarola/GINR."
Open Datasets | Yes | Bunny: "We generate a texture on the Stanford bunny mesh using the Gray-Scott reaction-diffusion model..." (mesh available at https://graphics.stanford.edu/data/3Dscanrep/). Protein: "As a real-world domain, we consider the solvent excluded surface of a protein structure. The continuous signal is the value of the electrostatic field generated by the amino acid residues at the surface." (Protein Data Bank identifier: 1AA7). Data: "We collected data from the National Oceanic and Atmospheric Administration (NOAA) Operational Model Archive and Distribution System, specifically from the Global Forecast System (GFS)." (See the Gray-Scott sketch after the table.)
Dataset Splits | Yes | "Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? [Yes] At the beginning of Section 4 and in Section 4.1." "We report in Tab. 2 the R² for a held-out set of nodes (to evaluate whether the INRs are overfitting instead of learning a meaningful representation)." (See the R² sketch after the table.)
Hardware Specification | Yes | "We ran all experiments on an Nvidia Tesla V100 GPU."
Software Dependencies | No | The paper mentions a SIREN multi-layer perceptron and Adam as software components, but does not provide version numbers for these or any other libraries or frameworks.
Experiment Setup | Yes | "Setting: We implement the generalised INR as a SIREN multi-layer perceptron [45]. The model has 6 layers with 512 hidden neurons and a skip connection from the input to the middle layer. We use the same hyperparameters and initialisation scheme suggested by Sitzmann et al. [45]. We train the model using Adam [27] with a learning rate of 10^-4 and an annealing schedule that halves the learning rate if the loss does not improve for 1000 steps. At each step, we sample 5000 nodes randomly from the graph as a mini-batch. We use spectral embeddings of size k = 100..." (See the training sketch after the table.)
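
The Open Datasets row mentions texturing the Stanford bunny with the Gray-Scott reaction-diffusion model. Below is a minimal sketch of Gray-Scott dynamics run directly on a mesh graph; the parameter values (Du, Dv, F, kappa, dt, step count) and the random seeding are illustrative assumptions, not settings taken from the paper.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian


def gray_scott(adj, steps=10_000, Du=0.16, Dv=0.08, F=0.060, kappa=0.062,
               dt=1.0, seed=0):
    """Gray-Scott reaction-diffusion on a graph given its adjacency matrix."""
    n = adj.shape[0]
    lap = laplacian(adj.astype(float))  # combinatorial Laplacian L = D - A
    rng = np.random.default_rng(seed)
    u, v = np.ones(n), np.zeros(n)
    seeds = rng.choice(n, size=max(1, n // 100), replace=False)
    u[seeds], v[seeds] = 0.5, 0.25  # local perturbation to break symmetry
    for _ in range(steps):
        uvv = u * v * v
        # Diffusion uses -L because L = D - A is positive semidefinite;
        # dt may need to be reduced for stability on dense meshes.
        u += dt * (-Du * lap.dot(u) - uvv + F * (1.0 - u))
        v += dt * (-Dv * lap.dot(v) + uvv - (F + kappa) * v)
    return v  # one channel of the resulting "texture" signal
```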
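The Dataset Splits row reports R² on a held-out set of nodes. A minimal sketch of that evaluation, assuming a uniformly random node split; the held-out fraction is not quoted, so 20% here is an illustrative choice.

```python
import numpy as np


def split_nodes(num_nodes, test_frac=0.2, seed=0):
    """Boolean train/held-out masks over the graph's nodes."""
    rng = np.random.default_rng(seed)
    held_out = rng.random(num_nodes) < test_frac
    return ~held_out, held_out


def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

Training would use only the nodes in the train mask; R² on the held-out mask then indicates whether the INR generalises beyond the nodes it was fitted on, which is the overfitting check quoted above.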
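The Experiment Setup row is detailed enough for a rough reconstruction. The PyTorch sketch below follows the quoted description (SIREN MLP, 6 layers, 512 hidden units, skip connection from the input to the middle layer, Adam at 10^-4, plateau-based halving with patience 1000, mini-batches of 5000 nodes, spectral embeddings with k = 100), but several details are assumptions: w0 = 30 and the weight initialisation come from Sitzmann et al., "6 layers" is read as six sine layers before a linear output head, and the total step count is arbitrary.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh


def spectral_embeddings(adj, k=100):
    """First k Laplacian eigenvectors, used as input coordinates for the INR."""
    lap = laplacian(adj.astype(np.float64))
    _, eigvecs = eigsh(lap, k=k, which="SM")  # smallest eigenvalues first
    return eigvecs.astype(np.float32)  # shape: (num_nodes, k)


class SineLayer(nn.Module):
    """Linear map followed by sin(w0 * x), initialised as in Sitzmann et al."""

    def __init__(self, in_dim, out_dim, w0=30.0, first=False):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.w0 = w0
        bound = 1.0 / in_dim if first else np.sqrt(6.0 / in_dim) / w0
        nn.init.uniform_(self.linear.weight, -bound, bound)

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))


class GeneralisedINR(nn.Module):
    """Six sine layers with the input re-injected at the middle layer."""

    def __init__(self, in_dim=100, hidden=512, out_dim=1):
        super().__init__()
        self.first_half = nn.Sequential(
            SineLayer(in_dim, hidden, first=True),
            SineLayer(hidden, hidden),
            SineLayer(hidden, hidden),
        )
        self.second_half = nn.Sequential(
            SineLayer(hidden + in_dim, hidden),  # skip connection enters here
            SineLayer(hidden, hidden),
            SineLayer(hidden, hidden),
        )
        self.out = nn.Linear(hidden, out_dim)

    def forward(self, e):
        h = self.first_half(e)
        h = self.second_half(torch.cat([h, e], dim=-1))
        return self.out(h)


def train(model, emb, target, steps=20_000, batch_size=5000, device="cuda"):
    """Fit the INR: emb is (num_nodes, k), target is (num_nodes, out_dim)."""
    model = model.to(device)
    emb = torch.as_tensor(emb, device=device)
    target = torch.as_tensor(target, dtype=emb.dtype, device=device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Halve the learning rate when the loss has not improved for 1000 steps.
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5,
                                                       patience=1000)
    for _ in range(steps):
        idx = torch.randint(0, emb.shape[0], (batch_size,), device=device)
        loss = nn.functional.mse_loss(model(emb[idx]), target[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()
        sched.step(loss.item())
    return model
```

The key design point from the paper's setup is that the network consumes spectral node embeddings rather than Euclidean coordinates, which is what makes the INR well defined on graphs and manifolds.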