On the Frequency-bias of Coordinate-MLPs

Authors: Sameera Ramasinghe, Lachlan E. MacDonald, Simon Lucey

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "8 Experiments: In this section, we will show that the insights developed thus far extend well to deep networks in practice. 8.1 Encoding signals with uneven sampling ... Fig. 3 shows an example for encoding a 1D signal. Fig. 4 illustrates a qualitative example in encoding a 2D image. Table 1 depicts quantitative results on the natural dataset by Tancik ... Table 2: Encoding images with sparse sampling. ... Table 3: Quantitative comparison in novel view synthesis on the real synthetic dataset [Mildenhall et al., 2020]." (A minimal model sketch follows the table.)
Researcher Affiliation | Academia | Sameera Ramasinghe, Lachlan MacDonald, Simon Lucey; University of Adelaide; {firstname.lastname}@adelaide.edu.au
Pseudocode | No | The paper does not contain any blocks explicitly labeled "Pseudocode" or "Algorithm".
Open Source Code | No | From the paper's reproducibility checklist: "3. If you ran experiments... (a) Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No]"
Open Datasets | Yes | "Table 1 depicts quantitative results on the natural dataset by Tancik et al. [2020]... Table 2: ... over the STL dataset [Coates et al., 2011] and a sub-sampled version of ImageNet with 10% sampling."
Dataset Splits | No | No explicit percentages or counts for train/validation/test splits are provided, nor is a specific validation set described.
Hardware Specification | No | The paper does not provide specific hardware details such as the GPU/CPU models used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | "This example uses a 4-layer sinusoid-MLP trained with 33% of the total samples. ... The total loss function for the coordinate-MLP then becomes L_total = L_MSE + εL_r, where ε is a small scalar coefficient and L_MSE is the usual mean squared error loss. ... We use 4-layer networks for this experiment." (Hedged sketches of the model and loss follow below.)
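The setup quoted above names a 4-layer sinusoid-MLP trained on 33% of the samples, but the excerpt gives no widths, initialization, or optimizer settings. Below is a minimal sketch assuming a SIREN-style network with sine activations; the hidden width (256), frequency scale omega_0 = 30, learning rate, and toy target signal are all illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class SinusoidMLP(nn.Module):
    """4-layer coordinate MLP with sine activations (SIREN-style).

    Only the depth (4 layers) comes from the paper excerpt; the hidden
    width and omega_0 below are illustrative assumptions.
    """
    def __init__(self, in_dim=1, hidden=256, out_dim=1, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        dims = [in_dim, hidden, hidden, hidden]
        self.hidden_layers = nn.ModuleList(
            nn.Linear(dims[i], dims[i + 1]) for i in range(3)
        )
        self.out = nn.Linear(hidden, out_dim)

    def forward(self, x):
        for layer in self.hidden_layers:
            x = torch.sin(self.omega_0 * layer(x))
        return self.out(x)

# Fit a toy 1D signal from a sparse 33% subset of coordinates,
# mirroring the "trained with 33% of the total samples" setting.
coords = torch.linspace(-1, 1, 300).unsqueeze(1)
signal = torch.sin(4 * torch.pi * coords)  # hypothetical target signal
mask = torch.rand(len(coords)) < 0.33      # uneven/sparse sampling

model = SinusoidMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(2000):
    opt.zero_grad()
    loss = ((model(coords[mask]) - signal[mask]) ** 2).mean()
    loss.backward()
    opt.step()
```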
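The quoted total loss L_total = L_MSE + εL_r pairs the MSE term with a regularizer L_r scaled by a small ε, but the excerpt does not define L_r. The sketch below therefore uses a hypothetical gradient-norm penalty purely as a stand-in; both `eps` and the form of `l_r` are assumptions.

```python
import torch

def total_loss(model, coords, targets, eps=1e-4):
    """L_total = L_MSE + eps * L_r, per the quoted setup.

    The excerpt does not specify L_r; the coordinate-gradient penalty
    below is a hypothetical placeholder, not the paper's regularizer.
    """
    coords = coords.detach().requires_grad_(True)
    pred = model(coords)
    l_mse = ((pred - targets) ** 2).mean()
    # Hypothetical L_r: penalize the output's sensitivity to coordinates.
    grad = torch.autograd.grad(pred.sum(), coords, create_graph=True)[0]
    l_r = (grad ** 2).mean()
    return l_mse + eps * l_r
```

Keeping ε small prevents the regularizer from dominating the reconstruction term, consistent with the paper's description of ε as a small scalar coefficient.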