A Learnable Radial Basis Positional Embedding for Coordinate-MLPs

Authors: Sameera Ramasinghe, Simon Lucey

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the effectiveness of our approach across a number of signal reconstruction tasks in comparison to leading hard-coded methods such as Random Fourier Frequencies (RFF), made popular in (Mildenhall et al. 2020). We validate the efficacy of our embedder over the popular RFF embedder across various tasks, and show that ours yields better fidelity and stability under different training conditions. Figures 1-7 and Tables 1-3 present empirical results, including PSNR and SSIM values, comparing different methods and conditions.
Researcher Affiliation | Collaboration | Sameera Ramasinghe (1) and Simon Lucey (2); (1) Amazon, (2) University of Adelaide; ramasisa@amazon.com
Pseudocode | No | The paper describes its methods through prose and mathematical equations but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the methodology is openly available.
Open Datasets | Yes | First, we pick 25 random rows from the natural 2D images released by (Tancik et al. 2020) to create a small dataset of 1D signals, each with 512 pixels. For evaluating the embedders on 3D signals, we utilize NeRF-style 3D scenes. The quantitative results are shown in Table 3. RFF (matched) uses log-linear sampling of frequencies with a maximum frequency of 2^10; the two RFF (unmatched) variants use 2^15 and 2^8 as the maximum frequency component, respectively. The Realistic Synthetic dataset (Mildenhall et al. 2020) is also used.
Dataset Splits | Yes | For each signal, we sample 256 points with an interval of one as the training set, and the rest of the points as the testing set. Table 2 mentions '25% tr. data (regular)', '10% tr. data (regular)', '25% tr. data (random)', and '10% tr. data (random)' as training conditions, implying the remaining percentage is used for testing. A separate 'validation' split is not explicitly mentioned, but the train/test splits are detailed.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions software components such as ReLU activations and the Adam optimizer but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | Further, all the MLPs use ReLU activations, and are trained using the Adam optimizer with a learning rate of 1e-4 and a weight decay of 1e-8. We also observe that L = 10 is enough to provide adequate results.
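The RFF (matched) baseline and the 1D train/test split described above can be sketched as follows. This is a minimal numpy illustration, not the authors' code: the exact frequency-sampling scheme and the precise meaning of "an interval of one" (taken here as every other pixel of a 512-pixel signal) are assumptions.

```python
import numpy as np

def rff_embed(x, n_freqs=10, max_log2_freq=10):
    """RFF-style positional embedding of 1D coordinates.

    Frequencies lie on a log-linear grid up to 2**max_log2_freq,
    mirroring the report's description of RFF (matched) with maximum
    frequency 2^10 (a sketch; the paper's sampling may differ)."""
    freqs = 2.0 ** np.linspace(0.0, max_log2_freq, n_freqs)  # log-linear grid
    angles = 2.0 * np.pi * np.outer(x, freqs)                # shape (N, n_freqs)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

# A 512-pixel 1D signal, split as described: 256 training points taken
# with an interval of one pixel between samples, the rest held out.
x = np.linspace(0.0, 1.0, 512)
train_idx = np.arange(0, x.size, 2)   # 256 training coordinates
test_idx = np.arange(1, x.size, 2)    # 256 testing coordinates
emb = rff_embed(x[train_idx])
print(emb.shape)                      # (256, 20)
```

The embedded coordinates, rather than the raw ones, would then be fed to the ReLU MLP.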
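The Experiment Setup row quotes Adam with a learning rate of 1e-4 and a weight decay of 1e-8. A single Adam update with those hyperparameters can be sketched as below; treating weight decay as an L2 penalty folded into the gradient (rather than decoupled, AdamW-style decay) is an assumption, since the paper does not specify the variant.

```python
import numpy as np

def adam_step(p, g, m, v, t, lr=1e-4, wd=1e-8, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update with L2 weight decay.

    lr=1e-4 and wd=1e-8 follow the hyperparameters quoted in the report;
    b1, b2, and eps are Adam's common defaults (assumed, not stated)."""
    g = g + wd * p                      # L2 weight decay folded into gradient
    m = b1 * m + (1 - b1) * g           # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g * g       # second-moment (uncentered var) estimate
    m_hat = m / (1 - b1 ** t)           # bias correction for step t (1-indexed)
    v_hat = v / (1 - b2 ** t)
    p = p - lr * m_hat / (np.sqrt(v_hat) + eps)
    return p, m, v

p, m, v = 1.0, 0.0, 0.0
p, m, v = adam_step(p, g=1.0, m=m, v=v, t=1)
print(p)   # ~0.9999: the first step moves by ~lr regardless of gradient scale
```

The scale-invariance of the first step (the update magnitude is roughly lr, whatever the gradient's size) is one reason Adam is a common default for coordinate-MLP training.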