Implicit Neural Representations and the Algebra of Complex Wavelets

Authors: T Mitchell Roddenberry, Vishwanath Saragadam, Maarten V. de Hoop, Richard Baraniuk

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically demonstrate the difference in performance for INRs initialized at random and INRs initialized in accordance with the singularities in the target signal. We evaluate this empirically on the Kodak Lossless True Color Image Suite (kod, 1999).
Researcher Affiliation | Academia | T. Mitchell Roddenberry, Vishwanath Saragadam, Maarten V. de Hoop, Richard G. Baraniuk; Rice University, Houston, TX, USA; {mitch,mvd2,richb}@rice.edu, vishwanath.saragadam@ucr.edu. Now affiliated with UC Riverside.
Pseudocode | No | The paper provides mathematical formulations and architectural descriptions, but it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include any statement about releasing source code, nor does it provide a link to a code repository or mention code in supplementary materials.
Open Datasets | Yes | We evaluate this empirically on the Kodak Lossless True Color Image Suite (kod, 1999). Kodak lossless true color image suite. http://r0k.us/graphics/kodak/, 1999. Accessed: 2022-11-09.
Dataset Splits | No | The paper describes fitting models to a 1D test signal or to images from the Kodak dataset, but it does not specify explicit training, validation, and test splits with percentages, sample counts, or references to predefined splits.
Hardware Specification | No | The paper does not provide specific hardware details, such as exact GPU/CPU models, processor types, or memory amounts, used to run the experiments.
Software Dependencies | No | The paper mentions optimizers and other techniques but does not specify software dependencies, such as libraries or frameworks with version numbers, required for reproduction.
Experiment Setup | Yes | All architectures are trained for a total of 4000 epochs using AMSgrad (Reddi et al., 2018) to minimize the mean-squared error between the real part of the INR and the target signal on n = 512 uniformly spaced points in the interval [−2, 2]. Split architectures are trained by first fitting the scaling network to the target signal for 2000 epochs, then fitting the scaling and wavelet networks simultaneously for 2000 additional epochs. Architectures that do not use a scaling network are trained for 4000 epochs. (See also Table 1 for F1, L, etc., and Appendix E for details.)
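Since the paper releases no code, the training recipe quoted above can only be sketched. The following toy reconstruction follows the stated setup (AMSgrad minimizing the MSE between the real part of a complex-valued model and a 1D target on n = 512 uniform points in [−2, 2], 4000 epochs), but the model is a stand-in: fixed random Fourier features with complex weights rather than the paper's INR architecture, and the target signal, learning rate, and feature count are all assumptions. AMSgrad is implemented by hand so the update rule is explicit.

```python
import numpy as np

# Hedged sketch of the quoted training setup. The model here is a toy
# stand-in (fixed random Fourier features with complex weights), NOT the
# paper's INR; frequencies, target, and hyperparameters are assumptions.
rng = np.random.default_rng(0)
n, K, lr, epochs = 512, 64, 1e-2, 4000
b1, b2, eps = 0.9, 0.999, 1e-8          # standard Adam/AMSgrad constants

x = np.linspace(-2.0, 2.0, n)           # n = 512 uniform points in [-2, 2]
y = np.abs(x)                           # toy target with a singularity at 0

w = rng.normal(scale=4.0, size=K)       # fixed random frequencies (assumed)
Phi = np.exp(1j * np.outer(x, w))       # complex feature matrix, shape (n, K)

theta = np.zeros(2 * K)                 # [Re(c); Im(c)] stacked as real params
m = np.zeros_like(theta)                # first-moment estimate
v = np.zeros_like(theta)                # second-moment estimate
vhat = np.zeros_like(theta)             # running max of v (the AMSgrad twist)

def loss_and_grad(theta):
    """MSE between Re(Phi @ c) and the target, plus its gradient."""
    c_re, c_im = theta[:K], theta[K:]
    pred = Phi.real @ c_re - Phi.imag @ c_im   # Re(Phi @ c)
    r = pred - y
    loss = np.mean(r ** 2)
    g_re = (2.0 / n) * (Phi.real.T @ r)
    g_im = -(2.0 / n) * (Phi.imag.T @ r)
    return loss, np.concatenate([g_re, g_im])

losses = []
for _ in range(epochs):
    loss, g = loss_and_grad(theta)
    losses.append(loss)
    # AMSgrad update: Adam, but normalized by the elementwise max of v so far
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g ** 2
    vhat = np.maximum(vhat, v)
    theta -= lr * m / (np.sqrt(vhat) + eps)

print(f"initial MSE {losses[0]:.4f} -> final MSE {losses[-1]:.6f}")
```

The paper's split schedule (2000 epochs on the scaling network alone, then 2000 more on scaling and wavelet networks jointly) would correspond to running two such loops over different parameter subsets; the sketch collapses this into a single full-batch loop for brevity.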