Implicit Representations via Operator Learning

Authors: Sourav Pal, Harshavardhan Adepu, Clinton Wang, Polina Golland, Vikas Singh

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We first check the representation capability of O-INRs relative to standard INRs. We evaluate performance on 2D images as well as 3D volumes. Additionally, we show that our proposed model can handle inverse problems such as image denoising. For 2D images, we use images from several sources including Agustsson & Timofte (2017), the Kodak Image Suite, scikit-image, etc.
Researcher Affiliation | Academia | ¹University of Wisconsin–Madison, ²Massachusetts Institute of Technology. Correspondence to: Sourav Pal <spal9@wisc.edu>.
Pseudocode | No | No pseudocode or clearly labeled algorithm block was found in the paper.
Open Source Code | Yes | Our code is available at https://github.com/vsingh-group/oinr.
Open Datasets | Yes | For 2D images, we use images from several sources including Agustsson & Timofte (2017), the Kodak Image Suite, scikit-image, etc. For 3D volumes, we use data from the Stanford 3D Scanning Repository and Saragadam et al. (2022; 2023). We trained O-INR on 100 randomly sampled videos from the UCF-101 dataset (Soomro et al., 2012) and 300 randomly sampled GIFs from the TGIF dataset (Li et al., 2016). For both MNIST and Fashion-MNIST... We consider MRI data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) (Mueller et al., 2005; Jack Jr et al., 2008). CelebA dataset (Liu et al., 2015).
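Several of these sources are available through standard Python tooling. As a point of reference, here is a minimal sketch (our illustration, not the paper's pipeline) of pulling a scikit-image sample and flattening it into the coordinate/intensity pairs an INR is typically fit on; the choice of `data.camera()` and the [-1, 1] coordinate normalization are assumptions.

```python
import numpy as np
from skimage import data  # ships with sample images of the kind cited above

# Grayscale sample image, scaled to [0, 1]; any of the cited 2D sources works.
img = data.camera().astype(np.float32) / 255.0           # (H, W)
h, w = img.shape

# One (y, x) coordinate per pixel, normalized to [-1, 1] (our convention).
ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
coords = np.stack([ys.ravel(), xs.ravel()], axis=-1)     # (H*W, 2)
targets = img.reshape(-1, 1)                             # (H*W, 1) intensities
```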
Dataset Splits | No | The paper describes training on various data (e.g., lower-resolution images, sampled frames, missing slices) and reports train and test results (e.g., Table 3's 'Train MSE' and 'Test MSE'), but it does not explicitly define a separate validation split with percentages or counts in the context of reproducibility.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running experiments.
Software Dependencies | No | The paper mentions software tools like 'FreeSurfer (Fischl, 2012)' and the 'SnPM (Statistical Non-Parametric Mapping) toolbox (Ashburner, 2010)' but does not provide specific version numbers for these or any other software dependencies crucial for reproducibility.
Experiment Setup | Yes | For training O-INR, we used a learning rate of 0.0005 for 1000 epochs, and the number of sinusoidal frequencies used for each dimension was 20 (10 from sin and 10 from cos). The O-INR model required 100k parameters to achieve comparable performance, whereas baseline methods required 130k parameters (Appendix B). An initial learning rate of 0.001 was used alongside a Cosine Annealing scheduler with a minimum learning rate of 5×10⁻⁴ and a maximum of 10000 steps (Appendix I).
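The quoted numbers pin down most of a training configuration. Below is a minimal PyTorch sketch of that setup; the dyadic frequency spacing, the placeholder MLP (the paper's O-INR is an operator-learning model, not this network), and the stand-in coordinate batch are assumptions for illustration, while the learning rates, epoch count, frequency counts, and scheduler settings are taken from the text above.

```python
import torch
import torch.nn as nn

def sinusoidal_features(coords: torch.Tensor, n_freqs: int = 10) -> torch.Tensor:
    """20 sinusoidal frequencies per input dimension: n_freqs sin + n_freqs cos.
    The dyadic 2**k frequency spacing is an assumption; the paper fixes only the counts."""
    freqs = 2.0 ** torch.arange(n_freqs, dtype=coords.dtype)  # (n_freqs,)
    angles = coords.unsqueeze(-1) * freqs                     # (..., D, n_freqs)
    return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1).flatten(-2)

# Placeholder network standing in for the model (input dim: 2 coords * 20 features).
model = nn.Sequential(nn.Linear(40, 256), nn.GELU(), nn.Linear(256, 1))

# Stand-in data: a batch of normalized 2D coordinates with target intensities.
coords = torch.rand(4096, 2) * 2 - 1
targets = torch.rand(4096, 1)

# Main setup: learning rate 0.0005 for 1000 epochs.
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
for epoch in range(1000):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(sinusoidal_features(coords)), targets)
    loss.backward()
    optimizer.step()

# Appendix I variant: initial lr 0.001, cosine-annealed to 5e-4 over 10000 steps.
opt_b = torch.optim.Adam(model.parameters(), lr=1e-3)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt_b, T_max=10000, eta_min=5e-4)
```

The two optimizer configurations correspond to different experiments in the paper (the main fitting runs versus the Appendix I schedule), so they are shown side by side rather than combined.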