CoordX: Accelerating Implicit Neural Representation with a Split MLP Architecture
Authors: Ruofan Liang, Hongyi Sun, Nandita Vijaykumar
ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we evaluate the performance and efficiency of CoordX for several signal fitting tasks, including images, videos, and 3D shapes. |
| Researcher Affiliation | Academia | ¹Department of Computer Science, University of Toronto; ²Vector Institute, Canada |
| Pseudocode | No | No pseudocode or clearly labeled algorithm block is present. |
| Open Source Code | No | The paper contains no explicit statement of, or link to, open-source code for the described methodology. |
| Open Datasets | Yes | We randomly select 12 center-cropped images from DIV2K dataset (Agustsson & Timofte, 2017) to report the average PSNR. |
| Dataset Splits | No | No explicit training/test/validation dataset splits (e.g., percentages or sample counts) are provided. |
| Hardware Specification | Yes | All models are implemented on PyTorch (Paszke et al., 2019) and evaluated using an NVIDIA RTX3090 GPU. |
| Software Dependencies | No | The paper mentions PyTorch and the Adam optimizer but does not specify version numbers for these or for any other software dependencies. |
| Experiment Setup | Yes | Unless otherwise specified, our experiments use 5-layer MLPs. The CoordX models have two FC layers after the fusion operation (Df = 2). Model hyperparameters such as learning rate, batch size, number of epochs, etc., are the same as those used in the corresponding baseline coord-MLPs. ... We use the Adam optimizer (Kingma & Ba, 2014) during training. *(A hedged sketch of this setup follows the table.)* |
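
To make the quoted setup concrete, below is a minimal PyTorch sketch of a CoordX-style split MLP fit to a 2D image. It is assembled only from the details quoted above (a 5-layer MLP, per-axis coordinate branches, a fusion operation followed by Df = 2 FC layers, and Adam training); the class name `SplitMLP`, the hidden width, the learning rate, and the additive broadcast fusion are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class SplitMLP(nn.Module):
    """Illustrative CoordX-style split MLP for fitting a 2D image.

    Each coordinate axis (x, y) gets its own branch of FC layers; the
    per-axis features are then fused (here by broadcast addition, an
    assumption) and passed through Df = 2 final FC layers. With 3
    branch layers and 2 fused layers, the total depth matches the
    5-layer MLPs described in the paper.
    """

    def __init__(self, hidden=256, out_dim=3, branch_layers=3, fused_layers=2):
        super().__init__()

        def branch():
            layers = [nn.Linear(1, hidden), nn.ReLU()]
            for _ in range(branch_layers - 1):
                layers += [nn.Linear(hidden, hidden), nn.ReLU()]
            return nn.Sequential(*layers)

        self.branch_x = branch()
        self.branch_y = branch()
        fused = []
        for _ in range(fused_layers - 1):
            fused += [nn.Linear(hidden, hidden), nn.ReLU()]
        fused += [nn.Linear(hidden, out_dim)]
        self.fused = nn.Sequential(*fused)

    def forward(self, xs, ys):
        # xs: (W, 1) x-coordinates; ys: (H, 1) y-coordinates.
        fx = self.branch_x(xs)  # (W, hidden)
        fy = self.branch_y(ys)  # (H, hidden)
        # Fuse per-axis features into a dense (H, W) grid via broadcasting.
        grid = fx.unsqueeze(0) + fy.unsqueeze(1)  # (H, W, hidden)
        return self.fused(grid)  # (H, W, out_dim)

# Usage: fit a target image with Adam, as in the quoted setup.
# The image and learning rate are placeholders for illustration.
H, W = 64, 64
xs = torch.linspace(-1, 1, W).unsqueeze(-1)
ys = torch.linspace(-1, 1, H).unsqueeze(-1)
target = torch.rand(H, W, 3)  # placeholder image
model = SplitMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = ((model(xs, ys) - target) ** 2).mean()
    loss.backward()
    opt.step()
```

The efficiency property this sketch illustrates is that each coordinate branch evaluates only H + W axis values instead of H × W full coordinate pairs; the per-pixel cost is confined to the two FC layers after the fusion operation.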