Coordinate-Aware Modulation for Neural Fields
Authors: Joo Chan Lee, Daniel Rho, Seungtae Nam, Jong Hwan Ko, Eunbyung Park
ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that CAM enhances the performance of neural representations and improves learning stability across a range of signals. |
| Researcher Affiliation | Collaboration | Joo Chan Lee¹, Daniel Rho², Seungtae Nam¹, Jong Hwan Ko¹✉, Eunbyung Park¹✉; ¹Sungkyunkwan University, ²KT |
| Pseudocode | No | The paper describes its method mathematically and visually, but it does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper provides a link to a 'project page' (https://maincold2.github.io/cam/), but does not explicitly state that this page hosts the source code or provide a direct link to a code repository. |
| Open Datasets | Yes | We used Natural and Text image datasets (Tancik et al., 2020), synthetic (NeRF (Mildenhall et al., 2020), NSVF (Liu et al., 2020)), forward-facing (LLFF (Mildenhall et al., 2019)), real-world unbounded (360 (Barron et al., 2022)), the D-NeRF dataset (Pumarola et al., 2021), and the UVG dataset (Mercat et al., 2020). |
| Dataset Splits | No | The paper mentions using specific datasets for training and evaluation, and describes scenarios like training on a smaller image and evaluating on a larger one for image generalization. However, it does not provide explicit training/validation/test split percentages, sample counts, or direct references to standard split definitions within the paper's text. |
| Hardware Specification | No | The paper mentions 'GPU memory' in Table 11 but does not provide specific details on the hardware used, such as GPU models, CPU types, or other computer specifications for running experiments. |
| Software Dependencies | No | The paper mentions using the PyTorch framework for implementation but does not specify its version number or any other software dependencies with their respective versions. |
| Experiment Setup | Yes | The learning rate was set to 10⁻³ and we trained for 1500 iterations using the Adam optimizer. We used a 4-layer MLP with 64 channels as the baseline and also set the grid resolution to 64. The resolution of the grids (dx and dy) was set to 32×32. Each model was trained for 2000 iterations using the Adam optimizer. The learning rate was initially set to 10⁻³ and 10⁻² for neural networks and grids, respectively, multiplied by 0.1 at 1000 and 1500 iterations. The Gaussian scale factor was set to 10 and 14 for Natural and Text, respectively. (A minimal configuration sketch follows the table.) |
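
The Experiment Setup row lists concrete hyperparameters but no code. Below is a minimal, hypothetical PyTorch sketch of the 2D image-fitting configuration it describes (4-layer/64-channel MLP, 32×32 grids, Adam with learning rates 10⁻³ for the network and 10⁻² for the grids, decay by 0.1 at iterations 1000 and 1500, Gaussian Fourier features with scale 10). The feature dimension, the dummy data, and the grid variables are placeholder assumptions, and the CAM modulation itself is not reproduced here; this is not the authors' implementation.

```python
import math
import torch
import torch.nn as nn

# Gaussian Fourier-feature encoding of 2D coordinates (Tancik et al., 2020).
# Scale 10 matches the value reported for Natural images; 14 would be used for Text.
FEATURE_DIM = 128                      # assumed; not stated in the table
B = torch.randn(2, FEATURE_DIM) * 10.0

def encode(coords):
    """Map (N, 2) coordinates to (N, 2 * FEATURE_DIM) Fourier features."""
    proj = 2.0 * math.pi * coords @ B
    return torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)

class BaselineMLP(nn.Module):
    """4-layer MLP with 64 hidden channels, predicting RGB from encoded coords."""
    def __init__(self, in_dim=2 * FEATURE_DIM, hidden=64, out_dim=3, layers=4):
        super().__init__()
        mods, dim = [], in_dim
        for _ in range(layers - 1):
            mods += [nn.Linear(dim, hidden), nn.ReLU()]
            dim = hidden
        mods.append(nn.Linear(dim, out_dim))
        self.net = nn.Sequential(*mods)

    def forward(self, x):
        return self.net(x)

model = BaselineMLP()

# Modulation grids along each axis at the reported 32x32 resolution. The actual
# CAM lookup and modulation of intermediate features is omitted from this sketch.
grid_x = nn.Parameter(torch.ones(32, 32))
grid_y = nn.Parameter(torch.ones(32, 32))

# Separate learning rates (1e-3 for the network, 1e-2 for the grids),
# both decayed by 0.1 at iterations 1000 and 1500.
optimizer = torch.optim.Adam([
    {"params": model.parameters(), "lr": 1e-3},
    {"params": [grid_x, grid_y], "lr": 1e-2},
])
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[1000, 1500], gamma=0.1)

coords = torch.rand(4096, 2)      # dummy 2D coordinates in [0, 1)^2
targets = torch.rand(4096, 3)     # dummy RGB targets

for _ in range(2000):             # 2000 iterations, as reported
    loss = nn.functional.mse_loss(model(encode(coords)), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```

The split parameter groups and MultiStepLR milestones mirror the per-component learning rates and decay schedule quoted in the row; everything else (batch construction, loss weighting, the modulation pathway) would need to follow the paper itself.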