Hybrid Neural Representations for Spherical Data

Authors: Hyomin Kim, Yunhui Jang, Jaeho Lee, Sungsoo Ahn

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We extensively verify the effectiveness of our HNeR-S for regression, super-resolution, temporal interpolation, and compression tasks.
Researcher Affiliation | Academia | Pohang University of Science and Technology. Correspondence to: Hyomin Kim <hyomin126@postech.ac.kr>, Sungsoo Ahn <sungsoo.ahn@postech.ac.kr>.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | We will release our code upon acceptance.
Open Datasets | Yes | Weather and climate data are gathered from the ECMWF Reanalysis 5th Generation (ERA5) archive (Hersbach et al., 2018), from which data can be downloaded directly using the Climate Data Store API (Buontempo et al., 2020); a minimal download sketch appears below the table. The CMB temperature data come from the Planck Public Data Release 1 (PR1) Mission Science Maps at the NASA/IPAC Infrared Science Archive (IRSA): https://irsa.ipac.caltech.edu/data/Planck/release_1/all-sky-maps/
Dataset Splits | Yes | Regression: the framework is first evaluated on the regression task, with the data split into train, validation, and test sets in a 6:2:2 proportion (see the split sketch below the table).
Hardware Specification | Yes | We conduct all experiments using a single RTX 3090 GPU.
Software Dependencies | No | The paper mentions software such as the AdamW optimizer (Loshchilov & Hutter, 2019), the Adam optimizer (Kingma & Ba, 2015), and the C3 framework (Kim et al., 2024), but it does not specify version numbers for these or for general libraries such as Python, PyTorch, or CUDA, which are necessary for a reproducible setup.
Experiment Setup | Yes | We employ a consistent architecture: a 4-layer multi-layer perceptron (MLP) with 256 units in each hidden layer. Detailed settings for the compression experiment are provided in Section 4.1. We train our model using the weighted RMSE... with a learning rate of 1e-5. For the compression task... with a learning rate of 1e-2. Tables 6 and 7 provide detailed hyperparameters for different tasks, including Level (L), parameter dimension (d), scaling factor (γ), and base resolution (see the MLP and resolution-schedule sketches below the table).
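
For context on the Open Datasets row: the ERA5 archive is served through the Climate Data Store (CDS). Below is a minimal download sketch using the cdsapi Python client; the dataset name, variable, and dates are illustrative placeholders, not the paper's exact request.

```python
# Minimal sketch: fetching ERA5 reanalysis data through the Climate Data
# Store (CDS) API. Requires a free CDS account and ~/.cdsapirc credentials.
# Dataset name, variable, and dates are illustrative, not the paper's
# exact request.
import cdsapi

client = cdsapi.Client()
client.retrieve(
    "reanalysis-era5-single-levels",    # ERA5 single-level fields
    {
        "product_type": "reanalysis",
        "variable": "2m_temperature",   # example variable
        "year": "2018",
        "month": "01",
        "day": "01",
        "time": "00:00",
        "format": "netcdf",
    },
    "era5_sample.nc",                   # local output file
)
```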
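The 6:2:2 split quoted in the Dataset Splits row can be realized in a few lines. The sketch below assumes index-based splitting along the sample axis with a fixed shuffling seed; both are assumptions, as the paper does not state how the split is drawn.

```python
# Sketch of a 6:2:2 train/valid/test split over sample indices.
# Index-based splitting and the fixed seed are assumptions.
import numpy as np

def split_622(n_samples: int, seed: int = 0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(0.6 * n_samples)
    n_valid = int(0.2 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_valid], idx[n_train + n_valid:]

train_idx, valid_idx, test_idx = split_622(1000)
print(len(train_idx), len(valid_idx), len(test_idx))  # 600 200 200
```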
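To make the Experiment Setup row concrete, here is a hedged PyTorch sketch of the stated 4-layer, 256-unit MLP together with a latitude-weighted RMSE loss. The ReLU activation, the cos(latitude) weighting, the input/output dimensions, and the reading of "4-layer" as four linear layers are all assumptions; the quoted text only fixes the depth, the width, and the learning rates.

```python
# Sketch (not the authors' code): a 4-layer MLP with 256 hidden units and
# a latitude-weighted RMSE, a common weighting for ERA5-style data.
# Activation, weighting scheme, and dimensions are assumptions.
import torch
import torch.nn as nn

def make_mlp(in_dim: int = 3, hidden: int = 256, out_dim: int = 1) -> nn.Sequential:
    # "4-layer" interpreted as four linear layers (three hidden).
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

def weighted_rmse(pred: torch.Tensor, target: torch.Tensor,
                  lat_rad: torch.Tensor) -> torch.Tensor:
    """RMSE with cos(latitude) weights; lat_rad must match pred's shape."""
    w = torch.cos(lat_rad)
    w = w / w.mean()  # normalize weights to mean 1
    return torch.sqrt((w * (pred - target) ** 2).mean())

model = make_mlp()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # regression lr quoted above
```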
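The Level (L), parameter dimension (d), scaling factor (γ), and base resolution hyperparameters suggest a multiresolution feature-grid schedule in the Instant-NGP style, where the grid resolution at level l is typically N_l = ⌊N_0 · γ^l⌋. The sketch below computes such a schedule; treating γ as a per-level growth factor follows that common convention and is an assumption, not a detail confirmed by this table.

```python
# Sketch: per-level grid resolutions under the common multiresolution
# convention N_l = floor(N_0 * gamma**l). All values are illustrative.
def level_resolutions(base_res: int, gamma: float, levels: int) -> list[int]:
    return [int(base_res * gamma**l) for l in range(levels)]

print(level_resolutions(base_res=16, gamma=1.5, levels=4))  # [16, 24, 36, 54]
```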