Geographic Location Encoding with Spherical Harmonics and Sinusoidal Representation Networks

Authors: Marc Rußwurm, Konstantin Klemmer, Esther Rolf, Robin Zbinden, Devis Tuia

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We systematically evaluate positional embeddings and neural network architectures across various benchmarks and synthetic evaluation datasets.
Researcher Affiliation | Collaboration | Marc Rußwurm (Laboratory of Geo-information Science and Remote Sensing, Wageningen University); Konstantin Klemmer (Microsoft Research New England); Esther Rolf (Harvard Data Science Initiative and Center for Research on Computation and Society; University of Colorado, Boulder); Robin Zbinden & Devis Tuia (Environmental Computational Science and Earth Observation Laboratory (ECEO), École Polytechnique Fédérale de Lausanne (EPFL))
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | The model code and experiments are available at https://github.com/marccoru/locationencoder.
Open Datasets | Yes | For training and validation data, we sample 5000 points uniformly on the sphere's surface and assign a positive label for land and a negative label for water, depending on whether they fall within landmasses of the Natural Earth Low Resolution shapefile1. 1https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/50m/physical/ne_50m_land.zip We downloaded globally distributed climate variables for Jan. 1, 2018, at 23:00 UTC from the fifth-generation atmospheric reanalysis of the global climate (ERA5) product of the European Centre for Medium-Range Weather Forecasts. Here, we use the iNaturalist (iNat2018) (Van Horn et al., 2018) dataset.
Dataset Splits | Yes | For training and validation sets, we uniformly sample 10 000 points on the sphere and similarly assign the label of the closest labeled point. Another 5% of the data is reserved for the validation set. The data is partitioned randomly into training and validation datasets in an 80:20 ratio.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions the Optuna framework and PyTorch Lightning but does not provide specific version numbers for these software components.
Experiment Setup | Yes | For the DFS-based positional embeddings (GRID, THEORY, SPHEREC, SPHEREC+, SPHEREM, and SPHEREM+), we tune the minimum radius rmin between 1 and 90 degrees in 9-degree steps. We keep the maximum radius rmax fixed at 360 degrees, as all problems have global or continental (sea-ice thickness) scale. We further tune the number of frequencies S between 16 and 64 with steps of size 16 (32 for ERA5). For the spherical harmonics embeddings, we tune the number of Legendre polynomials L between 10 and 30 in steps of 5 polynomials. In terms of neural networks, we vary the number of hidden dimensions between 32 and 128 in 32-dimension steps, both for SIREN (Sitzmann et al., 2020) and FCNET (Mac Aodha et al., 2019), and vary the number of layers between one and three for SIREN. For all combinations, we tune the learning rate on a logarithmic scale between 10⁻⁴ and 10⁻¹ and the weight decay between 10⁻⁸ and 10⁻¹.
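The uniform sphere sampling described in the Open Datasets row can be sketched as follows. This is a minimal illustration, not the authors' code: longitude is drawn uniformly while latitude is taken as the arcsine of a uniform variable, which yields area-uniform points rather than a pole-heavy grid. The land/water lookup against the Natural Earth shapefile is omitted here.

```python
import math
import random

def sample_uniform_sphere(n, rng=random):
    """Return n (lon, lat) pairs in degrees, uniform over the sphere's surface.

    Longitude is uniform in [-180, 180]. Latitude uses arcsin of a uniform
    variable in [-1, 1], so equal areas of the sphere receive equal density
    (naive uniform latitude would over-sample the poles).
    """
    points = []
    for _ in range(n):
        lon = rng.uniform(-180.0, 180.0)
        lat = math.degrees(math.asin(rng.uniform(-1.0, 1.0)))
        points.append((lon, lat))
    return points

# e.g. the 5000 training/validation points mentioned in the paper:
pts = sample_uniform_sphere(5000)
```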
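The hyperparameter search in the Experiment Setup row can be reconstructed as a plain search space. The sketch below is hypothetical (the paper tunes with the Optuna framework; here the grid is expressed with the standard library only, and all names are illustrative). Discrete ranges follow the steps quoted above; learning rate and weight decay are drawn on a logarithmic scale.

```python
import random

# Discrete grids quoted in the paper's experiment setup (names are illustrative).
grid = {
    "r_min_deg":  list(range(1, 91, 9)),    # minimum radius, 1-90 deg in 9-deg steps
    "num_freqs":  list(range(16, 65, 16)),  # frequencies S, 16-64 in steps of 16
    "legendre_L": list(range(10, 31, 5)),   # Legendre polynomials, 10-30 in steps of 5
    "hidden_dim": list(range(32, 129, 32)), # hidden dims, 32-128 in 32-dim steps
    "num_layers": [1, 2, 3],                # SIREN depth, one to three layers
}

def sample_config(rng=random):
    """Draw one random configuration from the grid; learning rate and weight
    decay are sampled log-uniformly over the ranges quoted in the paper."""
    cfg = {name: rng.choice(values) for name, values in grid.items()}
    cfg["lr"] = 10 ** rng.uniform(-4, -1)            # 1e-4 .. 1e-1
    cfg["weight_decay"] = 10 ** rng.uniform(-8, -1)  # 1e-8 .. 1e-1
    return cfg
```

In practice each of these `suggest`-style draws would be handled inside an Optuna objective function, but the ranges and step sizes are the substance of the setup.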