DeepSphere: a graph-based spherical CNN

Authors: Michaël Defferrard, Martino Milani, Frédérick Gusset, Nathanaël Perraudin

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show state-of-the-art performance and demonstrate the efficiency and flexibility of this formulation.
Researcher Affiliation | Academia | École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, {michael.defferrard,martino.milani,frederick.gusset}@epfl.ch; Nathanaël Perraudin, Swiss Data Science Center (SDSC), Switzerland, nathanael.perraudin@sdsc.ethz.ch
Pseudocode | No | The paper does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/deepsphere.
Open Datasets | Yes | The SHREC'17 shape retrieval contest (Savva et al., 2017) contains 51,300 randomly oriented 3D models from ShapeNet (Chang et al., 2015)... We used the pre-processed dataset from Jiang et al. (2019), available at http://island.me.berkeley.edu/ugscnn/data. And: we collected historical measurements from n ≈ 10,000 weather stations scattered across the Earth (https://www.ncdc.noaa.gov/ghcn-daily-description).
Dataset Splits | No | The paper trains on the datasets and reports performance metrics, but does not explicitly state the train/validation/test splits (e.g., percentages or sample counts) in the main text, nor does it mention a dedicated validation set.
Hardware Specification | No | The paper does not describe the hardware used to run its experiments, such as specific GPU or CPU models; it only refers vaguely to a "GPU memory limit" in the supplementary material.
Software Dependencies | No | The following software packages were used for computation and plotting: PyGSP (Defferrard et al.), healpy (Zonca et al., 2019), matplotlib (Hunter, 2007), SciPy (Virtanen et al., 2020), NumPy (Walt et al., 2011), TensorFlow (Abadi et al., 2015). While the packages are cited, explicit version numbers are not provided for any of them.
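Because no versions are pinned, a reproduction would need to record the environment itself. A minimal Python sketch for doing so, assuming the usual PyPI/import names for the packages listed above (the names are our assumption, not stated in the paper):

    # Record the versions of the paper's listed dependencies.
    # Import names below are the conventional ones and are an assumption.
    import importlib

    for name in ["pygsp", "healpy", "matplotlib", "scipy", "numpy", "tensorflow"]:
        try:
            module = importlib.import_module(name)
            print(name, getattr(module, "__version__", "unknown"))
        except ImportError:
            print(name, "not installed")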
Experiment Setup | Yes | The NN is made of 5 graph convolutional layers, each followed by a max pooling layer which down-samples by 4. A GAP and a fully connected layer with softmax follow. The polynomials are all of order P = 3 and the number of channels per layer is 16, 32, 64, 128, 256, respectively. Following Esteves et al. (2018), the cross-entropy plus a triplet loss is optimized with Adam for 30 epochs on the dataset augmented by 3 random translations. The learning rate is 5·10⁻² and the batch size is 32.
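The setup above is concrete enough to sketch. Below is a minimal, hypothetical TensorFlow rendering of the described network, not the authors' implementation: the names (ChebConv, healpix_maxpool, DeepSphereSketch), the dense Laplacians, the ReLU nonlinearity, HEALPix nested ordering for pooling, and the 55-class output (SHREC'17 has 55 categories) are all assumptions, and the triplet-loss term is omitted.

    import tensorflow as tf

    class ChebConv(tf.keras.layers.Layer):
        """Graph convolution with Chebyshev polynomials of order P >= 1 (sketch)."""
        def __init__(self, channels, P=3):
            super().__init__()
            self.channels, self.P = channels, P

        def build(self, input_shape):
            fin = int(input_shape[-1])
            # One weight matrix per polynomial order.
            self.w = self.add_weight(name="w", shape=(self.P + 1, fin, self.channels))

        def call(self, x, laplacian):
            # x: (batch, pixels, fin); laplacian: (pixels, pixels), rescaled to [-1, 1].
            # Dense Laplacian for clarity; a real implementation would use sparse ops.
            xs = [x, tf.einsum("ij,bjf->bif", laplacian, x)]
            for _ in range(2, self.P + 1):
                # Chebyshev recurrence: T_k(L)x = 2 L T_{k-1}(L)x - T_{k-2}(L)x.
                xs.append(2 * tf.einsum("ij,bjf->bif", laplacian, xs[-1]) - xs[-2])
            return tf.einsum("pbnf,pfo->bno", tf.stack(xs), self.w)

    def healpix_maxpool(x):
        """Down-sample by 4: max over groups of 4 sibling pixels (nested ordering)."""
        batch = tf.shape(x)[0]
        return tf.reduce_max(tf.reshape(x, (batch, -1, 4, x.shape[-1])), axis=2)

    class DeepSphereSketch(tf.keras.Model):
        """5 ChebConv(P=3) blocks with 16..256 channels, each followed by 4x pooling,
        then global average pooling and a fully connected classifier."""
        def __init__(self, laplacians, num_classes=55):
            super().__init__()
            self.laplacians = laplacians  # one rescaled Laplacian per resolution
            self.convs = [ChebConv(c, P=3) for c in (16, 32, 64, 128, 256)]
            self.fc = tf.keras.layers.Dense(num_classes)

        def call(self, x):
            for conv, lap in zip(self.convs, self.laplacians):
                x = tf.nn.relu(conv(x, lap))
                x = healpix_maxpool(x)
            x = tf.reduce_mean(x, axis=1)   # GAP over the remaining pixels
            return self.fc(x)               # logits; softmax folded into the loss

Training would then follow the stated hyper-parameters (plain cross-entropy here; the paper adds a triplet loss):

    # Placeholders: `laplacians` is a list of 5 rescaled Laplacians (finest first),
    # `train_maps` has shape (samples, pixels, features), `train_labels` integer classes.
    model = DeepSphereSketch(laplacians)
    model.compile(optimizer=tf.keras.optimizers.Adam(5e-2),  # learning rate 5·10⁻²
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
    model.fit(train_maps, train_labels, batch_size=32, epochs=30)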