Spherical CNNs on Unstructured Grids
Authors: Chiyu Max Jiang, Jingwei Huang, Karthik Kashinath, Prabhat, Philip Marcus, Matthias Nießner
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our algorithm in an extensive series of experiments on a variety of computer vision and climate science tasks, including shape classification, climate pattern segmentation, and omnidirectional image semantic segmentation. |
| Researcher Affiliation | Academia | Chiyu Max Jiang (UC Berkeley), Jingwei Huang (Stanford University), Karthik Kashinath (Lawrence Berkeley Nat'l Lab), Prabhat (Lawrence Berkeley Nat'l Lab), Philip Marcus (UC Berkeley), Matthias Nießner (Technical University of Munich) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We release and open-source the codes developed and used in this study for other potential extended applications1. Our codes are available on Github: https://github.com/maxjiang93/ugscnn |
| Open Datasets | Yes | To validate the use of parameterized differential operators to replace conventional convolution operators, we implemented such neural networks towards solving the classic computer vision benchmark task: the MNIST digit recognition problem (LeCun, 1998). We use the ModelNet40 benchmark (Wu et al., 2015), a 40-class 3D classification problem... We use the Stanford 2D3DS dataset (Armeni et al., 2017) for this task. We follow Mudigonda et al. (2017) for preprocessing the data and acquiring the ground-truth labels for this task. (An illustrative sketch of the parameterized-operator convolution appears below the table.) |
| Dataset Splits | Yes | We use the official 3-fold cross validation to train and evaluate our results. |
| Hardware Specification | Yes | Inference is performed on a single NVIDIA GTX 1080 Ti GPU. |
| Software Dependencies | No | The paper mentions using Adam optimizer and implies PyTorch/TensorFlow via linked GitHub repositories, but does not specify version numbers for any software dependencies. |
| Experiment Setup | Yes | Training details: We train our network with a batch size of 16, initial learning rate of 1 × 10⁻², step decay of 0.5 per 10 epochs, and use the Adam optimizer. We use the cross-entropy loss for training the classification network. (A hedged training-loop sketch follows the table.) |
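
The "Open Datasets" row quotes the paper's core idea: replacing conventional convolutions with parameterized differential operators on the mesh. Below is a minimal, hedged PyTorch sketch of that idea. It assumes precomputed sparse gradient and Laplacian operators over the mesh vertices; the class name `ParamDiffOpConv`, the operator arguments, and the mixing layer are illustrative assumptions, not the API of the released ugscnn code.

```python
import torch
import torch.nn as nn

class ParamDiffOpConv(nn.Module):
    """Sketch of a convolution built from parameterized differential
    operators on a mesh, in the spirit of the paper's approach. The
    sparse (V x V) operators over mesh vertices are assumed to be
    precomputed; their construction is not shown here."""

    def __init__(self, in_ch, out_ch, grad_ew, grad_ns, laplacian):
        super().__init__()
        self.grad_ew = grad_ew      # east-west gradient operator (assumed)
        self.grad_ns = grad_ns      # north-south gradient operator (assumed)
        self.laplacian = laplacian  # mesh Laplacian operator (assumed)
        # Learned coefficients mixing the identity, gradient, and
        # Laplacian responses into the output channels.
        self.mix = nn.Linear(4 * in_ch, out_ch)

    def forward(self, x):
        # x: (batch, vertices, in_ch) feature map on the mesh.
        def apply_op(op, feat):
            # Sparse matmul applied independently per batch element.
            return torch.stack([torch.sparse.mm(op, f) for f in feat])

        feats = torch.cat(
            [x,
             apply_op(self.grad_ew, x),
             apply_op(self.grad_ns, x),
             apply_op(self.laplacian, x)],
            dim=-1)
        return self.mix(feats)
```

As a usage sketch: on a level-1 icosahedral mesh with 42 vertices, each operator would be a (42 × 42) sparse tensor, and the layer maps a (batch, 42, in_ch) feature map to (batch, 42, out_ch).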
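
The "Experiment Setup" row gives enough detail to reconstruct the training configuration. The sketch below wires the stated values (batch size 16, Adam at 1 × 10⁻², step decay of 0.5 every 10 epochs, cross-entropy loss) into a PyTorch loop; the model, dataset, and epoch count are placeholder assumptions, not the released code.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder model and data (assumptions); only the hyperparameters
# below are taken from the paper's stated training details.
model = nn.Linear(162, 40)  # stand-in for the spherical classifier
dataset = TensorDataset(torch.randn(256, 162), torch.randint(0, 40, (256,)))
loader = DataLoader(dataset, batch_size=16, shuffle=True)  # batch size 16

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)  # Adam, lr 1e-2
# Step decay of 0.5 every 10 epochs, as stated in the paper.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
criterion = nn.CrossEntropyLoss()  # cross-entropy loss for classification

for epoch in range(30):  # epoch count is an assumption
    for inputs, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```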