Scalable and Equivariant Spherical CNNs by Discrete-Continuous (DISCO) Convolutions
Authors: Jeremy Ocampo, Matthew Alexander Price, Jason McEwen
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We apply the DISCO spherical CNN framework to a number of benchmark dense-prediction problems on the sphere, such as semantic segmentation and depth estimation, on all of which we achieve the state-of-the-art performance. |
| Researcher Affiliation | Collaboration | Jeremy Ocampo (1,2), Matthew A. Price (1,2), Jason D. McEwen (1,2); (1) Kagenova Limited, (2) University College London (UCL) |
| Pseudocode | Yes | Algorithm 1: Function to compute custom sparse gradients in TensorFlow. (A hedged sketch of this pattern is given below the table.) |
| Open Source Code | No | Section 5 mentions that the method is "implemented in the CopernicAI code", with a footnote linking to https://www.kagenova.com/products/copernicAI/; this is a product page rather than an open-source code repository for the methodology described in the paper. |
| Open Datasets | Yes | We project the MNIST digits onto the sphere at resolution L = 1024, using the same projection as in Cohen et al. (2018). The 2D3DS dataset (Armeni et al., 2017) consists of 1413 equirectangular RGB-Depth indoor 360° images... The Omni-SYNTHIA dataset (Ros et al., 2016) consists of 2269 panoramic RGB images... The Matterport3D dataset (Chang et al., 2017) contains 7907 spherical RGB images... |
| Dataset Splits | Yes | We use the same 3-fold split for cross-validation as in Jiang et al. (2019). We use the same train/test/validation split as in Albanis et al. (2021). |
| Hardware Specification | Yes | On an NVIDIA RTX 3090 GPU we observe a wall-clock compute time of 0.0302 ± 0.0018, 0.0898 ± 0.0025, and 0.3255 ± 0.0043 seconds for resolutions of L = 1024, L = 2048, and L = 4096, respectively, when averaged over 10 experiments. (A hedged timing sketch follows the table.) |
| Software Dependencies | No | The paper mentions the use of TensorFlow, PyTorch, ADAM optimizer (Kingma & Ba, 2015), and Group Normalization (Wu & He, 2018), but does not specify version numbers for any of these software dependencies. |
| Experiment Setup | Yes | We train for 10 epochs using the ADAM optimizer (Kingma & Ba, 2015), with a learning rate of 0.001 and a batch size of 8. (See the configuration sketch below the table.) |
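
The Pseudocode row refers to the paper's Algorithm 1, a function that computes custom sparse gradients in TensorFlow. The snippet below is a minimal, hypothetical sketch of the general pattern rather than a reproduction of Algorithm 1: it wraps `tf.sparse.sparse_dense_matmul` in `tf.custom_gradient` so that the backward pass is also a sparse product. The function and variable names are illustrative assumptions.

```python
import tensorflow as tf


def sparse_matmul_with_custom_grad(sparse_weights):
    """Return a function computing y = A @ x with a hand-written gradient.

    sparse_weights: tf.sparse.SparseTensor of shape [m, n] (the sparse
    operator A); x is a dense tensor of shape [n, batch].
    """

    @tf.custom_gradient
    def forward(x):
        # Forward pass: sparse-dense product y = A @ x.
        y = tf.sparse.sparse_dense_matmul(sparse_weights, x)

        def grad(dy):
            # Backward pass w.r.t. x: dL/dx = A^T @ dy, again a sparse product.
            return tf.sparse.sparse_dense_matmul(sparse_weights, dy,
                                                 adjoint_a=True)

        return y, grad

    return forward


# Example usage with a tiny random sparse operator (illustrative only).
A = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2], [2, 1]],
                           values=[1.0, 2.0, 3.0],
                           dense_shape=[3, 3])
x = tf.random.normal([3, 4])
with tf.GradientTape() as tape:
    tape.watch(x)
    loss = tf.reduce_sum(sparse_matmul_with_custom_grad(A)(x))
print(tape.gradient(loss, x).shape)  # (3, 4)
```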
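
The Hardware Specification row quotes wall-clock times averaged over 10 experiments. The following is a hedged sketch of one way such a measurement could be made in TensorFlow; the workload, the warm-up call, and the synchronization via `.numpy()` are assumptions, not the paper's actual protocol.

```python
import time

import tensorflow as tf


def time_op(fn, repeats=10):
    """Wall-clock time of fn() over `repeats` runs, returned as (mean, std)."""
    fn().numpy()  # warm-up call to exclude one-off setup costs
    durations = []
    for _ in range(repeats):
        start = time.perf_counter()
        result = fn()
        result.numpy()  # block until the (possibly GPU) computation finishes
        durations.append(time.perf_counter() - start)
    durations = tf.constant(durations)
    return float(tf.reduce_mean(durations)), float(tf.math.reduce_std(durations))


# Placeholder workload standing in for a DISCO convolution at some resolution.
x = tf.random.normal([2048, 2048])
mean_s, std_s = time_op(lambda: tf.linalg.matmul(x, x))
print(f"{mean_s:.4f} +/- {std_s:.4f} s")
```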
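
The Experiment Setup row reports training for 10 epochs with the ADAM optimizer, a learning rate of 0.001, and a batch size of 8. The sketch below wires those three hyperparameters into a Keras training loop; the model and data are placeholders and do not reproduce the paper's DISCO architecture.

```python
import tensorflow as tf

# Reported settings: 10 epochs, ADAM, learning rate 0.001, batch size 8.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(10),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)

# Placeholder data standing in for the spherical inputs and labels.
x_train = tf.random.normal([64, 32])
y_train = tf.random.uniform([64], maxval=10, dtype=tf.int32)
model.fit(x_train, y_train, epochs=10, batch_size=8)
```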