Gaussian Mixture Convolution Networks
Authors: Adam Celarek, Pedro Hermosilla, Bernhard Kerbl, Timo Ropinski, Michael Wimmer
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "A thorough evaluation of GMCNs, including ablation studies." and "To evaluate our architecture, we use the proposed GMCN architecture to train classification networks on a series of well-known tasks." |
| Researcher Affiliation | Academia | TU Wien and Ulm University |
| Pseudocode | No | The paper describes methods and equations, but does not include explicit pseudocode blocks or algorithms labeled as such. |
| Open Source Code | Yes | "The source code for a proof-of-concept implementation, instructions, and benchmark datasets are provided in our GitHub repository (https://github.com/cg-tuwien/Gaussian-Mixture-Convolution-Networks)." |
| Open Datasets | Yes | "The MNIST data set (LeCun et al., 1998)" and "ModelNet10 (Zhirong Wu et al., 2015)" |
| Dataset Splits | Yes | "We trained our GMCN architecture using the standard train/test splits by fitting 64 Gaussians to each image and processing them with our model." and "We trained our network on ModelNet10 using the standard train/test splits without data augmentation." (A hedged fitting sketch follows the table.) |
| Hardware Specification | Yes | "We used an NVIDIA GeForce 2080Ti for training." |
| Software Dependencies | No | The paper mentions software such as the Adam optimizer, PyTorch, TensorBoard, and the autodiff C++ library (Leal, 2018), but does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | "All models are trained using the Adam optimizer, with an initial learning rate of 0.001. The learning rate is reduced by a scheduler once the accuracy plateaus. Moreover, we apply weight decay scaled by 0.1 of the learning rate to avoid overfitting, as outlined in Section 4.2." (A hedged setup sketch follows the table.) |
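
To make the Dataset Splits row concrete: the quoted MNIST preprocessing fits 64 Gaussians to each image. Below is a minimal sketch of one generic way to do this, assuming pixel coordinates are sampled in proportion to intensity and fitted with scikit-learn's `GaussianMixture`; the function name `fit_gmm_to_image` and the sampling strategy are illustrative assumptions, not the paper's own fitting procedure (which ships in the linked repository).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm_to_image(image, n_components=64, n_samples=10_000, seed=0):
    """Fit a GMM to a grayscale image (illustrative stand-in for the
    paper's fitting step). Pixel coordinates are sampled with
    probability proportional to intensity, then fitted with EM."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)
    p = image.ravel().astype(np.float64)
    p /= p.sum()  # assumes a non-blank image
    idx = rng.choice(len(coords), size=n_samples, p=p)
    # Jitter within each pixel so samples are not quantized to centers.
    samples = coords[idx] + rng.uniform(-0.5, 0.5, size=(n_samples, 2))
    return GaussianMixture(n_components=n_components,
                           covariance_type="full",
                           random_state=seed).fit(samples)

# The fitted gmm.weights_, gmm.means_, and gmm.covariances_ would then
# parameterize the 64-component input mixture for one image.
```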
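
The Experiment Setup row maps directly onto a standard PyTorch configuration. The sketch below assumes `ReduceLROnPlateau` as the plateau scheduler with illustrative `factor`/`patience` values; the paper states only that a scheduler reduces the rate once accuracy plateaus.

```python
import torch

def make_optimizer_and_scheduler(model, lr=1e-3, decay_scale=0.1):
    """Adam with an initial learning rate of 0.001 and weight decay at
    0.1x the learning rate, as reported. ReduceLROnPlateau and its
    factor/patience values are assumptions, not taken from the paper."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr,
                                 weight_decay=decay_scale * lr)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="max",    # "max" because the metric is accuracy
        factor=0.1, patience=10)  # illustrative values
    return optimizer, scheduler

# After each validation epoch, call scheduler.step(val_accuracy) so the
# learning rate drops once accuracy plateaus.
```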