HexaConv
Authors: Emiel Hoogeboom, Jorn W.T. Peters, Taco S. Cohen, Max Welling
ICLR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on the CIFAR-10 benchmark and on the Aerial Image Dataset (AID) (Xia et al., 2017). The CIFAR-10 results are presented in Table 1, obtained by taking the average of 10 experiments with different random weight initializations. |
| Researcher Affiliation | Academia | Emiel Hoogeboom, Jorn W.T. Peters & Taco S. Cohen (University of Amsterdam, {e.hoogeboom,j.w.t.peters,t.s.cohen}@uva.nl); Max Welling (University of Amsterdam & CIFAR, m.welling@uva.nl) |
| Pseudocode | No | No explicit pseudocode or algorithm blocks are provided in the paper. |
| Open Source Code | Yes | Source code of G-Hexa Convs is available on Github: https://github.com/ehoogeboom/hexaconv. |
| Open Datasets | Yes | We evaluate our method on the CIFAR-10 benchmark and on the Aerial Image Dataset (AID) (Xia et al., 2017). |
| Dataset Splits | No | The paper mentions splitting the data into 80% train / 20% test sets for AID, but does not specify a validation split or its proportion for either dataset. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU models, CPU types) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.x, TensorFlow 1.x). |
| Experiment Setup | No | The paper describes network architectures (e.g., '3 stages, with 4 blocks per stage', 'first convolution layer has stride two') but does not provide specific training hyperparameter values such as learning rate, batch size, or number of epochs. |
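For context on the method the table refers to: the paper's core operation is convolution on a hexagonal lattice. A common way to realize this (and the general idea behind axial-coordinate hexagonal convolution) is to store the hex lattice in an ordinary 2D array and mask a 3x3 square stencil so that only the seven axial hex neighbors contribute. The sketch below is a hypothetical illustration of that trick, not the authors' released implementation (which is at the GitHub link above); the `HEX_MASK`, `hex_conv2d` names and the choice of which two corners to drop are assumptions for this example.

```python
import numpy as np

# In axial coordinates, a radius-1 hexagonal neighborhood maps onto a 3x3
# square stencil with two opposite corners removed. Masking a square kernel
# this way lets standard 2D convolution code operate on a hex lattice.
HEX_MASK = np.array([[0, 1, 1],
                     [1, 1, 1],
                     [1, 1, 0]], dtype=np.float64)

def hex_conv2d(image, kernel):
    """Valid-mode 2D correlation with a 3x3 kernel masked to a hex neighborhood."""
    assert kernel.shape == (3, 3)
    k = kernel * HEX_MASK  # zero out the two non-hexagonal corner taps
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * k)
    return out

img = np.arange(25, dtype=np.float64).reshape(5, 5)
result = hex_conv2d(img, np.ones((3, 3)))
# Each output sums the 7 hex neighbors; e.g. result[0, 0] == 42.0
# (1 + 2 + 5 + 6 + 7 + 10 + 11, with the 0 and 12 corners masked out).
```

The full G-HexaConv method additionally convolves over rotations of the hexagonal symmetry group, which this single-channel sketch does not cover.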