Color Equivariant Convolutional Networks
Authors: Attila Lengyel, Ombretta Strafforello, Robert-Jan Bruintjes, Alexander Gielisse, Jan van Gemert
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We rigorously validate the properties of CEConvs empirically through precisely controlled synthetic experiments, and evaluate the performance of color invariant and equivariant ResNets on various more realistic classification benchmarks. (A minimal illustrative sketch of a CEConv-style layer follows the table.) |
| Researcher Affiliation | Academia | Attila Lengyel, Ombretta Strafforello, Robert-Jan Bruintjes, Alexander Gielisse, Jan van Gemert; Computer Vision Lab, Delft University of Technology, Delft, The Netherlands |
| Pseudocode | No | The paper describes the mathematical formulation and implementation details of CEConvs (Sections 3.1–3.3), but it does not include any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | All code and experiments are made publicly available on https://github.com/Attila94/CEConv. |
| Open Datasets | Yes | We evaluate our method for robustness to color variations on several natural image classification datasets, including CIFAR-10 and CIFAR-100 [27], Flowers-102 [35], STL-10 [6], Oxford-IIIT Pet [40], Caltech-101 [31], Stanford Cars [26] and ImageNet [10]. |
| Dataset Splits | No | The paper mentions training and testing on various datasets and describes data augmentations, but it does not explicitly specify the proportions or sample counts of the training, validation, and test splits needed for reproduction. For ColorMNIST, it mentions train/test splits but no validation split. |
| Hardware Specification | Yes | All our experiments use PyTorch and run on a single NVIDIA A40 GPU. |
| Software Dependencies | No | The paper states 'All our experiments use PyTorch', but it does not provide a specific version number for PyTorch or any other software library or dependency. |
| Experiment Setup | Yes | Training is performed for 200 epochs using the Adam [25] optimizer with a learning rate of 0.001 and the OneCycle learning rate scheduler. (A sketch of this setup appears below.) |
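
The paper gives the formal construction of CEConvs in Sections 3.1–3.3 and releases the authors' implementation at the GitHub link above. Purely as an illustration of the idea, the sketch below implements a hue-equivariant lifting convolution in PyTorch: one learned kernel is applied under `n_rotations` discrete rotations of its RGB input channels about the gray axis. The names `hue_rotation_matrix` and `CEConvLift` are hypothetical, and this is not the authors' code.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def hue_rotation_matrix(theta: float) -> torch.Tensor:
    """Rotation of RGB space about the gray axis (1,1,1)/sqrt(3) by angle
    theta, via Rodrigues' formula: R = cos(t) I + sin(t) K + (1-cos(t)) uu^T.
    Such rotations model hue shifts and leave the achromatic axis fixed."""
    s = 1.0 / math.sqrt(3.0)
    K = torch.tensor([[0.0, -s, s],
                      [s, 0.0, -s],
                      [-s, s, 0.0]])       # cross-product matrix of the axis
    uuT = torch.full((3, 3), 1.0 / 3.0)    # outer product of the unit axis
    return (math.cos(theta) * torch.eye(3)
            + math.sin(theta) * K
            + (1.0 - math.cos(theta)) * uuT)

class CEConvLift(nn.Module):
    """Hypothetical lifting layer: the same kernel is convolved with the
    input under n discrete hue rotations of its RGB channels, producing an
    output with an extra group (rotation) dimension."""
    def __init__(self, out_channels: int, kernel_size: int, n_rotations: int = 3):
        super().__init__()
        self.n = n_rotations
        self.weight = nn.Parameter(
            torch.randn(out_channels, 3, kernel_size, kernel_size) * 0.1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 3, H, W) -> (B, out_channels, n_rotations, H, W)
        outs = []
        for k in range(self.n):
            R = hue_rotation_matrix(2.0 * math.pi * k / self.n)
            # Rotate the kernel's RGB (input-channel) axis by R.
            w = torch.einsum('ij,ojhw->oihw', R, self.weight)
            outs.append(F.conv2d(x, w, padding='same'))
        return torch.stack(outs, dim=2)
```

With this construction, shifting the hue of the input by 2πk/n corresponds (up to clipping effects at the gamut boundary) to a cyclic shift along the new group dimension, which is the equivariance property the paper validates in its controlled experiments.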
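For the experiment setup row, the quoted hyperparameters map onto standard PyTorch calls. The sketch below shows one plausible wiring; `model`, `steps_per_epoch`, and the choice of `max_lr=0.001` as OneCycle's peak are assumptions (the paper quotes the learning rate but not how it parameterizes the schedule), not the authors' training script.

```python
import torch
from torch import nn
from torch.optim import Adam
from torch.optim.lr_scheduler import OneCycleLR

model = nn.Linear(8, 2)   # placeholder; the paper trains (CEConv) ResNets
EPOCHS = 200              # from the paper
steps_per_epoch = 100     # assumed; depends on dataset size and batch size

optimizer = Adam(model.parameters(), lr=0.001)   # Adam, lr 0.001 (paper)
scheduler = OneCycleLR(optimizer, max_lr=0.001,  # assuming lr is the peak
                       total_steps=EPOCHS * steps_per_epoch)

for _ in range(EPOCHS * steps_per_epoch):
    optimizer.zero_grad()
    loss = model(torch.randn(4, 8)).pow(2).mean()  # dummy loss for the sketch
    loss.backward()
    optimizer.step()
    scheduler.step()  # OneCycle steps once per batch, not per epoch
```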