Unsupervised Representation Learning by Predicting Image Rotations
Authors: Spyros Gidaris, Praveer Singh, Nikos Komodakis
ICLR 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We exhaustively evaluate our method in various unsupervised feature learning benchmarks and we exhibit in all of them state-of-the-art performance. |
| Researcher Affiliation | Academia | Spyros Gidaris, Praveer Singh, Nikos Komodakis University Paris-Est, LIGM, Ecole des Ponts ParisTech {spyros.gidaris,praveer.singh,nikos.komodakis}@enpc.fr |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The code and models of our paper will be published on: https://github.com/gidariss/FeatureLearningRotNet. This indicates a future release, not immediate concrete access. |
| Open Datasets | Yes | In this section we conduct an extensive evaluation of our approach on the most commonly used image datasets, such as CIFAR-10 (Krizhevsky & Hinton, 2009), ImageNet (Russakovsky et al., 2015), PASCAL (Everingham et al., 2010), and Places205 (Zhou et al., 2014). |
| Dataset Splits | No | The paper mentions 'training' and 'test' sets for datasets like CIFAR-10 and ImageNet, but it does not explicitly describe a validation split (its size or how it is drawn from the overall dataset) for hyperparameter tuning or model selection. |
| Hardware Specification | Yes | our AlexNet model trains in around 2 days using a single Titan X GPU |
| Software Dependencies | No | The paper describes the neural network architectures and training details (e.g., SGD, batch normalization, ReLU units) but does not list specific software libraries or their version numbers (e.g., PyTorch or TensorFlow). |
| Experiment Setup | Yes | In order to train them on the rotation prediction task, we use SGD with batch size 128, momentum 0.9, weight decay 5e-4 and lr of 0.1. We drop the learning rate by a factor of 5 after epochs 30, 60, and 80. We train in total for 100 epochs. |
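
The rotation prediction pretext task referenced above is straightforward to reproduce from the paper's description. Below is a minimal sketch, assuming PyTorch (the paper does not name its framework), of how a batch of images can be expanded into the four rotated copies (0°, 90°, 180°, 270°) with the corresponding 4-way rotation labels; the function name `make_rotation_batch` is a hypothetical helper, not from the paper.

```python
import torch

def make_rotation_batch(images: torch.Tensor):
    """Given a batch of images of shape (N, C, H, W), return the four rotated
    copies stacked into shape (4N, C, H, W) and their rotation labels in
    {0, 1, 2, 3}, corresponding to 0, 90, 180 and 270 degree rotations."""
    # Rotate in the spatial (H, W) plane; k is the number of 90-degree turns.
    rotated = torch.cat(
        [torch.rot90(images, k, dims=(2, 3)) for k in range(4)], dim=0)
    # Label order matches the concatenation order: first N images get label 0,
    # the next N get label 1, and so on.
    labels = torch.arange(4).repeat_interleave(images.size(0))
    return rotated, labels
```
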
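The quoted training recipe also maps directly onto a standard SGD configuration. The sketch below, again assuming PyTorch and a hypothetical `model` placeholder (the paper uses Network-In-Network and AlexNet variants), encodes the stated hyperparameters; dropping the learning rate by a factor of 5 corresponds to a multiplicative factor (gamma) of 0.2 at epochs 30, 60, and 80.

```python
import torch

# Hypothetical stand-in for the paper's RotNet backbone with a 4-way head.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 4))

# Hyperparameters quoted in the paper: SGD, batch size 128, momentum 0.9,
# weight decay 5e-4, initial learning rate 0.1.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)

# Learning rate dropped by a factor of 5 (gamma = 0.2) after epochs 30, 60, 80.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[30, 60, 80], gamma=0.2)

criterion = torch.nn.CrossEntropyLoss()

for epoch in range(100):  # 100 epochs in total
    # ... iterate over batches of 128 images, build the rotated copies and
    #     labels, compute the 4-way rotation classification loss with
    #     `criterion`, and step the optimizer ...
    scheduler.step()
```
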