Self-Supervised Generalisation with Meta Auxiliary Learning

Authors: Shikun Liu, Andrew J. Davison, Edward Johns

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In our experiments on image classification, we show three key results. First, MAXL outperforms single-task learning across seven image datasets, without requiring any additional data. We also show that MAXL outperforms several other baselines for generating auxiliary labels, and is even competitive when compared with human-defined auxiliary labels."
Researcher Affiliation | Academia | "Shikun Liu, Andrew J. Davison, Edward Johns, Department of Computing, Imperial College London, {shikun.liu17, a.davison, e.johns}@imperial.ac.uk"
Pseudocode | Yes | "Algorithm 1: The MAXL algorithm" (a hedged sketch of this meta-update appears after the table)
Open Source Code | Yes | "Source code can be found at https://github.com/lorenmt/maxl."
Open Datasets | Yes | "We evaluated on seven different datasets, with varying sizes and complexities. One of these, CIFAR-100 [18]... For the other six datasets: MNIST [19], SVHN [12], CIFAR-10 [18], ImageNet [7], CINIC-10 [6] and UCF-101 [32]" (illustrative loaders for the torchvision-hosted datasets appear after the table)
Dataset Splits | No | The paper mentions training data and test accuracy but does not specify a validation split, its size, or how it was used for hyper-parameter tuning, noting only that 'We used hyper-parameter search'.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch versions) needed to replicate the experiment.
Experiment Setup | Yes | "For both the primary and auxiliary tasks, we apply the focal loss [22] with a focusing parameter γ = 2, defined as: L(ŷ, y) = −y(1 − ŷ)^γ log(ŷ), ... where α is the learning rate... β is the learning rate; entropy weighting: λ" (the focal loss and these hyper-parameters appear in the sketch after the table)
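
For readers without access to Algorithm 1, the following is a minimal sketch of one MAXL iteration, assuming PyTorch and the third-party `higher` library to differentiate through the inner update. The `maxl_step` function, the two-headed `multi_net`, and the hyper-parameter defaults are illustrative assumptions, not the authors' implementation (see the linked repository for that); the paper's masked softmax over an auxiliary label hierarchy and its exact collapsing-class regulariser are simplified here to a plain softmax and a batch-entropy term.

```python
import torch
import torch.nn.functional as F
import higher  # third-party: differentiable inner-loop optimisation (pip install higher)

def focal_loss(logits, target_probs, gamma=2.0):
    # Focal loss L(ŷ, y) = −y (1 − ŷ)^γ log ŷ, summed over classes, batch-averaged.
    p = F.softmax(logits, dim=1).clamp(1e-8, 1.0)
    return (target_probs * (1.0 - p).pow(gamma) * p.log()).sum(1).neg().mean()

def maxl_step(multi_net, label_net, opt1, opt2, x, y,
              alpha=0.05, gamma=2.0, lam=0.2):
    """One MAXL iteration (hypothetical sketch). multi_net is assumed to
    return (primary_logits, auxiliary_logits); label_net returns auxiliary
    logits used as generated labels."""
    # Generated auxiliary labels: soft assignments over auxiliary classes.
    aux_probs = F.softmax(label_net(x), dim=1)

    # Differentiable lookahead step: θ1⁺ = θ1 − α ∇θ1 [L_pri + L_aux].
    inner_opt = torch.optim.SGD(multi_net.parameters(), lr=alpha)
    with higher.innerloop_ctx(multi_net, inner_opt) as (fnet, diffopt):
        pri_logits, aux_logits = fnet(x)
        y_1hot = F.one_hot(y, pri_logits.size(1)).float()
        inner_loss = (focal_loss(pri_logits, y_1hot, gamma)
                      + focal_loss(aux_logits, aux_probs, gamma))
        diffopt.step(inner_loss)

        # Meta objective: primary-task loss *after* the lookahead step, plus a
        # simple entropy term (weight λ) standing in for the paper's regulariser
        # that prevents the generated labels from collapsing to one class.
        pri_logits_post, _ = fnet(x)
        entropy = -(aux_probs * aux_probs.clamp(min=1e-8).log()).sum(1).mean()
        meta_loss = focal_loss(pri_logits_post, y_1hot, gamma) + lam * entropy
        opt2.zero_grad()
        meta_loss.backward()  # second-order gradient flows back to label_net (θ2)
        opt2.step()

    # Ordinary update of the multi-task network (θ1) on the combined loss.
    pri_logits, aux_logits = multi_net(x)
    y_1hot = F.one_hot(y, pri_logits.size(1)).float()
    loss = (focal_loss(pri_logits, y_1hot, gamma)
            + focal_loss(aux_logits, aux_probs.detach(), gamma))
    opt1.zero_grad()
    loss.backward()
    opt1.step()
    return loss.item(), meta_loss.item()
```

In Algorithm 1, α and β are the learning rates of the multi-task and label-generation networks respectively (β is the learning rate passed to `opt2` above), and λ weights the entropy regulariser; the default values shown are placeholders, not the paper's tuned settings.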
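
As a convenience for reproduction, the snippet below shows how four of the seven evaluation datasets can be fetched with torchvision's built-in dataset classes. This is an assumed setup (the paper does not specify its data-loading code); normalisation and augmentation are omitted, and ImageNet, CINIC-10 and UCF-101 require separate manual downloads.

```python
import torchvision
import torchvision.transforms as T

transform = T.ToTensor()
root = "data"  # hypothetical local download path

mnist    = torchvision.datasets.MNIST(root, train=True, download=True, transform=transform)
svhn     = torchvision.datasets.SVHN(root, split="train", download=True, transform=transform)
cifar10  = torchvision.datasets.CIFAR10(root, train=True, download=True, transform=transform)
cifar100 = torchvision.datasets.CIFAR100(root, train=True, download=True, transform=transform)
```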