Task Discovery: Finding the Tasks that Neural Networks Generalize on
Authors: Andrei Atanov, Andrei Filatov, Teresa Yeo, Ajay Sohmshetty, Amir Zamir
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We propose a task discovery framework that automatically finds examples of such tasks via optimizing a generalization-based quantity called agreement score. We demonstrate that one set of images can give rise to many tasks on which neural networks generalize well. These are the questions we address in this paper. |
| Researcher Affiliation | Academia | Andrei Atanov Andrei Filatov Teresa Yeo Ajay Sohmshetty Amir Zamir Swiss Federal Institute of Technology (EPFL) |
| Pseudocode | No | The paper includes a diagram illustrating the meta-optimization process (Fig. 3-left) but no structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | https://taskdiscovery.epfl.ch (on page 1). Additionally, in Section A, item 3a states: 'Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes]' |
| Open Datasets | Yes | We take the set of images X from the CIFAR-10 dataset [39] |
| Dataset Splits | No | The paper mentions splitting the original training set into 45K images for Xtr and 5K for Xte, describing them as a train and a test set, but does not specify a separate validation split. |
| Hardware Specification | Yes | This allows us to run the discovery process for the ResNet-18 model on the CIFAR-10 dataset using a single 40GB A100. |
| Software Dependencies | No | The paper mentions 'PyTorch [63]' and 'Adam [34] optimizer' but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | We use ResNet-18 [24] architecture and Adam [34] optimizer as the learning algorithm A, unless otherwise specified. We measure the AS by training two networks for 100 epochs, which is enough to achieve zero training error for all considered tasks. To make it feasible, we limit the number of inner-loop optimization steps to 50. (An illustrative sketch of the AS measurement follows the table.) |
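
For context, the agreement score (AS) referenced in the Experiment Setup row measures how often two networks, trained independently on the same task (i.e., the same labelling of the images), make the same prediction on held-out images. The sketch below is a minimal, hypothetical illustration of that measurement under the quoted setup (ResNet-18, Adam, two training runs); the function names, batch size, and learning rate are assumptions for illustration, not the authors' released code.

```python
# Minimal sketch: agreement score (AS) between two independently trained networks.
# Assumes x_tr/x_te are CIFAR-sized image tensors and y_tr holds task labels.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18


def train_network(x_tr, y_tr, num_classes=2, epochs=100, lr=1e-3, device="cuda"):
    """Train one ResNet-18 with Adam on the task defined by labels y_tr."""
    model = resnet18(num_classes=num_classes).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loader = DataLoader(TensorDataset(x_tr, y_tr), batch_size=256, shuffle=True)
    model.train()
    for _ in range(epochs):
        for xb, yb in loader:
            xb, yb = xb.to(device), yb.to(device)
            loss = F.cross_entropy(model(xb), yb)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model


@torch.no_grad()
def agreement_score(x_tr, y_tr, x_te, device="cuda"):
    """AS = fraction of test images on which two independently trained nets agree."""
    preds = []
    for _ in range(2):  # the two runs differ only in random initialization / data order
        model = train_network(x_tr, y_tr, device=device)
        model.eval()
        preds.append(model(x_te.to(device)).argmax(dim=1))
    return (preds[0] == preds[1]).float().mean().item()
```

Note that this sketch only measures the AS for a fixed task; the paper's discovery framework additionally optimizes over tasks (via the meta-optimization shown in their Fig. 3-left), which is not reproduced here.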