On the Existence of Universal Lottery Tickets
Authors: Rebekka Burkholz, Nilanjana Laha, Rajarshi Mukherjee, Alkis Gotovos
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To showcase the practical relevance of our main theorems, we conduct two types of experiments on a machine with Intel(R) Core(TM) i9-10850K CPU @ 3.60GHz processor and GPU NVIDIA GeForce RTX 3080 Ti. |
| Researcher Affiliation | Academia | Rebekka Burkholz, CISPA Helmholtz Center for Information Security (burkholz@cispa.de); Nilanjana Laha and Rajarshi Mukherjee, Harvard T.H. Chan School of Public Health (rmukherj@hsph.harvard.edu); Alkis Gotovos, MIT CSAIL (alkisg@mit.edu) |
| Pseudocode | No | The paper describes methods and proofs but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Code for the experiments is publicly available in the GitHub repository UniversalLT, which can be accessed at the following URL: https://github.com/RelationalML/UniversalLT. |
| Open Datasets | Yes | In the second type of experiments, we train our mother networks with edge-popup (Ramanujan et al., 2020) on MNIST (LeCun & Cortes, 2010) for 100 epochs based on SGD with momentum 0.9, weight decay 0.0001, batch size 128, and target sparsity 0.5. |
| Dataset Splits | No | The paper mentions training on MNIST but does not specify how the dataset was split into training, validation, and test sets, or provide percentages/counts for these splits. |
| Hardware Specification | Yes | To showcase the practical relevance of our main theorems, we conduct two types of experiments on a machine with Intel(R) Core(TM) i9-10850K CPU @ 3.60GHz processor and GPU NVIDIA GeForce RTX 3080 Ti. |
| Software Dependencies | No | The paper mentions training on MNIST with SGD and edge-popup but does not provide specific version numbers for any software libraries or dependencies (e.g., Python, PyTorch, TensorFlow, CUDA versions). |
| Experiment Setup | Yes | In the second type of experiments, we train our mother networks with edge-popup (Ramanujan et al., 2020) on MNIST (LeCun & Cortes, 2010) for 100 epochs based on SGD with momentum 0.9, weight decay 0.0001, batch size 128, and target sparsity 0.5. |
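
For orientation, the training configuration quoted in the Experiment Setup row maps onto a standard PyTorch SGD loop. The sketch below is an illustrative assumption, not the authors' released code: the single-layer placeholder model and the learning rate of 0.1 are invented for the example (the quote does not state a learning rate), and the edge-popup score-based masking that enforces the target sparsity of 0.5 is omitted; the actual implementation lives in the linked UniversalLT repository.

```python
# Illustrative sketch of the quoted setup (assumes PyTorch and torchvision).
# The model and learning rate are placeholders; edge-popup masking is not shown.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# MNIST training data with batch size 128, as quoted
train_set = datasets.MNIST(root="data", train=True, download=True,
                           transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)

# Placeholder model; the paper instead trains "mother networks" pruned by edge-popup
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

# SGD with momentum 0.9 and weight decay 0.0001, as quoted; lr=0.1 is an assumption
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(100):  # 100 epochs, as quoted
    for x, y in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```

Note that under edge-popup, a target sparsity of 0.5 means roughly half of the edges are retained via per-layer score ranking rather than by training the weights directly; reproducing that step requires the masking logic from the repository.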