A Study on the Ramanujan Graph Property of Winning Lottery Tickets
Authors: Bithika Pal, Arindam Biswas, Sudeshna Kolay, Pabitra Mitra, Biswajit Basu
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | It is empirically observed that winning ticket networks preserve the Ramanujan graph property and achieve high accuracy even when the layers are sparse (a spectral-check sketch of this property follows the table). Experimental results are presented for the Lenet architecture on the MNIST dataset and the Conv4 architecture on the CIFAR10 dataset. |
| Researcher Affiliation | Academia | (1) Department of Computer Science and Engineering, Indian Institute of Technology, Kharagpur, India; (2) Department of Mathematical Sciences, University of Copenhagen, Denmark; (3) School of Civil, Structural and Environmental Engineering, Trinity College, Dublin, Ireland. |
| Pseudocode | Yes | Algorithm 1: Layer-wise Connectivity Based Pruning (a hypothetical pruning sketch follows the table). |
| Open Source Code | No | The paper states: 'We have used the code from https://github.com/facebookresearch/open_lth to generate all the results for LTH explanation. For the experiment of comparison of different pruning algorithms, we have modified the code from https://github.com/ganguli-lab/Synaptic-Flow.' This indicates they used/modified existing codebases, but does not explicitly state that the code for their own proposed methodology (Algorithm 1) is released. |
| Open Datasets | Yes | We have used the MNIST and CIFAR10 datasets in our study. |
| Dataset Splits | No | The paper mentions using MNIST and CIFAR10 datasets and reports hyper-parameter values like 'Train-Iterations' and 'Batch size' (Table 2), and discusses pruning percentiles. However, it does not explicitly provide specific train/validation/test dataset splits (e.g., 80/10/10 split or absolute sample counts for each partition) to reproduce the data partitioning. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments, only general statements about experimental setup. |
| Software Dependencies | No | The paper mentions using and modifying code from 'https://github.com/facebookresearch/open_lth' and 'https://github.com/ganguli-lab/Synaptic-Flow', but does not provide specific version numbers for software dependencies like Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | Table 2 reports the hyper-parameter values: Optimizer (Adam), Train-Iterations (50000/25000), Batch size (60/60), Learning Rate (0.0012/0.0003), Pruning epochs (50/50), and Initialization (Kaiming Normal). A sketch wiring these values into a training setup follows the table. |
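
The notes above refer to the Ramanujan graph property of pruned layers. Below is a minimal sketch of such a spectral check, assuming each pruned layer is viewed as a bipartite graph whose biadjacency matrix is the binary pruning mask and that the bipartite bound sqrt(d_in − 1) + sqrt(d_out − 1) is applied with average degrees; the paper's exact criterion for irregular graphs may differ.

```python
# Minimal sketch of a Ramanujan-style spectral check for one pruned layer.
# The layer is viewed as a bipartite graph whose biadjacency matrix is the
# binary pruning mask; eigenvalues of the bipartite adjacency matrix are the
# +/- singular values of the mask, so the check compares the second-largest
# singular value against sqrt(d_in - 1) + sqrt(d_out - 1) built from average
# degrees. This is an illustrative approximation, not the paper's exact test.
import numpy as np

def is_ramanujan_like(mask: np.ndarray) -> bool:
    """mask: (out_features, in_features) 0/1 matrix of surviving weights."""
    d_out = mask.sum(axis=1).mean()   # average degree of output nodes
    d_in = mask.sum(axis=0).mean()    # average degree of input nodes
    if d_in <= 1 or d_out <= 1:
        return False                  # too sparse for the bound to apply
    sigma = np.linalg.svd(mask.astype(float), compute_uv=False)
    bound = np.sqrt(d_in - 1.0) + np.sqrt(d_out - 1.0)
    return sigma[1] <= bound          # sigma[0] is the trivial eigenvalue
```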
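
Algorithm 1 in the paper is titled "Layer-wise Connectivity Based Pruning"; only the name is quoted above. The following is a hypothetical illustration of layer-wise magnitude pruning with a simple connectivity constraint (every input and output neuron keeps its strongest connection), not a reproduction of the paper's algorithm.

```python
# Hypothetical sketch of layer-wise pruning that retains connectivity:
# magnitude pruning applied per layer, with the strongest connection of each
# input and output neuron always kept so no node is isolated.
import numpy as np

def prune_layer(weights: np.ndarray, keep_fraction: float) -> np.ndarray:
    """Return a 0/1 mask for a (out_features, in_features) weight matrix."""
    abs_w = np.abs(weights)
    k = max(1, int(keep_fraction * abs_w.size))
    threshold = np.sort(abs_w.ravel())[::-1][k - 1]
    mask = (abs_w >= threshold).astype(float)
    # Keep each output neuron's strongest incoming weight ...
    rows = np.arange(abs_w.shape[0])
    mask[rows, abs_w.argmax(axis=1)] = 1.0
    # ... and each input neuron's strongest outgoing weight.
    cols = np.arange(abs_w.shape[1])
    mask[abs_w.argmax(axis=0), cols] = 1.0
    return mask
```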
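
Finally, a minimal sketch of the reported experiment setup for the MNIST/Lenet setting. It assumes the first value of each paired Table 2 entry (Train-Iterations 50000, Learning Rate 0.0012) belongs to Lenet on MNIST, following the order in which the architectures are listed, and uses a Lenet-300-100-style MLP as a stand-in architecture.

```python
# Sketch of the reported hyper-parameters: Adam, learning rate 0.0012,
# batch size 60, Kaiming Normal initialization. The architecture below is a
# stand-in, not the paper's exact Lenet definition.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def kaiming_init(module: nn.Module) -> None:
    if isinstance(module, (nn.Linear, nn.Conv2d)):
        nn.init.kaiming_normal_(module.weight)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

model = nn.Sequential(              # Lenet-300-100-style MLP stand-in
    nn.Flatten(), nn.Linear(784, 300), nn.ReLU(),
    nn.Linear(300, 100), nn.ReLU(), nn.Linear(100, 10),
)
model.apply(kaiming_init)
optimizer = torch.optim.Adam(model.parameters(), lr=0.0012)
train_loader = DataLoader(
    datasets.MNIST("data", train=True, download=True,
                   transform=transforms.ToTensor()),
    batch_size=60, shuffle=True,
)
```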