AutoReCon: Neural Architecture Search-based Reconstruction for Data-free Compression
Authors: Baozhou Zhu, Peter Hofstee, Johan Peltenburg, Jinho Lee, Zaid Al-Ars
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that using generators discovered by the AutoReCon method always improves the performance of data-free compression. |
| Researcher Affiliation | Collaboration | 1Delft University of Technology, Delft, The Netherlands 2National University of Defense Technology, Changsha, China 3IBM Austin, Austin, TX, USA 4Yonsei University, Seoul, Korea |
| Pseudocode | Yes | Algorithm 1: The AutoReCon method for data-free compression |
| Open Source Code | No | The paper does not include an unambiguous statement or a direct link to the source code for the methodology described. |
| Open Datasets | Yes | We use the same experimental settings as the GDFQ method... for both CIFAR-100 and ImageNet classification |
| Dataset Splits | No | The paper mentions 'L_r^train and L_r^val refer to the reconstruction loss function on the reconstructed training dataset and the reconstructed validation dataset, respectively.' but does not provide specific train/validation/test dataset splits (e.g., percentages or sample counts) for reproducibility of the overall experiment. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used to run its experiments (e.g., specific GPU models, CPU models, or memory details). |
| Software Dependencies | No | The paper does not provide specific version numbers for ancillary software components (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | No | The paper states 'We use the same experimental settings as the GDFQ method' but does not explicitly list the specific hyperparameter values or training configurations within the paper itself. |