Model Preserving Compression for Neural Networks

Authors: Jerry Chee, Megan Flynn (née Renz), Anil Damle, Christopher M. De Sa

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the efficacy of our approach with strong empirical performance on a variety of tasks, models, and datasets, from simple one-hidden-layer networks to deep networks on ImageNet.
Researcher Affiliation | Academia | Jerry Chee, Department of Computer Science, Cornell University, jerrychee@cs.cornell.edu; Megan Flynn (née Renz), Department of Physics, Cornell University, mr2268@cornell.edu; Anil Damle, Department of Computer Science, Cornell University, damle@cs.cornell.edu; Christopher De Sa, Department of Computer Science, Cornell University, cdesa@cs.cornell.edu
Pseudocode | Yes | Algorithm 1: Pruning a multilayer network with interpolative decompositions (a sketch of the general idea follows the table)
Open Source Code | Yes | Our code is available at https://github.com/jerry-chee/ModelPreserveCompressionNN
Open Datasets | Yes | To complement our algorithmic developments and theoretical contributions, in Section 7 we demonstrate the efficacy of our method on Atom3D [72], CIFAR-10 [43], and ImageNet [19].
Dataset Splits | Yes | We then remove a class from the pruning set to simulate an under-represented class (but leave it in the train and test sets).
Hardware Specification | No | The paper mentions general compute aspects and computational feasibility (e.g., "computational complexity", "computationally feasible") but does not specify the hardware (e.g., specific GPU or CPU models) used for the experiments.
Software Dependencies | No | The paper mentions general software such as LAPACK and TensorFlow in the references but does not specify version numbers for any software dependencies relevant to reproducing the experiments.
Experiment Setup | No | Full hyper-parameter details can be found in the Appendix and code.
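
For readers unfamiliar with the pruning primitive named in the Pseudocode row, the following is a minimal NumPy/SciPy sketch of the general interpolative-decomposition idea, not the authors' Algorithm 1 or their released code: a column-pivoted QR on a layer's activations selects which neurons to keep, and a least-squares interpolation matrix is folded into the next layer's weights. The function name id_prune_layer and its argument names are illustrative choices, not identifiers from the paper or repository.

import numpy as np
from scipy.linalg import qr, lstsq

def id_prune_layer(Z, W_next, k):
    """Prune one hidden layer via an interpolative decomposition (sketch).

    Z      : (n_samples, n_neurons) hidden activations on a pruning set
    W_next : (n_neurons, n_out) weight matrix of the following layer
    k      : number of neurons to keep
    """
    # Column-pivoted QR on the activations: the first k pivot columns give a
    # well-conditioned subset of neurons to retain.
    _, _, piv = qr(Z, mode="economic", pivoting=True)
    keep = np.sort(piv[:k])

    # Interpolation matrix T with Z ~= Z[:, keep] @ T (least-squares fit),
    # so every pruned neuron is re-expressed through the kept ones.
    T, *_ = lstsq(Z[:, keep], Z)

    # Fold T into the next layer: the pruned network feeds Z[:, keep] forward,
    # and (Z[:, keep] @ T) @ W_next ~= Z @ W_next on the pruning set.
    return keep, T @ W_next

In the paper this kind of selection-plus-correction step is applied layer by layer over a held-out pruning set; the sketch above handles a single layer only and omits biases, nonlinearities between layers, and any fine-tuning.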