Robust Quantization: One Model to Rule Them All

Authors: Moran Shkolnik, Brian Chmiel, Ron Banner, Gil Shomron, Yury Nahshan, Alex Bronstein, Uri Weiser

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate our method's effectiveness on different ImageNet models. A reference implementation accompanies the paper.
Researcher Affiliation | Collaboration | Habana Labs (an Intel company), Caesarea, Israel; Department of Electrical Engineering, Technion, Haifa, Israel
Pseudocode | No | The paper does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | Yes | A reference implementation accompanies the paper.
Open Datasets | Yes | All experiments are conducted using Distiller [Zmora et al., 2019], using the ImageNet dataset [Deng et al., 2009] on CNN architectures for image classification (ResNet-18/50 [He et al., 2015] and MobileNet-V2 [Sandler et al., 2018]).
Dataset Splits | No | The paper mentions using the ImageNet dataset but does not explicitly state the train/validation/test splits used for the experiments.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions using Distiller [Zmora et al., 2019] but does not provide version numbers for it or for any other key software components, libraries, or programming languages.
Experiment Setup | No | The paper describes the KURE regularization term and the quantization methods used (LAPQ, DoReFa, LSQ) but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings.
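
For context on the KURE term noted above: the paper penalizes the deviation of each layer's weight kurtosis from a target value, pushing weight distributions toward a uniform-like shape that stays robust across quantizers and bit-widths. The PyTorch sketch below is illustrative only, assuming standard Conv2d/Linear layers; the function names, the layer filter, and the lambda_kure placeholder are our assumptions, not the authors' reference implementation (1.8 is the kurtosis of a uniform distribution).

    import torch

    def kurtosis(w: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
        # Sample kurtosis of a weight tensor: E[((w - mu) / sigma)^4].
        mu = w.mean()
        sigma = w.std()
        return (((w - mu) / (sigma + eps)) ** 4).mean()

    def kure_penalty(model: torch.nn.Module, k_target: float = 1.8) -> torch.Tensor:
        # Mean squared deviation of each weight tensor's kurtosis from the
        # target; 1.8 matches the kurtosis of a uniform distribution.
        terms = [
            (kurtosis(m.weight) - k_target) ** 2
            for m in model.modules()
            if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))
        ]
        return torch.stack(terms).mean()

    # Illustrative training objective (lambda_kure is an assumed placeholder):
    # loss = task_loss + lambda_kure * kure_penalty(model)

In the paper, this penalty is added to the task loss during training so that the resulting single model tolerates a range of post-training quantization settings.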