OSOA: One-Shot Online Adaptation of Deep Generative Models for Lossless Compression
Authors: Chen Zhang, Shifeng Zhang, Fabio Maria Carlucci, Zhenguo Li
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that vanilla OSOA can save significant time versus training bespoke models and space versus using one model for all targets. |
| Researcher Affiliation | Industry | Chen Zhang Shifeng Zhang Fabio M. Carlucci Zhenguo Li Huawei Noah's Ark Lab {chenzhang10, zhangshifeng4, li.zhenguo}@huawei.com |
| Pseudocode | Yes | Algorithm 1 (One Shot Online Adaptation: Encoding and Decoding) and Algorithm 2 (the encode_or_cache method in OSOA Encoding); a hedged sketch of this loop is given after the table. |
| Open Source Code | No | The paper does not contain an explicit statement or link providing access to open-source code for the described methodology. |
| Open Datasets | Yes | The datasets for base model pretraining are the renowned natural image datasets CIFAR10 [28] and ImageNet32 [7], including images of size 32×32. We obtain three target datasets randomly sampled from the large image dataset Yahoo Flickr Creative Commons 100 Million (YFCC100m) [46] to test the compression performance. |
| Dataset Splits | Yes | The data splitting strategy is the same as Stage 2. For FineTune v1, we fine tune the pretrained model for 2 epochs... For FineTune v2, we fine tune the pretrained model for 4 epochs for HiLLoC (and IAF RVAE) and 3 epochs for IDF++... For FineTune v3, we fine tune the pretrained model for 20 epochs... We quadruple the batch size as the image size decreases, i.e., batch size 256/64/16 in HiLLoC and batch size 48/12/3 in IDF++, for SET32/64/128 respectively. |
| Hardware Specification | Yes | We use an Nvidia V100 32GB GPU for HiLLoC (and IAF RVAE) and an Nvidia V100 16GB GPU for IDF++. |
| Software Dependencies | Yes | The time ratio we measured with/without the determinism is 1.98 (HiLLoC) in TensorFlow 1.14 [4] with tensorflow-determinism 0.3.0 [3] and 1.34 (IDF++) in PyTorch 1.6 [2]; a hedged determinism-setup sketch follows the table. |
| Experiment Setup | Yes | For FineTune v1, we fine tune the pretrained model for 2 epochs, as the whole OSOA Encoding & Decoding procedures involve 2 epochs of adaptations in total. For FineTune v2, we fine tune the pretrained model for 4 epochs for HiLLoC (and IAF RVAE) and 3 epochs for IDF++... For FineTune v3, we fine tune the pretrained model for 20 epochs... We quadruple the batch size as the image size decreases, i.e., batch size 256/64/16 in HiLLoC and batch size 48/12/3 in IDF++, for SET32/64/128 respectively. |
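
For orientation, the sketch below shows what the loop referenced by Algorithm 1/2 could look like in PyTorch-style pseudocode. The model API (`model.to_symbols`, `model.nll`) and coder API (`coder.push`) are hypothetical placeholders, not the paper's code. It illustrates three points from the pseudocode rows above: each batch is coded with the model state before the adaptation step, one deterministic optimizer step is taken per batch so the decoder can replay the same updates, and encode_or_cache chooses between pushing symbols to the stack-like ANS coder immediately or caching them and pushing in reverse at the end.

```python
import copy
import torch


def osoa_encode(pretrained_model, batches, coder, lr=1e-4, cache_symbols=True):
    """Sketch of OSOA encoding: adapt a shared pretrained model online, one
    deterministic step per batch, coding each batch with the model state
    *before* that step so the decoder can mirror the adaptation."""
    model = copy.deepcopy(pretrained_model)          # both sides start from the same base model
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    cached = []                                      # symbols held back for reverse-order pushing

    for batch in batches:
        # encode_or_cache: push to the coder now, or cache so the stack-like
        # ANS coder receives batches in reverse and decoding runs forwards.
        symbols = model.to_symbols(batch)            # hypothetical: discretise + pmf lookup
        if cache_symbols:
            cached.append(symbols)
        else:
            coder.push(symbols)

        # One deterministic adaptation step on the batch just coded.
        optimizer.zero_grad()
        model.nll(batch).backward()                  # hypothetical negative log-likelihood
        optimizer.step()                             # must be bit-exact reproducible

    for symbols in reversed(cached):                 # last-in-first-out push
        coder.push(symbols)
    return coder
```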
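
The Software Dependencies row does not spell out which determinism switches produced the quoted 1.98x and 1.34x time ratios, so the snippet below is only a plausible configuration for those framework versions, not the authors' verified setup: the documented `patch()` entry point of tensorflow-determinism 0.3.0 for TensorFlow 1.14, and the standard cuDNN flags available in PyTorch 1.6.

```python
def enable_tf_determinism():
    # tensorflow-determinism 0.3.0 patches TF 1.14 ops to run deterministically.
    from tfdeterminism import patch
    patch()


def enable_torch_determinism(seed=0):
    # Standard PyTorch 1.6 determinism flags; the paper's exact settings are not given.
    import torch
    torch.manual_seed(seed)
    torch.backends.cudnn.deterministic = True   # force deterministic cuDNN kernels
    torch.backends.cudnn.benchmark = False      # disable non-deterministic autotuning
```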