LOSSY COMPRESSION WITH DISTRIBUTION SHIFT AS ENTROPY CONSTRAINED OPTIMAL TRANSPORT
Authors: Huan Liu, George Zhang, Jun Chen, Ashish J Khisti
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide experimental results by training deep learning end-to-end compression systems for performing denoising on SVHN and super-resolution on MNIST, suggesting consistency with our theoretical results. Our setup is unsupervised and, to the best of our knowledge, the first to integrate both compression and restoration at once using deep learning. |
| Researcher Affiliation | Academia | Huan Liu¹, George Zhang², Jun Chen¹, Ashish Khisti²; ¹McMaster University, ²University of Toronto. {liuh127, chenjun}@mcmaster.ca; gq.zhang@mail.utoronto.ca; akhisti@ece.utoronto.ca |
| Pseudocode | No | The paper describes theoretical formulations and experimental setups but does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code for the methodology or a link to a code repository. |
| Open Datasets | Yes | Image super-resolution is conducted on MNIST (LeCun et al., 1998). Image denoising is conducted on SVHN (Netzer et al., 2011). |
| Dataset Splits | No | The paper mentions training ('The training for end-to-end network lasts for 50 epochs') and test-time evaluation, but it does not provide specific training, validation, or test splits (percentages or sample counts), nor does it cite predefined splits for reproducibility beyond naming the datasets. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used to run its experiments, such as GPU models, CPU models, or cloud computing instance types. |
| Software Dependencies | No | The paper mentions the Adam optimizer (Kingma & Ba, 2014) and architectures such as U-Nets and the Wasserstein GAN framework, but it does not specify versions of any programming languages, libraries, or other software components (e.g., Python, PyTorch, or TensorFlow versions) needed for reproducibility. |
| Experiment Setup | Yes | Training of the end-to-end network lasts for 50 epochs. λ in Eq. (15) is fixed at 1e-3 across all rates. The learning rate is initialized to 0.0001 and is decayed by a factor of 5 after 30 epochs. The Adam optimizer (Kingma & Ba, 2014) is used. Table 3 gives the detailed training settings, and Tables 3 and 4 can be reused to reproduce the image-denoising experiments. (These hyperparameters are collected in the sketch after the table.) |
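
The hyperparameters quoted in the Experiment Setup row can be gathered into a short training-configuration sketch. This is a minimal PyTorch illustration, assuming a generic end-to-end compression model and a Lagrangian objective of the form distortion + λ·rate; the placeholder model, the synthetic data loader, and the `rate_distortion_loss` helper are assumptions for illustration, not the authors' code (no source code was released). Only the reported settings (50 epochs, Adam, learning rate 1e-4 decayed by a factor of 5 after 30 epochs, λ = 1e-3) come from the paper.

```python
# Training-configuration sketch based on the settings reported in the paper.
# The model, data, and loss helper below are hypothetical placeholders; only
# the hyperparameters are taken from the paper's experiment setup.
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

LAMBDA = 1e-3          # weight on the rate (entropy) term, fixed across all rates
EPOCHS = 50
LR_INIT = 1e-4
LR_DECAY_EPOCH = 30    # decay the learning rate after 30 epochs
LR_DECAY_FACTOR = 5    # divide the learning rate by 5

# Synthetic stand-in for the MNIST/SVHN training data used in the paper.
train_loader = DataLoader(
    TensorDataset(torch.rand(256, 1, 28, 28), torch.zeros(256)), batch_size=32
)

# Placeholder for the end-to-end compression network (encoder/decoder).
model = nn.Sequential(
    nn.Flatten(), nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 784)
)
optimizer = optim.Adam(model.parameters(), lr=LR_INIT)
# StepLR with gamma = 1/5 reproduces "decayed by a factor of 5 after 30 epochs".
scheduler = optim.lr_scheduler.StepLR(
    optimizer, step_size=LR_DECAY_EPOCH, gamma=1.0 / LR_DECAY_FACTOR
)

def rate_distortion_loss(x_hat, x, rate_bits):
    """Hypothetical Lagrangian objective: distortion + lambda * rate."""
    distortion = nn.functional.mse_loss(x_hat, x)
    return distortion + LAMBDA * rate_bits

for epoch in range(EPOCHS):
    for x, _ in train_loader:
        optimizer.zero_grad()
        x_hat = model(x)
        rate_bits = torch.tensor(0.0)  # placeholder for an entropy-model estimate
        loss = rate_distortion_loss(x_hat, x.flatten(1), rate_bits)
        loss.backward()
        optimizer.step()
    scheduler.step()
```

The step scheduler is one plausible reading of "decayed by a factor of 5 after 30 epochs"; the paper does not state whether the decay is applied once or repeatedly.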