Defects of Convolutional Decoder Networks in Frequency Representation
Authors: Ling Tang, Wen Shen, Zhanpeng Zhou, Yuefeng Chen, Quanshi Zhang
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We have conducted experiments, which have successfully verified such defects in different multi-layer decoder networks with ReLU layers. This proves the trustworthiness of our theorems. |
| Researcher Affiliation | Collaboration | 1Shanghai Jiao Tong University. 2Alibaba Group. 3Quanshi Zhang is the corresponding author. He is with the Department of Computer Science and Engineering, the John Hopcroft Center, at the Shanghai Jiao Tong University, China. |
| Pseudocode | No | The paper contains mathematical derivations and theoretical proofs, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks or figures. |
| Open Source Code | No | The paper does not contain any explicit statement about providing open-source code for the described methodology, nor does it include any links to a code repository. |
| Open Datasets | Yes | The autoencoder was trained on the Tiny-ImageNet dataset (Le & Yang, 2015) using the mean squared error (MSE) loss for image reconstruction. Results on more datasets in Appendix C.1 yielded similar conclusions. We also used CIFAR-10 (Krizhevsky et al., 2009) and Broden (Bau et al., 2017) datasets. |
| Dataset Splits | No | The paper states that models were trained on datasets like Tiny-ImageNet, CIFAR-10, and Broden, but it does not explicitly provide details about specific training, validation, and test data splits (e.g., percentages or sample counts). |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or cloud computing instances used for running the experiments. |
| Software Dependencies | No | The paper mentions software components like 'VGG-16' and 'ReLU layers' but does not specify any version numbers for these or other software dependencies. |
| Experiment Setup | Yes | Each convolutional layer applied zero-paddings and was followed by a ReLU layer. Each convolutional layer contained 16 convolutional kernels (kernel size was 3×3) with 16 bias terms. We set the stride size of the convolution operation to 1. The autoencoder was trained on the Tiny-ImageNet dataset (Le & Yang, 2015) using the mean squared error (MSE) loss for image reconstruction. |
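The experiment-setup details reported above (16 kernels of size 3×3 per convolutional layer, stride 1, zero-padding, bias terms, each layer followed by ReLU, trained with MSE reconstruction loss) can be sketched as a minimal PyTorch decoder. This is an illustrative reconstruction, not the authors' code: the number of layers, the input/output channel counts, and the names `ToyDecoder` and `make_conv_block` are assumptions, since the paper does not release an implementation.

```python
import torch
import torch.nn as nn


def make_conv_block(in_channels: int, out_channels: int = 16) -> nn.Sequential:
    """One conv layer as described: 16 kernels, 3x3, stride 1,
    zero-padding (padding=1 preserves spatial size), bias, then ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=3,
                  stride=1, padding=1, bias=True),
        nn.ReLU(),
    )


class ToyDecoder(nn.Module):
    """Hypothetical multi-layer decoder with ReLU layers.
    Depth and the 3-channel RGB output are assumptions."""

    def __init__(self, in_channels: int = 16, num_layers: int = 3):
        super().__init__()
        layers = [make_conv_block(in_channels)]
        layers += [make_conv_block(16) for _ in range(num_layers - 2)]
        # Final conv maps back to 3 image channels for reconstruction.
        layers.append(nn.Conv2d(16, 3, kernel_size=3, stride=1, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Training would use MSE loss for image reconstruction, as the paper states:
# loss = nn.MSELoss()(decoder(features), target_images)
```

Because stride is 1 and zero-padding matches the 3×3 kernel, every layer preserves spatial resolution, so a feature map of size H×W decodes to an H×W image.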