Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Quantum Implicit Neural Representations
Authors: Jiaming Zhao, Wenbo Qiao, Peng Zhang, Hui Gao
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Lastly, we conducted experiments in signal representation, image super-resolution, and image generation tasks to show the superior performance of QIREN compared to state-of-the-art (SOTA) models. |
| Researcher Affiliation | Academia | ¹School of New Media and Communication, Tianjin University, China; ²College of Intelligence and Computing, Tianjin University, China. |
| Pseudocode | No | The paper includes architectural diagrams (e.g., Figure 3) and mathematical expressions but does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/GGorMM1/QIREN. |
| Open Datasets | Yes | In the image representation task, we utilize three popular images: Astronaut, Camera, and Coffee (Van der Walt et al., 2014). [...] FFHQ is a high-resolution dataset of 70k human faces (Karras et al., 2019) and CelebA-HQ is a high-quality version of CelebA that consists of 30k images (Karras et al., 2017). |
| Dataset Splits | No | The paper describes the datasets used and training details (e.g., number of epochs) but does not explicitly specify the division of data into training, validation, and test sets with percentages, sample counts, or specific citations to predefined splits for reproduction. For instance, it mentions downsampling images for experimentation but not the precise splitting strategy. |
| Hardware Specification | No | The paper states, "All our experiments are conducted on a simulation platform, utilizing the PennyLane (Bergholm et al., 2018) and TorchQuantum (Wang et al., 2022)." It does not provide any specific details about the underlying hardware (e.g., CPU, GPU models, memory) used for this simulation platform. |
| Software Dependencies | No | The paper mentions using "PennyLane (Bergholm et al., 2018) and TorchQuantum (Wang et al., 2022)". However, it does not provide specific version numbers for these software components, which is required for a reproducible description of software dependencies. |
| Experiment Setup | Yes | We use MSE as the loss function and use Adam optimizers with the parameters β1 = 0.9, β2 = 0.999 and ϵ = 1e-8. The models are trained for 600 epochs for Astronaut, 300 epochs for Camera, Coffee, or sound. [...] QIREN includes three Hybrid layers and one output linear layer. Each Hybrid Layer consists of a linear layer with a hidden dimension of 8, a Batch Norm layer and a quantum circuit with 8 qubits. |
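The quoted setup can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' implementation: the 8-qubit quantum circuit is replaced by a classical placeholder module (a real run would build it with PennyLane or TorchQuantum), and the input/output dimensions (2-D coordinates to RGB) are assumed rather than taken from the paper.

```python
import torch
from torch import nn

class QuantumCircuitPlaceholder(nn.Module):
    """Stand-in for the 8-qubit quantum circuit; NOT a real circuit.

    A real QIREN layer would wrap a PennyLane/TorchQuantum circuit here.
    """
    def __init__(self, dim: int = 8):
        super().__init__()
        self.proj = nn.Linear(dim, dim)  # illustrative trainable map

    def forward(self, x):
        return torch.cos(self.proj(x))  # illustrative nonlinearity only

class HybridLayer(nn.Module):
    """Linear (hidden dim 8) -> BatchNorm -> 'quantum circuit', as quoted."""
    def __init__(self, in_dim: int, hidden: int = 8):
        super().__init__()
        self.linear = nn.Linear(in_dim, hidden)
        self.bn = nn.BatchNorm1d(hidden)
        self.circuit = QuantumCircuitPlaceholder(hidden)

    def forward(self, x):
        return self.circuit(self.bn(self.linear(x)))

# Three Hybrid layers plus one output linear layer, per the quoted setup.
# Input dim 2 (pixel coordinates) and output dim 3 (RGB) are assumptions.
model = nn.Sequential(
    HybridLayer(2),
    HybridLayer(8),
    HybridLayer(8),
    nn.Linear(8, 3),
)

loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(
    model.parameters(), betas=(0.9, 0.999), eps=1e-8
)
```

The optimizer parameters (β1 = 0.9, β2 = 0.999, ϵ = 1e-8) match PyTorch's Adam defaults, so the paper's choice amounts to the stock configuration.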