Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Sensing Theorems for Unsupervised Learning in Linear Inverse Problems
Authors: Julián Tachella, Dongdong Chen, Mike Davies
JMLR 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform a series of numerical experiments to illustrate the theoretical bounds presented in Sections 3 and 4. ... Figure 8 shows the probability of recovery over 25 Monte Carlo trials for different numbers of measurements m and operators |G|. ... Figure 11a shows the average test peak-signal-to-noise ratio (PSNR) achieved by the trained model for |G| = 1, 10, 20, 30, 40 and m = 1, 100, 200, 300, 400. ... We use the standard MNIST dataset... |
| Researcher Affiliation | Academia | Julián Tachella EMAIL Laboratoire de Physique CNRS, ENSL Lyon, F-69364, France; Dongdong Chen EMAIL School of Engineering, University of Edinburgh, Edinburgh, EH9 3FB, UK; Mike Davies EMAIL School of Engineering, University of Edinburgh, Edinburgh, EH9 3FB, UK |
| Pseudocode | No | The paper describes its algorithms in prose in Section 6 ('Algorithms'), but does not present any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about the release of source code for the methodology described, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We use the standard MNIST dataset which has an approximate box-counting dimension k = 12 (Hein and Audibert, 2005). The dataset contains N = 60000 training samples... |
| Dataset Splits | Yes | The dataset contains N = 60000 training samples, and these are partitioned such that N/|G| different samples are observed via each operator. The test set consists of 10000 samples, which are also randomly divided into |G| parts, one per operator. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used to run its experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper does not provide specific software dependencies or version numbers for the tools and libraries used in the implementation. |
| Experiment Setup | Yes | The networks are trained using the Adam optimizer. ... we use an autoencoder architecture with 3 hidden layers with 1000, 32 and 1000 neurons, as shown in Figure 10. We use relu non-linearities between layers, except at the output of the last layer. |
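The architecture quoted in the Experiment Setup row can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: three hidden layers of 1000, 32, and 1000 neurons with ReLU between layers and a linear output, as the paper states. The 784-unit input/output size (flattened 28×28 MNIST), the He initialization, and the random seed are our assumptions; the paper trains this network with Adam but does not report its hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    # He initialization, a common default for ReLU networks (assumed)
    W = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))
    return W, np.zeros(n_out)

# 784 -> 1000 -> 32 -> 1000 -> 784, matching the described autoencoder
sizes = [784, 1000, 32, 1000, 784]
layers = [init_layer(a, b) for a, b in zip(sizes[:-1], sizes[1:])]

def forward(x):
    # ReLU after every layer except the last, whose output stays linear
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

batch = rng.normal(size=(8, 784))  # dummy batch of 8 flattened images
recon = forward(batch)
print(recon.shape)  # (8, 784)
```

In a full reproduction, `forward` would be wrapped in an autodiff framework and trained with Adam on a pixel-wise reconstruction loss; the sketch only fixes the layer widths and nonlinearity pattern that the paper specifies.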