Training Image Estimators without Image Ground Truth

Authors: Zhihao Xia, Ayan Chakrabarti

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our method for training networks for compressive-sensing and blind deconvolution, considering both non-blind and blind training for the latter. Our unsupervised framework yields models that are nearly as accurate as those from fully supervised training, despite not having access to any ground-truth images. We validate our method with experiments on image reconstruction from compressive measurements and on blind deblurring of face images, with blind and non-blind training for the latter, and compare to fully-supervised baselines with state-of-the-art performance." (A sketch of the paired-measurement losses this method relies on appears after the table.)
Researcher Affiliation | Academia | Zhihao Xia, Washington University in St. Louis, 1 Brookings Dr., St. Louis, MO 63130 (zhihao.xia@wustl.edu); Ayan Chakrabarti, Washington University in St. Louis, 1 Brookings Dr., St. Louis, MO 63130 (ayan@wustl.edu)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "The source code of our implementation is available at https://projects.ayanc.org/unsupimg/."
Open Datasets | Yes | "We generate a training and validation set, of 100k and 256 images respectively, by taking 363 × 363 crops from images in the ImageNet database [26]. We use all 160k images in the CelebA training set [17] and 1.8k images from the Helen training set [13] to construct our training set, and 2k images from CelebA val and 200 from the Helen training set for our validation set." (A crop-construction sketch appears after the table.)
Dataset Splits | Yes | Same excerpt as Open Datasets above; it states the splits explicitly (100k training / 256 validation ImageNet crops; 160k CelebA + 1.8k Helen images for training vs. 2k CelebA + 200 Helen images for validation).
Hardware Specification | No | The paper does not provide specific hardware details, such as the GPU or CPU models used to run the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | "The weight γ for the self-measurement loss is set to 0.05 based on the validation set." … "The weights α, β, γ are all set to one in this case." … "We use a CNN architecture that stacks two UNets [24], with a residual connection between the two (see supplementary)." … "Then, for unsupervised training with our approach, we choose two kernels for each training image to form a training set of measurement pairs, that are kept fixed (including the added Gaussian noise) across all epochs of training." The two weight settings come from different experiments, which is why they differ ("in this case" marks the latter). (Sketches of the stacked-UNet architecture and the fixed measurement pairs appear after the table.)
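
The training signal described under Research Type comes from pairs of measurements of the same unseen image: each network estimate is re-measured and compared against both its own observation (self-measurement loss, weight γ) and the paired observation taken with the other operator (swap-measurement loss). Below is a minimal PyTorch sketch of that idea; the function name, tensor shapes, the linear-operator assumption, and the unit weight on the swap term are assumptions of this sketch, with only γ = 0.05 taken from the quoted setup.

```python
import torch

def paired_measurement_loss(net, y1, phi1, y2, phi2, gamma=0.05):
    """Unsupervised loss from two measurements (y1, y2) of the same
    unknown image, taken with linear operators phi1, phi2 of shape
    (m, n). `net(y, phi)` maps a (B, m) measurement batch to (B, n)
    image estimates. Hypothetical sketch, not the authors' code."""
    x1 = net(y1, phi1)  # estimate from the first observation
    x2 = net(y2, phi2)  # estimate from the second observation

    # Swap-measurement loss: re-measuring each estimate with the
    # *other* operator should reproduce the other observation.
    swap = ((x1 @ phi2.T - y2) ** 2).mean() + ((x2 @ phi1.T - y1) ** 2).mean()

    # Self-measurement loss (weight gamma = 0.05 per the paper): each
    # estimate should also reproduce its *own* observation.
    own = ((x1 @ phi1.T - y1) ** 2).mean() + ((x2 @ phi2.T - y2) ** 2).mean()

    return swap + gamma * own
```

As a sanity check, a pseudo-inverse baseline such as `net = lambda y, phi: y @ torch.linalg.pinv(phi).T` zeroes the self term (for a full-row-rank phi) but not the swap term, which illustrates why the swap loss carries the real training signal.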
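The dataset rows quote fixed 363 × 363 crops from ImageNet (100k training, 256 validation). A small sketch of how such a crop set might be assembled follows; the file-list input, grayscale conversion, and seeding are illustrative assumptions, with only the crop size and counts coming from the paper.

```python
import random
from PIL import Image

def make_crops(image_paths, num_crops, size=363, seed=0):
    """Collect `num_crops` random fixed-size crops, one per sampled
    image. Grayscale conversion and rejection of undersized images
    are assumptions of this sketch."""
    rng = random.Random(seed)
    crops = []
    while len(crops) < num_crops:
        img = Image.open(rng.choice(image_paths)).convert("L")
        w, h = img.size
        if w < size or h < size:
            continue  # image too small for a 363x363 crop
        x0 = rng.randrange(w - size + 1)
        y0 = rng.randrange(h - size + 1)
        crops.append(img.crop((x0, y0, x0 + size, y0 + size)))
    return crops

# e.g. train = make_crops(train_paths, 100_000); val = make_crops(val_paths, 256, seed=1)
```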
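The quoted architecture stacks two UNets with a residual connection between them. A minimal sketch follows, assuming a generic `unet_factory` callable that builds a standard UNet; the placement of the residual (first UNet's output added to the second's) is one plausible reading, and the exact depths and channel counts are in the paper's supplementary.

```python
import torch.nn as nn

class StackedUNet(nn.Module):
    """Two UNets in sequence with a residual connection between the
    two, per the quoted description. `unet_factory` is a stand-in
    for any standard UNet constructor (an assumption of this sketch)."""
    def __init__(self, unet_factory):
        super().__init__()
        self.unet1 = unet_factory()
        self.unet2 = unet_factory()

    def forward(self, x):
        h = self.unet1(x)
        # Residual connection: the second UNet refines the first's output.
        return self.unet2(h) + h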
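Finally, the last excerpt fixes two blurred, noisy observations per training image once and reuses them across all epochs. Below is a sketch under assumed conventions (a grayscale (H, W) tensor, odd square kernels, and a caller-chosen noise level, none of which the quote specifies).

```python
import random
import torch
import torch.nn.functional as F

def make_fixed_pair(image, kernels, noise_std, rng):
    """Pick two blur kernels for `image` and synthesize a fixed pair
    of noisy observations, generated once and reused across epochs.
    `image`: (H, W) tensor; `kernels`: list of (k, k) tensors."""
    k1, k2 = rng.sample(kernels, 2)
    pair = []
    for k in (k1, k2):
        # conv2d is cross-correlation, which suffices for a sketch;
        # padding keeps the blurred image at the input size (odd k).
        blurred = F.conv2d(image[None, None], k[None, None],
                           padding=k.shape[-1] // 2)
        pair.append((blurred + noise_std * torch.randn_like(blurred))[0, 0])
    return pair[0], pair[1], (k1, k2)

# e.g. y1, y2, ks = make_fixed_pair(img, kernel_bank, 0.01, random.Random(0))
```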