Dictionary Learning Based on Sparse Distribution Tomography

Authors: Pedram Pad, Farnood Salehi, Elisa Celis, Patrick Thiran, Michael Unser

ICML 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility assessment — each variable below is listed with its assessed result and the LLM response that supports it.

Research Type: Experimental
"We evaluate our algorithm by performing two types of experiments: image inpainting and image denoising. In both cases, we find that our approach is competitive with state-of-the-art dictionary learning techniques."

Researcher Affiliation: Academia
"¹Biomedical Imaging Group, EPFL, Lausanne, Switzerland. ²Computer Communications and Applications Laboratory 3, EPFL, Lausanne, Switzerland."

Pseudocode: Yes
"The pseudocode of our dictionary learning method is given in Algorithm 1."

Open Source Code: No
The paper mentions using the Python package SPAMS and provides its URL, but it neither states that the code for the method described in this paper is open source nor links to the authors' own implementation.

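For context, SPAMS is a publicly available sparse-coding toolbox; the paper points to it but does not describe how it was invoked. The sketch below is a generic illustration of its documented Python interface as we recall it, not the paper's pipeline; the patch matrix Y, dictionary size K, lambda1, and iteration count are all hypothetical values.

```python
import numpy as np
import spams  # the SPAMS toolbox referenced in the paper

# Hypothetical data: 49-dimensional vectorized 7x7 patches as columns.
# SPAMS expects Fortran-ordered float64 arrays.
Y = np.asfortranarray(np.random.randn(49, 208), dtype=np.float64)

# Learn a dictionary; K and lambda1 are illustrative, not paper settings.
D = spams.trainDL(Y, K=100, lambda1=0.15, iter=200)

# Sparse-code the patches against the learned dictionary.
A = spams.lasso(Y, D=D, lambda1=0.15)  # sparse coefficient matrix
print(D.shape, A.shape)  # (49, 100), (100, 208)
```
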
Open Datasets: Yes
"We use a database of face images provided by AT&T [4] and crop them to have size 112 × 91 so we can chop each image to 208 patches of size 7 × 7, which correspond to the yᵢ in our model." [4] www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html

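The quoted patch count follows directly from the crop size: a 112 × 91 image tiles exactly into (112/7) × (91/7) = 16 × 13 = 208 non-overlapping 7 × 7 patches. A minimal NumPy sketch of this chopping (not the authors' code) is:

```python
import numpy as np

def extract_patches(image, patch_size=7):
    """Chop an image into non-overlapping patch_size x patch_size tiles,
    returned as columns (one vectorized patch per column)."""
    h, w = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    patches = (image
               .reshape(h // patch_size, patch_size, w // patch_size, patch_size)
               .transpose(0, 2, 1, 3)               # group the two tile indices
               .reshape(-1, patch_size * patch_size))
    return patches.T  # shape: (patch_size**2, number_of_patches)

# A 112 x 91 crop yields 16 * 13 = 208 patches of dimension 49,
# matching the per-image count quoted from the paper.
Y = extract_patches(np.zeros((112, 91)))
print(Y.shape)  # (49, 208)
```
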
Dataset Splits: No
The paper describes how data is used for training and testing, but does not explicitly mention a dedicated validation set or specific train/validation/test split percentages or counts.

Hardware Specification: No
The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) for the machines used to run its experiments.

Software Dependencies: No
The paper mentions the Python package SPAMS, but does not specify a version number for SPAMS or for any other software dependency.

Experiment Setup: Yes
Algorithm 1 (Sparse DT) specifies the initialization, the iteration process, and the adaptive step-size parameters (η, κ+, κ-). It also describes how the cost function E(B) is changed across iterations and how the projection vectors u are regenerated randomly.
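To make that description concrete, here is a hedged skeleton of such a loop: gradient descent on a cost E(B) whose step size grows by κ+ after an improving step and shrinks by κ- otherwise, with the projection vectors u (and hence E itself) periodically redrawn. The paper's actual sparse-distribution-tomography objective is not reproduced here; the toy quadratic cost, the regeneration schedule, and all parameter values are illustrative placeholders.

```python
import numpy as np

def adaptive_gradient_descent(B0, make_cost, n_iters=200, eta=1e-3,
                              kappa_plus=1.1, kappa_minus=0.5,
                              regen_every=20, rng=None):
    """Loop skeleton matching the description of Algorithm 1 (Sparse DT):
    gradient descent on E(B) with an adaptive step size, where the cost
    is periodically rebuilt from freshly drawn projection vectors u."""
    rng = np.random.default_rng() if rng is None else rng
    B = B0.copy()
    cost, grad = make_cost(rng)        # E(B) defined by the current u
    E = cost(B)
    for t in range(1, n_iters + 1):
        B_new = B - eta * grad(B)      # gradient step on E(B)
        E_new = cost(B_new)
        if E_new < E:                  # improving step: accept, grow eta
            B, E = B_new, E_new
            eta *= kappa_plus
        else:                          # worsening step: reject, shrink eta
            eta *= kappa_minus
        if t % regen_every == 0:       # redraw u, i.e. redefine E(B)
            cost, grad = make_cost(rng)
            E = cost(B)
    return B

# Toy stand-in for E(B): match a random 1-D projection of a target
# dictionary (NOT the paper's objective; for illustration only).
def toy_cost_factory(B_target):
    def make_cost(rng):
        u = rng.standard_normal(B_target.shape[0])
        c = u @ B_target
        cost = lambda B: float(np.sum((u @ B - c) ** 2))
        grad = lambda B: 2.0 * np.outer(u, u @ B - c)
        return cost, grad
    return make_cost

rng = np.random.default_rng(0)
B_star = rng.standard_normal((49, 100))
B_hat = adaptive_gradient_descent(rng.standard_normal((49, 100)),
                                  toy_cost_factory(B_star), rng=rng)
```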