Learned Extragradient ISTA with Interpretable Residual Structures for Sparse Coding

Authors: Yangyang Li, Lin Kong, Fanhua Shang, Yuanyuan Liu, Hongying Liu, Zhouchen Lin

AAAI 2021, pp. 8501-8509 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Through ablation experiments, the improvements of our method on the network structure and the thresholding function are verified in practice. Extensive empirical results verify the advantages of our method."
Researcher Affiliation | Academia | (1) Key Lab. of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University; (2) Peng Cheng Lab; (3) Key Lab. of Machine Perception (MoE), School of EECS, Peking University
Pseudocode | No | The paper presents mathematical update rules and descriptions of processes, but it does not include a clearly labeled "Pseudocode" or "Algorithm" block. (For orientation, a sketch of the classical ISTA update that such learned networks unroll follows the table.)
Open Source Code | No | The paper does not provide any specific links to source code or explicitly state that the code for the described methodology is publicly available.
Open Datasets | Yes | The training dataset is BSDS500 and the test dataset is Set 11. ... We sample the elements of the dictionary matrix A randomly from a standard Gaussian distribution in simulations; the ground-truth x is also generated from the standard Gaussian distribution, and we use a Bernoulli distribution with probability 0.1 to ensure sparsity. (A data-generation sketch follows the table.)
Dataset Splits | No | The paper states "All training follows (Chen et al. 2018b)" and "All experimental results are on the test set", but does not specify train/validation/test splits in the main text.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used for running the experiments.
Software Dependencies | No | The paper mentions various algorithms and deep learning concepts but does not list specific software dependencies with version numbers required for reproduction.
Experiment Setup | Yes | For all our methods, α_1^t and α_2^t are initialized as 1.0; θ_1^t and θ_2^t are initialized as λ/L when using ST, and as 0.1·λ/L when using MT. All training follows (Chen et al. 2018b) (details are provided in the Supplementary Material). (A hypothetical initialization sketch follows the table.)
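
Since the paper states its update rules in prose rather than in a labeled algorithm block, the sketch below shows plain ISTA, the classical iteration that learned variants such as this one unroll. It is a minimal NumPy reference, not the paper's extragradient network; the function names and iteration count are illustrative.

```python
import numpy as np

def soft_threshold(v, theta):
    # Soft-thresholding (ST) operator used by ISTA-type methods.
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def ista(A, b, lam, n_iter=100):
    # Plain ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth term's gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of 0.5*||Ax - b||^2
        x = soft_threshold(x - grad / L, lam / L)
    return x
```

Note that λ/L, the threshold in the last line, is the same quantity the paper reportedly uses to initialize its learnable θ parameters.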
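The simulation setup quoted under Open Datasets (Gaussian dictionary, Gaussian ground truth, Bernoulli(0.1) sparsity mask) can be sketched as follows. The dimensions, column normalization, and noiseless observation model are assumptions added to make the example runnable; only the sampling distributions come from the paper.

```python
import numpy as np

def synth_sparse_coding(m=250, n=500, batch=64, p=0.1, seed=0):
    # Dictionary A and ground-truth codes x drawn from a standard Gaussian,
    # with a Bernoulli(p) mask enforcing sparsity, as described above.
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, n))
    A /= np.linalg.norm(A, axis=0, keepdims=True)  # column normalization: a common convention, assumed here
    x = rng.standard_normal((n, batch)) * rng.binomial(1, p, size=(n, batch))
    b = A @ x                                      # noiseless observations (the paper may add noise)
    return A, x, b
```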
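Finally, a hypothetical PyTorch-style sketch of the per-layer parameter initialization reconstructed under Experiment Setup. The class and parameter names are invented for illustration, and the 0.1 scaling for MT reflects an uncertain reconstruction of a garbled passage in the extracted text.

```python
import torch
import torch.nn as nn

class LayerParams(nn.Module):
    # One unrolled layer's learnable scalars: step sizes alpha_1 and alpha_2
    # start at 1.0; thresholds theta_1 and theta_2 start at lam/L (ST) or
    # 0.1*lam/L (MT), per the reconstruction above.
    def __init__(self, lam, L, thresholding="ST"):
        super().__init__()
        scale = 0.1 if thresholding == "MT" else 1.0
        self.alpha_1 = nn.Parameter(torch.tensor(1.0))
        self.alpha_2 = nn.Parameter(torch.tensor(1.0))
        self.theta_1 = nn.Parameter(torch.tensor(scale * lam / L))
        self.theta_2 = nn.Parameter(torch.tensor(scale * lam / L))
```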