Predictive Coding Machine for Compressed Sensing and Image Denoising

Authors: Jun Li, Hongfu Liu, Yun Fu

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'Numerical results verify the promising advantages of PCM in terms of effectiveness, efficiency and robustness. In this section, we apply sparse PCM (11) on compressed sensing and image denoising by using our GD guided by APG algorithm. The experiments verify that PCM is faster and more robust than the traditional sparse coding methods.'
Researcher Affiliation | Academia | (1) Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115, USA; (2) College of Computer and Information Science, Northeastern University, Boston, MA 02115, USA.
Pseudocode | Yes | 'Algorithm 1: PCM via GD guided by APG.' (A hedged sketch of the APG step underlying this kind of sparse-coding solver appears after this table.)
Open Source Code | No | The paper provides links to third-party baseline implementations (e.g., http://users.ece.gatech.edu/justin/l1magic/), but there is no explicit statement or link indicating that the authors' own code for PCM is publicly available.
Open Datasets | No | For compressed sensing, the paper describes a custom data-generation process: 'The 20 signals x are used to construct 500 training examples, which are created by y = Ax + z with σ = 0.001.' (A hedged sketch of this process follows the table.) For image denoising, it states 'We use the ten 256×256 images shown in the top line of Fig. S2 in supplementary materials' but provides no link, DOI, repository name, or formal citation for accessing the base images or the generated training data.
Dataset Splits | No | The paper describes the creation of training examples and test datasets for both compressed sensing and image denoising, but it does not specify a separate validation set or explicit training/validation/test splits (e.g., percentages or counts) for model development and hyperparameter tuning.
Hardware Specification | Yes | 'All algorithms were run on Matlab 2015b and Windows 7 with an Intel Core i5 2.40 GHz CPU and 24GB memory.'
Software Dependencies | Yes | The same sentence as the hardware row identifies the software stack: Matlab 2015b on Windows 7. No other packages, toolboxes, or version numbers are stated.
Experiment Setup | Yes | 'In the first experiment we study the parameter analysis.' The ℓ1 regularization parameter λ, the learning rate η, and the number of hidden units h are the typical parameters of PCM. For compressed sensing, η is fixed at 0.1, with λ ∈ {1, 0.5, 0.1, 0.05, 0.01} and a three-layer DNN encoder 64-h-256 (h ∈ {400, 300, 200, 100, 50}). For image denoising, η is set to e^-5, with λ ∈ {e^-5, e^-6, e^-7, e^-8} and a three-layer DNN encoder 65536-h-65536 (h ∈ {100, 300, 500, 800, 1000}). (A hedged λ-sweep sketch follows the table.)
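
Below is a minimal sketch of the compressed-sensing data generation quoted in the Open Datasets row. Only y = Ax + z with σ = 0.001, the 20 base signals, the 500 training examples, and the 64/256 dimensions implied by the 64-h-256 encoder come from the report; the Gaussian sensing matrix, the sparsity level k, and the signal distribution are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 64        # signal and measurement dimensions implied by the 64-h-256 encoder
k = 10                # assumed sparsity level; the excerpt does not state it
sigma = 0.001         # noise level quoted in the paper

A = rng.standard_normal((m, n)) / np.sqrt(m)   # assumed Gaussian sensing matrix

# "The 20 signals x are used to construct 500 training examples ..."
signals = []
for _ in range(20):
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)  # random sparse support
    x[support] = rng.standard_normal(k)
    signals.append(x)

# 500 examples y = A x + z, cycling through the 20 signals with fresh noise z.
train = [(A @ signals[i % 20] + sigma * rng.standard_normal(m), signals[i % 20])
         for i in range(500)]
```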
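The APG component referenced in the Pseudocode row is, in the sparse-coding setting, an accelerated proximal gradient (FISTA-style) solver for ℓ1-regularized least squares. The sketch below shows that standard scheme only; the names soft_threshold and apg_lasso and the fixed iteration count are illustrative choices, not the authors' Algorithm 1.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: elementwise soft thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def apg_lasso(A, y, lam, n_iter=200):
    # Accelerated proximal gradient (FISTA) for
    # min_x 0.5 * ||y - A x||^2 + lam * ||x||_1.
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth gradient
    x = np.zeros(n)
    x_prev = x.copy()
    t = 1.0
    for _ in range(n_iter):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        v = x + ((t - 1.0) / t_next) * (x - x_prev)   # momentum extrapolation
        grad = A.T @ (A @ v - y)                      # gradient of 0.5*||y - Av||^2
        x_prev, x = x, soft_threshold(v - grad / L, lam / L)
        t = t_next
    return x
```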
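Finally, a sketch of the λ part of the parameter analysis from the Experiment Setup row, reusing A, train, and apg_lasso from the two blocks above. Only λ is swept here; the hidden-unit grid h concerns the PCM DNN encoder, whose training code is not released, and the 50-example subset is an arbitrary choice to keep the run short.

```python
import numpy as np

# Sweep the compressed-sensing lambda grid quoted in the paper; for each value,
# measure how well the APG solution recovers the ground-truth sparse signal.
for lam in [1, 0.5, 0.1, 0.05, 0.01]:
    errs = [np.linalg.norm(apg_lasso(A, y, lam) - x) for y, x in train[:50]]
    print(f"lambda={lam}: mean recovery error {np.mean(errs):.4f}")
```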