Nonlinear Information-Theoretic Compressive Measurement Design

Authors: Liming Wang, Abolfazl Razi, Miguel Rodrigues, Robert Calderbank, Lawrence Carin

ICML 2014

Reproducibility checklist (variable, assessed result, and the LLM's supporting response):
Research Type: Experimental
LLM Response: "Encouraging results are presented on multiple datasets, for both signal recovery and classification. The nonlinear approach is shown to be particularly valuable in high-noise scenarios." and, from Section 5 (Experiments): "We evaluate the proposed nonlinear measurement design for both signal recovery and classification applications."
Researcher Affiliation: Academia
LLM Response: Liming Wang (LIMING.W@DUKE.EDU), Abolfazl Razi (ABOLFAZL.RAZI@DUKE.EDU), Miguel Dias Rodrigues (M.RODRIGUES@UCL.AC.UK), Robert Calderbank (ROBERT.CALDERBANK@DUKE.EDU), Lawrence Carin (LCARIN@DUKE.EDU). Affiliations: Department of Electrical & Computer Engineering, Duke University, Durham, NC 27708, USA (Wang, Razi, Calderbank, Carin); Department of Electronic and Electrical Engineering, University College London, London, UK (Rodrigues).
Pseudocode: Yes
LLM Response: Section 3.3 (Gradient-Based Numerical Design): A numerical solution to the optimization problems in (5) and (7) can be realized via a gradient-descent method. The MMSE matrices involved in Theorems 1 and 2 can be readily calculated by Monte Carlo integration, as in (Carson et al., 2012; Chen et al., 2012); this calculation is elaborated when the experimental results are presented. The algorithm is summarized as follows:
1. Select a suitable basis vector ψ(X) and initialize A.
2. Use Monte Carlo integration to calculate the MMSE matrices involved in Theorems 1 and 2. Update the A matrix as A_new = proj(A_old + δ ∇_A I(·; Y)), where δ is the step size, I(·; Y) is the mutual information of interest, and proj(·) projects the matrix onto the feasible set defined by the energy constraint in (8), i.e., renormalizes A to satisfy the energy constraint.
3. Repeat the previous step until convergence.
In general the mutual information is not a concave function of A, so a globally optimal solution cannot be guaranteed. In all experiments, the iteration converged to a useful/effective solution from a random start.
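The projected-gradient loop above can be sketched in code. Note this is an illustrative sketch, not the paper's implementation: instead of the Monte Carlo MMSE-based gradient used for a nonlinear ψ(X), it substitutes a linear-Gaussian surrogate (Y = AX + w with Gaussian X and noise), for which the mutual information and its gradient have closed forms. All function names are hypothetical.

```python
import numpy as np

def mi_gaussian(A, Sigma, noise_var):
    """I(X; Y) in nats for the surrogate model Y = A X + w,
    with X ~ N(0, Sigma) and w ~ N(0, noise_var * I)."""
    m = A.shape[0]
    M = np.eye(m) + (A @ Sigma @ A.T) / noise_var
    return 0.5 * np.linalg.slogdet(M)[1]

def mi_gradient(A, Sigma, noise_var):
    """Closed-form gradient of mi_gaussian with respect to A:
    (1/noise_var) * M^{-1} A Sigma."""
    m = A.shape[0]
    M = np.eye(m) + (A @ Sigma @ A.T) / noise_var
    return np.linalg.solve(M, A @ Sigma) / noise_var

def project_energy(A, E=1.0):
    """proj(.): renormalize A so its energy satisfies ||A||_F^2 <= E."""
    energy = np.sum(A ** 2)
    return A if energy <= E else A * np.sqrt(E / energy)

def design_measurement(Sigma, m, noise_var=0.1, step=0.01, iters=2000, seed=0):
    """Projected gradient ascent on the mutual information,
    starting from a random initialization of A."""
    rng = np.random.default_rng(seed)
    A = project_energy(rng.standard_normal((m, Sigma.shape[0])))
    for _ in range(iters):
        A = project_energy(A + step * mi_gradient(A, Sigma, noise_var))
    return A
```

Because the objective is non-concave in general, the result depends on the random start; as in the paper's experiments, a small step size (0.01) with a few thousand iterations is a reasonable default.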
Open Source Code: No
LLM Response: The paper provides no explicit statements or links indicating the availability of open-source code for the described methodology.
Open Datasets: Yes
LLM Response: "We use a training set consisting of 500 jpeg images from the Berkeley Segmentation Datasets (Carson et al., 2012)..." and "We consider three datasets including Satellite data, Letter and USPS digit data, following (Chen et al., 2012), wherein an explanation of the datasets is provided."
Dataset Splits: No
LLM Response: The paper mentions training and test data with specific sizes (e.g., training size 4435 and test size 2000 for the Satellite data; training size 16000 and test size 4000 for the Letter data), but does not describe or specify a separate validation split.
Hardware Specification: No
LLM Response: The paper does not provide any specific details about the hardware used to run the experiments.
Software Dependencies: No
LLM Response: The paper does not provide software dependency details with version numbers.
Experiment Setup: Yes
LLM Response: "The gradient descent step size and the number of iterations are set to 0.01 and 2000, respectively. The same energy constraint (E = 1, arbitrarily) has been applied to all methods considered here."