Equivariant Priors for Compressed Sensing with Unknown Orientation
Authors: Anna Kuzina, Kumar Pratik, Fabio Valerio Massoli, Arash Behboodi
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We consider two different setups: the conventional compressed sensing discussed in section 2 (no rotation) and compressed sensing with unknown orientation discussed in section 3.3 (unknown rotation). Datasets We conduct experiments on two different datasets. We start with benchmarking experiments on MNIST. Subsequently, concerning a real-world application of the proposed approach, we conduct experiments on the Low Dose CT Image and Projection Data (MAYO) dataset (Moen et al., 2021)... |
| Researcher Affiliation | Collaboration | 1Vrije Universiteit Amsterdam, Netherlands. Work done during an internship at Qualcomm AI Research. 2Qualcomm AI Research, Qualcomm Technologies Netherlands B.V. Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc. |
| Pseudocode | Yes | Algorithm 1 Forward pass through equivariant VAE |
| Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing the code for the work described in this paper, nor does it provide a direct link to a source-code repository. |
| Open Datasets | Yes | Datasets We conduct experiments on two different datasets. We start with benchmarking experiments on MNIST. Subsequently, concerning a real-world application of the proposed approach, we conduct experiments on the Low Dose CT Image and Projection Data (MAYO) dataset (Moen et al., 2021), which consists of three types of data: DICOM-CT-PD projection data, DICOM image data, and Excel clinical data reports. For our purposes, we use the DICOM subset only. Images are divided into three sets labelled N for neuro, C for chest, and L for liver, each of which comprises 512x512 images from 50 different patients. To train the generative priors, we consider the L subset, which is made of 7K samples that we divide into train, validation and test sets comprising 80%, 10%, and 10% of the images, respectively. |
| Dataset Splits | Yes | To train the generative priors, we consider the L subset, which is made of 7K samples that we divide into train, validation and test sets comprising 80%, 10%, and 10% of the images, respectively. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper describes model architectures and parameters in detail in 'D. Experimental Setup' but does not specify any software dependencies like programming languages, libraries, or frameworks with their version numbers. |
| Experiment Setup | Yes | Architecture We train a VAE model, where both encoder and decoder are fully convolutional neural networks with ReLU activations. The representation space size is set to z_dim = 128. For MNIST, the Conv-VAE encoder is a fully convolutional architecture with (kernel size, number of filters, stride, padding) as follows: Input signal (1) → (3, 32, 2, 1) → (3, 64, 2, 1) → (3, 96, 2, 1) → (3, 128, 2, 1) → (3, 256, 2, 1) → flatten(·) → (µz, log σ²z). The decoder comprises transposed convolutional layers. |
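The encoder layout quoted above can be sanity-checked with a short shape walk-through. This is a hypothetical sketch (the paper does not publish code): it applies the standard convolution output-size formula to a 28x28 MNIST input, using the kernel/stride/padding values (3, 2, 1) and filter counts (32, 64, 96, 128, 256) listed in the table, to show what flattened feature size reaches the (µz, log σ²z) heads before the linear map to z_dim = 128.

```python
def conv_out(size, kernel=3, stride=2, padding=1):
    """Standard conv output-size formula: floor((H + 2p - k) / s) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

# Filter counts per layer, as listed in the experiment-setup row above.
filters = [32, 64, 96, 128, 256]

h = 28  # MNIST spatial size (assumed 28x28 input)
for f in filters:
    h = conv_out(h)
    print(f"conv -> {f} channels, {h}x{h} spatial")

# Flattened feature size fed to the mean / log-variance heads;
# each head would then be a linear map to z_dim = 128.
flat_features = filters[-1] * h * h
print(flat_features)
```

Running this, the spatial size shrinks 28 → 14 → 7 → 4 → 2 → 1, so flatten(·) produces a 256-dimensional vector per image, which is consistent with mapping down to a 128-dimensional latent space.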