Sparse Bayesian Generative Modeling for Compressive Sensing
Authors: Benedikt Böck, Sadaf Syed, Wolfgang Utschick
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We support our approach theoretically through the concept of variational inference and validate it empirically using different types of compressible signals." and Section 4, "Experiments" |
| Researcher Affiliation | Academia | Benedikt Böck, Sadaf Syed, Wolfgang Utschick; TUM School of Computation, Information and Technology, Technical University of Munich; {benedikt.boeck,sadaf.syed,utschick}@tum.de |
| Pseudocode | Yes | Appendix N, "Pseudo-Code for the Training and Inference of the CSVAE and CSGMM" |
| Open Source Code | Yes | Source code is available at https://github.com/beneboeck/sparse-bayesian-gen-mod. |
| Open Datasets | Yes | "We use the MNIST dataset (N = 784) for evaluation [42, (CC-BY-SA 3.0 license)]." and "We also use a dataset of 64 × 64 cropped CelebA images (N = 3 × 64² = 12288) [44] and evaluate on the Fashion MNIST dataset (N = 784) in Appendix L [45, (MIT license)]." |
| Dataset Splits | Yes | "We once reduce the learning rate by a factor of 2 during training and stop the training when the modified ELBO in (15) for a validation set of 5000 samples does not increase." (see the training-schedule sketch after the table) |
| Hardware Specification | Yes | All models have been simulated on an NVIDIA A40 GPU except for the proposed CSGMM, whose experiments have been conducted on an Intel(R) Xeon(R) Gold 6134 CPU @ 3.20GHz. |
| Software Dependencies | No | The paper mentions software like 'Adam' for optimization and 'PyWavelets' for wavelet analysis, but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | "For CSGMM, we set the number K of components to 32... The CSVAE encoders and decoders contain two fully-connected layers with ReLU activation... The latent dimension is set to 16, the learning rate is set to 2 × 10⁻⁵, and the batch size is set to 64. We use the Adam optimizer for optimization [47]." (see the architecture sketch after the table) |
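To make the quoted setup concrete, below is a minimal sketch, assuming PyTorch, of a CSVAE-style encoder/decoder pair matching the reported configuration: two fully-connected layers with ReLU activation on each side, a latent dimension of 16, and Adam with a learning rate of 2 × 10⁻⁵. The hidden width, the Gaussian latent parameterization, and the class names are placeholders not specified in this summary.

```python
import torch
import torch.nn as nn

# Hyperparameters quoted in the Experiment Setup row; HIDDEN is a placeholder.
N = 784          # MNIST signal dimension
LATENT_DIM = 16  # reported latent dimension
HIDDEN = 512     # hypothetical hidden width (not given in this summary)
LR = 2e-5        # reported learning rate
BATCH_SIZE = 64  # reported batch size


class Encoder(nn.Module):
    """Two fully-connected layers with ReLU, mapping x to latent moments."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
        )
        self.mean = nn.Linear(HIDDEN, LATENT_DIM)
        self.logvar = nn.Linear(HIDDEN, LATENT_DIM)

    def forward(self, x):
        h = self.net(x)
        return self.mean(h), self.logvar(h)


class Decoder(nn.Module):
    """Two fully-connected layers with ReLU, mapping z back to signal space."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, N),
        )

    def forward(self, z):
        return self.net(z)


encoder, decoder = Encoder(), Decoder()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=LR
)
```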
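Likewise, a hedged sketch of the stopping rule quoted in the Dataset Splits row: training halts when the modified ELBO of Eq. (15) on a 5000-sample validation set stops increasing, with the learning rate halved once along the way. `modified_elbo`, the patience, and the loop structure are assumptions for illustration, not the authors' implementation.

```python
import torch


def train_with_early_stopping(train_loader, val_batch, modified_elbo, optimizer,
                              max_epochs=1000, patience=1):
    """Maximize a validation objective, halving the LR once before stopping.

    `modified_elbo` stands in for the paper's objective in Eq. (15) and
    `val_batch` for the 5000-sample validation set; both are placeholders.
    """
    best_val = float("-inf")
    lr_reduced = False
    stale = 0
    for _ in range(max_epochs):
        for x in train_loader:
            optimizer.zero_grad()
            loss = -modified_elbo(x)          # maximize the (modified) ELBO
            loss.backward()
            optimizer.step()
        with torch.no_grad():
            val = modified_elbo(val_batch).item()
        if val > best_val:
            best_val, stale = val, 0
        else:
            stale += 1
        if stale >= patience:
            if not lr_reduced:                # reduce the LR once by a factor of 2
                for group in optimizer.param_groups:
                    group["lr"] *= 0.5
                lr_reduced, stale = True, 0
            else:
                break                         # validation ELBO no longer increases
```

In an actual run, `modified_elbo` would be built from the encoder/decoder above and the measurement model; it is named here only to show where the quoted stopping criterion and learning-rate schedule plug in.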