Random mesh projectors for inverse problems
Authors: Konik Kothari*, Sidharth Gupta*, Maarten V. de Hoop, Ivan Dokmanić
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show experimentally that the proposed method is more robust to measurement noise and corruptions not seen during training than a directly learned inverse. |
| Researcher Affiliation | Academia | Konik Kothari, University of Illinois at Urbana-Champaign (kkothar3@illinois.edu); Sidharth Gupta, University of Illinois at Urbana-Champaign (gupta67@illinois.edu); Maarten V. de Hoop, Rice University (mdehoop@rice.edu); Ivan Dokmanić, University of Illinois at Urbana-Champaign (dokmanic@illinois.edu) |
| Pseudocode | No | The paper does not contain any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code available at https://github.com/swing-research/deepmesh under the MIT License. |
| Open Datasets | Yes | The direct baseline and SubNet are trained on a set of 20,000 images from the arbitrarily chosen LSUN bridges dataset (Yu et al., 2015) and tested on geophysics and x-ray images. ProjNets are trained with 10,000 images from the LSUN dataset. |
| Dataset Splits | No | The paper mentions that the regularization parameter λ was 'determined on five held-out images', but it does not provide explicit training/validation/test dataset splits or their sizes for general model validation. |
| Hardware Specification | No | The paper acknowledges 'the support of NVIDIA Corporation with the donation of one of the GPUs used for this research' but does not specify the exact model of the GPU or other hardware components used for experiments. |
| Software Dependencies | No | The paper mentions 'convolutional neural networks' and 'Adam optimizer' but does not specify any software versions for libraries like TensorFlow, PyTorch, or Python itself. |
| Experiment Setup | No | The paper states 'All networks are trained with the Adam optimizer' but does not provide specific hyperparameters such as learning rate, batch size, or number of epochs, nor does it detail other training configurations. |
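
The gaps flagged in the last two rows can be made concrete with a short sketch of what a reproduction attempt has to fill in. The PyTorch snippet below is a minimal, hypothetical training skeleton, not the authors' code (which is available at https://github.com/swing-research/deepmesh): the tiny convolutional model is a stand-in for their architecture, and the learning rate, batch size, and image size are assumed placeholder values, since the paper confirms only that the Adam optimizer was used.

```python
import torch
import torch.nn as nn

# Stand-in for the paper's network; the actual architectures (ProjNets,
# SubNet, direct baseline) are defined in the deepmesh repository.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)

# The paper states only that "All networks are trained with the Adam
# optimizer"; this learning rate is an assumed value, not a reported one.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(measurements: torch.Tensor, targets: torch.Tensor) -> float:
    """One gradient step: regress targets from input measurements."""
    optimizer.zero_grad()
    loss = loss_fn(model(measurements), targets)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch with assumed shapes: 8 single-channel 64x64 images.
x = torch.randn(8, 1, 64, 64)
y = torch.randn(8, 1, 64, 64)
print(train_step(x, y))
```

Because none of these hyperparameters are reported, a reproducer would have to sweep them or recover them from the released code, which is exactly the shortfall the "Experiment Setup" row records.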