Deeply Learned Spectral Total Variation Decomposition
Authors: Tamara G. Grossmann, Yury Korolev, Guy Gilboa, Carola-Bibiane Schönlieb
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we describe the experiments we conducted to evaluate the performance of our proposed TVspecNET. |
| Researcher Affiliation | Academia | Tamara G. Grossmann, DAMTP, University of Cambridge, Cambridge, UK, tg410@cam.ac.uk; Yury Korolev, DAMTP, University of Cambridge, Cambridge, UK, yk362@cam.ac.uk; Guy Gilboa, Department of Electrical Engineering, Technion - Israel Institute of Technology, Haifa, Israel, guy.gilboa@ee.technion.ac.il; Carola-Bibiane Schönlieb, DAMTP, University of Cambridge, Cambridge, UK, cbs31@cam.ac.uk |
| Pseudocode | No | The paper describes the network architecture and the loss function but does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code for TVspecNET is publicly available on GitHub: https://github.com/TamaraGrossmann/TVspecNET |
| Open Datasets | Yes | For training and testing our neural network we use the MS COCO dataset [32] that contains a large number of natural images. [32] T.-Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. L. Zitnick, and P. Dollár. Microsoft COCO: Common Objects in Context. arXiv preprint arXiv:1405.0312v3, 2014. |
| Dataset Splits | No | We take 2000 images for training and 1000 for testing. The paper does not explicitly state a separate validation set split or how it was handled (e.g., if a portion of the training set was used for validation). |
| Hardware Specification | Yes | on an NVIDIA Quadro P6000 GPU with 24 GB RAM. We also acknowledge the support of NVIDIA Corporation with the donation of two Quadro P6000, a Tesla K40c and a Titan Xp GPU used for this research. |
| Software Dependencies | No | The paper mentions software components such as MATLAB, C++/Python, the Adam optimiser, ReLU, DnCNN, FFDNet, and U-Net, but it does not specify any version numbers for these software dependencies. |
| Experiment Setup | Yes | Each image is turned to greyscale and randomly cropped to a 64×64 pixel window. For the purpose of data augmentation, we also take 128×128 crops for some images and downsample them by a factor of 2, obtaining again images of size 64×64. After standardising the dataset to have zero mean and a standard deviation of 1, we generate K = 50 ground truth bands (7) using the model-driven approach in Section 2.2. The bands are then combined dyadically to form 6 spectral bands. ... We use the Adam optimiser [27] with an initial learning rate of 10^-3 and multi-step learning rate decay. Our neural network is trained with a batch size of 8 and for 5000 epochs... (A hedged code sketch of this setup follows the table.) |
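
The Experiment Setup row condenses the paper's quoted preprocessing and optimisation details. Below is a minimal Python/PyTorch sketch of those steps. The function names (`prepare_crop`, `dyadic_merge`, `train`), the exact dyadic grouping of the 50 bands, and the learning-rate decay milestones are illustrative assumptions; only the crop sizes, the standardisation, K = 50 bands merged into 6, Adam with an initial learning rate of 10^-3, batch size 8, and 5000 epochs come from the quoted text.

```python
# Sketch of the described data preparation and training configuration.
# Placeholder names and unreported details are marked as assumptions.
import numpy as np
import torch
from torch.optim import Adam
from torch.optim.lr_scheduler import MultiStepLR


def prepare_crop(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Greyscale 64x64 crop; for augmentation, sometimes take a 128x128 crop
    and downsample it by a factor of 2 (the 50/50 choice is an assumption)."""
    size = 64 if rng.random() < 0.5 else 128
    y = rng.integers(0, img.shape[0] - size + 1)
    x = rng.integers(0, img.shape[1] - size + 1)
    crop = img[y:y + size, x:x + size]
    if size == 128:  # average-pool 128x128 down to 64x64
        crop = crop.reshape(64, 2, 64, 2).mean(axis=(1, 3))
    return crop


def standardise(batch: np.ndarray) -> np.ndarray:
    """Standardise the data to zero mean and unit standard deviation."""
    return (batch - batch.mean()) / batch.std()


def dyadic_merge(bands: np.ndarray) -> np.ndarray:
    """Combine K = 50 fine spectral TV bands into 6 coarser bands.
    The grouping below (sizes 2, 2, 4, 8, 16, 18) is only an assumption;
    the exact dyadic split is not given in the quoted text."""
    edges = [0, 2, 4, 8, 16, 32, 50]
    return np.stack([bands[a:b].sum(axis=0) for a, b in zip(edges, edges[1:])])


def train(model: torch.nn.Module, loader, loss_fn, device="cuda"):
    """Optimiser settings quoted above: Adam, initial learning rate 1e-3,
    multi-step decay, batch size 8, 5000 epochs. The decay milestones and
    gamma below are assumptions, not reported values."""
    optimiser = Adam(model.parameters(), lr=1e-3)
    scheduler = MultiStepLR(optimiser, milestones=[1000, 2500, 4000], gamma=0.1)
    for epoch in range(5000):
        for images, bands in loader:  # batches of 8 (image, band) pairs
            optimiser.zero_grad()
            loss = loss_fn(model(images.to(device)), bands.to(device))
            loss.backward()
            optimiser.step()
        scheduler.step()
```

The ground-truth bands themselves are produced by the authors' model-driven spectral TV decomposition (their Section 2.2 and equation (7)), which is not reproduced in this sketch.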