Learning to solve TV regularised problems with unrolled algorithms

Authors: Hamza Cherkaoui, Jeremias Sulam, Thomas Moreau

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 'We validate those findings with experiments on synthetic and real data.'
Researcher Affiliation | Academia | Hamza Cherkaoui, Université Paris-Saclay, CEA, Inria, Gif-sur-Yvette, 91190, France (hamza.cherkaoui@cea.fr); Jeremias Sulam, Johns Hopkins University (jsulam1@jhu.edu); Thomas Moreau, Université Paris-Saclay, Inria, CEA, Palaiseau, 91120, France (thomas.moreau@inria.fr)
Pseudocode | No | No structured pseudocode or algorithm blocks were found. The paper defines its operators, but not in an algorithm format. (A hedged sketch of such an unrolled algorithm is given after this table.)
Open Source Code | No | The paper states 'We used the implementation of Barbero and Sra (2018) to compute TV proximal operator using taut-string algorithm' and provides a link to this third-party code. It does not state that the authors are releasing their own code for their methodology.
Open Datasets | Yes | 'We choose two subjects from the UK Biobank (UKBB) dataset (Sudlow et al., 2015)...'
Dataset Splits | No | For the synthetic data, the paper mentions using 'half for training and other half for testing' but does not specify a validation split or its size, nor does it refer to a standard split that includes one. For the fMRI data, no split information is given beyond training on one subject and testing on another.
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run its experiments, such as GPU/CPU models, processor types, or memory amounts.
Software Dependencies | No | The paper mentions 'All experiments are performed in Python using PyTorch (Paszke et al., 2019)' but does not provide specific version numbers for Python, PyTorch, or any other critical software dependency.
Experiment Setup | Yes | The full training process is described in Appendix A: 'We set the learning rate to 10^-3 for the first 500 epochs and then to 10^-4 for the remaining 500 epochs for a total of 1000 epochs. We use Adam optimizer (Kingma and Ba, 2014) with β1 = 0.9 and β2 = 0.999. We set the number of layers T for LPGD-Taut and LPGD-LISTA to 40 for all experiments. For LPGD-LISTA, the number of inner layers T_in is set to 50.' (A code sketch of this configuration follows the table.)
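Since no algorithm block appears in the paper, the following is a minimal sketch, in PyTorch, of what an unrolled proximal gradient network for the TV-regularised problem min_u 0.5*||x - A u||^2 + lambda*||D u||_1 (D the first-order difference operator) could look like. The class name UnrolledTVNet, the inner dual projected-gradient stand-in for the TV proximal operator, and all default values are illustrative assumptions; the paper's LPGD-Taut instead relies on the exact taut-string solver of Barbero and Sra (2018) for the prox.

```python
# Minimal sketch (not the authors' code) of an unrolled proximal gradient
# network for  min_u  0.5 * ||x - A u||^2 + lbda * ||D u||_1,
# where D is the first-order finite-difference operator.
# All names and default values are illustrative assumptions.
import torch
import torch.nn.functional as F


def tv_prox(z, reg, n_inner=50):
    """Approximate prox of reg * ||D u||_1 at z (rows of a batch).

    Projected gradient descent on the box-constrained dual problem, used
    here as a differentiable stand-in for the exact taut-string solver
    (Barbero and Sra, 2018) referenced in the paper.
    """
    v = torch.zeros(z.shape[0], z.shape[1] - 1, dtype=z.dtype)
    for _ in range(n_inner):
        u = z - (F.pad(v, (1, 0)) - F.pad(v, (0, 1)))   # u = z - D^T v
        v = (v + 0.25 * (u[:, 1:] - u[:, :-1])).clamp(-float(reg), float(reg))
    return z - (F.pad(v, (1, 0)) - F.pad(v, (0, 1)))


class UnrolledTVNet(torch.nn.Module):
    """T unrolled proximal gradient steps with one learned step size per layer."""

    def __init__(self, A, lbda, n_layers=40, n_inner=50):
        super().__init__()
        self.register_buffer("A", A)
        self.lbda, self.n_inner = lbda, n_inner
        # initialise every learned step at 1/L, L = ||A||_2^2
        lipschitz = torch.linalg.matrix_norm(A, ord=2).item() ** 2
        self.steps = torch.nn.Parameter(torch.full((n_layers,), 1.0 / lipschitz))

    def forward(self, x):
        u = torch.zeros(x.shape[0], self.A.shape[1], dtype=x.dtype)
        for step in self.steps:
            grad = (u @ self.A.T - x) @ self.A           # gradient of the data fit
            u = tv_prox(u - step * grad, step * self.lbda, self.n_inner)
        return u
```

The 40 outer layers and 50 inner iterations mirror the T = 40 and T_in = 50 quoted from Appendix A; the problem sizes and the 1/L step-size initialisation are generic choices, not taken from the paper.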
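The Appendix A hyper-parameters quoted in the last row translate to a standard PyTorch training configuration. Below is a minimal sketch of that configuration, reusing the UnrolledTVNet sketch above; the placeholder operator A, the placeholder training signals, and the choice of an unsupervised loss (the TV-regularised objective itself) are assumptions, not details confirmed by the paper.

```python
# Minimal sketch of the quoted Appendix A training settings:
# Adam (beta1=0.9, beta2=0.999), lr 1e-3 for 500 epochs then 1e-4 for the
# remaining 500, 1000 epochs in total. Data, problem size and the loss
# are placeholders, not values from the paper.
import torch

A = torch.randn(10, 50)                       # placeholder forward operator
net = UnrolledTVNet(A, lbda=0.1, n_layers=40, n_inner=50)
x_train = torch.randn(32, 10)                 # placeholder training signals

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3, betas=(0.9, 0.999))
# drop the learning rate from 1e-3 to 1e-4 after epoch 500
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[500], gamma=0.1)

for epoch in range(1000):
    optimizer.zero_grad()
    u = net(x_train)
    # unsupervised objective: 0.5*||x - A u||^2 + lbda * ||D u||_1
    data_fit = 0.5 * ((u @ A.T - x_train) ** 2).sum()
    tv = (u[:, 1:] - u[:, :-1]).abs().sum()
    loss = data_fit + net.lbda * tv
    loss.backward()
    optimizer.step()
    scheduler.step()
```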