Optimization-Derived Learning with Essential Convergence Analysis of Training and Hyper-training
Authors: Risheng Liu, Xuan Liu, Shangzhi Zeng, Jin Zhang, Yixuan Zhang
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate the efficiency of BMO with competitive performance on sparse coding and real-world applications such as image deconvolution and rain streak removal. |
| Researcher Affiliation | Academia | 1DUT-RU International School of Information Science and Engineering, Dalian University of Technology, Dalian, Liaoning, China. 2Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province, Dalian, Liaoning, China. 3Peng Cheng Laboratory, Shenzhen, Guangdong, China. 4Department of Mathematics and Statistics, University of Victoria, Victoria, British Columbia, Canada. 5Department of Mathematics, SUSTech International Center for Mathematics, Southern University of Science and Technology, Shenzhen, Guangdong, China. 6National Center for Applied Mathematics Shenzhen, Shenzhen, Guangdong, China. |
| Pseudocode | Yes | Algorithm 1 The Solution Strategy of BMO |
| Open Source Code | No | The paper does not provide a direct link to open-source code or explicitly state its availability. |
| Open Datasets | Yes | Following the setting in (Chen et al., 2018), we experiment on the classic Set14 dataset... We test the performance of BMO on three classical testing images in Table 2 and compare our method with representative methods, including the numerically designed method EPLL... We use a large dataset containing 400 images from the Berkeley Segmentation Dataset, 4744 images from the Waterloo Exploration Database, 900 images from the DIV2K dataset, and 2750 images from the Flickr2K dataset. ... We carry out the rain streak removal experiment on synthesized rain datasets, including Rain100L and Rain100H. |
| Dataset Splits | No | The paper specifies training and testing sets, but does not explicitly mention a separate validation set split (e.g., Rain100L contains 200 rainy/clean image pairs for training and another 100 pairs for testing). |
| Hardware Specification | Yes | Our experiments were mainly conducted on a PC with Intel Core i9-10900KF CPU (3.70GHz), 128GB RAM and two NVIDIA GeForce RTX 3090 24GB GPUs. |
| Software Dependencies | No | Appendix C mentions using the Adam optimizer and setting a random seed, but does not provide specific version numbers for software components such as Python, PyTorch, TensorFlow, or CUDA (see the illustrative sketch after the table). |
| Experiment Setup | Yes | More detailed parameter setting and network architectures can be found in Appendix C. ... Table 5. Values for hyper-parameters of sparse coding. ... Table 6. Values for hyper-parameters of image deconvolution. ... Table 7. Values for hyper-parameters of rain streak removal. |
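As a concrete illustration of the gap flagged in the Software Dependencies row, the sketch below shows one way to pin the random seed and set up the Adam optimizer that Appendix C mentions. It assumes a PyTorch workflow; the seed value, learning rate, and placeholder model are illustrative assumptions, not values reported in the paper.

```python
# Minimal reproducibility sketch (hypothetical): the paper reports using Adam and a
# fixed random seed but gives no library versions, so the values below are
# illustrative placeholders, not the authors' configuration.
import random

import numpy as np
import torch


def set_seed(seed: int = 0) -> None:
    """Pin the common sources of randomness for repeatable runs."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Deterministic cuDNN kernels trade speed for bit-wise repeatability.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


set_seed(0)

# Placeholder model standing in for the learnable components trained in the paper.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 64),
)

# Adam optimizer as mentioned in Appendix C; the learning rate here is an assumption.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Recording the framework and CUDA versions alongside results helps close the
# dependency-reporting gap noted in the table.
print(torch.__version__, torch.version.cuda)
```

Logging the framework and CUDA versions next to the saved results, as in the last line, would make the version information the report flags as missing straightforward to recover.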