Learning Proximal Operators to Discover Multiple Optima

Authors: Lingxiao Li, Noam Aigerman, Vladimir Kim, Jiajin Li, Kristjan Greenewald, Mikhail Yurochkin, Justin Solomon

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We further present an exhaustive benchmark for multi-solution optimization to demonstrate the effectiveness of our method.
Researcher Affiliation | Collaboration | Lingxiao Li (MIT CSAIL, lingxiao@mit.edu); Noam Aigerman (Adobe Research, aigerman@adobe.com); Vladimir G. Kim (Adobe Research, vokim@adobe.com); Jiajin Li (Stanford University, jiajinli@stanford.edu); Kristjan Greenewald (IBM Research, MIT-IBM Watson AI Lab, kristjan.h.greenewald@ibm.com); Mikhail Yurochkin (IBM Research, MIT-IBM Watson AI Lab, mikhail.yurochkin@ibm.com); Justin Solomon (MIT CSAIL, jsolomon@mit.edu)
Pseudocode | No | The paper describes the proposed methods but does not include any formal pseudocode or algorithm blocks.
Open Source Code | Yes | The complete source code for all experiments can be found at https://github.com/lingxiaoli94/POL.
Open Datasets | Yes | We apply the above MSO formulation to the COCO2017 dataset (Lin et al., 2014).
Dataset Splits | Yes | We use the training and validation split of COCO2017 (Lin et al., 2014) as the training and test dataset, keeping only images with at most 10 ground truth bounding boxes. (A filtering sketch follows this table.)
Hardware Specification | Yes | All training is done on a single NVIDIA RTX 3090 GPU.
Software Dependencies | No | The paper mentions software such as PyTorch and DGCNN but does not specify version numbers for these or other key dependencies. (A version-capture sketch follows this table.)
Experiment Setup | Yes | In each training iteration of POL and GOL, we sample 32 problem parameters from the training dataset of T, and 256 x's from unif(X)... The learning rate of the operator is kept at 10⁻⁴ for both POL and GOL, and by default we train the operator network for 2 × 10⁵ iterations. (A training-configuration sketch follows this table.)
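
The Dataset Splits row above describes a concrete filtering step. Below is a minimal Python sketch of that step, assuming pycocotools is installed and the COCO2017 annotation file sits at a hypothetical local path; it illustrates the quoted filter and is not code from the authors' repository.

    from pycocotools.coco import COCO

    # Hypothetical path to the COCO2017 training annotations.
    coco = COCO("annotations/instances_train2017.json")

    # Keep only images with at most 10 ground-truth bounding boxes,
    # mirroring the filter quoted in the Dataset Splits row.
    all_img_ids = coco.getImgIds()
    kept_img_ids = [
        img_id for img_id in all_img_ids
        if len(coco.getAnnIds(imgIds=img_id)) <= 10
    ]
    print(f"kept {len(kept_img_ids)} of {len(all_img_ids)} images")

The same filter applies to the validation split by pointing at instances_val2017.json instead.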
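
Because the Software Dependencies row flags unpinned versions, a rerun should record its own environment. A generic snippet for doing so (a reproducibility aid, not part of the paper):

    import sys
    import torch

    # Print interpreter and framework versions so a rerun can be
    # compared against the (unreported) original environment.
    print("python:", sys.version.split()[0])
    print("torch:", torch.__version__)
    print("cuda:", torch.version.cuda)  # None for CPU-only builds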
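
The Experiment Setup row pins the batch sizes, learning rate, and iteration count but not the rest of the pipeline. The sketch below wires those quoted values into a toy PyTorch loop; the operator network, the problem-parameter distribution, the dimension of X, the batching scheme, and the loss are all hypothetical stand-ins, not the authors' method.

    import torch

    # Hyperparameters quoted in the Experiment Setup row.
    N_TASKS = 32        # problem parameters sampled per iteration
    N_POINTS = 256      # x's drawn from unif(X) per iteration
    LR = 1e-4           # operator learning rate (POL and GOL)
    N_ITERS = 200_000   # 2 × 10⁵ iterations by default

    # Everything below is a hypothetical stand-in for the real pipeline.
    DIM = 2             # assumed dimension of X
    operator = torch.nn.Sequential(
        torch.nn.Linear(DIM + 1, 128),
        torch.nn.ReLU(),
        torch.nn.Linear(128, DIM),
    )
    opt = torch.optim.Adam(operator.parameters(), lr=LR)

    for step in range(N_ITERS):
        tau = torch.rand(N_TASKS, 1)   # stand-in problem parameters
        x = torch.rand(N_POINTS, DIM)  # x ~ unif(X) with X = [0, 1]^DIM
        # Pair every x with every problem parameter (hypothetical batching).
        inp = torch.cat(
            [x.repeat(N_TASKS, 1), tau.repeat_interleave(N_POINTS, dim=0)],
            dim=1,
        )
        loss = operator(inp).pow(2).mean()  # dummy objective, not the paper's loss
        opt.zero_grad()
        loss.backward()
        opt.step()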