Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Learning Proximal Operators to Discover Multiple Optima

Authors: Lingxiao Li, Noam Aigerman, Vladimir Kim, Jiajin Li, Kristjan Greenewald, Mikhail Yurochkin, Justin Solomon

ICLR 2023 | Venue PDF | LLM Run Details
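For context on the paper's central object: the proximal operator of a function f maps a point x to the minimizer of f(y) + ||y − x||² / (2λ). The toy sketch below (my illustration, not code from the paper) computes this by brute-force grid search for a double-well function f(y) = (y² − 1)², whose two minima at ±1 show why proximal maps are natural for discovering multiple optima; the function, step size λ, and grid are arbitrary choices for illustration.

```python
import numpy as np

# Illustrative only: proximal operator of a toy double-well function,
# f(y) = (y^2 - 1)^2, approximated by grid search. The paper learns a
# network approximating such operators over families of problems; this
# closed-loop numeric version is just to ground the definition.

def prox(x, lam=0.5, grid=np.linspace(-2.0, 2.0, 4001)):
    """argmin_y f(y) + ||y - x||^2 / (2*lam), approximated on a grid."""
    f = (grid**2 - 1.0) ** 2
    obj = f + (grid - x) ** 2 / (2.0 * lam)
    return grid[np.argmin(obj)]

# Points on either side of the barrier at 0 are attracted to different
# optima, so iterating the operator from many initial points can recover
# multiple minima rather than collapsing to one.
```

For example, prox(0.8) lands near +1 while prox(-0.8) lands near −1, each starting point being pulled toward its nearest well.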

Reproducibility Variable Result LLM Response
Research Type Experimental We further present an exhaustive benchmark for multi-solution optimization to demonstrate the effectiveness of our method.
Researcher Affiliation Collaboration Lingxiao Li, MIT CSAIL, EMAIL; Noam Aigerman, Adobe Research, EMAIL; Vladimir G. Kim, Adobe Research, EMAIL; Jiajin Li, Stanford University, EMAIL; Kristjan Greenewald, IBM Research, MIT-IBM Watson AI Lab, EMAIL; Mikhail Yurochkin, IBM Research, MIT-IBM Watson AI Lab, EMAIL; Justin Solomon, MIT CSAIL, EMAIL

Pseudocode No The paper describes the proposed methods but does not include any formal pseudocode or algorithm blocks.
Open Source Code Yes The complete source code for all experiments can be found at https://github.com/lingxiaoli94/POL.
Open Datasets Yes We apply the above MSO formulation to the COCO2017 dataset (Lin et al., 2014).
Dataset Splits Yes We use the training and validation split of COCO2017 (Lin et al., 2014) as the training and test dataset, keeping only images with at most 10 ground truth bounding boxes.
Hardware Specification Yes All training is done on a single NVIDIA RTX 3090 GPU.
Software Dependencies No The paper mentions software such as PyTorch and DGCNN but does not specify version numbers or other key software dependencies.
Experiment Setup Yes In each training iteration of POL and GOL, we sample 32 problem parameters from the training dataset of T, and 256 x's from unif(X)... The learning rate of the operator is kept at 10^-4 for both POL and GOL, and by default we train the operator network for 2 × 10^5 iterations.
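The quoted setup can be sketched schematically as the loop below. This is my hedged illustration of the sampling pattern only, not the paper's training code: the linear "operator" W and the least-squares surrogate loss are placeholders; the actual operator network and objective are defined in the POL repository linked above.

```python
import numpy as np

# Schematic version of the described loop: each iteration samples 32 problem
# parameters tau and 256 points x ~ unif(X), then takes one SGD step at
# learning rate 1e-4 (the paper's default runs 2e5 iterations).

def train_sketch(n_iters=1000, n_tau=32, n_x=256, lr=1e-4, dim=2, seed=0):
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.normal(size=(dim, dim))        # stand-in operator parameters
    for _ in range(n_iters):
        tau = rng.normal(size=(n_tau, dim))      # 32 problem parameters from T
        x = rng.uniform(-1.0, 1.0, size=(n_tau, n_x, dim))  # 256 x's ~ unif(X)
        pred = x @ W.T                           # operator applied to each x
        err = pred - tau[:, None, :]             # placeholder regression target
        # gradient of 0.5 * mean(err^2) with respect to W
        grad = np.einsum('bno,bni->oi', err, x) / (n_tau * n_x)
        W -= lr * grad                           # SGD step at lr = 1e-4
    return W
```

In the paper, the operator is a neural network trained with these batch sizes and learning rate; only those hyperparameters are taken from the quote.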