Global Optimization Networks

Authors: Sen Zhao, Erez Louidor, Maya Gupta

ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments show the GON maximizers are statistically significantly better predictions than those produced by convex fits, GPR, or DNNs, and form more reasonable predictions for real-world problems.
Researcher Affiliation | Industry | Google Research, Mountain View, CA 94043 USA. Correspondence to: Sen Zhao <senzhao@google.com>.
Pseudocode | No | The paper includes block diagrams and mathematical proofs, but no explicitly labeled "Pseudocode" or "Algorithm" blocks with structured steps.
Open Source Code | Yes | Code for some experiments can be found at https://github.com/google-research/google-research/tree/master/gon.
Open Datasets | Yes | The data can be downloaded at www.kaggle.com/senzhaogoogle/kingsreign. / publicly available at www.kaggle.com/senzhaogoogle/puzzlesales. / Using Kaggle data from Wine Enthusiast Magazine (www.kaggle.com/dbahri/wine-ratings). / CIFAR10/100 (Krizhevsky, 2009), Fashion MNIST (Xiao et al., 2017), MNIST (LeCun et al., 2010), and cropped SVHN (Netzer et al., 2011) datasets.
Dataset Splits | Yes | For each experiment and for each method, we train a set of models with different hyperparameter choices, select the best model according to a validation or cross-validation metric (metric described below). / non-IID train/validation/test sets had 36/32/27 puzzles / 84,642 train samples, 12,092 validation samples, and 24,185 test samples, all IID. / default train/test splits, and use 10% of the train set as validation. (See the split sketch after this table.)
Hardware Specification | No | The paper mentions "our machines with 128GB of memory" but does not specify any particular GPU models, CPU models, or other detailed hardware components used for running experiments.
Software Dependencies | No | The paper mentions software like sklearn, TensorFlow, Keras, and ADAM, but it does not provide specific version numbers for these libraries or packages.
Experiment Setup | Yes | used ADAM (Kingma & Ba, 2015) with a default learning rate of 0.001 (preliminary experiments with learning rates of 0.0003 as suggested in Liu et al. (2020) yielded similar results). Batch size was N for N < 100, 1000 for the larger wine experiment in Sec 5.4, and 100 otherwise. / For GON and CGON, we use an ensemble of D unimodal lattices. All methods are trained for 250 epochs. / For both GON and CGON, we first use D PLFs with K keypoints to calibrate the D inputs for optimization. The unimodal function consists of an ensemble of D unimodal lattices, each fusing 3 inputs with V keypoints. (A hedged architecture sketch follows this table.)
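
The Experiment Setup row describes D piecewise-linear calibrators feeding an ensemble of D unimodal lattices, trained with ADAM at a learning rate of 0.001 for 250 epochs. The following is a minimal sketch of how such a model could be wired with TensorFlow Lattice's PWLCalibration and Lattice layers; the values of D, K, and V, the keypoint placement, the way lattices share inputs, and the averaged ensemble output are illustrative assumptions, not the paper's released code.

```python
import numpy as np
import tensorflow as tf
import tensorflow_lattice as tfl

D, K, V = 6, 10, 3  # illustrative sizes: D inputs, K calibrator keypoints, V lattice vertices

# One scalar input per optimization variable.
raw_inputs = [tf.keras.Input(shape=(1,), name=f"x{i}") for i in range(D)]

# Piecewise-linear calibrators (PLFs) with K keypoints, mapping each input
# onto the lattice vertex range [0, V - 1]. Keypoint placement is illustrative.
calibrated = [
    tfl.layers.PWLCalibration(
        input_keypoints=np.linspace(0.0, 1.0, K),
        output_min=0.0,
        output_max=V - 1.0,
    )(x)
    for x in raw_inputs
]

# Ensemble of D unimodal lattices, each fusing 3 calibrated inputs.
# 'peak' unimodality gives each lattice a single interior maximum
# (assumes a tensorflow_lattice release that supports 'peak').
lattice_outputs = [
    tfl.layers.Lattice(
        lattice_sizes=[V, V, V],
        unimodalities=["peak"] * 3,
    )([calibrated[j % D] for j in range(i, i + 3)])
    for i in range(D)
]

# Average the ensemble members into a single score; this fusion choice is an
# assumption, the paper's released code may combine lattices differently.
output = tf.keras.layers.Average()(lattice_outputs)

model = tf.keras.Model(inputs=raw_inputs, outputs=output)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # ADAM, lr 0.001 per the paper
    loss="mse",
)
# model.fit(train_x_list, train_y, batch_size=100, epochs=250)  # 250 epochs per the paper
```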
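
The Dataset Splits row also mentions taking the datasets' default train/test splits and holding out 10% of the train set for validation. A minimal sketch of that split, assuming scikit-learn (named in the Software Dependencies row) and placeholder arrays standing in for a real dataset:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholder arrays standing in for a dataset's default train split.
X_train = np.random.rand(1000, 8)
y_train = np.random.rand(1000)

# Hold out 10% of the train split as a validation set, keeping the rest for training.
X_tr, X_val, y_tr, y_val = train_test_split(
    X_train, y_train, test_size=0.10, random_state=0)
```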