Learning to Approximate a Bregman Divergence

Authors: Ali Siahkamari, Xide Xia, Venkatesh Saligrama, David Castañón, Brian Kulis

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirically validate our approach on problems of ranking and clustering, showing that our method tends to outperform a wide range of linear and non-linear metric learning baselines. In this experiment we implement PBDL on four standard UCI classification data sets that have previously been used for metric learning benchmarking. See the supplementary material for additional data sets. We apply the learned divergences to the tasks of semi-supervised clustering and similarity ranking."
Researcher Affiliation | Academia | "1 Department of Electrical and Computer Engineering, 2 Department of Computer Science, Boston University, Boston, MA 02215; {siaa, xidexia, srv, dac, bkulis}@bu.edu"
Pseudocode | No | The paper describes a linear program (LP) and refers to "the above algorithm" but does not provide a structured pseudocode or algorithm block; a hedged sketch of such an LP is given after this table.
Open Source Code | Yes | "Code for all experiments is available on our github page: https://github.com/Siahkamari/Learning-to-Approximate-a-Bregman-Divergence.git"
Open Datasets | Yes | "In this experiment we implement PBDL on four standard UCI classification data sets that have previously been used for metric learning benchmarking. See the supplementary material for additional data sets."
Dataset Splits | Yes | "To learn a Bregman divergence we use a cross-validation scheme with 3 folds. The λ in our algorithm (PBDL) were both chosen by 3-fold cross validation on training data on a grid 10^(−8:1:4)."
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments (e.g., GPU/CPU models, memory, or cloud instance types).
Software Dependencies | No | The paper mentions "Gurobi solvers [13]" but does not specify version numbers for Gurobi or for other key software dependencies.
Experiment Setup | Yes | "The λ in our algorithm (PBDL) were both chosen by 3-fold cross validation on training data on a grid 10^(−8:1:4). The number of inequalities provided was 2000 for each case." A sketch of this grid search follows the LP sketch below.
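
Since the paper provides no pseudocode, the following is a minimal sketch of the kind of linear program it alludes to: fit a max-affine convex potential φ, represented by per-point values z_i and subgradients a_i, whose induced Bregman divergence D(x_i, x_j) = z_i − z_j − ⟨a_j, x_i − x_j⟩ separates similar from dissimilar pairs. The objective, margin, and function name (fit_pbdl_lp) are illustrative assumptions, not the authors' exact PBDL formulation; their implementation is at the GitHub link above.

```python
# Hypothetical sketch of a PBDL-style linear program in cvxpy. The exact
# objective, margin, and constraint set are assumptions, not the paper's model.
import cvxpy as cp
import numpy as np

def fit_pbdl_lp(X, pairs, labels, lam=1e-2, margin=1.0):
    """X: (n, d) data; pairs: list of (i, j); labels[t]: +1 similar, -1 dissimilar."""
    n, d = X.shape
    z = cp.Variable(n)                          # z[i] approximates phi(x_i)
    A = cp.Variable((n, d))                     # A[i] is a subgradient of phi at x_i
    xi = cp.Variable(len(pairs), nonneg=True)   # hinge slack per supervised pair

    # Convexity of the piecewise-linear potential: z_i >= z_j + <a_j, x_i - x_j>.
    cons = [z[i] >= z[j] + A[j] @ (X[i] - X[j])
            for i in range(n) for j in range(n) if i != j]

    # Bregman divergence between sample points under this parameterization.
    def D(i, j):
        return z[i] - z[j] - A[j] @ (X[i] - X[j])

    # Margin constraints: similar pairs get small divergence, dissimilar pairs large.
    for t, ((i, j), y) in enumerate(zip(pairs, labels)):
        if y > 0:
            cons.append(D(i, j) <= margin + xi[t])
        else:
            cons.append(D(i, j) >= margin - xi[t])

    # Hinge slacks plus L1 regularization on subgradients keep the problem an LP.
    cp.Problem(cp.Minimize(cp.sum(xi) + lam * cp.sum(cp.abs(A))), cons).solve()
    return z.value, A.value
```

With the roughly 2000 pairwise inequalities the report mentions, this is a moderate-size LP, consistent with the paper's reliance on an off-the-shelf solver such as Gurobi.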
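
The reported hyperparameter selection reads as a standard 3-fold grid search over λ ∈ 10^(−8:1:4). The skeleton below, with hypothetical fit and score callables, shows one way to reproduce that protocol; it is not the authors' code.

```python
# Sketch of 3-fold cross-validation over the grid 10^(-8:1:4) described in the
# report. `fit` and `score` are hypothetical callables, not the authors' API.
import numpy as np
from sklearn.model_selection import KFold

lambda_grid = 10.0 ** np.arange(-8, 5)  # 10^-8, 10^-7, ..., 10^4

def choose_lambda(X, y, fit, score):
    """Return the lambda with the best mean validation score over 3 folds."""
    kf = KFold(n_splits=3, shuffle=True, random_state=0)
    mean_scores = [
        np.mean([score(fit(X[tr], y[tr], lam), X[va], y[va])
                 for tr, va in kf.split(X)])
        for lam in lambda_grid
    ]
    return lambda_grid[int(np.argmax(mean_scores))]
```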