Pairwise Conditional Gradients without Swap Steps and Sparser Kernel Herding

Authors: Kazuma K. Tsuji, Ken'ichiro Tanaka, Sebastian Pokutta

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Moreover, we observe in the numerical experiments that BPCG's solutions are much sparser than those of PCG. We apply BPCG to the kernel herding setting, where we derive nice quadrature rules and provide numerical results demonstrating the performance of our method."
Researcher Affiliation | Collaboration | 1 MUFG Bank, Ltd., Tokyo, Japan; 2 Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan; 3 PRESTO, Japan Science and Technology Agency (JST), Tokyo, Japan; 4 AISST, Zuse Institute Berlin and Institute of Mathematics, Technische Universität Berlin, Berlin, Germany.
Pseudocode | Yes | Algorithm 1: Blended Pairwise Conditional Gradients (BPCG)
Open Source Code | Yes | "The BPCG algorithm has also been integrated into the FrankWolfe.jl Julia package (Besançon et al., 2022) and is now the de facto default choice for active-set-based FW variants."
Open Datasets | Yes | "We used the data set MovieLens Latest Datasets: http://files.grouplens.org/datasets/movielens/ml-latest-small.zip."
Dataset Splits | No | The paper mentions several datasets and problem types but does not explicitly provide training/validation/test split details (percentages, counts, or references to predefined splits).
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory used for running the experiments.
Software Dependencies | No | The paper mentions the FrankWolfe.jl Julia package but does not specify its version number or the version of Julia used, which is required for reproducibility.
Experiment Setup | Yes | "For the lazified BPCG, we use the accuracy parameter J = 2."
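The Algorithm 1 referenced above (Blended Pairwise Conditional Gradients) selects, at each iteration, between a local pairwise step within the active set and a global Frank-Wolfe step, so that no swap steps occur and the active set stays sparse. The following is a minimal non-authoritative sketch of that selection rule, assuming a quadratic objective over a polytope given by its vertices and exact line search; the function name, the simplex demo, and the drop tolerance are illustrative choices, not taken from the paper:

```python
import numpy as np

def bpcg_quadratic(b, vertices, max_iter=500, tol=1e-10):
    """Sketch of Blended Pairwise Conditional Gradients (BPCG) for
    min_x 0.5*||x - b||^2 over conv(vertices), with exact line search.

    The pairwise step moves weight from the away vertex directly to the
    local FW vertex inside the active set, so no swap steps occur; new
    vertices enter only through the global FW step.
    """
    V = np.asarray(vertices, dtype=float)
    weights = {0: 1.0}                      # active set: vertex index -> convex weight
    x = V[0].copy()
    for _ in range(max_iter):
        g = x - b                           # gradient of the quadratic objective
        scores = {i: float(V[i] @ g) for i in weights}
        a = max(scores, key=scores.get)     # away vertex (worst in active set)
        s = min(scores, key=scores.get)     # local FW vertex (best in active set)
        w = int(np.argmin(V @ g))           # global FW vertex
        fw_gap = float(g @ (x - V[w]))
        if fw_gap <= tol:
            break
        if scores[a] - scores[s] >= fw_gap:
            # local pairwise step: shift weight from a to s, no new vertex added
            d = V[a] - V[s]
            gamma = min(weights[a], float(g @ d) / float(d @ d))
            x -= gamma * d
            weights[s] += gamma
            weights[a] -= gamma
            if weights[a] <= 1e-15:
                del weights[a]              # drop step: active set shrinks
        else:
            # global Frank-Wolfe step toward w
            d = x - V[w]
            gamma = min(1.0, float(g @ d) / float(d @ d))
            weights = {i: (1 - gamma) * wt for i, wt in weights.items()}
            weights[w] = weights.get(w, 0.0) + gamma
            x -= gamma * d
    return x, weights
```

For example, projecting b = (0.6, 0.3, 0.1) onto the probability simplex (vertices = standard basis of R^3) recovers b itself, with the returned weights forming a sparse convex combination. The sparsity behaviour noted in the Research Type row comes from the fact that the pairwise branch never grows the active set.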