Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning

Authors: Zhenhuan Yang, Yunwen Lei, Puyu Wang, Tianbao Yang, Yiming Ying

NeurIPS 2021

Each entry below gives a reproducibility variable, the assessed result, and the supporting LLM response.
Research Type: Experimental. From Section 6 (Experimental Validation): "We now report some preliminary experiments on AUC maximization with $f(\mathbf{w}; (x, y), (x', y')) = \ell(\mathbf{w}^\top(x - x'))\,\mathbb{I}[y = 1 \wedge y' = -1]$, where $\ell$ is a surrogate loss function, e.g., the hinge loss $\ell(t) = (1 - t)_+$. The purpose of our first experiment is to compare our algorithm, i.e. Algorithm 1, against four existing algorithms for pairwise learning in terms of generalization and CPU running time on several datasets available from the LIBSVM website [9]." (The surrogate loss is illustrated in the first code sketch after this list.)
Researcher Affiliation: Academia. Zhenhuan Yang (1), Yunwen Lei (2), Puyu Wang (3), Tianbao Yang (4), Yiming Ying (1); (1) University at Albany, SUNY, Albany, NY; (2) University of Birmingham, Birmingham; (3) City University of Hong Kong, Hong Kong; (4) University of Iowa, Iowa City, IA.
Pseudocode: Yes. The paper presents Algorithm 1, "SGD for Pairwise Learning" (a generic sketch of such an update follows this list).
Open Source Code: Yes. "The source codes are available at https://github.com/zhenhuan-yang/simple-pairwise."
Open Datasets: Yes. Experiments are run "on several datasets available from the LIBSVM website [9]."
Dataset Splits: No. The paper mentions cross validation for tuning step sizes ("which is tuned by cross validation") but does not give specific train/validation/test splits, percentages, or a methodology for partitioning the experimental data.
Hardware Specification: No. The paper does not report the specific hardware (e.g., CPU or GPU models) used to run the experiments.
Software Dependencies: No. The paper mentions software such as LIBSVM but does not give version numbers for any ancillary software dependencies.
Experiment Setup: Yes. "To fairly compare the CPU running time, we apply the following uniform setting across all algorithms: 1) $\mathcal{W}$ is an $\ell_2$ ball with the same diameter; 2) the step sizes $\eta_t \equiv \eta$, which is tuned by cross validation." (See the projection and step-size sketch after this list.)
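
For the "Research Type" entry, the quoted AUC surrogate can be written as a small function. This is a minimal sketch assuming the hinge surrogate; the function name and interface are illustrative and are not taken from the released code.

```python
import numpy as np

def pairwise_hinge_auc_loss(w, x, y, x_prime, y_prime):
    """Hinge surrogate for AUC on one example pair (illustrative sketch).

    Returns ell(w^T (x - x')) with ell(t) = max(0, 1 - t) when (y, y') is a
    (positive, negative) pair, and 0 otherwise, matching the quoted formula.
    """
    if y == 1 and y_prime == -1:
        margin = np.dot(w, np.asarray(x) - np.asarray(x_prime))
        return max(0.0, 1.0 - margin)
    return 0.0
```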
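For the "Pseudocode" entry, the following is a generic sketch of projected SGD for pairwise learning: one pair of examples is sampled per iteration, a (sub)gradient step is taken, and the iterate is projected back onto an $\ell_2$ ball. It illustrates the general scheme only and is not claimed to reproduce the paper's Algorithm 1 line by line; `loss_grad`, the pair-sampling rule, and the ball radius are assumptions.

```python
import numpy as np

def sgd_pairwise(X, y, loss_grad, eta, T, radius, seed=0):
    """Projected SGD with one sampled example pair per iteration (sketch).

    loss_grad(w, (xi, yi), (xj, yj)) is assumed to return the gradient of the
    pairwise loss f(w; z_i, z_j) with respect to w.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(T):
        i, j = rng.choice(n, size=2, replace=False)   # sample a pair of examples
        w = w - eta * loss_grad(w, (X[i], y[i]), (X[j], y[j]))
        norm = np.linalg.norm(w)
        if norm > radius:                             # project onto the l2 ball W
            w = w * (radius / norm)
    return w
```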
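For the "Experiment Setup" entry, constraining $\mathcal{W}$ to an $\ell_2$ ball amounts to the standard Euclidean projection below, and the constant step size $\eta$ is then selected by cross validation over a grid. The grid values and function name are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def project_l2_ball(w, radius):
    """Euclidean projection onto the l2 ball {v : ||v||_2 <= radius}."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else w * (radius / norm)

# Illustrative grid of constant step sizes eta to be tuned by cross validation
# (the actual values searched in the paper are not specified here).
etas = [10.0 ** k for k in range(-5, 1)]
```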