Optimizing Multivariate Performance Measures from Multi-View Data

Authors: Jim Jing-Yan Wang, Ivor Tsang, Xin Gao

AAAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on four benchmark data sets show that it not only outperforms traditional single-view-based multivariate performance optimization methods, but also achieves better results than ordinary multi-view learning methods.
Researcher Affiliation | Academia | 1) King Abdullah University of Science and Technology (KAUST), Computational Bioscience Research Center (CBRC), Computer, Electrical and Mathematical Sciences and Engineering Division (CEMSE), Thuwal 23955-6900, Saudi Arabia; 2) Center for Quantum Computation and Intelligent Systems, University of Technology Sydney, Australia
Pseudocode | Yes | Algorithm 1: Iterative multi-view learning algorithm for multivariate performance measure optimization (MVPO).
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | The four benchmark data sets are the handwritten digit data set (Van Breukelen et al. 1998), the CiteSeer scientific publication data set (Sen et al. 2008), the PASCAL VOC 07 image data set (Everingham et al. 2007), and the WebKB web page data set (Craven et al. 1998).
Dataset Splits | No | The paper states: "To conduct the experiment, we equally split a data set to two subsets randomly, and used them as training and test sets respectively." It mentions "cross-validation over the training sets" for tuning the baseline methods' parameters, but gives no details on a validation split or cross-validation for the proposed method itself.
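The quoted split protocol (a random 50/50 partition into training and test sets) can be sketched in a few lines. This is a minimal illustration; the function name `equal_random_split` and the fixed seed are our own additions, not from the paper:

```python
import random

def equal_random_split(items, seed=0):
    """Shuffle a data set and split it into two equally sized
    halves, used as training and test sets respectively."""
    rng = random.Random(seed)  # fixed seed (assumption) for repeatability
    shuffled = list(items)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

train, test = equal_random_split(range(100))
print(len(train), len(test))  # 50 50
```

A fixed seed is not mentioned in the paper; without one, each run produces a different partition, which is exactly why reporting the split (or seed) matters for reproducibility.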
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU or GPU model, memory, or cloud instance type) used to run the experiments.
Software Dependencies | No | The paper does not name the ancillary software or libraries, with version numbers, required to replicate the experiments.
Experiment Setup | Yes | The proposed iterative algorithm is given in Algorithm 1. Convergence is reached when the most violated y∗ satisfies Δ(y∗, y) − w⊤π(y∗) ≤ ξ + ϵ, where ξ is the current upper bound of the loss function and ϵ is a convergence threshold. In most of the experiments, convergence is reached within the maximum number of iterations (100). Input: tradeoff parameters C1, C2.
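The stopping rule described in that row can be sketched as a generic cutting-plane loop. This is a schematic sketch only, not the paper's implementation: `most_violated`, `violation`, and `retrain` are hypothetical callables, with `violation` standing in for the most-violated-constraint value that is compared against ξ + ϵ:

```python
def cutting_plane(most_violated, violation, retrain, epsilon=1e-3, max_iter=100):
    """Schematic cutting-plane loop: stop once the most violated y*
    exceeds the current upper bound xi of the loss by at most epsilon,
    or after max_iter iterations (100 in the paper's experiments)."""
    working_set, xi = [], 0.0
    for it in range(max_iter):
        y_star = most_violated(working_set)
        # Stands in for the convergence check against xi + epsilon.
        if violation(y_star) <= xi + epsilon:
            return working_set, it
        working_set.append(y_star)
        xi = retrain(working_set)  # re-solve over the working set (stub)
    return working_set, max_iter

# Toy run: violations shrink as constraints are added.
viols = [1.0, 0.5, 0.2, 0.0005]
ws, iters = cutting_plane(
    most_violated=lambda W: len(W),  # hypothetical: "label" = next index
    violation=lambda y: viols[y],
    retrain=lambda W: 0.0,           # hypothetical solver stub
)
print(iters)  # 3
```

In the toy run the loop adds three constraints and then stops, because the fourth violation (0.0005) falls below the threshold ξ + ϵ = 0.001.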