Rank Ordering Constraints Elimination with Application for Kernel Learning

Authors: Ying Xie, Chris Ding, Yihong Gong, Zongze Wu

AAAI 2017

Reproducibility assessment — each entry gives the variable, the assessed result, and the LLM response supporting it:

Research Type: Experimental
On seven datasets, our approach reduces the computational time by orders of magnitude compared to the current standard quadratically constrained quadratic programming (QCQP) optimization approach. Experiments show that the proposed transformation, combined with the new efficient algorithm, speeds up the computation by orders of magnitude.

Researcher Affiliation: Academia
Anhui University, Hefei, China; University of Texas at Arlington, Texas, USA; Xi'an Jiaotong University, Xi'an, China; Guangdong University of Technology, Guangzhou, China

Pseudocode: Yes
Here we present an efficient algorithm to solve the optimization problem Eq.(24) and Eq.(26). The algorithm for solving Eq.(29) is the following: (1) Initialize α = (1, …, 1)ᵀ; set α = α/(αᵀb). (2) For every i, update α_i using α_i ← α_i b_i (αᵀFα)/(Fα)_i (Eq.(28)). (3) Set α ← α/(αᵀb). Repeat (2) and (3) until α converges. (A runnable sketch of this update appears after the table.)

Open Source Code: No
The paper does not provide any statement about open-sourcing the code for the methodology, nor does it include a link to a code repository.

Open Datasets: Yes
We run the above rank order constrained Kernel Alignment on several data sets (some of them are used in (Zhu et al. 2005)). These datasets are shown in Table 1. (The generic kernel-alignment score is sketched after the table.)

Dataset Splits: No
The paper mentions using labeled and unlabeled data, and that 'The rest of the data are used for testing,' but it does not specify any validation dataset splits or cross-validation methodology.

Hardware Specification: Yes
Figure 1 shows the average computational time, based on a PC with a 2.3 GHz Intel Core processor and 6 GB of memory, running Windows 7, with the Matlab implementation.

Software Dependencies: No
The paper only mentions 'Matlab implementation' without specifying a version number or other required software dependencies with their versions.

Experiment Setup: No
The paper mentions using 'different number of data points as labeled data' and averaging results over '30 runs each time using a different random set as labeled data points', but it does not provide specific hyperparameter values (e.g., learning rate, batch size) or detailed training configurations.

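For concreteness, here is a minimal Python sketch of the multiplicative update quoted in the Pseudocode row. The paper states the update only in the form shown above; the function name, the convergence tolerance, and the assumption that F and b have positive entries (so the elementwise division is well defined) are illustrative choices, not details from the paper.

```python
import numpy as np

def multiplicative_update(F, b, tol=1e-8, max_iter=1000):
    """Sketch of the quoted algorithm, assuming F (n x n) and b (n,)
    have positive entries. Steps follow the quoted pseudocode:
      (1) initialize alpha = (1, ..., 1)^T and rescale to alpha^T b = 1;
      (2) update alpha_i <- alpha_i * b_i * (alpha^T F alpha) / (F alpha)_i;
      (3) rescale alpha <- alpha / (alpha^T b); repeat (2)-(3) to convergence.
    tol and max_iter are illustrative assumptions.
    """
    n = F.shape[0]
    alpha = np.ones(n)          # step (1): alpha = (1, ..., 1)^T
    alpha /= alpha @ b          # rescale so that alpha^T b = 1
    for _ in range(max_iter):
        Fa = F @ alpha
        new = alpha * b * (alpha @ Fa) / Fa   # step (2): elementwise update
        new /= new @ b                        # step (3): rescale to alpha^T b = 1
        if np.linalg.norm(new - alpha) < tol: # stop once alpha converges
            return new
        alpha = new
    return alpha
```

With positive F and b, every iterate stays nonnegative and satisfies αᵀb = 1 after each rescaling, matching steps (1)-(3) above.
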
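As background for the Open Datasets row, which refers to rank order constrained Kernel Alignment: the standard (unconstrained) kernel alignment score is the normalized Frobenius inner product between two kernel matrices. The snippet below is a sketch of that generic score only; it does not implement the paper's rank-ordering constraints.

```python
import numpy as np

def kernel_alignment(K1, K2):
    """Standard kernel alignment A(K1, K2) = <K1, K2>_F / (||K1||_F ||K2||_F).
    Generic score only; the paper optimizes an alignment objective subject
    to rank ordering constraints, which this sketch does not include.
    """
    inner = np.sum(K1 * K2)  # Frobenius inner product <K1, K2>_F
    return inner / (np.linalg.norm(K1) * np.linalg.norm(K2))
```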