Large Margin Metric Learning for Multi-Label Prediction
Authors: Weiwei Liu, Ivor Tsang
AAAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments demonstrate that our proposed method is significantly faster than CCA and MMOC in terms of both training and testing complexities. Moreover, our method achieves superior prediction performance compared with state-of-the-art methods. |
| Researcher Affiliation | Academia | Center for Quantum Computation and Intelligent Systems University of Technology, Sydney, Australia liuweiwei863@gmail.com, ivor.tsang@uts.edu.au |
| Pseudocode | Yes | Algorithm 1 Accelerated Proximal Gradient Algorithm for Solving Eq. (4). Input: η ∈ (0, 1) is a constant; choose Q_0 = Q_{-1} ∈ S_+^q; t_0 = t_{-1} = 1 and κ = 0; choose the Lipschitz constant L_f and set τ_0 = L_f. Output: the optimal solution to Eq. (4). 1: Set Z_κ = Q_κ + ((t_{κ-1} - 1)/t_κ)(Q_κ - Q_{κ-1}). 2: Set τ = ητ_κ. 3: for j = 0, 1, 2, . . . do 4: Set G = Z_κ - (1/τ)∇f(Z_κ), compute S_τ(G). 5: if F(S_τ(G)) ≤ A_τ(S_τ(G), Z_κ), then 6: set τ_κ = τ, stop. 7: else 8: τ = (1/η)τ. 9: end if 10: end for 11: Set Q_{κ+1} = S_τ(G). 12: Compute t_{κ+1} = (1 + √(1 + 4t_κ²))/2; let κ = κ + 1. 13: Quit if the stopping condition is achieved; otherwise, go to step 1. (A hedged Python sketch of this loop appears after the table.) |
| Open Source Code | No | The paper mentions using third-party tools like LIBLINEAR and CVX, but does not provide any specific link or explicit statement about the availability of the source code for their own proposed methodology. |
| Open Datasets | Yes | Data Sets We conduct experiments on a variety of real-world data sets from different domains2 (Table 2). scene (Boutell et al. 2004): Collects images of outdoor... cal500 (Turnbull et al. 2008): Contains songs by different artists... corel5k (Duygulu et al. 2002): Contains images from Stock Photo CDs... delicious (Tsoumakas, Katakis, and Vlahavas 2008): Contains textual data of web pages... Eur-Lex (Mencía and Fürnkranz 2008): Collects documents on European Union law. 2http://mulan.sourceforge.net |
| Dataset Splits | Yes | We perform 10-fold cross-validation on each data set and report the mean and standard error of each evaluation measurement. |
| Hardware Specification | Yes | All experiments are conducted on a workstation with a 3.4GHz Intel CPU and 32GB main memory running a Linux platform. |
| Software Dependencies | No | The paper mentions 'MATLAB', 'LIBLINEAR', and 'CVX' but does not specify any version numbers for these software components or any other dependencies. |
| Experiment Setup | Yes | As with the experimental settings in Zhang and Schneider (2012), the number of output projections d is set to the number of original labels q (d = q) for PLST, CCA and MMOC, and the decoding parameter is set as λ = 1 for CCA and MMOC. In our experiment, we find that the performance of kNN or ML-kNN has no significant difference on most datasets with varying k. Following the setting in (Zhang and Zhou 2007), we set k = 10 for kNN, ML-kNN and our method. η = 0.4 is set for the APG algorithm and C = 10 is set for MMOC and our method. We set the stopping condition threshold as ϵ = 0.01 for the EUR-Lex (ed) data set. (A minimal sketch of this evaluation protocol appears after the table.) |
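Below is a minimal Python sketch of the APG loop quoted in the Pseudocode row. It is not the paper's implementation: the smooth term `f`, its gradient, the proximal map `prox` (here a plain projection onto the PSD cone), and the objective `F` are placeholders standing in for the pieces of Eq. (4), which the excerpt does not define.

```python
import numpy as np

# Placeholder problem: fit the PSD matrix closest to a fixed target T.
# In the paper, f, F and S_tau come from Eq. (4); they are not reproduced here.
rng = np.random.default_rng(0)
B = rng.normal(size=(5, 5))
T = B @ B.T - 2.0 * np.eye(5)          # target with some negative eigenvalues

def f(Q):                              # smooth part of the objective
    return 0.5 * np.sum((Q - T) ** 2)

def grad_f(Q):                         # gradient of the smooth part
    return Q - T

def prox(G, tau):                      # stand-in for S_tau(G): PSD projection
    w, V = np.linalg.eigh((G + G.T) / 2.0)
    return (V * np.clip(w, 0.0, None)) @ V.T

def F(Q):                              # full objective (nonsmooth part assumed zero)
    return f(Q)

def A(S, Z, tau):                      # quadratic upper bound used in backtracking
    D = S - Z
    return f(Z) + np.sum(grad_f(Z) * D) + 0.5 * tau * np.sum(D * D)

def apg(q, eta=0.4, L_f=1.0, max_iter=200, eps=1e-4):
    """Generic accelerated proximal gradient with backtracking line search."""
    Q_prev = np.zeros((q, q))          # Q_{-1}
    Q = np.zeros((q, q))               # Q_0
    t_prev, t = 1.0, 1.0
    tau_k = L_f
    for _ in range(max_iter):
        Z = Q + ((t_prev - 1.0) / t) * (Q - Q_prev)        # momentum step
        tau = eta * tau_k
        while True:                                        # backtracking on tau
            S = prox(Z - grad_f(Z) / tau, tau)
            if F(S) <= A(S, Z, tau):
                tau_k = tau
                break
            tau /= eta
        Q_prev, Q = Q, S                                   # Q_{kappa+1} = S_tau(G)
        t_prev, t = t, (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        if np.linalg.norm(Q - Q_prev) <= eps:              # stopping condition
            break
    return Q

Q_star = apg(q=5)
```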
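The evaluation protocol described in the table (10-fold cross-validation, k = 10 nearest neighbours, mean and standard error per measure) can be sketched as follows. This uses scikit-learn's `KFold` and `KNeighborsClassifier` on synthetic data as stand-ins; the paper's pipeline is MATLAB-based, runs on the Mulan benchmarks, and computes neighbours under the learned metric, none of which is reproduced here.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import hamming_loss

# Synthetic multi-label data as a placeholder for the Mulan data sets.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
Y = (rng.random(size=(200, 5)) < 0.3).astype(int)

scores = []
kf = KFold(n_splits=10, shuffle=True, random_state=0)   # 10-fold cross-validation
for train_idx, test_idx in kf.split(X):
    clf = KNeighborsClassifier(n_neighbors=10)           # k = 10, as in the reported setting
    clf.fit(X[train_idx], Y[train_idx])
    Y_pred = clf.predict(X[test_idx])
    scores.append(hamming_loss(Y[test_idx], Y_pred))

# Report the mean and standard error across folds, as the paper does per measure.
scores = np.asarray(scores)
print(f"Hamming loss: {scores.mean():.4f} +/- "
      f"{scores.std(ddof=1) / np.sqrt(len(scores)):.4f}")
```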