Binary Linear Compression for Multi-label Classification
Authors: Wen-Ji Zhou, Yang Yu, Min-Ling Zhang
IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on several multi-label datasets show that employing classification in the embedded space results in much simpler models than regression, leading to smaller structural risk. The proposed methods are also shown to be superior to some state-of-the-art approaches. |
| Researcher Affiliation | Academia | National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China; Collaborative Innovation Center of Novel Software Technology and Industrialization, Nanjing, China; School of Computer Science and Engineering, Southeast University, Nanjing, China; Key Laboratory of Computer Network and Information Integration (SEU), Ministry of Education, China; yuy@nju.edu.cn |
| Pseudocode | Yes | Algorithm 1 BILC (a hedged sketch of the compression pipeline appears after this table) |
| Open Source Code | No | The paper mentions a package used for optimization: "For BILC optimization, we use RACOS [Yu et al., 2016] as the derivative-free optimization method through the ZOOpt package (https://github.com/eyounx/ZOOpt)." However, ZOOpt is a third-party tool employed by the authors; the paper makes no statement that source code for their own BILC implementation is provided. |
| Open Datasets | Yes | We employ 5 datasets in our experiments, including corel5k [Duygulu et al., 2002], bibtex [Katakis et al., 2008], bookmarks [Katakis et al., 2008], NUS-WIDE [Chua et al., 2009], and Delicious [Tsoumakas et al., 2008]. All the datasets are publicly available. |
| Dataset Splits | No | The paper provides '#training instances' and '#test instances' in Table 1 for each dataset, but does not explicitly mention a separate validation set split or the methodology for such a split (e.g., percentages, counts, or cross-validation setup). |
| Hardware Specification | No | The paper does not provide specific hardware details such as CPU/GPU models, processor types, or memory used for running the experiments. It only discusses the experimental setup at a high level. |
| Software Dependencies | No | The paper mentions using 'Adaboost', 'LSBoost', 'RACOS', and 'ZOOpt package' (with a URL), but does not provide specific version numbers for any of these software dependencies. |
| Experiment Setup | Yes | Adaboost [Freund and Schapire, 1997] is employed as the base classifier in BILC and 1-Bit Compressed Sensing, configured with pruned decision trees and 200 iterations. Correspondingly, LSBoost [Friedman, 2001] is employed as the base regressor of Compressed Sensing, PLST, and SLEEC, also configured with pruned decision trees, 200 iterations, and a learning rate of 0.1. For BILC optimization, the authors use RACOS [Yu et al., 2016] as the derivative-free optimization method through the ZOOpt package (https://github.com/eyounx/ZOOpt), with an evaluation budget of 0.4·L·L̂, where L is the number of labels and L̂ the embedded code length (hedged configuration sketches follow this table). |
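
For context on the "Algorithm 1 BILC" row above, here is a minimal sketch of what a binary linear label-compression pipeline can look like. The random binary encoding matrix, parity-style encoder, and nearest-code-word decoder are all illustrative assumptions: the paper optimizes its encoding with RACOS, and its exact encode/decode steps live in Algorithm 1, which is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy problem: n instances, d features, L binary labels compressed to L_hat bits.
n, d, L, L_hat = 200, 20, 10, 4
X = rng.normal(size=(n, d))
Y = (rng.random(size=(n, L)) < 0.3).astype(int)

# Illustrative binary encoding matrix; in the paper this is the object
# that RACOS optimizes, here it is random purely for demonstration.
C = rng.integers(0, 2, size=(L_hat, L))

def encode(labels):
    # Parity-style binary linear map from label space to code space.
    return (labels @ C.T) % 2

Z = encode(Y)

# One binary classifier per embedded bit: AdaBoost over pruned trees with
# 200 iterations, mirroring the base learner reported in the paper
# (max_depth stands in for pruning here).
bit_classifiers = [
    AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=200).fit(X, Z[:, j])
    for j in range(L_hat)
]

def decode(z_hat):
    # Nearest-code-word decoding: return the training label vector whose
    # code is closest to the predicted code in Hamming distance.
    return Y[np.abs(Z - z_hat).sum(axis=1).argmin()]

z_hat = np.array([clf.predict(X[:1])[0] for clf in bit_classifiers])
print("decoded label vector:", decode(z_hat))
```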
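
The base-learner settings in the Experiment Setup row map naturally onto scikit-learn estimators. The mapping below is our assumption, not the authors' code: the paper does not name an implementation, and `max_depth` merely approximates tree pruning.

```python
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingRegressor
from sklearn.tree import DecisionTreeClassifier

# Base classifier for BILC and 1-Bit Compressed Sensing: AdaBoost over
# pruned decision trees, 200 iterations.
base_classifier = AdaBoostClassifier(
    DecisionTreeClassifier(max_depth=3),
    n_estimators=200,
)

# Base regressor for Compressed Sensing, PLST, and SLEEC: least-squares
# boosting (LSBoost), 200 iterations, learning rate 0.1.
base_regressor = GradientBoostingRegressor(
    loss="squared_error",
    n_estimators=200,
    learning_rate=0.1,
)
```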
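
Finally, the reconstructed RACOS budget (0.4·L·L̂ evaluations) can be expressed through ZOOpt roughly as follows. The placeholder objective, the flattened-matrix encoding of the search space, and the example sizes are our assumptions.

```python
from zoopt import Dimension, Objective, Parameter, Opt

L, L_hat = 100, 10            # example label count and embedded code length
size = L * L_hat              # one binary variable per entry of the encoding matrix

def loss(solution):
    # Placeholder objective; in BILC this would score a candidate binary
    # encoding matrix, e.g., by its decoding error on training data.
    return float(sum(solution.get_x()))

# Discrete {0, 1} search space: the third argument (False per dimension)
# marks the variables as discrete rather than continuous.
dim = Dimension(size, [[0, 1]] * size, [False] * size)
objective = Objective(loss, dim)

# Evaluation budget of 0.4 * L * L_hat, matching the setting quoted above.
budget = int(0.4 * L * L_hat)
best = Opt.min(objective, Parameter(budget=budget))
print(best.get_x()[:10], best.get_value())
```

ZOOpt's default optimizer for discrete problems of this kind belongs to the (sequential) RACOS family, which is consistent with the method named in the paper.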