Learning with Feature Network and Label Network Simultaneously
Authors: Yingming Li, Ming Yang, Zenglin Xu, Zhongfei (Mark) Zhang
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive evaluations on three benchmark data sets demonstrate that DRML achieves superior performance in comparison with existing multi-label learning methods. |
| Researcher Affiliation | Academia | College of Information Science & Electronic Engineering, Zhejiang University, China; School of Computer Science and Engineering, Big Data Research Center, University of Electronic Science and Technology of China |
| Pseudocode | No | The paper describes algorithms and derivations in text and equations, but it does not include a distinct pseudocode block or algorithm box. |
| Open Source Code | No | The paper does not provide any link or explicit statement about the availability of its source code. |
| Open Datasets | Yes | All data sets are obtained from http://mulan.sourceforge.net/datasets-mlc.html. |
| Dataset Splits | Yes | On the Medical and Yeast data sets, we follow the experimental setup used in Mulan. Since there is no fixed split in the Bookmarks data set in Mulan, we use a fixed training set of 60% of the data, and evaluate the performance of our predictions on the fixed test set of 40% of the data. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU models, CPU types). |
| Software Dependencies | No | The paper does not specify the version numbers of any software libraries, frameworks, or programming languages used in the experiments. |
| Experiment Setup | No | The paper mentions the regularization parameters λ, γ, and η and explores the dropout level p, but it does not provide specific values for these or other hyperparameters (e.g., learning rate, optimizer settings) used for the main results presented in the tables. |
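The fixed 60%/40% split used for the Bookmarks data set can be sketched as follows. This is a minimal illustration only: the `fixed_split` helper, the seed value, and the sample count are assumptions, since the paper does not describe how its fixed split was generated.

```python
import random

def fixed_split(n_samples, train_fraction=0.6, seed=0):
    """Return disjoint train/test index lists covering all samples.

    Hypothetical helper: the paper only states that a fixed 60% training
    set and 40% test set were used for Bookmarks; the shuffling and seed
    here are illustrative choices to make the split reproducible.
    """
    rng = random.Random(seed)  # fixed seed so the split is the same every run
    indices = list(range(n_samples))
    rng.shuffle(indices)
    cut = int(n_samples * train_fraction)
    return indices[:cut], indices[cut:]

# Example with a placeholder sample count (not the real Bookmarks size):
train_idx, test_idx = fixed_split(1000)
```

Running `fixed_split` twice with the same seed returns identical index lists, which is what "fixed split" requires for a reproducible evaluation.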