From Common to Special: When Multi-Attribute Learning Meets Personalized Opinions
Authors: Zhiyong Yang, Qianqian Xu, Xiaochun Cao, Qingming Huang
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Eventually, the empirical study carried out in this paper demonstrates the effectiveness of our proposed method. |
| Researcher Affiliation | Academia | (1) SKLOIS, Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; (2) School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China; (3) University of Chinese Academy of Sciences, Beijing, China; (4) Key Lab of Intell. Info. Process., Inst. of Comput. Tech., CAS, Beijing, China |
| Pseudocode | Yes | Algorithm 1: The accelerated proximal gradient method for solving (P1) |
| Open Source Code | Yes | https://github.com/joshuaas/AAAI-18-Personalized-Multi-Attribute-Learning |
| Open Datasets | Yes | For attribute learning, we use the shoes Dataset (Kovashka and Grauman 2013; Kovashka, Parikh, and Grauman 2015) |
| Dataset Splits | Yes | For each involved algorithm, the hyper-parameters are tuned based on a 3 fold cross validation on the training set, and the average performance of the test set on 5 different splits are recorded. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments, such as CPU or GPU models, memory, or cloud computing instances. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or other libraries/solvers with their versions) that would be needed for replication. |
| Experiment Setup | Yes | For each involved algorithm, the hyper-parameters are tuned based on a 3 fold cross validation on the training set... According to theorem 1, we set t = 10, λ1 = 2nuα, λ2 = 2.5 nα, λ3 = 32α. |
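The pseudocode row above cites an accelerated proximal gradient (APG) solver for the paper's problem (P1). As a rough, hedged illustration of that family of methods only, the sketch below implements FISTA for the standard lasso objective; the objective, variable names, and step sizes are illustrative assumptions and do not reproduce the paper's Algorithm 1 or its regularizers λ1, λ2, λ3.

```python
import numpy as np

def fista_lasso(A, b, lam, steps=200):
    """Accelerated proximal gradient (FISTA) sketch for
    min_w 0.5 * ||A w - b||^2 + lam * ||w||_1.
    Illustrative only; not the paper's Algorithm 1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
    w = np.zeros(A.shape[1])                 # current iterate
    z = w.copy()                             # extrapolated (momentum) point
    t = 1.0
    for _ in range(steps):
        g = A.T @ (A @ z - b)                # gradient of the smooth part at z
        u = z - g / L                        # forward (gradient) step
        # proximal step for lam * ||.||_1: soft-thresholding
        w_new = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = w_new + ((t - 1.0) / t_new) * (w_new - w)   # Nesterov extrapolation
        w, t = w_new, t_new
    return w
```

With `A` set to the identity, the minimizer is the soft-thresholded `b`, which gives a quick sanity check of the proximal step. The paper's solver would replace the lasso proximal operator with the one matching its own composite regularizer.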