Diversified Bayesian Nonnegative Matrix Factorization
Authors: Qiao Maoying, Yu Jun, Liu Tongliang, Wang Xinchao, Tao Dacheng | pp. 5420–5427
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments conducted on a synthetic dataset and a real-world MULAN dataset for multilabel learning (MLL) task demonstrate the superiority of the proposed method. |
| Researcher Affiliation | Academia | Maoying Qiao (1), Jun Yu (2), Tongliang Liu (3), Xinchao Wang (4), Dacheng Tao (3). (1) The Commonwealth Scientific and Industrial Research Organisation, Australia; (2) Hangzhou Dianzi University, Hangzhou, China; (3) UBTECH Sydney AI Centre, School of Computer Science, Faculty of Engineering, The University of Sydney, Darlington, NSW 2008, Australia; (4) Stevens Institute of Technology, Hoboken, New Jersey 07030. maoying.qiao@csiro.au, yujun@hdu.edu.cn, {tongliang.liu, dacheng.tao}@sydney.edu.au, xinchao.wang@stevens.edu |
| Pseudocode | Yes | Algorithm 1 Gibbs Sampling for Div BNMF |
| Open Source Code | No | The paper does not provide an explicit statement or link to the open-source code for the methodology described. |
| Open Datasets | Yes | We evaluated the performance of the proposed Div BNMF regarding MLL on one nonnegative featured benchmark dataset: the MULAN scene dataset (http://mulan.sourceforge.net/). It contained 2407 images with six labels, each of which was represented by a 294-dimensional nonnegative feature vector. It was split into a training set containing 1211 images and a test set containing 1196 images. |
| Dataset Splits | No | The paper states the MULAN dataset 'was split into a training set containing 1211 images and a test set containing 1196 images.' It does not explicitly mention a validation split. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies or versions used for the experiments. |
| Experiment Setup | Yes | The results for LANMF and LVNMF were optimized by varying the trade-off parameter λ among {0.01, 0.1, 1, 10, 100}. All reconstruction errors were averaged over 10 runs. The first 2000 iterations were omitted as burn-in period... the first 10000 iterations were dumped as burn-in period. The thinning interval was set to 500... |
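The burn-in and thinning settings quoted in the Experiment Setup row can be illustrated with a minimal sketch. The sampler below is a stand-in (the paper's actual Gibbs updates for Div BNMF are not reproduced here); only the burn-in of 2000 iterations and the thinning interval of 500 follow the quoted setup, and the total iteration count is an assumed value for illustration:

```python
import random

def draw_sample(state):
    # Stand-in for one Gibbs sweep over the Div-BNMF latent variables;
    # here it just perturbs a scalar state for illustration.
    return state + random.gauss(0.0, 1.0)

def run_gibbs(n_iters=12000, burn_in=2000, thin=500):
    """Collect thinned posterior samples after discarding the burn-in period."""
    state = 0.0
    kept = []
    for it in range(n_iters):
        state = draw_sample(state)
        # Discard the first `burn_in` iterations, then keep every `thin`-th draw.
        if it >= burn_in and (it - burn_in) % thin == 0:
            kept.append(state)
    return kept

samples = run_gibbs()
print(len(samples))  # (12000 - 2000) / 500 = 20 retained samples
```

With these settings, 10000 post-burn-in iterations thinned at interval 500 retain 20 approximately decorrelated samples for posterior estimates.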