A Probabilistic Hierarchical Model for Multi-View and Multi-Feature Classification
Authors: Jinxing Li, Hongwei Yong, Bob Zhang, Mu Li, Lei Zhang, David Zhang
AAAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on the extensive synthetic and two real-world datasets substantiate the effectiveness and superiority of our approach as compared with state-of-the-art. |
| Researcher Affiliation | Academia | Jinxing Li, Hongwei Yong, Bob Zhang, Mu Li, Lei Zhang, David Zhang (email: {csjxli, cshyong, csmuli, cslzhang, csdzhang}@comp.polyu.edu.hk) Department of Computing, Hong Kong Polytechnic University, Hung Hom, Hong Kong, China Department of Computer and Information Science, University of Macau, Macau, China (email: bobzhang@umac.mo) |
| Pseudocode | Yes | Algorithm 1 (HMMF): Hierarchical Multi-view Multi-feature Fusion |
| Open Source Code | No | The paper does not explicitly state that source code for the described methodology is available, nor does it provide a link to a code repository. |
| Open Datasets | Yes | The synthetic data is generated according to the assumption of the proposed method. Particularly, given the values of D_j and D_{k_j}, the parameters A_{jk_j}, Σ_{jk_j}, Σ_{jp} and μ_{jp} are randomly generated. ... We also select the biomedical dataset (Li et al. 2016) to evaluate the performance of the proposed method. ... The third one is the Wiki Text-Image dataset (Rasiwasia et al. 2010) collected from Wikipedia's featured articles. |
| Dataset Splits | Yes | Additionally, 40, 50, 60, and 70 instances in each category are randomly selected for training, repeated five independent times, and the rest of the samples are used for testing. ... For the parameter tuning on the synthetic and Wiki Text-Image datasets, we tune the dimension D_j of the latent variable through 5-fold cross-validation using training data. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions general algorithms and models like EM algorithm, SVM, Alexnet, and PCA, but does not provide specific version numbers for any software dependencies or libraries used in the implementation. |
| Experiment Setup | Yes | In this experiment, we set D_j and D_{jk_j} to be 10 and 20, respectively. ... Additionally, 40, 50, 60, and 70 instances in each category are randomly selected for training, repeated five independent times, and the rest of the samples are used for testing. ... For the parameter tuning on the synthetic and Wiki Text-Image datasets, we tune the dimension D_j of the latent variable through 5-fold cross-validation using training data. ... we set the number of iterations to 100 in our experiments. |
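The split protocol quoted above (a fixed number of training instances drawn per category, repeated over several independent random draws, with 5-fold cross-validation on the training portion to tune the latent dimension D_j) can be sketched as follows. This is a minimal stdlib-only illustration of that protocol, not the authors' code; the function names and the seed handling are assumptions.

```python
import random

def per_class_split(labels, n_train, seed):
    """Randomly pick n_train indices per class for training; the rest
    are used for testing. Mirrors the paper's protocol of 40/50/60/70
    training instances per category, repeated over independent draws."""
    rng = random.Random(seed)
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    train, test = [], []
    for y in sorted(by_class):
        idxs = by_class[y][:]
        rng.shuffle(idxs)
        train += idxs[:n_train]
        test += idxs[n_train:]
    return sorted(train), sorted(test)

def five_fold_indices(n, seed):
    """Partition range(n) (the training set) into 5 disjoint folds,
    as used to tune the latent dimension D_j by cross-validation."""
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    return [order[k::5] for k in range(5)]

# Example: two classes of 100 samples each, 40 training per class,
# five independent repetitions (as in the reported setup).
labels = [0] * 100 + [1] * 100
for rep in range(5):
    train_idx, test_idx = per_class_split(labels, n_train=40, seed=rep)
    folds = five_fold_indices(len(train_idx), seed=rep)
    # train_idx has 80 elements (40 per class); test_idx has the other 120.
```

Each repetition yields a disjoint train/test partition, and the CV folds cover the training indices exactly once, so model selection never touches the held-out test samples.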