Multi-Graph-View Learning for Complicated Object Classification
Authors: Jia Wu, Shirui Pan, Xingquan Zhu, Zhihua Cai, Chengqi Zhang
IJCAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on real-world learning tasks demonstrate the performance of MGVBL for complicated object classification. |
| Researcher Affiliation | Academia | Quantum Computation & Intelligent Systems Centre, University of Technology, Sydney, Australia; Dept. of Computer & Electrical Engineering and Computer Science, Florida Atlantic University, USA; Dept. of Computer Science, China University of Geosciences, Wuhan 430074, China |
| Pseudocode | Yes | Algorithm 1 (Discriminative Subgraph Exploration) and Algorithm 2 (MGVBL: Multi-Graph-View Bag Learning) |
| Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the described methodology. |
| Open Datasets | Yes | The Digital Bibliography & Library Project (DBLP) data set (http://dblp.uni-trier.de/xml/) consists of bibliography records in computer science. The original images [Li and Wang, 2008] from the Corel data set (https://sites.google.com/site/dctresearch/Home/content-basedimage-retrieval) are preprocessed using VLFeat segmentation [Vedaldi and Fulkerson, 2008], with each image being segmented into multiple regions and each region corresponding to one graph (see the segmentation sketch after this table). |
| Dataset Splits | Yes | All reported results are based on 10 times 10-fold cross-validation (see the cross-validation sketch after this table). |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware (e.g., GPU/CPU models, memory, or cloud resources) used to run the experiments. |
| Software Dependencies | No | The paper mentions software tools like VLFeat, SLIC, and gSpan, but does not specify their version numbers or list any other software dependencies with version information. |
| Experiment Setup | Yes | Unless specified otherwise, we set the minimum support threshold min_sup = 3% for the scientific publication data (Section 4.3) and min_sup = 2% for content-based image retrieval (Section 4.4). We set ϵ = 0.05 in our experiments (see the support-threshold sketch after this table). |
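
The Corel preprocessing quoted above (each image segmented into regions, each region becoming one graph) can be approximated as in the sketch below. This is illustrative only: it uses scikit-image's SLIC in place of the VLFeat segmentation cited in the paper, uses a built-in test image rather than a Corel image, and the node/edge construction inside each region (overlapping superpixels as nodes, spatial adjacency as edges) is an assumption, not the paper's documented procedure.

```python
# Hedged sketch: turn one image into per-region graphs via superpixels.
# scikit-image SLIC stands in for the VLFeat segmentation used in the paper;
# the node/edge definitions below are illustrative assumptions only.
import networkx as nx
import numpy as np
from skimage.data import astronaut
from skimage.segmentation import slic

image = astronaut()  # stand-in for a Corel image

# Coarse segmentation into a handful of regions, then finer superpixels.
regions = slic(image, n_segments=5, compactness=10)
superpixels = slic(image, n_segments=200, compactness=10)

graphs = []
for region_id in np.unique(regions):
    mask = regions == region_id
    g = nx.Graph()
    # Nodes: superpixels that overlap this region.
    for sp in np.unique(superpixels[mask]):
        g.add_node(int(sp))
    # Edges: superpixels that touch (share a horizontally adjacent pixel pair).
    left, right = superpixels[:, :-1], superpixels[:, 1:]
    touching = (left != right) & mask[:, :-1] & mask[:, 1:]
    for a, b in zip(left[touching].ravel(), right[touching].ravel()):
        if g.has_node(int(a)) and g.has_node(int(b)):
            g.add_edge(int(a), int(b))
    graphs.append(g)

print(f"built {len(graphs)} region graphs; "
      f"sizes: {[g.number_of_nodes() for g in graphs]}")
```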
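The 10 times 10-fold cross-validation protocol is a standard evaluation setup; a minimal way to reproduce that protocol with scikit-learn is sketched below. The classifier, feature matrix, and scoring metric are placeholders, not the paper's actual MGVBL pipeline, for which no implementation is released.

```python
# Hedged sketch of a 10x10 (10-repeat, 10-fold) cross-validation protocol.
# The estimator and features are placeholders standing in for MGVBL outputs.
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.RandomState(0)
X = rng.rand(200, 50)              # stand-in for bag-level feature vectors
y = rng.randint(0, 2, size=200)    # stand-in for binary bag labels

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)
scores = cross_val_score(LinearSVC(), X, y, cv=cv, scoring="accuracy")
print(f"mean accuracy over 10x10 CV: {scores.mean():.3f} +/- {scores.std():.3f}")
```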
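The minimum support threshold min_sup quoted in the experiment setup controls which mined subgraph patterns are kept as candidates: a pattern must occur in at least min_sup of the training graphs. The snippet below is a minimal sketch of that filtering step under the quoted thresholds (3% and 2%); the pattern mining itself (e.g. gSpan) and the paper's discriminative scoring are outside its scope, and the data structure used here is an assumption.

```python
# Hedged sketch of minimum-support filtering for mined subgraph patterns.
# `pattern_occurrences` maps a pattern id to the set of graph ids containing
# it; how those occurrences are mined (e.g. with gSpan) is assumed.
def filter_by_min_sup(pattern_occurrences, n_graphs, min_sup=0.03):
    """Keep patterns occurring in at least min_sup * n_graphs graphs."""
    threshold = min_sup * n_graphs
    return {p: graphs for p, graphs in pattern_occurrences.items()
            if len(graphs) >= threshold}

# Toy usage: 100 graphs, min_sup = 3% -> a pattern needs >= 3 supporting graphs.
occurrences = {"g1": {0, 4, 17, 42}, "g2": {5}, "g3": {1, 2, 3}}
frequent = filter_by_min_sup(occurrences, n_graphs=100, min_sup=0.03)
print(sorted(frequent))   # ['g1', 'g3']
```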