Co-Regularized PLSA for Multi-Modal Learning
Authors: Xin Wang, MingChing Chang, Yiming Ying, Siwei Lyu
AAAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the performance of the co-PLSA algorithms on text/image cross-modal retrieval tasks, on which they show competitive performance with state-of-the-art methods. |
| Researcher Affiliation | Academia | Department of Computer Science and Department of Mathematics and Statistics, University at Albany, State University of New York, Albany, NY 12222 |
| Pseudocode | Yes | The right panel of Fig.1 provides the pseudo-code of the overall algorithm. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We use two benchmark text/image datasets in our experiments: TVGraz (Khan, Saffari, and Bischof 2009) and Wikipedia (Rasiwasia et al. 2010) |
| Dataset Splits | No | The paper mentions 'choose the balance parameter λ in co-PLSA algorithms with cross-validation on a subset of the training data,' but it does not specify a distinct validation split (e.g., 80/10/10) for evaluating the model during training. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions 'Lambert W function... is provided in popular numerical tools such as MATLAB (function lambertw) or SciPy (function scipy.special.lambertw),' but it does not provide specific version numbers for MATLAB or SciPy. |
| Experiment Setup | Yes | We extract 50 topics from the TVGraz dataset and 100 topics from the Wikipedia dataset, and choose the balance parameter λ in co-PLSA algorithms with cross-validation on a subset of the training data. |
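The Software Dependencies row quotes the paper's note that its update rules rely on the Lambert W function, available as `lambertw` in MATLAB or `scipy.special.lambertw` in SciPy (versions unspecified). As a dependency-free illustration of what that function computes, the sketch below is a hypothetical pure-Python Newton iteration for the principal branch W_0 on z ≥ 0; it is not the paper's implementation, and `lambert_w` is a name introduced here.

```python
import math

def lambert_w(z, tol=1e-12, max_iter=100):
    """Principal branch W_0(z) for z >= 0: solves w * exp(w) = z.

    A minimal Newton's-method sketch, not the paper's code; for real
    use, scipy.special.lambertw is the quoted library routine.
    """
    if z < 0:
        raise ValueError("this sketch handles z >= 0 only")
    w = math.log1p(z)  # reasonable starting point on the principal branch
    for _ in range(max_iter):
        ew = math.exp(w)
        f = w * ew - z                 # residual of w * e^w = z
        if abs(f) < tol:
            break
        w -= f / (ew * (w + 1.0))     # Newton step: f'(w) = e^w * (w + 1)
    return w

# W(e) = 1, since 1 * e^1 = e; W(1) is the omega constant ~0.567143
print(lambert_w(math.e), lambert_w(1.0))
```

SciPy's `lambertw` returns a complex value (the principal branch by default), so code switching between this sketch and the library routine would take the real part of the SciPy result.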