Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Cross-Modal Similarity Learning via Pairs, Preferences, and Active Supervision
Authors: Yi Zhen, Piyush Rai, Hongyuan Zha, Lawrence Carin
AAAI 2015 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of the proposed approach on two problems that require computing pairwise similarities between cross-modal object pairs: cross-modal link prediction in bipartite graphs, and hashing based cross-modal similarity search. We perform experiments using the CSLP framework on two tasks: (i) cross-modal link prediction, and (ii) cross-modal hashing based image vs. text retrieval. |
| Researcher Affiliation | Academia | Yi Zhen, Georgia Institute of Technology, Atlanta, GA 30332; Piyush Rai, Duke University, Durham, NC 27708; Hongyuan Zha, Georgia Institute of Technology, Atlanta, GA 30332; Lawrence Carin, Duke University, Durham, NC 27708 |
| Pseudocode | No | The paper describes inference steps and sampling procedures in narrative text (e.g., 'Sampling U and V:', 'Sampling F and G:', 'Sampling a and b:'), but does not include any formally labeled 'Pseudocode' or 'Algorithm' blocks or figures. |
| Open Source Code | No | The paper does not provide any concrete statement about releasing source code for the methodology, nor does it include a link to a code repository. |
| Open Datasets | Yes | (i) Drug is a drug-protein interaction network... http://www.genome.jp/tools/dinies/help.html (ii) Wiki is generated from Wikipedia featured articles... http://www.svcl.ucsd.edu/projects/cross-modal/ (iii) Flickr consists of 186,577 image-tag pairs pruned from the NUS data set... http://lms.comp.nus.edu.sg/research/NUS-WIDE.htm |
| Dataset Splits | Yes | For Wiki, we use 20% of the data as the query set and 80% as the database set; for Flickr, 1% of the data are chosen to form the query set and the remaining 99% the database set. All the methods are trained on the same 5 random training sets, and then applied to the same query and database sets. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running experiments (e.g., GPU/CPU models, memory, or specific computing environments with specifications). |
| Software Dependencies | No | The paper does not mention any specific software dependencies or their version numbers (e.g., programming languages, libraries, or frameworks with version details). |
| Experiment Setup | No | The paper describes aspects of the experimental procedure, such as the initial number of constraints and how new constraints were added during active sampling, and that 'All the methods are trained on the same 5 random training sets'. However, it does not provide specific hyperparameters (e.g., learning rate, batch size, number of epochs) or detailed system-level training settings for the models themselves. |
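The query/database partitions quoted in the Dataset Splits row (20%/80% for Wiki, 1%/99% for Flickr) amount to a seeded random split of item indices. The sketch below illustrates that kind of partition; the function name, interface, and seed are illustrative assumptions, not taken from the paper.

```python
import random

def query_database_split(n_items, query_frac, seed=0):
    """Randomly partition item indices into a query set and a database set.

    Illustrative sketch of the splits described above; this helper is an
    assumption for exposition, not code from the paper.
    """
    rng = random.Random(seed)
    idx = list(range(n_items))
    rng.shuffle(idx)
    n_query = round(query_frac * n_items)  # e.g. 20% of items for Wiki
    return idx[:n_query], idx[n_query:]

# Wiki-style split: 20% query, 80% database (2866 is just an example size)
query_idx, db_idx = query_database_split(2866, 0.20)
```

Repeating the call with five different seeds would mimic the paper's "5 random training sets" protocol, with all methods evaluated against the same held-out query and database index sets.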