Multi-Level Cross-Modal Alignment for Image Clustering

Authors: Liping Qiu, Qin Zhang, Xiaojun Chen, Shaotian Cai

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results on five benchmark datasets clearly show the superiority of our new method."
Researcher Affiliation | Academia | Liping Qiu*, Qin Zhang*, Xiaojun Chen, Shaotian Cai; Shenzhen University, Shenzhen, China; qiuliping2021@email.szu.edu.cn, {qinzhang, xjchen}@szu.edu.cn, cai.st@foxmail.com
Pseudocode | No | The paper describes its methods in text and figures but does not provide a formal pseudocode block or algorithm section.
Open Source Code | No | The paper does not state that open-source code is provided, nor does it include a link to a code repository.
Open Datasets | Yes | "We used the following five benchmark datasets in our experiment: STL10 (Coates, Ng, and Lee 2011), CIFAR10 (Krizhevsky 2009), CIFAR100-20 (Krizhevsky 2009), ImageNet-Dogs (Chang et al. 2017b) and Tiny-ImageNet (Le and Yang 2015)." A dataset-loading sketch follows the table.
Dataset Splits | No | The paper mentions the use of benchmark datasets and repeated training, but it does not specify explicit train/validation/test split percentages or sample counts for reproduction.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU or CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | No | The paper discusses its trade-off parameters and their sensitivity and names hyperparameters such as τ_ia, τ_pa, ρ_u, γ_r, and γ_h, but it does not provide a complete, specific list of hyperparameter values or system-level training settings (e.g., learning rate, batch size, optimizer) used to obtain the main experimental results, making full reproduction challenging. A hypothetical configuration template illustrating the missing settings appears after the dataset sketch below.
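
Three of the cited datasets are distributed with torchvision, so a minimal loading sketch is given below. This is an assumption about how a reproduction might fetch the data, not code from the paper: ImageNet-Dogs and Tiny-ImageNet are not bundled with torchvision and must be obtained separately, and the 20-superclass labels for CIFAR100-20 require a custom fine-to-coarse mapping that is omitted here.

```python
# Minimal loading sketch (not from the paper): fetch the torchvision-hosted datasets.
# ImageNet-Dogs and Tiny-ImageNet need separate downloads; CIFAR100-20 additionally
# needs a fine-label -> coarse-label (20 superclasses) mapping.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

stl10 = datasets.STL10(root="./data", split="train", download=True, transform=to_tensor)
cifar10 = datasets.CIFAR10(root="./data", train=True, download=True, transform=to_tensor)
cifar100 = datasets.CIFAR100(root="./data", train=True, download=True, transform=to_tensor)

# Labeled training-split sizes: 5,000 (STL10), 50,000 (CIFAR10), 50,000 (CIFAR100).
print(len(stl10), len(cifar10), len(cifar100))
```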
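
To make the reporting gap concrete, the sketch below shows a hypothetical configuration record that lists the hyperparameters the paper names alongside the system-level settings it omits. Every value is an illustrative placeholder, not a setting reported by the authors.

```python
# Hypothetical reproduction config. All values are placeholders, NOT the paper's
# settings: the paper names tau_ia, tau_pa, rho_u, gamma_r, gamma_h but does not
# report their values or the system-level training setup.
from dataclasses import dataclass

@dataclass
class ExperimentConfig:
    # method hyperparameters named in the paper (values unknown)
    tau_ia: float = 0.5
    tau_pa: float = 0.5
    rho_u: float = 0.5
    gamma_r: float = 1.0
    gamma_h: float = 1.0
    # system-level settings not reported in the paper (placeholders)
    optimizer: str = "Adam"
    learning_rate: float = 1e-3
    batch_size: int = 256
    epochs: int = 100
    seed: int = 0

if __name__ == "__main__":
    print(ExperimentConfig())
```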