Double-Bounded Optimal Transport for Advanced Clustering and Classification

Authors: Liangliang Shi, Zhaoqi Shen, Junchi Yan

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Even with vanilla Softmax trained features, our extensive experimental results show that our method can achieve good results with our improved inference scheme in the testing stage."
Researcher Affiliation | Academia | Liangliang Shi, Zhaoqi Shen, Junchi Yan; Department of Computer Science and Engineering, MoE Key Lab of Artificial Intelligence, Shanghai Jiao Tong University; {shiliangliang, shenzhaoqi2271621336, yanjunchi}@sjtu.edu.cn
Pseudocode | No | The paper presents its algorithms through mathematical equations (e.g., Eq. 11, Eq. 14) describing iterative steps, but these are not formatted as clearly labeled "Pseudocode" or "Algorithm" blocks with the structured inputs, outputs, and control flow typically associated with pseudocode.
Open Source Code | No | The paper contains no explicit statement of, or link to, an open-source code repository for the described methodology. It mentions that more details are given in an "online Appendix" but does not state whether code is available there.
Open Datasets | Yes | "We do the experiments on CIFAR10-LT, CIFAR100-LT (Krizhevsky, Hinton et al. 2009), ImageNet-LT (Liu et al. 2019) for image classification."
Dataset Splits | Yes | "We use the corresponding balanced testing dataset for evaluation, where its labels are uniformly distributed." For CIFAR100-LT and ImageNet-LT, accuracy is reported on three sets: Many-shot (more than 100 images), Medium-shot (20-100 images), and Few-shot (fewer than 20 images).
Hardware Specification | No | The paper defers further detail to the online Appendix ("More details about the experimental settings is given in the online Appendix") and provides no specific hardware details such as GPU models, CPU types, or memory specifications in the main text.
Software Dependencies | No | The paper likewise defers to the online Appendix and does not specify any software names with version numbers (e.g., Python, PyTorch, CUDA) in the main body.
Experiment Setup | No | The paper states: "For a fair comparison, all methods share the same network backbone and hyperparameters. More details about the experimental settings is given in the online Appendix." Specific hyperparameter values and detailed training configurations are therefore not provided in the main text.
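Since the paper specifies its iterative solver only through equations (Eq. 11, Eq. 14) rather than pseudocode, the general flavor of a "double-bounded" optimal transport solver can be illustrated with a generic Sinkhorn-style scaling scheme: row marginals are matched exactly while column sums are kept inside a box [lo, hi] by clipping the column scalings (an alternating KL-projection heuristic in the style of standard scaling algorithms for entropic OT). This is a sketch under those assumptions only; the function name, bounds, and toy data are illustrative, and the updates are not the paper's actual Eq. 11/14.

```python
import numpy as np

def double_bounded_sinkhorn(C, a, lo, hi, eps=0.1, n_iter=2000):
    """Entropic OT with plan P = diag(u) K diag(v): row sums match a
    exactly, column sums are pushed into [lo, hi] by clipping v.
    Illustrative sketch, not the paper's exact algorithm."""
    K = np.exp(-C / eps)                  # Gibbs kernel of the regularized problem
    v = np.ones(C.shape[1])
    for _ in range(n_iter):
        u = a / (K @ v)                   # KL projection onto the row-sum constraint
        Ktu = K.T @ u
        v = np.clip(v, lo / Ktu, hi / Ktu)  # keep column sums inside [lo, hi]
    u = a / (K @ v)                       # final pass so row sums are exact
    return u[:, None] * K * v[None, :]

# toy example: 3 samples, 4 clusters, uniform row marginal
rng = np.random.default_rng(0)
C = rng.random((3, 4))
a = np.ones(3) / 3
lo = np.full(4, 0.05)   # each cluster receives at least 5% of the mass
hi = np.full(4, 0.60)   # and at most 60%
P = double_bounded_sinkhorn(C, a, lo, hi)
assert np.allclose(P.sum(axis=1), a)            # rows match exactly
assert np.all(P.sum(axis=0) >= lo - 1e-5)       # columns respect the lower bound
assert np.all(P.sum(axis=0) <= hi + 1e-5)       # and the upper bound
```

The lower bound prevents clusters from collapsing to empty and the upper bound caps how much mass any single class can absorb, which is the intuition behind double-bounded constraints in long-tailed settings; the exact update rules used by the authors should be taken from the paper's equations.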