Multi-Proxy Learning from an Entropy Optimization Perspective

Authors: Yunlong Yu, Dingyi Zhang, Yingming Li, Zhongfei Zhang

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experimental results demonstrate that the proposed approach achieves competitive performances.
Researcher Affiliation | Academia | 1. College of Information Science and Electronic Engineering, Zhejiang University; 2. Computer Science Department, Binghamton University
Pseudocode | No | The paper describes the proposed method mathematically and textually, but does not include structured pseudocode or algorithm blocks.
Open Source Code | Yes | Codes and an appendix are provided at https://github.com/yunlongyu/MPL.
Open Datasets | Yes | We conduct experiments on three benchmark datasets: CUB [Wah et al., 2011], Cars196 [Krause et al., 2013], and Stanford Online Products (SOP) [Oh Song et al., 2016].
Dataset Splits | No | The paper describes the training and evaluation sets for CUB, Cars196, and SOP by class or image count (e.g., 'first 100 species are used for training and the rest 100 species are used for evaluation'), but it does not explicitly specify a distinct validation split.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU or GPU models, or cloud computing instances) used to run the experiments.
Software Dependencies | No | The paper mentions using ImageNet-pretrained ResNet50 and Inception with batch normalization, and optimizing with Adam, but does not name any software libraries or frameworks with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | The model is optimized with Adam [Kingma and Ba, 2015] for 50 epochs. We adopt the P-K sampling strategy to construct each batch with P=8 and K=4, where P is the class number in each batch and K is the sample number of each class. For both the CUB and Cars196 datasets, we set the proxy number of each class to 5 during training. For the SOP dataset, the proxy number of each class is set to 2. The temperature value is set to 1/9 across all the datasets.
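The Experiment Setup row above fully specifies how batches are built, so a minimal sketch of that configuration may help readers reproduce it. This is only an illustration under stated assumptions: the names `PKSampler`, `build_optimizer`, and `CONFIG`, as well as the learning rate, are hypothetical and not taken from the paper or from the released code at https://github.com/yunlongyu/MPL; only the values quoted in the table (P=8, K=4, 50 epochs, 5 or 2 proxies per class, temperature 1/9, Adam) come from the paper.

```python
# Hypothetical sketch of the reported P-K batch construction and training settings.
import random
from collections import defaultdict

import torch
from torch.utils.data import Sampler


class PKSampler(Sampler):
    """Yield index batches containing P classes with K samples each (P-K sampling)."""

    def __init__(self, labels, p=8, k=4):
        self.p, self.k = p, k
        self.class_to_indices = defaultdict(list)
        for idx, label in enumerate(labels):
            self.class_to_indices[label].append(idx)
        self.classes = list(self.class_to_indices)

    def __iter__(self):
        # The paper does not state how many batches form an epoch; drawing
        # len(classes) // P random batches per epoch is an assumption.
        for _ in range(len(self)):
            chosen_classes = random.sample(self.classes, self.p)
            batch = []
            for c in chosen_classes:
                indices = self.class_to_indices[c]
                if len(indices) < self.k:
                    batch.extend(random.choices(indices, k=self.k))  # with replacement
                else:
                    batch.extend(random.sample(indices, self.k))
            yield batch

    def __len__(self):
        return len(self.classes) // self.p


# Hyperparameters quoted in the Experiment Setup row.
CONFIG = {
    "epochs": 50,
    "p": 8,                  # classes per batch
    "k": 4,                  # samples per class
    "proxies_per_class": {"CUB": 5, "Cars196": 5, "SOP": 2},
    "temperature": 1.0 / 9,  # softmax temperature
}


def build_optimizer(params, lr=1e-4):
    # The paper states Adam is used; the learning rate here is a placeholder,
    # since it is not given in the quoted excerpt.
    return torch.optim.Adam(params, lr=lr)
```

A DataLoader would consume this sampler via `DataLoader(dataset, batch_sampler=PKSampler(labels, p=CONFIG["p"], k=CONFIG["k"]))`, giving batches of 8 x 4 = 32 images as described in the table.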