Multi-Proxy Wasserstein Classifier for Image Classification
Authors: Benlin Liu, Yongming Rao, Jiwen Lu, Jie Zhou, Cho-Jui Hsieh
AAAI 2021, pp. 8618-8626 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical studies are performed on two widely-used classification datasets, CIFAR and ILSVRC2012, and the substantial improvements on these two benchmarks demonstrate the effectiveness of our method. |
| Researcher Affiliation | Academia | Benlin Liu1, Yongming Rao2, Jiwen Lu2, Jie Zhou2, Cho-Jui Hsieh1; 1 UCLA, 2 Tsinghua University |
| Pseudocode | Yes | Algorithm 1: Learning Multi-Proxy Wasserstein Classifier (a hedged sketch of this procedure appears after the table) |
| Open Source Code | No | The paper does not provide a link to open-source code or explicitly state that its code is publicly available. |
| Open Datasets | Yes | CIFAR-10 (Krizhevsky, Hinton et al. 2009) dataset contains 60,000 low resolution RGB images... split into a training set of 50,000 images and 10,000 test images. |
| Dataset Splits | No | The paper specifies training and test set sizes for CIFAR and ILSVRC2012 but does not explicitly detail a separate validation set or its split. |
| Hardware Specification | Yes | All experiments are conducted with NVIDIA 1080 Ti GPUs using PyTorch 1.4.0 on the Python 3.7.4 platform. |
| Software Dependencies | Yes | All experiments are conducted with NVIDIA 1080 Ti GPUs using PyTorch 1.4.0 on the Python 3.7.4 platform. |
| Experiment Setup | Yes | All models are trained for 200 epochs in total with an initial learning rate of 0.1. ... We use stochastic gradient descent (SGD) as our optimizer, with momentum 0.9 and weight decay 0.0005. For the hyper-parameters in Algorithm 1, we set M = 4 for all network architectures ... and set the weight of the entropic regularization term λ to 0.1. We also set the convergence threshold to 0.1 and the maximum iteration count to 100. (A training-setup sketch follows this table.) |
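
For orientation, the sketch below shows one plausible PyTorch reading of the Algorithm 1 pseudocode quoted in the table: each class is represented by M = 4 learnable proxies, and a class score is the entropically regularized optimal-transport cost between an image's local features and that class's proxies, computed with Sinkhorn iterations using the quoted values (λ = 0.1, threshold 0.1, at most 100 iterations). The cosine cost, the uniform marginals, and the `sinkhorn` / `MultiProxyWassersteinHead` names are our assumptions, not the authors' released code (none is publicly available, per the table).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def sinkhorn(cost, lam=0.1, max_iter=100, thresh=0.1):
    """Entropically regularized OT plan between uniform marginals.

    cost: (B, N, M) pairwise cost between N local features and M proxies.
    lam, max_iter, and thresh follow the values quoted in the table.
    """
    B, N, M = cost.shape
    mu = cost.new_full((B, N), 1.0 / N)        # uniform mass on local features
    nu = cost.new_full((B, M), 1.0 / M)        # uniform mass on proxies
    K = torch.exp(-cost / lam)                 # Gibbs kernel
    u = torch.ones_like(mu)
    for _ in range(max_iter):
        u_prev = u
        v = nu / (torch.bmm(K.transpose(1, 2), u.unsqueeze(2)).squeeze(2) + 1e-8)
        u = mu / (torch.bmm(K, v.unsqueeze(2)).squeeze(2) + 1e-8)
        if (u - u_prev).abs().sum(-1).max() < thresh:   # coarse stopping rule
            break
    return u.unsqueeze(2) * K * v.unsqueeze(1)          # transport plan (B, N, M)

class MultiProxyWassersteinHead(nn.Module):
    """Stands in for the usual FC classifier: M proxies per class, OT-based scores."""

    def __init__(self, dim, num_classes, M=4):
        super().__init__()
        self.proxies = nn.Parameter(torch.randn(num_classes, M, dim))

    def forward(self, feat_map):
        # feat_map: (B, D, H, W) backbone output -> (B, N, D) local descriptors
        B = feat_map.shape[0]
        x = F.normalize(feat_map.flatten(2).transpose(1, 2), dim=-1)
        p = F.normalize(self.proxies, dim=-1)              # (C, M, D)
        cost = 1.0 - torch.einsum('bnd,cmd->bcnm', x, p)   # cosine cost (B, C, N, M)
        flat = cost.flatten(0, 1)                          # (B*C, N, M)
        plan = sinkhorn(flat)
        dist = (plan * flat).sum(dim=(1, 2)).view(B, -1)   # OT cost per class
        return -dist                                       # lower cost => higher logit
```

Treating negative transport cost as a logit lets the head plug directly into a standard cross-entropy loss; whether the paper applies an extra temperature or normalization to these scores is not quoted here.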
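
The quoted experiment setup pins down the optimizer but not a full training script. The sketch below wires the quoted values (SGD, initial learning rate 0.1, momentum 0.9, weight decay 0.0005, 200 epochs, M = 4) into a minimal loop, reusing the `MultiProxyWassersteinHead` sketched above. The toy backbone, the CIFAR-10 class count, and the cosine LR schedule are assumptions: the report quotes no architecture or decay schedule at this point.

```python
import torch
import torch.nn as nn

# Placeholder backbone (assumption); the paper uses standard architectures
# with the multi-proxy head in place of the final FC layer.
backbone = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(4),   # (B, 64, 4, 4) feature map for the OT head
)
model = nn.Sequential(backbone,
                      MultiProxyWassersteinHead(dim=64, num_classes=10, M=4))

# Quoted setup: SGD, lr 0.1, momentum 0.9, weight decay 0.0005.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=0.0005)
# No decay schedule is quoted; cosine annealing over the stated 200 epochs
# is a common stand-in (assumption).
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)
criterion = nn.CrossEntropyLoss()

for epoch in range(200):
    # for images, labels in train_loader:   # standard CIFAR-10 loader (not shown)
    #     optimizer.zero_grad()
    #     loss = criterion(model(images), labels)
    #     loss.backward()
    #     optimizer.step()
    scheduler.step()
```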