Logit Perturbation

Authors: Mengyang Li, Fengguang Su, Ou Wu, Ji Zhang

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on benchmark image classification data sets and their long-tail versions indicated the competitive performance of our learning method. In addition, existing methods can be further improved by utilizing our method."
Researcher Affiliation | Collaboration | Mengyang Li (1,2), Fengguang Su (2), Ou Wu (2,*), Ji Zhang (3); affiliations: 1 Jiuantianxia Inc., China; 2 National Center for Applied Mathematics, Tianjin University, China; 3 The University of Southern Queensland, Australia
Pseudocode | Yes | Algorithm 1: Learning to Perturb Logits (LPL) and Algorithm 2: PGD-like Optimization are provided, detailing the steps of the proposed methods. (An illustrative perturbation sketch follows this table.)
Open Source Code | Yes | "All the codes are available online": https://github.com/limengyang1992/lpl
Open Datasets | Yes | "In this subsection, two benchmark image classification data sets, namely, CIFAR10 and CIFAR100, are used." Both data sets consist of 32×32 natural images, in 10 classes for CIFAR10 and 100 classes for CIFAR100. "There are 50,000 images for training and 10,000 images for testing." (A loader sketch follows this table.)
Dataset Splits | Yes | "There are 50,000 images for training and 10,000 images for testing."
Hardware Specification | No | The paper does not specify the hardware used for the experiments (e.g., CPU/GPU models or cloud computing resources).
Software Dependencies | No | The paper does not list the software dependencies, libraries, or version numbers used in the implementation or experiments.
Experiment Setup | Yes | "The PGD-like optimization in Algorithm 1 contains two hyper-parameters, namely, step size and #steps." Let α be the step size and Kc the number of steps (#steps) for category c. For balanced classification, α is searched in {0.01, 0.02, 0.03}, and Kc is calculated by Eq. (19).
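
The dataset and split figures cited above can be checked with standard torchvision loaders. This is only a minimal sketch for verification; it does not reproduce the authors' actual training pipeline or augmentation, which their repository defines.

```python
# Minimal check of the CIFAR splits cited above, using standard
# torchvision loaders (not the authors' training pipeline).
from torchvision import datasets, transforms

tfm = transforms.ToTensor()
train = datasets.CIFAR10(root="./data", train=True, download=True, transform=tfm)
test = datasets.CIFAR10(root="./data", train=False, download=True, transform=tfm)

print(len(train), len(test))   # 50000 10000
print(train[0][0].shape)       # torch.Size([3, 32, 32]) -- 32x32 RGB images
# datasets.CIFAR100 exposes the same 50,000/10,000 split with 100 classes.
```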
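The pseudocode and experiment-setup rows reference a PGD-like optimization with step size α and per-category step count Kc. Below is a hedged, minimal sketch of how a PGD-style additive logit perturbation could look in PyTorch. The function name, the signed-gradient update, and the ascend/descend convention are assumptions rather than the paper's exact Algorithm 2, and Eq. (19) for Kc is not reproduced here.

```python
# Hedged sketch of a PGD-style additive logit perturbation (assumed form;
# the paper's Algorithm 2 may differ in its update, projection, and stopping rule).
import torch
import torch.nn.functional as F

def perturb_logits(logits, targets, alpha=0.02, num_steps=3, ascend=True):
    """Run `num_steps` signed-gradient steps of size `alpha` on an additive
    perturbation delta; ascend=True increases the cross-entropy loss."""
    logits = logits.detach()          # perturb a fixed copy of the logits
    delta = torch.zeros_like(logits)
    sign = 1.0 if ascend else -1.0
    for _ in range(num_steps):        # num_steps plays the role of Kc here
        delta.requires_grad_(True)
        loss = F.cross_entropy(logits + delta, targets)
        (grad,) = torch.autograd.grad(loss, delta)
        delta = (delta + sign * alpha * grad.sign()).detach()
    return delta

# Usage sketch: add the resulting delta to the live logits before computing
# the training loss, with alpha searched over {0.01, 0.02, 0.03} as stated above.
```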