Gradient Based Activations for Accurate Bias-Free Learning

Authors: Vinod K. Kurmi, Rishabh Sharma, Yash Vardhan Sharma, Vinay P. Namboodiri

AAAI 2022, pp. 7255-7262 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the proposed model on standard benchmarks. We improve the accuracy of the adversarial methods while maintaining or even improving the unbiasness and also outperform several other recent methods.
Researcher Affiliation | Academia | 1 KU Leuven, Belgium; 2 IIT Roorkee, India; 3 University of Bath, UK
Pseudocode | No | The paper provides equations and descriptions of the proposed method but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper refers to a 'project page' (https://vinodkkurmi.github.io/GBA/), but this is a general project overview page that does not explicitly state that source code for the methodology described in the paper is available there, and it does not contain a direct link to a code repository.
Open Datasets | Yes | We evaluate the proposed model on the following standard datasets: CIFAR-10S (Wang et al. 2020)... CIFAR-I... Colored MNIST... CelebA...
Dataset Splits | Yes | For the CIFAR-10S, CIFAR-I and CelebA datasets, we use the Resnet-18 (He et al. 2016) model... In the Colored MNIST dataset, we use the multi-layered perceptron... We evaluate the methods on bias-aligned and bias-conflicting accuracies along with their mean. For an unbiased model both these accuracies must be close. We measure this using the bias gap, which is the difference between the aligned and conflicting accuracies. (A worked sketch of this metric follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory used for running the experiments.
Software Dependencies | No | The paper mentions models like Resnet-18 and a multi-layered perceptron but does not specify software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | For the CIFAR-10S, CIFAR-I and CelebA datasets, we use the Resnet-18 (He et al. 2016) model, where the last fully connected layer is replaced with two consecutive fully connected layers. In the Colored MNIST dataset, we use the multi-layered perceptron consisting of three hidden layers as the feature extractor. (An illustrative backbone sketch follows the table.)
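
The bias-gap metric cited in the Dataset Splits row can be made concrete. Below is a minimal sketch, assuming the straightforward reading that the bias gap is the bias-aligned accuracy minus the bias-conflicting accuracy; the function name and the example counts are illustrative, not taken from the paper.

```python
# Minimal sketch of the bias-gap metric described in the Dataset Splits row
# (assumed reading: aligned accuracy minus conflicting accuracy).
# Function name and example counts are illustrative, not from the paper.
def bias_metrics(aligned_correct, aligned_total, conflicting_correct, conflicting_total):
    """Return aligned/conflicting accuracies, their mean, and the bias gap."""
    acc_aligned = aligned_correct / aligned_total
    acc_conflicting = conflicting_correct / conflicting_total
    return {
        "aligned": acc_aligned,
        "conflicting": acc_conflicting,
        "mean": (acc_aligned + acc_conflicting) / 2,
        "bias_gap": acc_aligned - acc_conflicting,
    }

# Example: 95% aligned vs. 80% conflicting accuracy gives a bias gap of 0.15.
print(bias_metrics(950, 1000, 800, 1000))
```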
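
The Experiment Setup row describes the backbones at a level that can be sketched directly. The snippet below is a hedged illustration assuming PyTorch and torchvision; the hidden widths, activation choices, and the Colored MNIST input size (3 x 28 x 28) are assumptions made for illustration, not values reported in the paper.

```python
# Illustrative backbones matching the setup description above; assumes
# PyTorch + torchvision. Hidden sizes and activations are assumptions.
import torch.nn as nn
from torchvision.models import resnet18

def resnet18_two_fc(num_classes, hidden_dim=512):
    """ResNet-18 whose final fully connected layer is replaced by two
    consecutive fully connected layers (hidden width is an assumption)."""
    model = resnet18()  # no pretrained weights by default
    in_features = model.fc.in_features
    model.fc = nn.Sequential(
        nn.Linear(in_features, hidden_dim),
        nn.ReLU(inplace=True),
        nn.Linear(hidden_dim, num_classes),
    )
    return model

def colored_mnist_mlp(input_dim=3 * 28 * 28, hidden_dim=100, num_classes=10):
    """Multi-layered perceptron with three hidden layers used as the feature
    extractor for Colored MNIST (all dimensions are assumptions)."""
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(input_dim, hidden_dim), nn.ReLU(inplace=True),
        nn.Linear(hidden_dim, hidden_dim), nn.ReLU(inplace=True),
        nn.Linear(hidden_dim, hidden_dim), nn.ReLU(inplace=True),
        nn.Linear(hidden_dim, num_classes),
    )
```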