DeNetDM: Debiasing by Network Depth Modulation

Authors: Silpa Vadakkeeveetil Sreelatha, Adarsh Kappiyath, Abhra Chaudhuri, Anjan Dutta

NeurIPS 2024

Reproducibility assessment — each variable, its result, and the supporting LLM response:
Research Type: Experimental. "We perform extensive experiments and ablation studies on a diverse set of datasets, including synthetic datasets like Colored MNIST and Corrupted CIFAR-10, as well as real-world datasets Biased FFHQ, BAR, and CelebA, demonstrating an approximate 5% improvement over existing methods."
Researcher Affiliation: Collaboration. (1) University of Surrey, (2) University of Exeter, (3) Fujitsu Research of Europe.
Pseudocode: Yes. "The pseudocode for the entire training process of DeNetDM is provided in Section 7.4."
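The pseudocode itself is not reproduced in this report. For orientation only, the sketch below shows what a depth-modulated two-branch training step could look like in PyTorch, assuming a Product-of-Experts-style combination (a sum of deep- and shallow-branch logits) trained with plain cross-entropy; the architectures, depths, and loss are illustrative assumptions, not the authors' exact algorithm from Section 7.4.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    """Toy classifier whose depth is a constructor argument (assumption:
    real branch architectures are dataset-specific and differ from this)."""
    def __init__(self, depth, in_dim=3 * 28 * 28, hidden=100, num_classes=10):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, num_classes))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x.flatten(1))

deep_branch = MLP(depth=3)     # deeper branch
shallow_branch = MLP(depth=1)  # shallower branch
params = list(deep_branch.parameters()) + list(shallow_branch.parameters())
optimizer = torch.optim.SGD(params, lr=1e-2, momentum=0.9)

def train_step(x, y):
    """One joint step: summing logits corresponds to a Product of Experts
    in probability space, so both branches are trained through one loss."""
    logits = deep_branch(x) + shallow_branch(x)
    loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```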
Open Source Code: Yes. "The project page is available at https://vssilpa.github.io/denetdm/. ... Source code is provided at https://github.com/kadarsh22/DeNetDM."
Open Datasets: Yes. "Datasets: We evaluate the performance of DeNetDM across diverse domains using two synthetic datasets (Colored MNIST (Ahuja et al., 2020), Corrupted CIFAR-10 (Hendrycks and Dietterich, 2019)) and three real-world datasets (Biased FFHQ (Kim et al., 2021), BAR (Nam et al., 2020), and CelebA (Liu et al., 2015))."
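These are all standard public benchmarks. As a hedged illustration of the kind of spurious correlation they encode, the snippet below builds a Colored MNIST-style training set in which digit color correlates with the label; the palette, correlation level, and function name are assumptions for illustration, not the exact protocol of Ahuja et al. (2020).

```python
import torch
from torchvision import datasets, transforms

def make_colored_mnist(root="./data", corr=0.95, seed=0):
    """Colorize MNIST so that color is spuriously correlated with the label.

    With probability `corr` a digit receives the color assigned to its class;
    otherwise it receives a random class's color. Returns (images, labels).
    (Plain Python loop kept for clarity; slow but simple.)
    """
    g = torch.Generator().manual_seed(seed)
    base = datasets.MNIST(root, train=True, download=True,
                          transform=transforms.ToTensor())
    palette = torch.rand(10, 3, generator=g)  # one RGB color per class
    images, labels = [], []
    for img, label in base:  # img: (1, 28, 28) in [0, 1]
        if torch.rand(1, generator=g).item() < corr:
            color = palette[label]                               # bias-aligned
        else:
            color = palette[torch.randint(10, (1,), generator=g).item()]
        images.append(img * color.view(3, 1, 1))  # broadcast to (3, 28, 28)
        labels.append(label)
    return torch.stack(images), torch.tensor(labels)
```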
Dataset Splits: Yes. "We perform extensive hyperparameter tuning using a small unbiased validation set with bias annotations to obtain the deep and shallow branches for all the datasets."
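A validation set with bias annotations permits grouped model selection. Below is a minimal sketch of the kind of metric such a set enables, assuming a loader that yields (input, label, bias-attribute) triples; the helper name and the aligned/conflicting split are assumptions for illustration, not the paper's selection criterion.

```python
import torch

@torch.no_grad()
def grouped_accuracy(model, loader, device="cpu"):
    """Accuracy on bias-aligned vs. bias-conflicting validation samples.

    A sample is treated as 'aligned' when its annotated bias attribute
    coincides with its label (assumption for this sketch).
    """
    model.eval()
    stats = {"aligned": [0, 0], "conflicting": [0, 0]}  # [correct, total]
    for x, y, bias in loader:
        x, y, bias = x.to(device), y.to(device), bias.to(device)
        pred = model(x).argmax(dim=1)
        aligned = bias == y
        for name, mask in (("aligned", aligned), ("conflicting", ~aligned)):
            stats[name][0] += (pred[mask] == y[mask]).sum().item()
            stats[name][1] += mask.sum().item()
    return {k: c / max(t, 1) for k, (c, t) in stats.items()}
```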
Hardware Specification: Yes. "Experimental compute: We utilize RTX 3090 GPUs for all our experiments."
Software Dependencies: No. The paper mentions PyTorch but does not specify a version number; it also mentions CUDA without a version.
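Because versions are unreported, anyone reproducing the results will need to record their own environment. A minimal snippet for logging the relevant versions:

```python
import torch

# Record the software environment, since the paper does not pin versions.
print("PyTorch:", torch.__version__)
print("CUDA (build):", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
print("GPU:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none")
```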
Experiment Setup: Yes. "Table 12: Optimal hyperparameters for the CMNIST, C-CIFAR10, BAR, and BFFHQ datasets, determined through extensive experimentation. The tuples represent optimal hyperparameters for Stage 1 and Stage 2, respectively. Parameters: Learning Rate (LR), Batch Size, Momentum, Weight Decay, Epochs."
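The table's values are not reproduced in this report, but its structure (one (Stage 1, Stage 2) tuple per parameter per dataset) suggests a configuration layout like the hypothetical sketch below. The field names follow the table's parameter list; all values are placeholders to be filled in from Table 12, not the reported numbers.

```python
from dataclasses import dataclass

@dataclass
class StageHParams:
    """Each field holds a (stage1, stage2) tuple, mirroring Table 12."""
    lr: tuple
    batch_size: tuple
    momentum: tuple
    weight_decay: tuple
    epochs: tuple

# Placeholder entries only; substitute the values reported in Table 12.
configs = {
    "CMNIST": StageHParams(lr=(None, None), batch_size=(None, None),
                           momentum=(None, None), weight_decay=(None, None),
                           epochs=(None, None)),
    # "C-CIFAR10", "BAR", and "BFFHQ" follow the same structure.
}
```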