Transferable Adversarial Robustness for Categorical Data via Universal Robust Embeddings

Authors: Klim Kireev, Maksym Andriushchenko, Carmela Troncoso, Nicolas Flammarion

NeurIPS 2023

Reproducibility Variable | Result | LLM Response

Research Type | Experimental
    "In this section we evaluate the performance of our methods. Using the proposed benchmark, we empirically show that our proposed methods provide significantly better robustness than previous works."

Researcher Affiliation | Academia
    "Klim Kireev, Maksym Andriushchenko, Carmela Troncoso, Nicolas Flammarion (EPFL)"

Pseudocode | Yes
    "Algorithm 1 Cat-PGD. Relaxed projected gradient descent for categorical data. ... Algorithm 2 Bilevel alternating minimization for universal robust embeddings ... Algorithm 3 Embeddings merging algorithm"

Open Source Code | Yes
    "The code for our method is publicly available at https://github.com/spring-epfl/Transferable-Cat-Robustness."

Open Datasets | Yes
    "We select three publicly-available datasets that fit the criteria above. All three datasets are related to real-world financial problems where robustness can be crucially important. For each dataset, we select adversarial capabilities for which we can outline a plausible modification methodology, and we can assign a plausible cost for this transformation. IEEECIS. The IEEECIS fraud detection dataset (Kaggle, 2019) contains information about around 600K financial transactions. ... BAF. The Bank Account Fraud dataset was proposed at NeurIPS 2022 by Jesus et al. (2022) ... Credit. The credit card transaction dataset (Altman, 2021)"

Dataset Splits | No
    The paper mentions training but does not provide specific details on train/validation/test splits (e.g., percentages, sample counts, or references to standard splits).

Hardware Specification | Yes
    "All the final experiments were done on AMD Ryzen 4750G CPU, Nvidia RTX 3070 GPU, and Ubuntu 22.04 OS."

Software Dependencies | No
    The paper mentions "Ubuntu 22.04 OS" but does not specify software dependencies such as programming-language or library versions (e.g., Python, PyTorch, TensorFlow, scikit-learn).

Experiment Setup | Yes
    "We list our evaluation parameters in Table 1. The TabNet parameters are denoted according to the original paper (Arik & Pfister, 2019). We set the virtual batch size to 512. Most of the hyperparameters were selected via a grid search."
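The Pseudocode row above names Cat-PGD, a relaxed projected gradient descent for categorical data. To illustrate the general idea only, here is a minimal NumPy sketch: one-hot categorical inputs are relaxed onto the probability simplex, perturbed by gradient ascent on the loss, then discretized back to a real category. The linear softmax classifier, step size, temperature, and softmax-based "soft projection" are assumptions made for this sketch, not the paper's exact Algorithm 1 (which additionally handles adversarial costs and constraints).

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cat_pgd_sketch(W, x_onehot, y, steps=10, step_size=0.5, temp=0.1):
    """Relaxed PGD on one-hot categorical inputs (illustrative sketch).

    W:        (n_categories, n_classes) weights of a linear softmax classifier.
    x_onehot: (batch, n_categories) one-hot rows, relaxed during the attack.
    y:        (batch,) true labels; the attack ascends the cross-entropy loss.
    """
    n_classes = W.shape[1]
    y_onehot = np.eye(n_classes)[y]
    x_adv = x_onehot.astype(float).copy()
    for _ in range(steps):
        p = softmax(x_adv @ W)            # model predictions on relaxed input
        grad = (p - y_onehot) @ W.T       # d(cross-entropy)/d(x_adv), in closed form
        x_adv = x_adv + step_size * grad  # gradient *ascent* step on the loss
        x_adv = softmax(x_adv / temp)     # soft projection onto the simplex (assumption)
    idx = x_adv.argmax(axis=-1)           # discretize: pick the heaviest category
    return np.eye(x_onehot.shape[-1])[idx]
```

A quick usage check: `cat_pgd_sketch(W, np.eye(4)[[0, 2]], np.array([1, 0]))` returns a batch of valid one-hot rows, i.e., the adversarial point is again a concrete category rather than a fractional mixture.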