PRASS: Probabilistic Risk-averse Robust Learning with Stochastic Search
Authors: Tianle Zhang, Yanghao Zhang, Ronghui Mu, Jiaxu Liu, Jonathan Fieldsend, Wenjie Ruan
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical experiments demonstrate that PRASS outperforms existing state-of-the-art baselines. |
| Researcher Affiliation | Academia | 1Department of Computer Science, University of Liverpool, Liverpool, L69 3BX, UK 2Department of Computer Science, University of Exeter, Exeter, EX4 4QF, UK |
| Pseudocode | Yes | The full algorithm is summarised in Algorithm 1 in Appendix C. |
| Open Source Code | No | The paper does not provide an explicit statement or a link to open-source code for the described methodology. |
| Open Datasets | Yes | We conduct an extensive evaluation of the risk-averse robust learning method on three datasets: MNIST, CIFAR-10 and CIFAR-100. |
| Dataset Splits | No | The paper mentions 'Train/test set evaluations' but does not provide specific percentages or counts for training, validation, and test splits, nor does it explicitly mention a validation split. |
| Hardware Specification | Yes | All the experiments are executed on a system with a 32-Core AMD EPYC 7452 CPU and an NVIDIA A100 40GB GPU. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., PyTorch, TensorFlow, or specific library versions) used for the experiments. |
| Experiment Setup | Yes | For MNIST, we adopt a ReLU network architecture with two convolutional layers, while for CIFAR-10 and CIFAR-100, we utilise an 18-layer residual network architecture. Moreover, the uncertainty set under consideration is a perturbation set, defined as ∆ = {δ ∈ R^d : ||δ||∞ ≤ ϵ}, situated within a Gaussian distribution set p(δ) ∈ P. We set ϵ = 0.3 for MNIST and ϵ = 8/255 for CIFAR-10 and CIFAR-100. Full details are provided in Appendix C. |
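The perturbation set quoted above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation (the paper names no software stack): the Gaussian scale `eps / 2` and the projection-by-clipping step are assumptions chosen here for illustration; only the ℓ∞ budget ϵ (0.3 for MNIST, 8/255 for CIFAR-10/100) comes from the paper.

```python
import numpy as np

def sample_perturbation(d, eps, rng=None):
    """Draw delta from a Gaussian p(delta) and project it into the
    l-infinity ball {delta in R^d : ||delta||_inf <= eps}.
    The Gaussian scale eps/2 is an illustrative assumption."""
    rng = rng or np.random.default_rng(0)
    delta = rng.normal(0.0, eps / 2, size=d)   # Gaussian draw from p(delta)
    return np.clip(delta, -eps, eps)           # enforce ||delta||_inf <= eps

# Budgets quoted in the paper
eps_mnist = 0.3        # MNIST
eps_cifar = 8 / 255    # CIFAR-10 / CIFAR-100

delta = sample_perturbation(28 * 28, eps_mnist)
print(np.max(np.abs(delta)) <= eps_mnist)  # the projected sample respects the budget
```

Clipping is the standard Euclidean projection onto an ℓ∞ ball, which is why it suffices here regardless of the sampling distribution.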