Adversarial Regression with Multiple Learners
Authors: Liang Tong, Sixie Yu, Scott Alfeld, Yevgeniy Vorobeychik
ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conducted experiments on three datasets: Wine Quality (redwine), PDF malware (PDF), and Boston Housing Market (boston). |
| Researcher Affiliation | Academia | 1Department of EECS, Vanderbilt University, Nashville, TN, USA 2Computer Science Department, Amherst College, Amherst, MA, USA. |
| Pseudocode | No | The paper describes computational methods and proofs, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured steps in a code-like format. |
| Open Source Code | No | The paper refers to an open-sourced tool 'mimicus' (https://github.com/srndic/mimicus) used for data extraction, but it does not provide a link or explicit statement about the availability of the authors' own source code for the methodology described in the paper. |
| Open Datasets | Yes | We conducted experiments on three datasets: Wine Quality (redwine),PDF malware (PDF), and Boston Housing Market (boston). The Wine Quality dataset (Cortez et al., 2009) |
| Dataset Splits | No | The paper states: 'The dataset is equally divided into a training set (Xtrain, ytrain) and a testing set (Xtest, ytest).' It does not explicitly mention or detail a separate validation set split. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU/GPU models, memory, or specific computing environments with specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions tools like 'mimicus' and 'peepdf' but does not provide specific version numbers for these or any other software libraries or dependencies used in the experiments. |
| Experiment Setup | Yes | Remember that in Eq. (11) there are three hyper-parameters in the defender's loss function: λ, β, and z. λ is the regularization coefficient in the attacker's loss function shown in Eq. (4). It is negatively proportional to the attacker's strength. β is the probability of a test data point being malicious. z is the prediction target of the attacker. [...] We denote by λ̂ = 0.5 and β̂ = 0.8 the defender's estimates of the true λ and β. [...] We let [...] = 5σᵣ·1, where 1 is a vector with all elements equal to one. [...] The number of learners is set to 5. [...] The regularization parameters of Lasso and Ridge were selected by cross-validation. |
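
The Experiment Setup row above mentions an equal train/test split, five learners, and Lasso/Ridge regularization chosen by cross-validation. The snippet below is a minimal sketch of that baseline setup only, not the authors' code: the UCI red-wine CSV URL, the use of scikit-learn's `LassoCV`/`RidgeCV`, the alpha grid, and the random seed are all assumptions, and the adversarial components of the paper (the attacker's loss with λ, β, and z) are not implemented here.

```python
# Hedged sketch of the baseline experimental setup described in the table above.
# Assumptions: UCI red-wine CSV layout, a 50/50 train/test split, and sklearn's
# LassoCV/RidgeCV standing in for "regularization selected by cross-validation".
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV, RidgeCV
from sklearn.model_selection import train_test_split

# Defender-side hyper-parameter estimates reported in the paper (values quoted above).
lambda_hat = 0.5   # defender's estimate of the attacker's regularization coefficient
beta_hat = 0.8     # defender's estimate of the probability a test point is malicious
n_learners = 5     # number of learners used in the experiments

# Wine Quality (red wine) dataset; this download URL is an assumption.
URL = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
       "wine-quality/winequality-red.csv")
data = pd.read_csv(URL, sep=";")
X, y = data.drop(columns="quality").values, data["quality"].values

# "The dataset is equally divided into a training set and a testing set."
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Lasso and Ridge baselines with regularization chosen by cross-validation.
lasso = LassoCV(cv=5).fit(X_train, y_train)
ridge = RidgeCV(alphas=np.logspace(-3, 3, 13), cv=5).fit(X_train, y_train)

print("Selected Lasso alpha:", lasso.alpha_)
print("Selected Ridge alpha:", ridge.alpha_)
```

This only reproduces the non-adversarial pieces that the paper states explicitly (data, split, cross-validated regularization); the multi-learner game-theoretic defense itself would have to be implemented from the paper's equations.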