Injecting Undetectable Backdoors in Obfuscated Neural Networks and Language Models
Authors: Alkis Kalavasis, Amin Karbasi, Argyris Oikonomou, Katerina Sotiraki, Grigoris Velegkas, Manolis Zampetakis
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Our work is theoretical. |
| Researcher Affiliation | Academia | All six authors are affiliated with Yale University: Alkis Kalavasis (alkis.kalavasis@yale.edu), Amin Karbasi (amin.karbasi@yale.edu), Argyris Oikonomou (argyris.oikonomou@yale.edu), Katerina Sotiraki (katerina.sotiraki@yale.edu), Grigoris Velegkas (grigoris.velegkas@yale.edu), Manolis Zampetakis (manolis.zampetakis@yale.edu). |
| Pseudocode | No | The paper describes its procedures in prose and numbered steps (e.g., Section 5), but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper is theoretical and does not mention releasing any source code. The NeurIPS Paper Checklist explicitly marks 'NA' for questions related to code availability. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments with specific datasets. It provides a generic definition of a dataset S = {(x_i, y_i)}_{i=1}^m for its theoretical framework, but no concrete dataset is specified or made available. |
| Dataset Splits | No | The paper is theoretical and does not describe experimental validation or specific dataset splits (training, validation, testing) for empirical results. |
| Hardware Specification | No | The paper is theoretical and does not mention any specific hardware used for experiments. |
| Software Dependencies | No | The paper is theoretical and does not list any specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with specific hyperparameters or system-level training settings. |