Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Is It Harmful When Advisors Only Pretend to Be Honest?
Authors: Dongxia Wang, Tim Muller, Jie Zhang, Yang Liu
AAAI 2016 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We propose random processes to model and measure dynamic attacks. The theory not only allowed us to answer the main question, but also to prove interesting properties about the harm of dynamic attacks. |
| Researcher Affiliation | Academia | Dongxia Wang, Tim Muller, Jie Zhang, and Yang Liu. School of Computer Engineering, Nanyang Technological University, Singapore. EMAIL, EMAIL |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | No | The paper does not provide any statement or link indicating that source code for the methodology is openly available. |
| Open Datasets | No | The paper describes a theoretical framework and does not use or reference any training datasets. |
| Dataset Splits | No | The paper presents theoretical analysis and numerical computations, not empirical experiments with dataset splits for training, validation, or testing. |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., CPU, GPU models, or cloud resources) used for computations or experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers. |
| Experiment Setup | No | The paper details a theoretical model and its mathematical derivations; it does not describe an experimental setup with hyperparameters or training configurations. |