Strategic Representation
Authors: Vineet Nair, Ganesh Ghalme, Inbal Talgam-Cohen, Nir Rosenfeld
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | Our main result is a learning algorithm that minimizes error despite strategic representations, and our theoretical analysis sheds light on the trade-off between learning effort and susceptibility to manipulation. |
| Researcher Affiliation | Academia | Technion, Israel Institute of Technology; Indian Institute of Technology, Hyderabad. |
| Pseudocode | Yes | Algorithm 1 ALG |
| Open Source Code | No | The paper does not contain any explicit statements about releasing source code or provide links to a code repository. |
| Open Datasets | No | The paper refers to using a labeled sample set `S = {(x_i, y_i)}_{i=1}^m` drawn from an unknown distribution `D` for learning. However, it does not specify a publicly available dataset by name, provide a link, DOI, or a formal citation to access such a dataset. |
| Dataset Splits | No | The paper discusses theoretical learning and empirical error minimization, referring to a 'training set S'. However, it does not specify any training/validation/test dataset splits, exact percentages, sample counts, or refer to predefined splits with citations necessary for reproduction. |
| Hardware Specification | No | The paper is theoretical in nature, focusing on algorithm design and mathematical analysis. It does not describe any empirical experiments or mention specific hardware used for computations (e.g., CPU, GPU models, or cloud resources). |
| Software Dependencies | No | The paper is theoretical and focuses on algorithm design and mathematical analysis. It does not mention any specific software, libraries, or their version numbers that would be required to reproduce the work. |
| Experiment Setup | No | The paper defines parameters such as `k`, `k1`, `k2`, `n`, `q`, `m`, and constants `a`, `a+` within the theoretical framework and algorithm description. However, these are not 'experimental setup details' in the sense of concrete hyperparameters (like learning rate, batch size) or specific training configurations for an empirical evaluation, which the paper does not present. |
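Several rows above refer to learning from a labeled sample set S via empirical error minimization. As a purely illustrative sketch (this is not the paper's Algorithm 1 ALG; the hypothesis class and data below are hypothetical), empirical error minimization over a sample S looks like:

```python
# Hypothetical illustration: empirical error minimization over a labeled
# sample S = {(x_i, y_i)}_{i=1}^m, using simple 1-D threshold classifiers
# as the hypothesis class. Not the paper's ALG.

def empirical_error(threshold, sample):
    """Fraction of points in S misclassified by the rule: x >= threshold -> 1."""
    errors = sum(1 for x, y in sample if (x >= threshold) != (y == 1))
    return errors / len(sample)

def erm(sample):
    """Return the candidate threshold minimizing empirical error on S."""
    candidates = sorted({x for x, _ in sample})
    return min(candidates, key=lambda t: empirical_error(t, sample))

# Toy sample S (invented for illustration)
S = [(0.1, 0), (0.4, 0), (0.6, 1), (0.9, 1)]
best = erm(S)
```

The paper's setting differs in that features may be strategically manipulated before the learner sees them, which is precisely what its Algorithm 1 is designed to handle.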