Invariance in Policy Optimisation and Partial Identifiability in Reward Learning
Authors: Joar Max Viktor Skalse, Matthew Farrugia-Roberts, Stuart Russell, Alessandro Abate, Adam Gleave
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this work, we formally characterise the partial identifiability of the reward function given several popular reward learning data sources, including expert demonstrations and trajectory comparisons. We also analyse the impact of this partial identifiability for several downstream tasks, such as policy optimisation. We unify our results in a framework for comparing data sources and downstream tasks by their invariances, with implications for the design and selection of data sources for reward learning. |
| Researcher Affiliation | Collaboration | ¹Department of Computer Science, Oxford University; ²Future of Humanity Institute, Oxford University; ³School of Computing and Information Systems, The University of Melbourne; ⁴Center for Human-Compatible Artificial Intelligence, University of California, Berkeley; ⁵FAR AI, Inc. |
| Pseudocode | No | The paper is theoretical and focuses on characterizations, theorems, and proofs. It does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not state that any code is available, and it contains no links to code repositories for the work described. |
| Open Datasets | No | The paper is theoretical and does not involve experiments with datasets, therefore it does not mention training data availability. |
| Dataset Splits | No | The paper is theoretical and does not involve experiments with datasets, therefore it does not specify training, validation, or test splits. |
| Hardware Specification | No | The paper is theoretical and does not conduct experiments, therefore no hardware specifications are provided. |
| Software Dependencies | No | The paper is theoretical and does not conduct experiments, therefore no software dependencies with version numbers are listed. |
| Experiment Setup | No | The paper is theoretical and does not describe any experiments or their setup, thus no hyperparameters or specific training settings are provided. |
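The invariances the paper studies can be illustrated with a classical example not taken from the paper itself: potential-based reward shaping (Ng et al., 1999) changes the reward function while leaving the optimal policy unchanged, so demonstrations of optimal behaviour cannot distinguish the two rewards. The sketch below uses a toy three-state MDP invented for illustration; all dynamics, rewards, and the potential function `PHI` are assumptions, not content from the paper.

```python
# Minimal sketch: two rewards related by potential-based shaping
# induce the same optimal policy in a toy deterministic MDP.
GAMMA = 0.9
N_STATES, N_ACTIONS = 3, 2

def step(s, a):
    """Toy dynamics: action 0 cycles forward through states, action 1 stays put."""
    return (s + 1) % N_STATES if a == 0 else s

def base_reward(s, a, s_next):
    """Reward 1 for arriving in (or remaining at) state 2, else 0."""
    return 1.0 if s_next == 2 else 0.0

# An arbitrary potential function Phi over states (chosen freely).
PHI = [0.0, 5.0, -2.0]

def shaped_reward(s, a, s_next):
    """r'(s, a, s') = r(s, a, s') + gamma * Phi(s') - Phi(s)."""
    return base_reward(s, a, s_next) + GAMMA * PHI[s_next] - PHI[s]

def optimal_policy(reward_fn, iters=500):
    """Value iteration on Q; returns the greedy policy as a tuple of actions."""
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]
    for _ in range(iters):
        # Bellman optimality backup (the comprehension reads the old Q).
        Q = [[reward_fn(s, a, step(s, a)) + GAMMA * max(Q[step(s, a)])
              for a in range(N_ACTIONS)]
             for s in range(N_STATES)]
    return tuple(max(range(N_ACTIONS), key=lambda a: Q[s][a])
                 for s in range(N_STATES))

pi_base = optimal_policy(base_reward)
pi_shaped = optimal_policy(shaped_reward)
print(pi_base == pi_shaped)  # the shaped reward yields the same optimal policy
```

Because the greedy policy is identical under both rewards, data that only reveals optimal behaviour leaves the reward partially identified, which is the kind of invariance the paper characterises formally for various data sources.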