Sample-Efficient Learning of POMDPs with Multiple Observations In Hindsight

Authors: Jiacheng Guo, Minshuo Chen, Huan Wang, Caiming Xiong, Mengdi Wang, Yu Bai

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | This paper studies the sample efficiency of learning in Partially Observable Markov Decision Processes (POMDPs), a challenging problem in reinforcement learning that is known to be exponentially hard in the worst case. Motivated by real-world settings such as loading in game playing, we propose an enhanced feedback model called multiple observations in hindsight, where after each episode of interaction with the POMDP, the learner may collect multiple additional observations emitted from the encountered latent states, but may not observe the latent states themselves. We show that sample-efficient learning under this feedback model is possible for two new subclasses of POMDPs: multi-observation revealing POMDPs and tabular distinguishable POMDPs. Both subclasses generalize and substantially relax revealing POMDPs, a widely studied subclass for which sample-efficient learning is possible under standard trajectory feedback.
Researcher Affiliation | Collaboration | Jiacheng Guo, Minshuo Chen, Huan Wang, Caiming Xiong, Mengdi Wang, Yu Bai (Princeton University; Salesforce Research)
Pseudocode | Yes | Algorithm 1: k-Optimistic Maximum Likelihood Estimation (k-OMLE); Algorithm 2: Optimism with State Testing (OST); Algorithm 3: Pseudo-state assignment via closeness testing (ASSIGN_PSEUDO_STATES); Algorithm 4: Closeness testing closeness_test({o^(i)}_{i∈[k]}, {õ^(i)}_{i∈[k]})
Open Source Code | No | The paper does not include an explicit statement about releasing open-source code for the described methodology or a link to a code repository.
Open Datasets | No | The paper is theoretical and focuses on sample complexity; it does not utilize or mention specific datasets for training or evaluation.
Dataset Splits | No | The paper is theoretical and does not report on empirical experiments, thus it does not describe dataset splits for training, validation, or testing.
Hardware Specification | No | The paper is theoretical and does not mention any specific hardware (e.g., GPU/CPU models) used for conducting experiments.
Software Dependencies | No | The paper is theoretical, presenting algorithms and proofs; it does not specify any software dependencies with version numbers for implementation.
Experiment Setup | No | The paper is theoretical and focuses on algorithm design and theoretical guarantees. It does not describe an experimental setup with specific hyperparameters or system-level training settings.
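The closeness test referenced in Algorithms 3–4 decides whether two size-k batches of observations plausibly came from the same latent state's emission distribution. A minimal illustrative sketch of that idea in Python, using a simple empirical L1-distance rule; the `threshold` constant is a hypothetical tuning choice for this example, not the paper's calibrated test statistic:

```python
import random
from collections import Counter

def closeness_test(obs_a, obs_b, threshold=0.5):
    """Return True if the two observation multisets look like draws
    from the same emission distribution.

    Compares empirical frequencies via a normalized L1 distance;
    `threshold` is an illustrative constant, not the paper's value.
    """
    k = len(obs_a)
    assert len(obs_b) == k, "both batches should contain k observations"
    freq_a, freq_b = Counter(obs_a), Counter(obs_b)
    support = set(freq_a) | set(freq_b)
    l1 = sum(abs(freq_a[o] - freq_b[o]) for o in support) / k
    return l1 <= threshold

# Two latent states with clearly distinguishable emission distributions.
rng = random.Random(0)
emit_s1 = lambda: rng.choices("abc", weights=[0.8, 0.1, 0.1])[0]
emit_s2 = lambda: rng.choices("abc", weights=[0.1, 0.1, 0.8])[0]

k = 200
same = closeness_test([emit_s1() for _ in range(k)],
                      [emit_s1() for _ in range(k)])  # batches from the same state
diff = closeness_test([emit_s1() for _ in range(k)],
                      [emit_s2() for _ in range(k)])  # batches from different states
```

With k samples per batch, two batches from the same distribution have an empirical L1 distance that shrinks as k grows, while batches from emission distributions that are far apart in total variation keep a large distance, which is the sense in which "distinguishable" POMDPs make pseudo-state assignment feasible.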