Causal Dynamics Learning for Task-Independent State Abstraction

Authors: Zizhao Wang, Xuesu Xiao, Zifan Xu, Yuke Zhu, Peter Stone

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Evaluated on two simulated environments and downstream tasks, both the dynamics model and policies learned by the proposed method generalize well to unseen states, and the derived state abstraction improves sample efficiency compared to learning without it.
Researcher Affiliation | Collaboration | Zizhao Wang (1), Xuesu Xiao (2), Zifan Xu (2), Yuke Zhu (2), Peter Stone (2, 3). (1) Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, USA; (2) Department of Computer Science, The University of Texas at Austin, Austin, USA; (3) Sony AI.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks; it describes methods in prose and with diagrams.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor a link to a code repository for the described methodology.
Open Datasets | No | The paper mentions using a 'chemical environment modified from Ke et al. (2021)' and a 'manipulation environment implemented with the robosuite simulation framework (Zhu et al., 2020)'. Although these are referenced, they are simulation environments or frameworks, not datasets with concrete public-access information.
Dataset Splits | Yes | "We split the collected transition data D into the training part used to maximize Lθ and the validation part for evaluating CMI."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. It only mentions 'two simulated environments'.
Software Dependencies | No | The paper mentions the 'robosuite simulation framework (Zhu et al., 2020)' and 'PPO (Schulman et al. (2017))' but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | "For all methods, during training, instead of optimizing the 1-step prediction loss, we roll out the model for H steps and optimize the sum of the H-step prediction losses. Also, we keep 10% of the data as validation data to select the best model for each method. Moreover, CDL also evaluates the conditional mutual information (CMI) using the validation data rather than the training data for more accurate measurements. For Reg, whose performance is sensitive to the regularization coefficient, we conduct a grid search for the best coefficients in terms of causal graph accuracy; its value for each environment is listed in Table 6."
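The "Dataset Splits" row quotes the paper's split of collected transitions into a training part (for maximizing the likelihood Lθ) and a validation part (for evaluating CMI). A minimal sketch of such a split, assuming transitions are stored as an indexable sequence (the function name, 10% validation fraction as the default, and the seeding scheme are illustrative, not from the paper):

```python
import numpy as np

def split_transitions(transitions, val_frac=0.1, seed=0):
    """Randomly split transition data into a training part and a
    validation part. val_frac=0.1 mirrors the paper's statement that
    10% of the data is kept for validation; the interface is assumed."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(transitions))
    n_val = int(len(transitions) * val_frac)
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    train = [transitions[i] for i in train_idx]
    val = [transitions[i] for i in val_idx]
    return train, val
```

The training part would feed the dynamics-model likelihood objective, while the held-out part is reserved for CMI evaluation and model selection.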
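The "Experiment Setup" excerpt describes rolling the dynamics model out for H steps and optimizing the sum of the H-step prediction losses rather than a 1-step loss. A minimal sketch of that objective, assuming a `model(state, action) -> next_state` interface and a squared-error loss (both assumptions; the paper does not pin down these details here):

```python
import numpy as np

def h_step_prediction_loss(model, states, actions, H=3):
    """Sum of prediction errors over an H-step rollout.

    states: observed trajectory s_0, ..., s_H (length H + 1)
    actions: actions a_0, ..., a_{H-1} taken along it (length H)
    The model is re-applied to its own predictions, so errors compound
    across steps, which is what the multi-step objective penalizes.
    """
    loss = 0.0
    s_pred = states[0]
    for t in range(H):
        s_pred = model(s_pred, actions[t])          # predict from prediction
        loss += np.mean((s_pred - states[t + 1]) ** 2)
    return loss
```

Optimizing this summed loss (instead of a 1-step loss) encourages the model to stay accurate when its own predictions are fed back in, which matters for the multi-step rollouts used at evaluation time.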