Learning Generalizable Agents via Saliency-guided Features Decorrelation

Authors: Sili Huang, Yanchao Sun, Jifeng Hu, Siyuan Guo, Hechang Chen, Yi Chang, Lichao Sun, Bo Yang

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental results demonstrate that SGFD can generalize well on a wide range of test environments and significantly outperforms state-of-the-art methods in handling both task-irrelevant variations and task-relevant variations.
Researcher Affiliation | Academia | (1,8) Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education; (1,3,4,5,6) School of Artificial Intelligence, Jilin University, China; (2) Department of Computer Science, University of Maryland, College Park, USA; (7) Lehigh University, Bethlehem, Pennsylvania, USA
Pseudocode | Yes | Algorithm 1: Saliency-Guided Features Decorrelation (a generic saliency sketch follows the table)
Open Source Code | No | No explicit statement about releasing the source code or a link to a repository for the described methodology was found.
Open Datasets | Yes | To carry out a comprehensive investigation of SGFD, we have performed a series of experiments on visual RL tasks, utilizing the DeepMind Control Suite [41] and CausalWorld [4]. (See the environment-loading example after the table.)
Dataset Splits | No | The paper refers to 'training environments' and a 'testing environment' but does not specify split percentages (e.g., an 80/10/10 split) or absolute sample counts for training, validation, and test sets, so the train/test separation cannot be reproduced exactly.
Hardware Specification | Yes | Compute. Experiments are carried out on NVIDIA GeForce RTX 3090 GPUs and NVIDIA A10 GPUs. In addition, the CPU type is Intel(R) Xeon(R) Gold 6230 CPU @ 2.10GHz.
Software Dependencies | No | The paper specifies hardware but does not provide version numbers for software dependencies such as Python, PyTorch, or other libraries, which are necessary for a reproducible setup.
Experiment Setup | Yes | Table 3: Architectures of SGFD. Convolutional layers: 4; Latent representation dimension: 50; Image size: (84, 84); Stacked frames: 3; Kernel size: 3×3; Channels: 3; Stride: 2 for the first layer, 4 otherwise; Activation: ReLU. (An encoder sketch based on these values appears below the table.)
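
The Experiment Setup row quotes the encoder hyperparameters from Table 3. Below is a minimal PyTorch sketch of an encoder consistent with those values; since the code is not released, the per-layer channel width, the use of padding, the assumption of 3-channel RGB frames, and the final linear projection to the 50-dimensional latent are our guesses rather than the authors' implementation.

```python
import torch
import torch.nn as nn


class SGFDPixelEncoder(nn.Module):
    """Sketch of the image encoder described in Table 3 (not the authors' code)."""

    def __init__(self, frame_stack=3, img_size=84, latent_dim=50, width=32):
        super().__init__()
        in_ch = 3 * frame_stack          # 3 stacked frames, assumed RGB -> 9 input channels
        strides = [2, 4, 4, 4]           # "2 for the first layer, 4 otherwise", as quoted
        layers, prev = [], in_ch
        for s in strides:                # 4 convolutional layers with 3x3 kernels
            layers += [nn.Conv2d(prev, width, kernel_size=3, stride=s, padding=1),
                       nn.ReLU()]
            prev = width
        self.conv = nn.Sequential(*layers)
        # Infer the flattened size with a dummy pass, then project to the
        # 50-dimensional latent representation (the linear head is an assumption).
        with torch.no_grad():
            n_flat = self.conv(torch.zeros(1, in_ch, img_size, img_size)).numel()
        self.fc = nn.Linear(n_flat, latent_dim)

    def forward(self, obs):              # obs: (B, 9, 84, 84), float pixels
        return self.fc(self.conv(obs).flatten(start_dim=1))


if __name__ == "__main__":
    z = SGFDPixelEncoder()(torch.rand(8, 9, 84, 84))
    print(z.shape)                       # torch.Size([8, 50])
```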
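
The Pseudocode row refers to Algorithm 1 (Saliency-Guided Features Decorrelation), which is not reproduced here. As a generic illustration of the saliency component only, the sketch below scores each latent feature by a gradient-based saliency, |∂Q/∂z|; the critic signature Q(z, a) is assumed, and this is not the paper's decorrelation procedure.

```python
import torch


def feature_saliency(critic, encoder, obs, action):
    """Gradient-based saliency per latent feature, |dQ/dz| (illustrative only)."""
    z = encoder(obs)                 # (B, latent_dim), e.g. from SGFDPixelEncoder above
    z.retain_grad()                  # keep the gradient of this non-leaf tensor
    q = critic(z, action).sum()      # assumed critic signature: Q(z, a)
    q.backward()
    return z.grad.abs().mean(dim=0)  # shape (latent_dim,): higher = more salient
```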
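
The Open Datasets row cites the DeepMind Control Suite and CausalWorld, both open source (pip packages dm_control and causal_world). The snippet below shows one way to instantiate a DeepMind Control Suite task with 84x84 pixel observations; the walker-walk task and render settings are illustrative choices, not the paper's exact configuration, and CausalWorld's own gym-style entry point is not reproduced here.

```python
from dm_control import suite
from dm_control.suite.wrappers import pixels

# Load a DeepMind Control Suite task and expose 84x84 pixel observations,
# matching the image size quoted in the Experiment Setup row.
env = suite.load(domain_name="walker", task_name="walk")
env = pixels.Wrapper(env, pixels_only=True,
                     render_kwargs={"height": 84, "width": 84, "camera_id": 0})

time_step = env.reset()
print(time_step.observation["pixels"].shape)  # (84, 84, 3)
```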