SNeRL: Semantic-aware Neural Radiance Fields for Reinforcement Learning

Authors: Dongseok Shim, Seungjae Lee, H. Jin Kim

ICML 2023

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "In this section, we demonstrate several experiments on the 3-dimensional environments to explore the effectiveness of SNeRL compared to existing state-of-the-art RL algorithms both in model-free and model-based settings." |
| Researcher Affiliation | Academia | "1 Interdisciplinary Program in AI, Seoul National University; 2 Aerospace Engineering, Seoul National University; 3 ASRI, AIIS, Seoul National University. Correspondence to: H. Jin Kim <hjinkim@snu.ac.kr>." |
| Pseudocode | Yes | "A.3. Pseudo-code" |
| Open Source Code | No | The paper does not include an unambiguous statement about releasing the source code for the described methodology or provide a direct link to a code repository. |
| Open Datasets | Yes | "We refer to Meta-world (Yu et al., 2020) for more details including the reward function and the range of the random positions." |
| Dataset Splits | No | The paper describes the total dataset size and collection method ("14400 scenes", "Meta-world (Yu et al., 2020)"), but does not specify explicit training, validation, or test dataset splits (e.g., percentages, sample counts, or predefined split references for reproduction). |
| Hardware Specification | Yes | "Stage 1 (pre-training encoder) in our experiments has been performed using a single NVIDIA RTX A6000 and AMD Ryzen 2950X, and stage 2 (RL downstream tasks) has been performed using an NVIDIA RTX A5000 and AMD Ryzen 2950X." |
| Software Dependencies | No | The paper mentions "PyTorch-like pseudo-code" and "ReLU" but does not provide specific version numbers for software dependencies like PyTorch or other libraries used in the experiments. |
| Experiment Setup | Yes | "Table 1. Hyperparameters for pre-training multi-view encoder; Table 2. Hyperparameters for SAC (for SNeRL and baselines); Table 3. Hyperparameters for Dreamer (for SNeRL and baselines)" |