Learning Environment-Aware Affordance for 3D Articulated Object Manipulation under Occlusions
Authors: Ruihai Wu, Kai Cheng, Yan Shen, Chuanruo Ning, Guanqi Zhan, Hao Dong
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate the effectiveness of our proposed approach in learning affordance considering environment constraints. |
| Researcher Affiliation | Academia | Ruihai Wu (1,4), Kai Cheng (2), Yan Shen (1,4), Chuanruo Ning (2), Guanqi Zhan (3), Hao Dong (1,4). 1: CFCS, School of CS, PKU; 2: School of EECS, PKU; 3: University of Oxford; 4: National Key Laboratory for Multimedia Information Processing, School of CS, PKU |
| Pseudocode | No | The paper describes the framework's components and learning process but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statements about releasing the source code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | For simulation and dataset, we use SAPIEN [45] as our simulation environment, equipped with the large-scale PartNet-Mobility [27] and ShapeNet [1] datasets, with occluder data statistics as shown in Table 4. (See the simulation sketch after this table.) |
| Dataset Splits | No | The paper describes "Train-Data" and "Test-Data" with statistics (Tables 1 and 4), stating "For training, we collect interactions in one-occluder scenes." and "For testing, we use multi-occluder scenes...", but it does not specify a separate validation split. |
| Hardware Specification | Yes | We use PyTorch as our Deep Learning framework, and a single GeForce RTX 3090 (20GB GPU) for training and inference. |
| Software Dependencies | No | The paper mentions "PyTorch" as the Deep Learning framework and "SAPIEN" for simulation, but does not specify version numbers for these software components. |
| Experiment Setup | Yes | We set the batch size to 30, and use the Adam optimizer [15] with 0.001 as the initial learning rate. We use 2.00 as the boundary constant α in contrastive learning, and 1.00 as the balancing coefficient λCL in the total loss. (See the training-setup sketch after this table.) |
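
For context on the simulation setup quoted in the Open Datasets row: since no code is released, the sketch below is a minimal reconstruction, assuming SAPIEN 2.x's `sapien.core` API. The URDF path is a placeholder for a PartNet-Mobility asset (downloaded separately); nothing here is the authors' implementation.

```python
import sapien.core as sapien

# Minimal SAPIEN 2.x scene; asset path below is a placeholder, not from the paper.
engine = sapien.Engine()
scene = engine.create_scene()
scene.set_timestep(1 / 240)
scene.add_ground(altitude=0.0)

loader = scene.create_urdf_loader()
loader.fix_root_link = True  # keep the articulated object anchored in place
articulation = loader.load("path/to/mobility.urdf")  # a PartNet-Mobility asset

# Joint positions expose the articulation state (e.g. door opening angle).
print(articulation.get_qpos())

for _ in range(100):  # advance the physics simulation
    scene.step()
```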
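The hyperparameters quoted in the Experiment Setup row map onto a standard PyTorch training loop. The sketch below uses the reported values (batch size 30, Adam with initial learning rate 0.001, boundary constant 2.00, λCL = 1.00), but the network, embedding head, and data are placeholders, and a margin-based contrastive loss is assumed as one plausible reading of "boundary constant"; the paper releases no code to confirm the exact form.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Values quoted from the paper; everything else below is a placeholder.
BATCH_SIZE = 30
LR = 1e-3        # initial learning rate for Adam
MARGIN = 2.0     # boundary constant alpha in the contrastive loss (assumed form)
LAMBDA_CL = 1.0  # balancing coefficient lambda_CL in the total loss

def contrastive_loss(z_a, z_b, same, margin=MARGIN):
    # Margin-based contrastive loss (assumed): pull matching pairs together,
    # push non-matching pairs at least `margin` apart in embedding space.
    d = F.pairwise_distance(z_a, z_b)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

# Stand-ins for the paper's affordance network and its embedding head.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
embed = nn.Linear(128, 32)
optimizer = torch.optim.Adam(
    list(model.parameters()) + list(embed.parameters()), lr=LR
)

for step in range(10):  # dummy loop on random data, to show the loss wiring
    x_a, x_b = torch.randn(BATCH_SIZE, 128), torch.randn(BATCH_SIZE, 128)
    target = torch.rand(BATCH_SIZE, 1)            # placeholder affordance labels
    same = (torch.rand(BATCH_SIZE) > 0.5).float() # placeholder pair labels

    main_loss = F.mse_loss(model(x_a), target)    # main affordance objective
    cl_loss = contrastive_loss(embed(x_a), embed(x_b), same)
    total = main_loss + LAMBDA_CL * cl_loss       # total training loss

    optimizer.zero_grad()
    total.backward()
    optimizer.step()
```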