MARS: Multimodal Active Robotic Sensing for Articulated Characterization

Authors: Hongliang Zeng, Ping Zhang, Chengjiong Wu, Jiahua Wang, Tingyu Ye, Fang Li

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In experiments conducted with various articulated object instances from the PartNet-Mobility dataset, our method outperformed current state-of-the-art methods in joint parameter estimation accuracy.
Researcher Affiliation | Academia | Hongliang Zeng, Ping Zhang, Chengjiong Wu, Jiahua Wang, Tingyu Ye and Fang Li, South China University of Technology, Guangzhou, China; scutzenghongl@gmail.com, pzhang@scut.edu.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/robhlzeng/MARS.
Open Datasets | Yes | For evaluation, we utilized the SAPIEN simulator [Xiang et al., 2020] and PartNet-Mobility dataset [Mo et al., 2019], selecting 14 common articulated objects (10 with revolute and 4 with prismatic joints). (A loading sketch follows this table.)
Dataset Splits | Yes | After training the movability prediction module, data for immovable parts was removed, resulting in 10K training, 1K testing, and 1K validation samples per category for perception network training.
Hardware Specification | No | The paper does not provide specific hardware details (such as exact GPU/CPU models or processor types) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiment.
Experiment Setup | No | The paper describes training steps and some settings for the RL environment (e.g., step size limits, success threshold), but it does not provide specific hyperparameters such as the learning rate, batch size, optimizer details, or the values of the reward coefficients (λs and λn) for the main model training.
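
The Open Datasets row points to the SAPIEN simulator and the PartNet-Mobility dataset. As a non-authoritative aid for anyone attempting reproduction, the sketch below shows one way to load a PartNet-Mobility instance in SAPIEN and read the ground-truth joint types and limits that correspond to the parameters the paper estimates. It assumes the SAPIEN 2.x Python API and a locally downloaded asset; the object-id path is a placeholder, and none of this is prescribed by the paper.

    # Minimal sketch (assumptions noted above): load one PartNet-Mobility object
    # in SAPIEN and list its movable joints.
    import sapien.core as sapien

    engine = sapien.Engine()
    scene = engine.create_scene()
    scene.set_timestep(1 / 240.0)

    loader = scene.create_urdf_loader()
    loader.fix_root_link = True  # keep the object base static while its parts articulate

    # Placeholder path: each PartNet-Mobility instance ships as a mobility.urdf
    articulation = loader.load("partnet_mobility/<object_id>/mobility.urdf")

    # Joint type ("revolute" / "prismatic") and joint limits are the ground-truth
    # quantities the simulator exposes for the articulation parameters of interest.
    for joint in articulation.get_active_joints():
        print(joint.get_name(), joint.type, joint.get_limits())

Swapping in different object ids covers the revolute/prismatic category mix (10 and 4 categories, respectively) described in the Open Datasets row.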