Learning Flexible Body Collision Dynamics with Hierarchical Contact Mesh Transformer

Authors: Youn-Yeol Yu, Jeongwhan Choi, Woojin Cho, Kookjin Lee, Nayong Kim, Kiseok Chang, ChangSeung Woo, Ilho Kim, SeokWoo Lee, Joon Young Yang, Sooyoung Yoon, Noseong Park

ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our results show that HCMT provides significant performance improvements over existing methods. We also compare the performance of several baselines using well-known benchmark datasets.
Researcher Affiliation | Collaboration | 1 Yonsei University, 2 Arizona State University, 3 LG Display Co., Ltd., 4 KAIST
Pseudocode | No | The paper describes algorithmic steps and equations but does not present them in a structured pseudocode or algorithm block.
Open Source Code | Yes | Our code is available at https://github.com/yuyudeep/hcmt.
Open Datasets | Yes | We evaluate our model on three publicly available datasets (Sphere Simple (Pfaff et al., 2020), Deforming Plate (Pfaff et al., 2020), Deformable Plate (Linkerhägner et al., 2023)) and the novel Impact Plate, and our model achieves consistently the best performance.
Dataset Splits | Yes | These parameters have been randomly varied to generate a dataset consisting of 2,000 trajectories, along with 200 validation and test trajectories.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running its experiments.
Software Dependencies | No | The paper mentions software such as ANSYS, the MGN and GT implementations, and the Adam optimizer, but does not provide specific version numbers for any software dependencies or libraries.
Experiment Setup | Yes | Table 8 lists the hyperparameters for noise, radius, and the number of training steps for each dataset. The radius γ is an important hyperparameter for collision detection. Random-walk noise is added to positions, as in GNS (Sanchez-Gonzalez et al., 2020) and MGN (Pfaff et al., 2020), to reduce cumulative rollout error. The numbers of contact propagation and mesh propagation modules are hyperparameters, and the number of blocks L = L_C + L_H is set to 15. Following MGN, the hidden vector size of the encoder/decoder is set to 128 and the Adam optimizer is used. The batch size is set to 1, and an exponential learning-rate decay from 10^-4 to 10^-6 is applied. The hidden vector dimensions d_z and d_h for CMT and HMT are set to 128, and the number of heads H is 4. For reproducibility, the best hyperparameter configurations for each dataset are given in Table 8.
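
The radius-based collision detection mentioned in the Experiment Setup row can be made concrete with a short sketch. The following is a minimal illustration and not the HCMT implementation: it assumes node positions in an (N, 3) NumPy array, and the helper name `build_contact_edges` is hypothetical. Pairs of nodes closer than the radius γ receive contact edges.

```python
# Hypothetical sketch of radius-based contact edge construction.
# `build_contact_edges` is an illustrative name, not from the HCMT repo.
import numpy as np
from scipy.spatial import cKDTree

def build_contact_edges(positions: np.ndarray, radius: float) -> np.ndarray:
    """Return a (2, E) array of contact edges between all node pairs whose
    distance is below `radius` (the gamma hyperparameter in Table 8)."""
    tree = cKDTree(positions)
    pairs = tree.query_pairs(r=radius, output_type="ndarray")  # shape (P, 2)
    # Make the edges bidirectional for message passing; a real pipeline
    # would also filter out pairs already connected by mesh edges.
    edges = np.concatenate([pairs, pairs[:, ::-1]], axis=0)
    return edges.T
```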
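
Similarly, the GNS/MGN-style random-walk noise can be sketched as accumulated Gaussian perturbations on the input positions. The scaling convention below (spreading the target standard deviation across steps) is an assumption on our part; the dataset-specific `noise_std` values are those reported in Table 8.

```python
import numpy as np

def add_random_walk_noise(positions: np.ndarray, noise_std: float) -> np.ndarray:
    """positions: (T, N, D) trajectory of node positions. Gaussian steps are
    accumulated over time so the perturbation drifts like a random walk,
    mimicking how prediction errors accumulate during rollout."""
    num_steps = positions.shape[0]
    # Scale each step so the accumulated noise at the final frame has
    # standard deviation `noise_std` (an assumed convention, as in GNS).
    step_std = noise_std / np.sqrt(num_steps)
    steps = np.random.normal(0.0, step_std, size=positions.shape)
    return positions + np.cumsum(steps, axis=0)
```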
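
Finally, the stated optimization setup (Adam, batch size 1, exponential learning-rate decay from 10^-4 to 10^-6) translates directly into standard PyTorch. In this sketch, `model` and `num_steps` are placeholders, and the per-step decay factor is derived so the schedule spans the full training run.

```python
import torch

model = torch.nn.Linear(128, 128)  # stand-in for the HCMT network
num_steps = 1_000_000              # placeholder; Table 8 gives per-dataset values

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Per-step multiplicative factor so that the learning rate decays from
# 1e-4 to 1e-6 over `num_steps` updates: gamma ** num_steps == 1e-6 / 1e-4.
gamma = (1e-6 / 1e-4) ** (1.0 / num_steps)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)

# In the training loop (batch size 1):
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```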