Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Discovering Conceptual Knowledge with Analytic Ontology Templates for Articulated Objects

Authors: Jianhua Sun, Yuxuan Li, Longfei Xu, Jiude Wei, Liang Chai, Cewu Lu

AAAI 2025 | Venue PDF | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct exhaustive experiments and the results demonstrate the superiority of our approach in understanding and then interacting with articulated objects. |
| Researcher Affiliation | Academia | Shanghai Jiao Tong University EMAIL |
| Pseudocode | Yes | Figure 3: Specific examples of basic geometric (a,b,d), kinematic (c) and composite (e) ontologies...the definitions of basic and composite AOTs are shown in the form of mathematical expressions and pseudo codes for easy understanding. |
| Open Source Code | No | The paper does not contain any explicit statement or link indicating the release of open-source code for the methodology described. It only references third-party tools like PyTorch3D. |
| Open Datasets | Yes | To show the superiority of our AOT-driven approach, we conduct exhaustive experiments on PartNet Mobility dataset with SAPIEN environment (Xiang et al. 2020). |
| Dataset Splits | Yes | In total, 653 objects with 1152 articulation structures are used as test samples. For each specific task, we set the initial state of the target joint as closed (0% of the joint range), and the final state as fully open (100% of the joint range). ... The training samples for SAC/TQC are selected from PartNet Mobility assets except those in test samples. |
| Hardware Specification | No | The paper mentions using a 'flying Franka Panda gripper as the agent' in the SAPIEN environment but does not specify the computing hardware (GPU/CPU models, memory, etc.) used for training or running the simulations. |
| Software Dependencies | Yes | To train the estimator, apart from the MSE loss on each parameter, we also introduce a Point2Mesh loss (PyTorch3D 2023b) between the mesh of the AOT instance and the input point cloud. ... mesh rendered by AOT can be deformed to better fit the object point cloud and obtain more refined geometric details according to algorithms like (Hanocka et al. 2020; PyTorch3D 2023a). |
| Experiment Setup | Yes | All MLPs used in our experiments are triple-layered with ReLU activation. The point cloud encoders used in both ontology identification and parameter estimation are transformer encoders (Yu et al. 2022), which extract 128 groups of points with size 32 from the input with 2048 points, and send them into a standard transformer encoder with 12 6-headed attention layers. ... We use cosine distance as the axis alignment loss and L2 distance to the ground truth axis as the pivot loss. ... we consider a task successful if more than 80% of the total moving range is achieved. |
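The Experiment Setup row names three concrete quantities: a cosine-distance axis alignment loss, an L2 pivot loss, and an 80%-of-moving-range success criterion. A minimal pure-Python sketch of these definitions follows; the function names are illustrative and not taken from the paper's (unreleased) code.

```python
import math

def cosine_axis_loss(pred_axis, gt_axis):
    # Cosine-distance alignment loss: 1 - cos(angle between the two axes).
    # Zero when the predicted joint axis is parallel to the ground truth.
    dot = sum(p * g for p, g in zip(pred_axis, gt_axis))
    norm = (math.sqrt(sum(p * p for p in pred_axis))
            * math.sqrt(sum(g * g for g in gt_axis)))
    return 1.0 - dot / norm

def l2_pivot_loss(pred_pivot, gt_pivot):
    # L2 (Euclidean) distance between predicted and ground-truth pivot points.
    return math.sqrt(sum((p - g) ** 2 for p, g in zip(pred_pivot, gt_pivot)))

def task_successful(achieved_range, total_range, threshold=0.8):
    # Per the paper: a task counts as successful if more than 80% of the
    # joint's total moving range (closed -> fully open) is achieved.
    return achieved_range / total_range > threshold
```

For example, a pivot predicted 5 cm from the ground truth yields an L2 pivot loss of 0.05, and opening a door 85% of the way from closed (0%) to fully open (100%) counts as a success under the 80% threshold.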