Semi-Open 3D Object Retrieval via Hierarchical Equilibrium on Hypergraph
Authors: Yang Xu, Yifan Feng, Jun Zhang, Jun-Hai Yong, Yue Gao
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Results demonstrate that the proposed method can effectively generate the hierarchical embeddings of 3D objects and generalize them towards semi-open environments. ... 5 Experiments |
| Researcher Affiliation | Collaboration | Yang Xu1, Yifan Feng1, Jun Zhang2, Jun-Hai Yong1, and Yue Gao1 1BNRist, THUIBCS, KLISS, BLBCI, School of Software, Tsinghua University, China 2Tencent AI Lab |
| Pseudocode | Yes | Algorithm 1 Training the HRE module ... Algorithm 2 Training the SET module |
| Open Source Code | No | We are preparing the code, models, and datasets, and they will be released at the earliest opportunity. |
| Open Datasets | Yes | We generate four semi-open 3DOR datasets, including SO-ESB, SO-NTU, SO-MN40, and SO-ABO, based on the public datasets ESB [13], NTU [5], ModelNet40 [35], and ABO [6], respectively. |
| Dataset Splits | Yes | Then, we split the fine categories into seen categories for training and unseen categories for testing; the training and testing sets share the same coarse label space according to the semi-open environment setting. The statistics of the four semi-open 3DOR datasets are shown in Table 5. |
| Hardware Specification | Yes | Our experiments were conducted on a Tesla V100-32G GPU and an Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz. |
| Software Dependencies | No | The paper mentions Blender 3.02 and Open3D 0.13.03 for data processing, but does not provide specific version numbers for core software dependencies like programming languages (e.g., Python) or deep learning frameworks (e.g., PyTorch, TensorFlow) needed to replicate the experimental setup. |
| Experiment Setup | Yes | The HRE and SET modules are trained separately for 40 and 120 epochs. SGD optimizers are used for both modules, with learning rates of 0.1 and 0.001, respectively. As for the hyper-parameters in HERT, we set λ = 0.5, µ = 0.8, and η = 0.9. Detailed implementation settings for our framework are provided in Appendix C. ... Table 6: The hyper-parameters of the HERT framework. |
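The reported setup can be collected into a single configuration sketch. This is purely illustrative: the dictionary layout and key names are assumptions, and only the numeric values (epochs, optimizer, learning rates, and the λ/µ/η hyper-parameters) come from the paper; consult the authors' eventual code release for the authoritative settings.

```python
# Hedged sketch of the HERT training configuration reported above.
# Structure and names are illustrative; values are as stated in the paper.
TRAIN_CONFIG = {
    "HRE": {"epochs": 40, "optimizer": "SGD", "lr": 0.1},
    "SET": {"epochs": 120, "optimizer": "SGD", "lr": 0.001},
    "hyperparameters": {"lambda": 0.5, "mu": 0.8, "eta": 0.9},
}

# The two modules are trained separately, so a driver might iterate
# over the module configs one at a time:
for module, cfg in TRAIN_CONFIG.items():
    if module == "hyperparameters":
        continue
    print(f"{module}: {cfg['epochs']} epochs, {cfg['optimizer']} lr={cfg['lr']}")
```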