Hierarchical Graph Structure Learning for Multi-View 3D Model Retrieval
Authors: Yuting Su, Wenhui Li, Anan Liu, Weizhi Nie
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive evaluation on three popular and challenging datasets. The comparison demonstrates the superiority and effectiveness of the proposed method compared with the state of the art. |
| Researcher Affiliation | Academia | School of Electrical and Information Engineering, Tianjin University, China {ytsu,liwenhui,liuanan,weizhinie}@tju.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | Three popular 3D model datasets are utilized for evaluation, including ETH [Leibe and Schiele, 2003], MV-RED [Liu et al., 2017] and NTU [Chen et al., 2003]. |
| Dataset Splits | No | The paper mentions datasets used but does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using the "AlexNet model" but does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | For visual feature extraction, we adopt the AlexNet model [Krizhevsky et al., 2012], which was pre-trained on the ImageNet dataset, and use the output of the second-to-last fully-connected layer as the visual representation. In our experiment, the initialized view number is set to 41, 73, and 60 on the ETH, MV-RED, and NTU datasets, respectively. We further analyze the sensitivity caused by s (the view number), k (the neighbor number), and T (the iteration number) in Section 4.3. |
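
The experiment-setup row describes per-view feature extraction with an ImageNet-pretrained AlexNet, taking the second-to-last fully-connected layer (fc7) as the view representation. Below is a minimal sketch of that step, assuming a PyTorch/torchvision implementation; the paper does not state which framework it used, and the preprocessing values are standard ImageNet defaults, not values given in the paper.

```python
# Hedged sketch: fc7 (4096-d) view features from an ImageNet-pretrained AlexNet.
# Framework choice (torchvision) and preprocessing constants are assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load AlexNet pre-trained on ImageNet.
alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
alexnet.eval()

# torchvision's AlexNet classifier is
# [Dropout, Linear(fc6), ReLU, Dropout, Linear(fc7), ReLU, Linear(fc8)];
# keeping the first five modules stops at the fc7 Linear layer.
feature_extractor = torch.nn.Sequential(
    alexnet.features,
    alexnet.avgpool,
    torch.nn.Flatten(),
    *list(alexnet.classifier.children())[:5],
)

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def view_features(image_paths):
    """Return a (num_views, 4096) tensor of fc7 features for one model's rendered views."""
    batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in image_paths])
    with torch.no_grad():
        return feature_extractor(batch)
```

In this sketch each 3D model would contribute s such view images (e.g., 41, 73, or 60 depending on the dataset), and the resulting per-view feature matrix is the input to the retrieval method; the neighbor number k and iteration number T belong to the paper's graph structure learning stage, which is not reproduced here.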