Rethinking Loss Design for Large-scale 3D Shape Retrieval
Authors: Zhaoqun Li, Cheng Xu, Biao Leng
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on the two public 3D object retrieval datasets, ModelNet and ShapeNet Core55, demonstrate the effectiveness of our proposal, and our method has achieved state-of-the-art results on both datasets. |
| Researcher Affiliation | Academia | (1) School of Computer Science and Engineering, Beihang University, Beijing, 100191; (2) Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, 100191; (3) State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191 |
| Pseudocode | No | The paper provides mathematical formulas and descriptions of the loss functions and their gradients but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any specific links or statements about the availability of its source code. |
| Open Datasets | Yes | Extensive experiments conducted on the two public 3D object retrieval datasets, ModelNet and ShapeNet Core55, demonstrate the effectiveness of our proposal, and our method has achieved state-of-the-art results on both datasets. |
| Dataset Splits | Yes | In our experiment, we follow the training and testing split as mentioned in [Wu et al., 2015]. |
| Hardware Specification | Yes | The experiments are conducted on Nvidia GTX1080Ti GPU and our methods are implemented by Caffe. |
| Software Dependencies | No | The paper states 'our methods are implemented by Caffe' but does not provide specific version numbers for Caffe or any other software dependencies. |
| Experiment Setup | Yes | We use the stochastic gradient descent (SGD) algorithm with momentum 2e-4 to optimize the loss and the batch size is 100. The initial learning rate is 0.01 and is divided by 5 at the 20th epoch. The total training epoch is 30. |
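
For readers attempting to reproduce the reported schedule, the quoted hyperparameters amount to a simple step decay of the learning rate. The sketch below is a minimal, hypothetical Python rendering of that schedule; the authors implemented their method in Caffe and released no code, so the function and variable names here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the reported optimisation schedule.
# Values are taken from the quoted setup; all names are illustrative.

def learning_rate(epoch, base_lr=0.01, drop_epoch=20, drop_factor=5):
    """Initial LR 0.01, divided by 5 at the 20th epoch (as quoted above)."""
    return base_lr / drop_factor if epoch >= drop_epoch else base_lr

if __name__ == "__main__":
    total_epochs = 30   # quoted total number of training epochs
    batch_size = 100    # quoted batch size
    momentum = 2e-4     # momentum value as stated in the paper's setup description
    for epoch in range(total_epochs):
        lr = learning_rate(epoch)
        print(f"epoch {epoch:02d}: lr={lr}, batch_size={batch_size}, momentum={momentum}")
```

In a Caffe setup this step decay would typically be expressed through the solver configuration rather than in code, but the per-epoch values printed above match the quoted description.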