Angular Triplet-Center Loss for Multi-View 3D Shape Retrieval
Authors: Zhaoqun Li, Cheng Xu, Biao Leng
AAAI 2019, pp. 8682-8689
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results on two popular 3D object retrieval benchmarks, ModelNet40 and ShapeNetCore55, demonstrate the effectiveness of our proposed loss, and our method has achieved state-of-the-art results on various 3D shape datasets. |
| Researcher Affiliation | Academia | Zhaoqun Li,¹ Cheng Xu,¹ Biao Leng¹,²,³ ¹School of Computer Science and Engineering, Beihang University, Beijing, 100191; ²Research Institute of Beihang University in Shenzhen, Shenzhen, 518057; ³Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, 100191; {lizhaoqun, cxu, lengbiao}@buaa.edu.cn |
| Pseudocode | No | The paper includes mathematical formulas and a flowchart but no structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating the release of open-source code for the described methodology. |
| Open Datasets | Yes | To evaluate the performance of the proposed method, we have conducted 3D shape retrieval experiments on the ModelNet dataset (Wu et al. 2015) and ShapeNetCore55 dataset (Chang et al. 2015). |
| Dataset Splits | Yes | For our evaluation experiment, we adopt the same method to split the training and test sets as mentioned in (Wu et al. 2015), i.e. randomly select 100 unique models per category from the subset, where the first 80 models are used for training and the rest for testing. We follow the official training and testing split to conduct our experiment on the perturbed version, where the database is split into three parts: 70% of shapes for training, 10% for validation, and the remaining 20% for testing (a minimal split sketch appears after this table). |
| Hardware Specification | Yes | Our experiments are conducted on a server with 2 Nvidia GTX1080Ti GPUs, an Intel Xeon CPU and 128G RAM. |
| Software Dependencies | No | The proposed method is implemented in PyTorch. However, no specific version numbers for PyTorch or other software dependencies are provided. |
| Experiment Setup | Yes | We use the stochastic gradient descent (SGD) algorithm with momentum 2e-4 to optimize the loss. The mini-batch size is set to 20. The learning rate for the CNN is 1e-4 and is divided by 10 at epoch 80. Specifically, the learning rate for the centers is kept at 1e-4 throughout training. The total number of training epochs is 120. The network is pre-trained on ImageNet (Deng et al. 2009) and the centers are initialized from a Gaussian distribution with mean 0 and standard deviation 0.01. Each image is 224x224 pixels. The best margin value for ModelNet is 0.7. The hyper-parameter λ indicates the weight of the two tasks (a hedged training-setup sketch appears below the table). |
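
The Dataset Splits row quotes the per-category ModelNet protocol (100 models per class, first 80 for training, the rest for testing). Below is a minimal sketch of that split, assuming `models_by_category` maps a class name to a list of model ids; the function name and data format are hypothetical. The ShapeNetCore55 70/10/20 split follows the official partition and is not reproduced here.

```python
import random

def split_modelnet(models_by_category, per_category=100, train_per_category=80, seed=0):
    """Hypothetical per-category split mirroring the quoted ModelNet protocol:
    randomly select 100 unique models per class, use the first 80 for training
    and the remaining 20 for testing."""
    rng = random.Random(seed)
    train, test = [], []
    for category, models in models_by_category.items():
        # Randomly pick up to `per_category` unique models from this class.
        selected = rng.sample(models, min(per_category, len(models)))
        train += [(m, category) for m in selected[:train_per_category]]
        test += [(m, category) for m in selected[train_per_category:]]
    return train, test

# Usage (hypothetical ids):
# train, test = split_modelnet({"chair": chair_ids, "table": table_ids})
```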
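The Experiment Setup row lists the quoted hyper-parameters. The sketch below shows one way to wire them up in PyTorch (the paper reports a PyTorch implementation); the backbone choice (resnet18), feature dimension, and class count are assumptions, and the quoted momentum value of 2e-4 is passed through as-is.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed backbone and dimensions; the paper does not specify them here.
num_classes, feat_dim = 40, 512
cnn = models.resnet18(pretrained=True)              # "pre-trained on ImageNet"
cnn.fc = nn.Linear(cnn.fc.in_features, feat_dim)

# Class centers initialized from a Gaussian with mean 0 and std 0.01.
centers = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.01)

# SGD with the quoted momentum; separate parameter groups so the centers keep
# a fixed learning rate of 1e-4 while the CNN's rate is decayed.
optimizer = torch.optim.SGD(
    [{"params": cnn.parameters(), "lr": 1e-4},
     {"params": [centers], "lr": 1e-4}],
    momentum=2e-4,
)

def adjust_lr(epoch):
    # Divide the CNN learning rate by 10 at epoch 80; leave the centers' rate unchanged.
    if epoch == 80:
        optimizer.param_groups[0]["lr"] /= 10

for epoch in range(120):                             # 120 training epochs
    adjust_lr(epoch)
    # ... iterate mini-batches of 20 multi-view samples (224x224 images),
    # compute the two loss terms weighted by lambda, then optimizer.step() ...
```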