LatentGNN: Learning Efficient Non-local Relations for Visual Recognition
Authors: Songyang Zhang, Xuming He, Shipeng Yan
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental evaluations on three major visual recognition tasks show that our method outperforms the prior works with a large margin while maintaining a low computation cost. |
| Researcher Affiliation | Academia | School of Information Science and Technology, ShanghaiTech University, Shanghai, China. |
| Pseudocode | No | The paper describes the proposed method using text and mathematical equations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | For facilitating the future research, code is available: https://github.com/latentgnn/LatentGNN-V1-PyTorch |
| Open Datasets | Yes | We evaluate our model on three visual recognition tasks, including object detection and instance segmentation on the MSCOCO 2017 dataset (Lin et al., 2014), and point cloud semantic segmentation on the ScanNet dataset (Dai et al., 2017a). |
| Dataset Splits | Yes | MSCOCO dataset is a challenging dataset that contains 115K images over 80 categories for training, 5K for validation and 20K for testing. |
| Hardware Specification | No | The paper mentions running experiments on '8 GPUs' or '4 GPUs' but does not specify the exact GPU models (e.g., NVIDIA V100), CPU types, or other hardware specifications. |
| Software Dependencies | No | The paper mentions utilizing an 'open-source implementation' and its code link includes 'PyTorch' in the name, but it does not specify explicit version numbers for any software dependencies like Python, PyTorch, or CUDA. |
| Experiment Setup | Yes | All experiments are conducted on 8 GPUs with 2 images per GPU (effective minibatch size 16) for 90K iterations, with a learning rate of 0.02 which is decreased by 10 at the 60K and 80K iteration. We use a weight decay of 0.0001 and momentum of 0.9. ...We train our models for 200 epochs with Adam as optimizer, starting from the learning rate at 0.001 and reduce it using exponential schedule with the decay rate at 0.7. |
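The quoted setup maps onto standard PyTorch optimizer and scheduler calls. The sketch below is only an illustration of the reported hyperparameters: the placeholder models, the variable names, and the per-epoch decay interval for the exponential schedule are assumptions, not details taken from the paper or its released code.

```python
# Hedged sketch of the two training configurations quoted above.
# Only the numeric values (lr, momentum, weight decay, decay milestones/rate)
# come from the paper; everything else is a placeholder for illustration.
import torch

# --- Detection / instance segmentation on MSCOCO (8 GPUs x 2 images, 90K iterations) ---
detector = torch.nn.Linear(10, 10)  # placeholder for the actual detection model
sgd = torch.optim.SGD(detector.parameters(),
                      lr=0.02, momentum=0.9, weight_decay=0.0001)
# Learning rate is divided by 10 at iterations 60K and 80K.
detection_sched = torch.optim.lr_scheduler.MultiStepLR(
    sgd, milestones=[60_000, 80_000], gamma=0.1)

# --- Point cloud semantic segmentation on ScanNet (200 epochs, Adam) ---
segmenter = torch.nn.Linear(10, 10)  # placeholder for the point cloud network
adam = torch.optim.Adam(segmenter.parameters(), lr=0.001)
# Exponential schedule with decay rate 0.7; stepping it once per epoch is an assumption.
segmentation_sched = torch.optim.lr_scheduler.ExponentialLR(adam, gamma=0.7)
```

In a training loop, `detection_sched.step()` would be called per iteration against the 60K/80K milestones, while `segmentation_sched.step()` would typically be called once per epoch; the paper does not state the stepping granularity, so this split is a reading of the quoted schedule rather than a confirmed detail.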