Image as Set of Points
Authors: Xu Ma, Yuqian Zhou, Huan Wang, Can Qin, Bin Sun, Chang Liu, Yun Fu
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate Context Cluster on ImageNet-1K (Deng et al., 2009), ScanObjectNN (Uy et al., 2019), MS COCO (Lin et al., 2014), and ADE20K (Zhou et al., 2017) datasets for image classification, point cloud classification, object detection, instance segmentation, and semantic segmentation tasks. |
| Researcher Affiliation | Collaboration | ¹Northeastern University, ²Adobe Inc. |
| Pseudocode | No | The paper describes the proposed algorithm using prose and mathematical equations but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Codes are available at: https://github.com/ma-xu/Context-Cluster. |
| Open Datasets | Yes | We validate Context Cluster on ImageNet-1K (Deng et al., 2009), ScanObjectNN (Uy et al., 2019), MS COCO (Lin et al., 2014), and ADE20K (Zhou et al., 2017) datasets for image classification, point cloud classification, object detection, instance segmentation, and semantic segmentation tasks. |
| Dataset Splits | Yes | We train Context Clusters on the ImageNet-1K training set (about 1.3M images) and evaluate upon the validation set. See the data-loading sketch after the table. |
| Hardware Specification | Yes | By default, the models are trained on 8 A100 GPUs with a 128 mini-batch size (that is 1024 in total). ... For a fair comparison, we train all of our models for 80k iterations with a batch size of 16 on four V100 GPUs... |
| Software Dependencies | No | The paper mentions 'AdamW (Loshchilov & Hutter, 2019)' and 'cosine scheduler (Loshchilov & Hutter, 2017)' as components used, but it does not specify software versions for programming languages, libraries (e.g., PyTorch, TensorFlow), or other dependencies. |
| Experiment Setup | Yes | AdamW (Loshchilov & Hutter, 2019) is used to train all of our models across 310 epochs with a momentum of 0.9 and a weight decay of 0.05. The learning rate is set to 0.001 by default and adjusted using a cosine scheduler (Loshchilov & Hutter, 2017). By default, the models are trained on 8 A100 GPUs with a 128 mini-batch size (that is 1024 in total). A configuration sketch follows the table. |
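
The reported split is the standard ImageNet-1K train/val protocol. As a point of reference, a minimal data-loading sketch is given below, assuming torchvision's `ImageNet` wrapper and locally downloaded ILSVRC2012 archives; the authors' repository may build its pipeline differently (e.g., via timm), so treat this as illustrative only.

```python
# Illustrative sketch of the ImageNet-1K train/val split described in the
# paper. Assumes the ILSVRC2012 archives are already present under `root`;
# the released code may use a different pipeline (e.g., timm).
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.RandomResizedCrop(224),  # standard ImageNet training crop
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406),
                         std=(0.229, 0.224, 0.225)),  # ImageNet statistics
])

# ~1.28M training images; evaluation uses the 50K-image validation set.
train_set = datasets.ImageNet(root="./imagenet", split="train", transform=transform)
val_set = datasets.ImageNet(root="./imagenet", split="val", transform=transform)

train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=128, shuffle=True, num_workers=8
)  # 128 per GPU; 8 GPUs give the paper's effective batch size of 1024.
```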
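
The optimization recipe quoted above (AdamW, learning rate 0.001, weight decay 0.05, cosine schedule, 310 epochs) maps directly onto standard PyTorch APIs. The following is a minimal sketch under those stated hyperparameters, not the authors' actual training script; the stand-in model, the bare epoch loop, and the absence of warmup are assumptions, and in AdamW the reported "momentum of 0.9" corresponds to `beta1 = 0.9`.

```python
# Minimal sketch of the reported recipe: AdamW, lr 1e-3, weight decay 0.05,
# cosine decay over 310 epochs. Not the authors' script; see
# https://github.com/ma-xu/Context-Cluster for the actual implementation.
import torch
from torch import nn

# Stand-in model; in practice this would be a Context Cluster network.
model = nn.Linear(3 * 224 * 224, 1000)

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-3,             # base learning rate from the paper
    betas=(0.9, 0.999),  # beta1 = 0.9 matches the stated momentum
    weight_decay=0.05,
)

# Cosine decay across the 310 training epochs; the quoted setup does not
# state warmup details, so none are included in this sketch.
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=310)

for epoch in range(310):
    # ... one pass over the 1024-image effective batches ...
    scheduler.step()
```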