Exploring Figure-Ground Assignment Mechanism in Perceptual Organization
Authors: Wei Zhai, Yang Cao, Jing Zhang, Zheng-Jun Zha
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive evaluation results demonstrate our proposed FGA mechanism can effectively enhance the capability of perception organization on various baseline models. Nevertheless, the model augmented via our proposed FGA mechanism also outperforms state-of-the-art approaches on four challenging real-world applications. |
| Researcher Affiliation | Academia | Wei Zhai (1), Yang Cao (1,3), Jing Zhang (2), Zheng-Jun Zha (1) — (1) University of Science and Technology of China; (2) The University of Sydney; (3) Institute of Artificial Intelligence, Hefei Comprehensive National Science Center |
| Pseudocode | No | The paper contains mathematical equations and architectural descriptions but no structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See supplemental material. |
| Open Datasets | Yes | We use segmentation labels from the Pascal VOC [8] dataset and the rich texture dataset (DTD [7]) to synthesize our dataset. All of the datasets we used are publicly available for research. |
| Dataset Splits | Yes | Each dataset contains 2,500 unique images with a 224×224 resolution, split into training (2,000) and test (500) sets. We follow [13] to randomly select 45 CT images as training samples, 5 images for validation, and 50 images for testing. |
| Hardware Specification | Yes | We implement our model with PyTorch, and TITAN Xp GPUs are used for training and testing. |
| Software Dependencies | No | We implement our model with PyTorch, and TITAN Xp GPUs are used for training and testing. Our model is trained for 100 epochs using the Adam [23] optimizer with an initial learning rate of 0.0001, decreased by 0.1 at 50 epochs. |
| Experiment Setup | Yes | Each model is trained using the Adam optimizer with a batch size of 16 and a learning rate of 1e-4 for the Figure-Ground Segregation test. ... We train each model for 20,000 iterations. Our model is trained for 100 epochs using the Adam [23] optimizer with an initial learning rate of 0.0001, decreased by 0.1 at 50 epochs. The batch size is 32. |
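The training schedule quoted in the Experiment Setup row (Adam, initial learning rate 1e-4, decreased by a factor of 0.1 at epoch 50, 100 epochs total) can be sketched as a simple step-decay function. This is an illustrative reconstruction of the reported hyperparameters, not the authors' code; the function name `lr_at_epoch` and its signature are our own.

```python
def lr_at_epoch(epoch, base_lr=1e-4, decay_factor=0.1, milestone=50):
    """Step-decay schedule matching the reported setup:
    Adam with lr 1e-4, multiplied by 0.1 once at epoch 50 of 100."""
    return base_lr * (decay_factor if epoch >= milestone else 1.0)
```

In PyTorch, the equivalent setup would be `torch.optim.Adam(model.parameters(), lr=1e-4)` wrapped in `torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)`, stepped once per epoch for 100 epochs.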