Learning to Annotate Part Segmentation with Gradient Matching
Authors: Yu Yang, Xiaotian Cheng, Hakan Bilen, Xiangyang Ji
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our method is evaluated on semi-supervised part segmentation tasks and significantly outperforms other semi-supervised competitors when the number of labelled examples is extremely limited. |
| Researcher Affiliation | Academia | Yu Yang, Department of Automation, Tsinghua University, BNRist, yang-yu16@mails.tsinghua.edu.cn; Xiaotian Cheng, Department of Automation, Tsinghua University, BNRist, cxt20@mails.tsinghua.edu.cn; Hakan Bilen, School of Informatics, University of Edinburgh, hbilen@ed.ac.uk; Xiangyang Ji, Department of Automation, Tsinghua University, BNRist, xyji@tsinghua.edu.cn |
| Pseudocode | Yes | Algorithm 1: Learning to annotate with gradient matching. Algorithm 2: Learning to annotate with K-step MAML. (A hedged sketch of a gradient-matching objective is given below the table.) |
| Open Source Code | Yes | Code is available at https://github.com/yangyu12/lagm. |
| Open Datasets | Yes | We evaluate our method on six part segmentation datasets: CelebA, Pascal-Horse, Pascal-Aeroplane, Car-20, Cat-16, and Face-34. CelebA (Liu et al., 2015)... Pascal Part (Chen et al., 2014)... LSUN (Yu et al., 2015)... Car-20, Cat-16, and Face-34, released by Zhang et al. (2021)... |
| Dataset Splits | Yes | We finally obtain 180 training images, 34 validation images, and 225 test images to constitute Pascal-Horse, and 180 training images, 78 validation images, and 269 test images to constitute Pascal-Aeroplane. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) are given; the paper only uses general terms such as 'GPU memory'. |
| Software Dependencies | No | The paper mentions software such as PyTorch, DeepLabv3, and U-Net, and links to a GitHub repository for StyleGAN models, but does not provide version numbers for these dependencies (e.g., 'PyTorch library' without a version). |
| Experiment Setup | Yes | The annotator and the segmentation network are optimized with an SGD optimizer with learning rate 0.001 and momentum 0.9. By default, we jointly train an annotator and a segmentation network with K = 1 and batch size 2 for 150,000 steps. (A joint-training sketch using these settings is given below the table.) |
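
The Pseudocode row above refers to the paper's gradient-matching objective (Algorithm 1). The snippet below is a minimal, hedged PyTorch sketch of such an objective, not the authors' implementation: it assumes a cosine-distance matching criterion, soft pseudo-labels produced by a differentiable annotator, and hypothetical names (`seg_net`, `pseudo_probs`); the reference code lives in the linked `yangyu12/lagm` repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gradient_matching_loss(seg_net, real_imgs, real_masks, synth_imgs, pseudo_probs):
    """Sketch of a gradient-matching objective (cosine distance assumed):
    align the segmentation network's gradients on annotator-labelled synthetic
    images with its gradients on the few real labelled images."""
    params = [p for p in seg_net.parameters() if p.requires_grad]

    # Target gradients from the real labelled batch (constants w.r.t. the annotator).
    real_loss = F.cross_entropy(seg_net(real_imgs), real_masks)
    g_real = [g.detach() for g in torch.autograd.grad(real_loss, params)]

    # Gradients from synthetic images with soft pseudo-labels; create_graph=True
    # keeps the graph so the matching loss can back-propagate into the annotator.
    log_probs = F.log_softmax(seg_net(synth_imgs), dim=1)
    synth_loss = -(pseudo_probs * log_probs).sum(dim=1).mean()
    g_synth = torch.autograd.grad(synth_loss, params, create_graph=True)

    # Sum of per-tensor cosine distances between the two gradient sets.
    return sum(1.0 - F.cosine_similarity(gr.flatten(), gs.flatten(), dim=0)
               for gr, gs in zip(g_real, g_synth))

# Toy usage with a 1x1-conv "segmentation network" and random data.
seg_net = nn.Conv2d(3, 5, kernel_size=1)
real = torch.randn(2, 3, 32, 32)
real_mask = torch.randint(0, 5, (2, 32, 32))
synth = torch.randn(2, 3, 32, 32)
pseudo = torch.softmax(torch.randn(2, 5, 32, 32, requires_grad=True), dim=1)  # stand-in annotator output
loss = gradient_matching_loss(seg_net, real, real_mask, synth, pseudo)
loss.backward()  # gradients flow back into the (toy) annotator output
```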
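
The Experiment Setup row reports SGD with learning rate 0.001, momentum 0.9, batch size 2, K = 1, and 150,000 joint training steps. The skeleton below is only an illustrative sketch of such a joint schedule, using stand-in 1x1-conv modules, dummy data, and a placeholder annotator loss where the gradient-matching objective above would go; the actual models (a StyleGAN-feature-based annotator and a DeepLabv3/U-Net segmentation network) come from the authors' repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in modules; the real annotator builds on frozen StyleGAN features and the
# segmentation network is e.g. DeepLabv3 or U-Net in the authors' code.
annotator = nn.Conv2d(3, 5, kernel_size=1)
seg_net = nn.Conv2d(3, 5, kernel_size=1)

# Hyper-parameters reported in the paper: SGD, lr 0.001, momentum 0.9,
# batch size 2, K = 1 inner step, 150,000 joint training steps.
opt_ann = torch.optim.SGD(annotator.parameters(), lr=1e-3, momentum=0.9)
opt_seg = torch.optim.SGD(seg_net.parameters(), lr=1e-3, momentum=0.9)
BATCH_SIZE, TOTAL_STEPS, K = 2, 150_000, 1

for step in range(TOTAL_STEPS):
    # Dummy batch; in practice these are GAN samples and the few labelled images.
    synth = torch.randn(BATCH_SIZE, 3, 64, 64)
    real = torch.randn(BATCH_SIZE, 3, 64, 64)
    real_mask = torch.randint(0, 5, (BATCH_SIZE, 64, 64))

    # (1) Annotator update: in the paper this minimises the gradient-matching
    # objective; a plain supervised loss is used here purely as a placeholder.
    opt_ann.zero_grad()
    ann_loss = F.cross_entropy(annotator(real), real_mask)  # placeholder objective
    ann_loss.backward()
    opt_ann.step()

    # (2) Segmentation-network update on synthetic images pseudo-labelled by the
    # (now fixed) annotator, repeated K times.
    for _ in range(K):
        with torch.no_grad():
            pseudo = annotator(synth).argmax(dim=1)
        opt_seg.zero_grad()
        seg_loss = F.cross_entropy(seg_net(synth), pseudo)
        seg_loss.backward()
        opt_seg.step()

    break  # the sketch runs a single step; the paper trains for all 150,000
```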