Structured Output Learning with Conditional Generative Flows
Authors: You Lu, Bert Huang (pp. 5005–5012)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In our experiments, we test c-Glow on five different tasks. C-Glow outperforms the state-of-the-art baselines in some tasks and predicts comparable outputs in the other tasks. The results show that c-Glow is versatile and is applicable to many different structured prediction problems. |
| Researcher Affiliation | Academia | You Lu Department of Computer Science Virginia Tech Blacksburg, VA you.lu@vt.edu Bert Huang Department of Computer Science Virginia Tech Blacksburg, VA bhuang@vt.edu |
| Pseudocode | No | The paper describes the model components and learning process verbally but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code of their proposed method (c-Glow). It mentions using code for FCN from a GitHub link in a footnote: '1We use code from https://github.com/wkentaro/pytorch-fcn.' |
| Open Datasets | Yes | We use the Weizmann Horse Image Database (Borenstein and Ullman 2002); We use the Labeled Faces in the Wild (LFW) dataset (Huang, Jain, and Learned-Miller 2007; Kae et al. 2013); we conduct color image denoising on the BSDS500 dataset (Arbelaez et al. 2010); we use the seven scenes dataset (Newcombe et al. 2011); from the Celeb A dataset (Liu et al. 2015) |
| Dataset Splits | Yes | We use the same training, validation, and test split as previous works (Kae et al. 2013; Gygli, Norouzi, and Angelova 2017), and super-pixel accuracy (SPA) as our metric. |
| Hardware Specification | No | The paper mentions "NVIDIA's GPU Grant Program and Amazon's AWS Cloud Credits for Research program for their support." but does not specify particular GPU or CPU models, or detailed cloud instance types for the experiments. |
| Software Dependencies | No | The paper mentions using "Adam (Kingma and Ba 2014) to tune the learning rates" but does not specify version numbers for any software libraries, frameworks, or dependencies used in their implementation. |
| Experiment Setup | Yes | We use Adam (Kingma and Ba 2014) to tune the learning rates, with α = 0.0002, β1 = 0.9, and β2 = 0.999. We set the mini-batch size to be 2. For the experiments on small datasets, i.e., semantic segmentation and image denoising, we run the program for 5 × 10⁴ iterations to guarantee the algorithms have fully converged. For the experiments on inpainting, the training set is large, so we run the program for 3 × 10⁵ iterations. For c-Glow, we set L = 3, K = 8, nc = 64, and nw = 128. |
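The optimizer settings quoted above (α = 0.0002, β1 = 0.9, β2 = 0.999) are standard Adam hyperparameters. As a minimal sketch of what those values mean, the snippet below implements one scalar Adam update with the paper's reported constants; the parameter value and gradient are illustrative placeholders, not anything from the paper's experiments.

```python
import math

def adam_step(theta, grad, m, v, t,
              alpha=2e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a single scalar parameter.

    alpha, beta1, beta2 default to the values the paper reports
    for training c-Glow; eps is the usual numerical-stability term.
    """
    m = beta1 * m + (1 - beta1) * grad       # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    theta = theta - alpha * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Illustrative placeholder run: three updates on a constant gradient.
theta, m, v = 1.0, 0.0, 0.0
for t in range(1, 4):
    theta, m, v = adam_step(theta, grad=0.5, m=m, v=v, t=t)
```

With a constant positive gradient, each step moves the parameter down by roughly the learning rate α, since the bias-corrected moments make the effective step size ≈ α early in training.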