Regularized Frank-Wolfe for Dense CRFs: Generalizing Mean Field and Beyond
Authors: Đ.Khuê Lê-Huu, Karteek Alahari
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate this in our empirical results on standard semantic segmentation datasets, where several instantiations of our regularized Frank-Wolfe outperform mean field inference, both as a standalone component and as an end-to-end trainable layer in a neural network. |
| Researcher Affiliation | Academia | Đ.Khuê Lê-Huu, Karteek Alahari, Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, 38000 Grenoble, France {khue.le,karteek.alahari}@inria.fr |
| Pseudocode | Yes | Algorithm 1 Generic regularized Frank-Wolfe for (approximately) solving MAP inference (6). |
| Open Source Code | Yes | Our source code is made publicly available under the GNU general public license for this purpose.1 1https://github.com/netw0rkf10w/CRF |
| Open Datasets | Yes | We first pretrain DeepLabv3 and DeepLabv3+ on the COCO dataset [46] and then finetune them on PASCAL VOC (trainaug) and Cityscapes (train) to obtain similar results to previous work [16, 17] (Table 1, CNN column). |
| Dataset Splits | Yes | We first pretrain DeepLabv3 and DeepLabv3+ on the COCO dataset [46] and then finetune them on PASCAL VOC (trainaug) and Cityscapes (train) to obtain similar results to previous work [16, 17] (Table 1, CNN column). ...Table 1 shows the performance on the validation sets of PASCAL VOC and Cityscapes... |
| Hardware Specification | No | The paper states: 'The experiments were performed using HPC resources from GENCI-IDRIS (Grants 2020-AD011011321 and 2020AD011011881).' However, it does not specify concrete hardware details such as specific GPU or CPU models, memory sizes, or detailed cloud instance types used for the experiments. |
| Software Dependencies | Yes | Our implementation builds on PyTorch 1.7.0 and mmsegmentation [2]. |
| Experiment Setup | Yes | We train the model for 20 epochs with 5 CRF iterations, using the same poly schedule as before. ...We set its learning rate to a small value of 0.0001. For the CRF, we tried 4 different values of the initial learning rate, ∈ {1.0, 0.1, 0.01, 0.001}... |
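The pseudocode row above cites the paper's Algorithm 1, a generic regularized Frank-Wolfe scheme that recovers mean field as a special case. The sketch below is a hedged, minimal illustration of that idea, not the paper's actual algorithm: it runs Frank-Wolfe on the probability simplex where the usual linear minimization oracle is replaced by an entropy-regularized one, whose closed form is a softmax. The objective, the step-size rule, and the regularization weight `lam` are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over a vector
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy_regularized_fw(grad_f, x0, n_iters=50, lam=1.0):
    """Toy entropy-regularized Frank-Wolfe on the probability simplex.

    At each step, the regularized oracle
        argmin_s <g, s> + lam * sum_i s_i log s_i   s.t. s in simplex
    has the closed form s = softmax(-g / lam), so the per-iteration
    update is a softmax step -- the connection to mean-field-style
    updates that motivates this sketch.
    """
    x = x0.copy()
    for t in range(n_iters):
        g = grad_f(x)
        s = softmax(-g / lam)          # regularized LMO (closed form)
        gamma = 2.0 / (t + 2)          # standard Frank-Wolfe step size
        x = x + gamma * (s - x)        # convex combination stays feasible
    return x
```

As a usage example, minimizing the toy quadratic `f(x) = 0.5 * ||x - p||^2` with `grad_f = lambda x: x - p` drives the iterate toward an entropy-smoothed version of `p` while keeping it on the simplex throughout.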
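The experiment-setup row mentions a "poly schedule" for the learning rate. For readers unfamiliar with it, this is the polynomial decay commonly used in DeepLab-style training; the sketch below assumes the usual exponent of 0.9, which is a conventional default and not a value confirmed by the excerpt above.

```python
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    # Polynomial ("poly") decay: lr shrinks from base_lr to 0 over training.
    # power=0.9 is the common DeepLab default (an assumption here).
    return base_lr * (1.0 - cur_iter / max_iter) ** power
```

For example, with `base_lr=0.01` the schedule returns 0.01 at iteration 0 and decays smoothly to 0 at `max_iter`.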