Proximal Deep Structured Models
Authors: Shenlong Wang, Sanja Fidler, Raquel Urtasun
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the effectiveness of our approach in the tasks of image denoising, depth refinement and optical flow estimation. |
| Researcher Affiliation | Academia | Shenlong Wang, University of Toronto, slwang@cs.toronto.edu; Sanja Fidler, University of Toronto, fidler@cs.toronto.edu; Raquel Urtasun, University of Toronto, urtasun@cs.toronto.edu |
| Pseudocode | Yes | Figure 2: Algorithm for learning proximal deep structured models. (Contains a numbered list of steps) |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code of the described methodology. |
| Open Datasets | Yes | We use the BSDS image dataset [23]. (for image denoising) We conduct the depth refinement experiment on the 7 Scenes dataset [25]. (for depth refinement) We evaluate the task of optical flow estimation on the Flying Chairs dataset [11]. (for optical flow) |
| Dataset Splits | No | The paper mentions using training and testing subsets but does not specify a validation split. |
| Hardware Specification | Yes | Our experiments are conducted on a Xeon 3.2 GHz machine with a Titan X GPU. |
| Software Dependencies | Yes | We employ mxnet [4] with CUDNN v4 acceleration to implement the networks. |
| Experiment Setup | Yes | We use mean square error as the loss function and set a weight decay strength of 0.0004 for all settings. MSRA initialization [16] is used for the convolution parameters and the initial gradient step for each iteration is set to be 0.02. We use Adam [19] with a learning rate of 0.02 and hyper-parameters β1 = 0.9 and β2 = 0.999 as in Kingma et al. [19]. The learning rate is divided by 2 every 50 epochs, and we use a mini-batch size of 32. (A hedged configuration sketch follows the table.) |
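The quoted setup translates almost line-for-line into an optimizer configuration. Below is a minimal, hypothetical MXNet Gluon sketch of those settings, not the authors' code: the two-layer convolutional stack, the random stand-in data, `train_loader`, and the epoch count are all placeholders, while the MSRA initializer, squared-error loss, Adam hyper-parameters, weight decay, batch size, and learning-rate halving follow the quotes above. (The paper predates the Gluon API; it is used here only for brevity.)

```python
import mxnet as mx
from mxnet import gluon, autograd

# Placeholder network: the paper's proximal deep structured model is not
# reconstructed here; a tiny conv stack stands in for illustration only.
net = gluon.nn.Sequential()
net.add(gluon.nn.Conv2D(channels=64, kernel_size=3, padding=1, activation='relu'),
        gluon.nn.Conv2D(channels=1, kernel_size=3, padding=1))
net.initialize(mx.init.MSRAPrelu())            # "MSRA initialization [16]"

loss_fn = gluon.loss.L2Loss()                  # squared-error loss (MSE up to a 1/2 factor)
trainer = gluon.Trainer(
    net.collect_params(), 'adam',
    {'learning_rate': 0.02,                    # initial rate from the paper
     'beta1': 0.9, 'beta2': 0.999,             # Adam hyper-parameters [19]
     'wd': 0.0004})                            # weight decay strength

batch_size = 32                                # mini-batch size from the paper

# Hypothetical stand-in data: 32 noisy/clean 64x64 image pairs.
data = mx.nd.random.uniform(shape=(32, 1, 64, 64))
train_loader = gluon.data.DataLoader(
    gluon.data.ArrayDataset(data, data), batch_size=batch_size)

for epoch in range(200):                       # epoch count is a placeholder
    if epoch > 0 and epoch % 50 == 0:          # halve the rate every 50 epochs
        trainer.set_learning_rate(trainer.learning_rate / 2)
    for noisy, clean in train_loader:
        with autograd.record():
            loss = loss_fn(net(noisy), clean)
        loss.backward()
        trainer.step(batch_size)
```

The manual halving inside the epoch loop mirrors the paper's "divided by 2 every 50 epochs" schedule; MXNet's `FactorScheduler` steps per iteration rather than per epoch, so the explicit loop keeps the correspondence with the quoted text obvious.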