Implicit Representations for Constrained Image Segmentation
Authors: Jan Philipp Schneider, Mishal Fatima, Jovita Lukasik, Andreas Kolb, Margret Keuper, Michael Moeller
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Several numerical examples demonstrate that challenging segmentation scenarios can benefit from the inclusion of application-specific constraints, e.g. when occlusions prevent a faithful segmentation with classical approaches. [...] 3. Numerical Experiments We analyze the possible advantages of the proposed framework for two of the above constraints, namely convexity and path-connectedness, more extensively in the numerical experiments presented below. All details of our numerical experiments can be found in the appendix (B, C). |
| Researcher Affiliation | Academia | ¹University of Siegen ²University of Mannheim ³Max-Planck Institute for Informatics, Saarland Informatics Campus. Correspondence to: Jan Philipp Schneider <Jan.Schneider@uni-siegen.de>. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/jp-schneider/awesome. |
| Open Datasets | Yes | To investigate the influence of implicit convex representations numerically, we exploit the scribble-based convexity dataset (Gorelick et al., 2014). It consists of 51 images with user scribbles, and (approximately) convex foreground objects to be segmented. [...] We evaluate the use of implicit representations for our PC prior on the sequences (18 in total) from the FBMS-59 dataset (Brox & Malik, 2010; Ochs et al., 2013) where the baseline implementation (Kardoost & Keuper, 2021) segmented single objects. [...] The convexity dataset is available at https://vision.cs.uwaterloo.ca/data/. |
| Dataset Splits | No | The paper describes averaging results over runs and images, but does not explicitly provide numerical training/validation/test dataset splits like percentages or sample counts for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | Yes | Real NVP (Dinh et al., 2017; Stimper et al., 2023) is a straightforward architecture choice. [...] normflows: A pytorch package for normalizing flows. Journal of Open Source Software, 8(86):5361, 2023. |
| Experiment Setup | Yes | The networks were trained using the Adam optimizer with a learning rate of 0.02 for 3,000 optimization steps per image. For the first 200 steps, we set α = 0, to train both networks, Nθ, Gν, individually without regularization effects. This allows the networks to first individually predict stable segmentations before further joint optimization. Also, with the start of joint training, we decrease the learning rate to 0.002. We set β and γ to 0.01. [...] Secondly, we optimize Pφ,ν only for ν using the Adam optimizer with a learning rate of 1·10⁻³ and 1,000 optimization steps on Lpc seqc. This results in a Pφ,ν with an approximately convex fit on the unaries. Thirdly, we optimize Pφ,ν for ν and φ together, but now with an Adamax optimizer, a learning rate of 1·10⁻³ and a weight decay of 1·10⁻⁵ on Dφ for 4,000 optimization steps on Lpc seqp6. |
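The training schedule quoted in the Experiment Setup row (Adam at lr 0.02, 3,000 steps per image, α = 0 warm-up for 200 steps, then a learning-rate drop to 0.002 at the start of joint training) can be sketched as follows. This is a minimal, hedged illustration only: the modules standing in for Nθ and Gν, the input batch, and the placeholder losses are hypothetical and do not reflect the paper's actual architectures or objectives — only the optimizer, step counts, and scheduling follow the quoted description.

```python
import torch
from torch import nn

# Hypothetical stand-ins for the paper's networks N_theta (segmentation)
# and G_nu (constraint network); the real architectures differ.
n_theta = nn.Linear(2, 1)
g_nu = nn.Linear(2, 2)

params = list(n_theta.parameters()) + list(g_nu.parameters())
optimizer = torch.optim.Adam(params, lr=0.02)  # initial lr from the paper

total_steps, warmup_steps = 3000, 200
beta, gamma = 0.01, 0.01  # both set to 0.01 per the quoted setup

x = torch.randn(8, 2)  # placeholder input batch

for step in range(total_steps):
    # alpha = 0 during the first 200 steps: both networks train
    # individually, without the joint regularization term.
    alpha = 0.0 if step < warmup_steps else 1.0
    if step == warmup_steps:
        # With the start of joint training, decrease the learning rate.
        for group in optimizer.param_groups:
            group["lr"] = 0.002

    seg = torch.sigmoid(n_theta(x))
    flow = g_nu(x)
    # Placeholder losses; the paper's actual objectives differ.
    loss = (seg.mean()
            + beta * flow.pow(2).mean()
            + alpha * gamma * (seg - 0.5).pow(2).mean())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key scheduling detail is that the learning-rate drop and the activation of the regularization weight α happen at the same step, so the joint phase starts from a gentler optimizer.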