Warm-starting Push-Relabel
Authors: Sami Davies, Sergei Vassilvitskii, Yuyan Wang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Section 4, we run our warm-start Push-Relabel against a cold-start version. We see that the warm-start improves over the cold-start by a larger percentage as the size of the image increases, and Table 1 shows average running times for both the Ford-Fulkerson of [DMVW23] and Push-Relabel. |
| Researcher Affiliation | Collaboration | Sami Davies, Sergei Vassilvitskii, Yuyan Wang; UC Berkeley and Google Research New York; davies@berkeley.edu, {sergeiv,wangyy}@google.com |
| Pseudocode | Yes | Algorithm 1 Push-Relabel, Algorithm 2 Warm-start Push-Relabel with Gap Relabeling, Algorithm 3 Moving all excess to the s-side of the cut, Algorithm 4 Define Heights, Algorithm 5 Find a Cut-saturating Pseudo-flow and Compute η, Algorithm 6 Moving all deficit to the t-side of the cut. (A minimal sketch of the core routine appears below the table.) |
| Open Source Code | Yes | Code is uploaded in the supplementary files. |
| Open Datasets | Yes | Our image groups are from the Pattern Recognition and Image Processing dataset from the University of Freiburg, and are titled BIRDHOUSE, HEAD, SHOE, and DOG. (1) https://lmb.informatik.uni-freiburg.de/resources/datasets/sequences.en.html (2) https://lmb.informatik.uni-freiburg.de/resources/datasets/Stereo_Egomotion.en.html |
| Dataset Splits | No | From each group, we consider 10 images and crop them to be either 600 × 600 or 500 × 500 pixel images... We rescale... to be N × N pixels... Experiments are performed for N ∈ {30, 60, 120, 240, 480}. The first image in the sequence is solved from scratch. For the second image, we reuse the optimal flow and cut from the first image; in general, for the ith image in the sequence, we reuse the optimal flow and cut from the (i−1)st image. The paper describes the image preprocessing and this sequential protocol (sketched below the table), but not typical train/val/test splits, since no model is trained or evaluated in the ML sense. |
| Hardware Specification | No | The paper describes implementation choices and mentions running experiments, but does not provide specific details on the hardware used (e.g., CPU/GPU models, memory, etc.). |
| Software Dependencies | No | The paper mentions implementing heuristics (gap relabeling, global relabeling) and uses a Push-Relabel subroutine, but does not list specific software dependencies or their version numbers (e.g., programming languages, libraries, frameworks). |
| Experiment Setup | Yes | Throughout the experiments, whenever the Push-Relabel subroutine is called on any auxiliary graph, it is implemented with the gap relabeling heuristic, as shown in Algorithm 2, and the global relabeling heuristic, which occasionally updates the heights to be a node's distance from t in the residual graph. These heuristics are known to improve the performance of Push-Relabel. As a tie-breaker for choosing the next active node to push from, we choose the one with the highest height, which is known to improve the running time of Push-Relabel. We reuse the old max flow on the new network by rounding down the flow on edges whose capacity has decreased, hence producing excesses and deficits, and pass this network and flow to the warm-start Push-Relabel algorithm in Section 3 (the reuse step is sketched below the table). |
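
To make the Pseudocode and Experiment Setup rows concrete, here is a minimal Python sketch of the core Push-Relabel loop with the two ingredients the experiments rely on: highest-height selection of the next active node and the gap relabeling heuristic. This is a generic textbook rendering, not the authors' released code; the graph representation (a capacity dict with a skew-symmetric flow dict) is our assumption.

```python
from collections import defaultdict

def max_flow_push_relabel(n, cap, s, t):
    """Highest-label Push-Relabel with the gap heuristic.

    n   -- number of nodes, labeled 0..n-1
    cap -- dict mapping directed edge (u, v) -> capacity
    Returns (max-flow value, skew-symmetric flow dict).
    """
    flow = defaultdict(int)
    adj = defaultdict(set)
    for (u, v) in cap:
        adj[u].add(v)
        adj[v].add(u)                      # reverse arcs for the residual graph

    def residual(u, v):
        return cap.get((u, v), 0) - flow[(u, v)]

    height = {v: 0 for v in range(n)}
    excess = {v: 0 for v in range(n)}
    height[s] = n
    # Saturate all arcs out of s to create the initial pre-flow.
    for v in adj[s]:
        delta = residual(s, v)
        if delta > 0:
            flow[(s, v)] += delta
            flow[(v, s)] -= delta
            excess[v] += delta
            excess[s] -= delta

    count = defaultdict(int)               # nodes per height, for the gap heuristic
    for v in range(n):
        count[height[v]] += 1

    active = {v for v in range(n) if v not in (s, t) and excess[v] > 0}
    while active:
        u = max(active, key=lambda v: height[v])   # highest-label rule
        pushed = False
        for v in adj[u]:
            if residual(u, v) > 0 and height[u] == height[v] + 1:
                delta = min(excess[u], residual(u, v))
                flow[(u, v)] += delta
                flow[(v, u)] -= delta
                excess[u] -= delta
                excess[v] += delta
                if v not in (s, t) and excess[v] > 0:
                    active.add(v)
                pushed = True
                if excess[u] == 0:
                    break
        if excess[u] == 0:
            active.discard(u)
        elif not pushed:
            # Relabel u, then apply the gap heuristic.
            old = height[u]
            height[u] = 1 + min(height[v] for v in adj[u] if residual(u, v) > 0)
            count[old] -= 1
            count[height[u]] += 1
            if count[old] == 0 and old < n:
                # Gap: no node left at height `old`, so every node strictly
                # between `old` and n can no longer reach t; lift it past n.
                for v in range(n):
                    if old < height[v] < n and v not in (s, t):
                        count[height[v]] -= 1
                        height[v] = n + 1
                        count[height[v]] += 1
    return sum(flow[(s, v)] for v in adj[s]), flow
```

The `max` over the active set is the highest-label rule in its simplest form; serious implementations keep height-indexed buckets so selection takes constant time, and add the global relabeling heuristic (a periodic BFS from t to reset heights), which this sketch omits.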
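The Experiment Setup row describes reusing the old max flow by rounding down the flow on edges whose capacity has decreased, which leaves excesses and deficits behind. Below is a minimal sketch of that construction, under our assumption that the warm-start routine accepts a pseudo-flow plus a per-node imbalance map.

```python
from collections import defaultdict

def round_flow_to_new_capacities(old_flow, new_cap):
    """Reuse the previous max flow on a re-weighted network.

    Wherever an edge's new capacity is below its old flow, the flow is
    rounded down to the capacity.  The result is generally infeasible:
    nodes with more flow in than out carry an excess, nodes with more
    out than in carry a deficit.  This pseudo-flow is what the
    warm-start Push-Relabel algorithm repairs.
    """
    flow = {}
    for e, f in old_flow.items():
        # max(f, 0) also tolerates the skew-symmetric flow dict returned
        # by max_flow_push_relabel above (negative reverse entries drop out).
        flow[e] = min(max(f, 0), new_cap.get(e, 0))

    imbalance = defaultdict(int)           # >0 means excess, <0 means deficit
    for (u, v), f in flow.items():
        imbalance[u] -= f                  # flow leaving u
        imbalance[v] += f                  # flow entering v
    # By convention s and t are exempt: their imbalance is the flow value.
    return flow, imbalance
```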
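Finally, the Dataset Splits row describes the sequential protocol: the first frame is solved cold, and frame i is warm-started from frame i−1's optimal flow. A hypothetical driver is sketched below; `build_seg_network` (image to s-t network) and `warm_start_push_relabel` (standing in for the paper's Algorithm 2) are placeholder names passed in as arguments, not functions from the released code.

```python
def segment_sequence(images, build_seg_network, warm_start_push_relabel):
    """Solve a sequence of segmentation instances, warm-starting each
    frame from its predecessor's flow, mirroring the experimental setup."""
    values, prev_flow = [], None
    for img in images:
        n, cap, s, t = build_seg_network(img)
        if prev_flow is None:
            value, flow = max_flow_push_relabel(n, cap, s, t)     # cold start
        else:
            flow0, imb = round_flow_to_new_capacities(prev_flow, cap)
            value, flow = warm_start_push_relabel(n, cap, s, t, flow0, imb)
        prev_flow = flow
        values.append(value)
    return values
```

Passing the two placeholders as parameters keeps the sketch self-contained while making explicit which pieces are the paper's own (the network construction and Algorithm 2) rather than ours.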