CAMBranch: Contrastive Learning with Augmented MILPs for Branching
Authors: Jiacheng Lin, Meng Xu, Zhihua Xiong, Huangang Wang
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that CAMBranch, trained with only 10% of the complete dataset, exhibits superior performance. Ablation studies further validate the effectiveness of our method. |
| Researcher Affiliation | Academia | Jiacheng Lin¹, Meng Xu², Zhihua Xiong², Huangang Wang² (¹University of Illinois Urbana-Champaign, ²Tsinghua University) |
| Pseudocode | No | The paper contains mathematical formulations and descriptions of processes, but no explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that source code for the described methodology is publicly available. |
| Open Datasets | Yes | We assess our method on four NP-hard problems, i.e., Set Covering (Balas, 1980), Combinatorial Auction (Leyton-Brown et al., 2000), Capacitated Facility Location (Cornuejols et al., 1991), and Maximum Independent Set (Cire & Augusto, 2015). (An instance-generation sketch for these four families appears below the table.) |
| Dataset Splits | No | The paper mentions 'training data' and 'test sets' (20k expert samples) but does not explicitly define a 'validation' set or its specific split percentage/count. |
| Hardware Specification | Yes | In this paper, all experiments are run on a cluster with Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz processors, 128GB RAM, and Nvidia RTX 2080Ti graphics cards. |
| Software Dependencies | Yes | In our experiments, we employed the open-source solver SCIP (version 6.0.1) (Gleixner et al., 2018) as our backend solver. (A PySCIPOpt usage sketch appears below the table.) |
| Experiment Setup | Yes | We set the hidden layer size of the GCNN network to 64. We conducted a grid search for the learning rate, considering values from {1×10⁻³, 5×10⁻⁴, 1×10⁻⁴}. Additionally, we selected the weight values λ1 = 0.05 and λ2 = 0.01 for the loss function. We utilized the Adam optimizer (Kingma & Ba, 2015) with β1 = 0.9 and β2 = 0.999. (A configuration sketch appears below the table.) |
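
The four benchmark families in the Open Datasets row follow the standard learning-to-branch setup. As a hedged illustration, the sketch below generates one instance of each family with the Ecole library; Ecole is an assumption here (the paper does not name its instance generator), and the size parameters are illustrative rather than the paper's.

```python
# Illustrative MILP instance generation with Ecole (https://www.ecole.ai).
# Using Ecole and these size parameters is an assumption; the paper only
# names the four NP-hard problem families.
import ecole

generators = {
    "set_covering": ecole.instance.SetCoverGenerator(n_rows=500, n_cols=1000),
    "combinatorial_auction": ecole.instance.CombinatorialAuctionGenerator(n_items=100, n_bids=500),
    "capacitated_facility_location": ecole.instance.CapacitatedFacilityLocationGenerator(n_customers=100, n_facilities=100),
    "maximum_independent_set": ecole.instance.IndependentSetGenerator(n_nodes=500),
}

for name, generator in generators.items():
    generator.seed(0)                       # fix the RNG for reproducible instance sets
    instance = next(generator)              # yields an ecole.scip.Model
    instance.write_problem(f"{name}_0.lp")  # save the MILP for later solver runs
```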
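Since the Software Dependencies row reports SCIP 6.0.1 as the backend solver, the sketch below shows one way a learned branching policy can be plugged into SCIP through the PySCIPOpt interface. PySCIPOpt, the `FractionalityPolicy` stub, and its `score` method are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of attaching a custom branching rule to SCIP via PySCIPOpt.
# The policy below is a dummy stand-in for a trained GCNN branching policy.
from pyscipopt import Model, Branchrule, SCIP_RESULT

class FractionalityPolicy:
    """Placeholder policy: scores candidates by LP fractionality (illustrative only)."""
    def score(self, cands, fracs):
        return fracs  # a trained policy would instead score the bipartite graph state

class LearnedBranching(Branchrule):
    def __init__(self, policy):
        self.policy = policy

    def branchexeclp(self, allowaddcons):
        # Fractional candidate variables at the current LP relaxation.
        cands, sols, fracs, *_ = self.model.getLPBranchCands()
        scores = self.policy.score(cands, fracs)
        best = max(range(len(cands)), key=lambda i: scores[i])
        self.model.branchVar(cands[best])  # branch on the highest-scoring candidate
        return {"result": SCIP_RESULT.BRANCHED}

model = Model()
model.readProblem("set_covering_0.lp")  # e.g., an instance generated above
model.includeBranchrule(
    LearnedBranching(FractionalityPolicy()),
    name="learned", desc="illustrative learned branching rule",
    priority=1_000_000, maxdepth=-1, maxbounddist=1.0,
)
model.optimize()
```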
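The Experiment Setup row fully pins down the optimizer configuration. A minimal PyTorch sketch of that setup follows; the two-layer placeholder network and the commented loss combination are assumptions standing in for the paper's GCNN and its actual loss terms.

```python
# Reported setup: hidden size 64, learning-rate grid {1e-3, 5e-4, 1e-4},
# loss weights lambda1 = 0.05 and lambda2 = 0.01, Adam with betas (0.9, 0.999).
import torch

HIDDEN_SIZE = 64
LR_GRID = [1e-3, 5e-4, 1e-4]
LAMBDA_1, LAMBDA_2 = 0.05, 0.01

# Placeholder network; the paper uses a GCNN over the MILP bipartite graph.
model = torch.nn.Sequential(
    torch.nn.Linear(HIDDEN_SIZE, HIDDEN_SIZE),
    torch.nn.ReLU(),
    torch.nn.Linear(HIDDEN_SIZE, 1),
)

for lr in LR_GRID:  # grid search over the reported learning rates
    optimizer = torch.optim.Adam(model.parameters(), lr=lr, betas=(0.9, 0.999))
    # Hypothetical weighted objective combining a main (imitation) loss with
    # two auxiliary terms, e.g.:
    #   loss = main_loss + LAMBDA_1 * aux_loss_1 + LAMBDA_2 * aux_loss_2
```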