Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods
Authors: Constantine Caramanis, Dimitris Fotakis, Alkis Kalavasis, Vasilis Kontonis, Christos Tzamos
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we investigate experimentally the effect of our main theoretical contributions, the entropy regularizer (see Equation (2)) and the fast/slow mixture scheme (see Equation (5)). |
| Researcher Affiliation | Academia | Constantine Caramanis (UT Austin & Archimedes / Athena RC) constantine@utexas.edu; Dimitris Fotakis (NTUA & Archimedes / Athena RC) fotakis@cs.ntua.gr; Alkis Kalavasis (Yale University) alvertos.kalavasis@yale.edu; Vasilis Kontonis (UT Austin) vkonton@gmail.com; Christos Tzamos (UOA & Archimedes / Athena RC) tzamos@wisc.edu |
| Pseudocode | No | The paper includes a Python class definition in the appendix (Figure 4), but it is not explicitly labeled as "Pseudocode" or an "Algorithm" block. |
| Open Source Code | Yes | For more details we refer to our full code submitted in the supplementary material. |
| Open Datasets | No | The paper states: "We generate 100 random G(n, p) graphs with n = 15 nodes and p = 0.5" and refers to "random d-regular graphs with n nodes". These are generated graphs, not specific publicly available datasets with direct access information or formal citations. |
| Dataset Splits | No | The paper describes generating graphs and training models but does not provide specific details on train/validation/test splits, such as percentages, sample counts, or predefined split references. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or cloud computing instance specifications used for the experiments. |
| Software Dependencies | No | The paper mentions a "pytorch implementation" but does not specify version numbers for PyTorch or any other software libraries or dependencies used in the experiments. |
| Experiment Setup | Yes | We perform 600 iterations and, for the entropy regularization, we progressively decrease the regularization weight, starting from 10, and dividing it by 2 every 60 iterations. We used a fast/slow mixing with mixture probability 0.2 and inverse temperature ρ = 0.03. |
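The regularization-weight schedule described in the last row can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `entropy_weight` and the 0-indexed iteration counter are assumptions, and `MIX_PROB` and `RHO` are hypothetical names for the reported fast/slow mixture hyperparameters.

```python
def entropy_weight(iteration, start=10.0, halve_every=60):
    """Entropy-regularization weight at a given iteration (0-indexed):
    start at 10 and halve every 60 iterations."""
    return start / (2 ** (iteration // halve_every))

# Reported fast/slow mixture hyperparameters (names are hypothetical):
MIX_PROB = 0.2  # mixture probability
RHO = 0.03      # inverse temperature

# Over the reported 600 iterations, the weight decays from 10
# down to 10 / 2**9 in the final block of 60 iterations.
weights = [entropy_weight(t) for t in range(600)]
```

Under this reading of the schedule, the weight stays at 10 for iterations 0-59, drops to 5 at iteration 60, and so on, with nine halvings in total by the final iteration.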