Learning dynamic polynomial proofs
Authors: Alhussein Fawzi, Mateusz Malinowski, Hamza Fawzi, Omar Fawzi
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 5 (Experimental results): "We illustrate our dynamic proving approach on the stable set problem described in Section 2. This problem has been extensively studied in the polynomial optimization literature [Lau03]. We evaluate our method against standard linear programming hierarchies considered in this field." |
| Researcher Affiliation | Collaboration | Alhussein Fawzi (DeepMind, afawzi@google.com), Mateusz Malinowski (DeepMind, mateuszm@google.com), Hamza Fawzi (University of Cambridge, hf323@cam.ac.uk), Omar Fawzi (ENS Lyon, omar.fawzi@ens-lyon.fr) |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement about open-source code availability or a link to a code repository for the methodology described. |
| Open Datasets | No | "We train our prover on randomly generated graphs of size n = 25, where an edge between nodes i and j is created with probability p ∈ [0.5, 1]." |
| Dataset Splits | No | The paper mentions training on 'randomly generated graphs' and evaluating on a 'test set', but does not provide specific details about validation data splits or how the training, validation, and test sets are partitioned. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU or CPU models used for running the experiments. |
| Software Dependencies | No | The paper mentions using DQN and refers to existing proof systems, but it does not list any specific software dependencies with version numbers (e.g., PyTorch 1.9, TensorFlow 2.x). |
| Experiment Setup | Yes | We restrict the number of steps in the dynamic proof to be at most 100 steps and limit the degree of any intermediate lemma to 2. |
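The training-data description quoted above (random graphs on n = 25 nodes, with each graph's edge probability p drawn from [0.5, 1]) can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' code; the function name `random_graph` and the use of Python's standard `random` module are assumptions, and the paper does not specify how p is sampled within the interval (uniform sampling is assumed here).

```python
import random


def random_graph(n=25, seed=None):
    """Sketch of the paper's training-graph generation: n nodes,
    with each undirected edge (i, j) included independently with
    probability p, where p is drawn from [0.5, 1] per graph
    (uniform sampling assumed)."""
    rng = random.Random(seed)
    p = rng.uniform(0.5, 1.0)  # per-graph edge probability
    edges = {(i, j)
             for i in range(n)
             for j in range(i + 1, n)
             if rng.random() < p}
    return edges, p
```

A prover trained under this setup would see a fresh graph sampled this way at each training episode; since p ≥ 0.5, the resulting graphs are fairly dense, which keeps stable sets small.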