Learning Implicitly with Noisy Data in Linear Arithmetic
Authors: Alexander P. Rader, Ionela G. Mocanu, Vaishak Belle, Brendan Juba
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this work, we extend the implicit learning approach to handle partial information given by bounds (e.g., intervals), thus capturing an even larger class of problem settings, and develop an implementation strategy for a hitherto purely theoretical framework. Furthermore, we provide the first empirical investigation of this framework. Using benchmark problems, we show that our implicit approach to learning optimal linear programming objective constraints significantly outperforms an explicit approach in practice. |
| Researcher Affiliation | Academia | Alexander P. Rader (1), Ionela G. Mocanu (2), Vaishak Belle (2), and Brendan Juba (3); (1) Imperial College London, (2) University of Edinburgh, (3) Washington University in St. Louis |
| Pseudocode | Yes | Algorithm 1: Decide PAC, Algorithm 2: Optimise PAC (see the sketches after the table) |
| Open Source Code | Yes | The code can be found at https://github.com/APRader/pac-smt-arithmetic |
| Open Datasets | Yes | We use the following benchmark LP problems for our analysis: simplexn, cuben, pollution and police [Hillier and Lieberman, 1995]. |
| Dataset Splits | No | The paper mentions using '50% positive and 50% negative samples' and 'increasing sample sizes', but does not explicitly describe a separate 'validation' split or a detailed splitting methodology for training/validation/test sets. |
| Hardware Specification | Yes | The hardware we used was an Intel Core i7-6700 3.40GHz, with 16 GB of RAM running Ubuntu 20.04. |
| Software Dependencies | No | The implementation is written in Python and uses the Z3 theorem prover [de Moura and Bjørner, 2008] for SMT queries. However, the paper does not provide specific version numbers for either Python or Z3. |
| Experiment Setup | Yes | We set the accuracy for Optimise PAC to 60, which is the number of times our intervals are divided by 2. The reason is that this matches the accuracy of double-precision floating-point values, which is about 2^-53 (see the bisection sketch after the table). |
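
The pseudocode row above names Algorithm 1 (Decide PAC) and Algorithm 2 (Optimise PAC) but cannot show their structure. The following is a minimal sketch, assuming the paper's setting of interval-masked examples and Z3-based SMT queries, of how a Decide PAC style entailment check could look in z3py. The function name `decide_pac`, the encoding of each partial example as per-variable (lower, upper) bounds, and the epsilon-fraction acceptance rule are illustrative assumptions, not the authors' exact implementation.

```python
from z3 import Solver, And, Not, unsat

def decide_pac(variables, masked_examples, knowledge_base, query, epsilon):
    """Sketch (not the authors' code): accept `query` if it follows from the
    knowledge base together with all but an epsilon fraction of the
    interval-masked examples."""
    failures = 0
    for example in masked_examples:
        # Each partial example is assumed to give (lower, upper) bounds per variable.
        bounds = And([And(variables[name] >= lo, variables[name] <= hi)
                      for name, (lo, hi) in example.items()])
        solver = Solver()
        # Entailment check: KB AND bounds AND NOT(query) must be unsatisfiable.
        solver.add(knowledge_base, bounds, Not(query))
        if solver.check() != unsat:
            failures += 1
    return failures <= epsilon * len(masked_examples)
```

For instance, with a variable `x = Real('x')`, a knowledge base `x >= 0`, a masked example `{'x': (1.0, 2.0)}`, and the query `x <= 3`, the unsatisfiability check above succeeds and the query is accepted.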
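
The accuracy setting quoted in the last row (60 interval halvings, matching double precision at roughly 2^-53) corresponds to a bisection loop. The sketch below shows how an Optimise PAC style search could drive the hypothetical `decide_pac` above by binary search over the objective bound; it assumes a minimisation objective and is not the authors' exact implementation.

```python
def optimise_pac(variables, masked_examples, knowledge_base, objective,
                 lo, hi, epsilon, accuracy=60):
    """Sketch: tighten the bound `objective <= c` by bisection. After
    `accuracy` = 60 halvings the remaining interval is (hi - lo) / 2**60,
    i.e. below the ~2**-53 relative accuracy of double-precision floats."""
    for _ in range(accuracy):
        mid = (lo + hi) / 2
        if decide_pac(variables, masked_examples, knowledge_base,
                      objective <= mid, epsilon):
            hi = mid   # the bound mid is PAC-entailed; try a tighter one
        else:
            lo = mid   # mid is too tight; relax
    return hi
```

The choice of 60 iterations rather than exactly 53 simply leaves some margin so that the bisection width is no coarser than what double-precision arithmetic can represent.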