Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
A Solver-free Framework for Scalable Learning in Neural ILP Architectures
Authors: Yatin Nandwani, Rishabh Ranjan, Mausam, Parag Singla
NeurIPS 2022 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present several experiments on problems which require learning of ILP constraints and cost, with both symbolic as well as perceptual input. These include solving a symbolic sudoku as well as visual sudoku... Experiments on several problems, both perceptual as well as symbolic, which require learning the constraints of an ILP, show that our approach has superior performance and scales much better compared to purely neural baselines and other state-of-the-art models that require solver-based training. |
| Researcher Affiliation | Academia | Yatin Nandwani, Rishabh Ranjan, Mausam & Parag Singla, Department of Computer Science, Indian Institute of Technology Delhi, INDIA |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code available at: https://github.com/dair-iitd/ilploss |
| Open Datasets | Yes | For 9x9 sudokus, we use a standard dataset from Kaggle [Park, 2017], for k = 4 we use publicly available data from Arcot and Kalluraya [2019], and for k = 6 we use the data generation process described in Nandwani et al. [2022]. We randomly select 10,000 samples for training, and 1000 samples for testing for each k. To generate the input images for visual-sudoku, we use the official train and test split of MNIST [Deng, 2012]. |
| Dataset Splits | No | We randomly select 10,000 samples for training, and 1000 samples for testing for each k. (No explicit mention of validation set size or splitting method, though it refers to 'best val set accuracy') |
| Hardware Specification | No | We thank IIT Delhi HPC facility for computational resources. ... See appendix for the details of the ILP solver used in our experiments, the hardware specifications, the hyper-parameters, and various other design choices. (The main paper text does not provide specific hardware models.) |
| Software Dependencies | No | See appendix for the details of the ILP solver used in our experiments... Gurobi Optimization, LLC. Gurobi Optimizer Reference Manual, 2022. URL https://www.gurobi.com. (No specific software version numbers are provided in the main text). |
| Experiment Setup | No | µ+ and µ− are the hyperparameters representing the margins for the positive and the negative points respectively. and The temperature parameter τ needs to be annealed as the training progresses. and We pick an ε small enough.... However, it also states: See the appendix for the details... the hyper-parameters, and various other design choices. The main text describes the type of hyperparameters but does not provide concrete values for them. |