Teaching the Old Dog New Tricks: Supervised Learning with Constraints
Authors: Fabrizio Detassis, Michele Lombardi, Michela Milano (pp. 3742-3749)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Empirical Evaluation: Here we describe our experimentation, which is designed around a few main questions: 1) How does the method work on a variety of constraints, tasks, and datasets? ... Our code and results are publicly available." |
| Researcher Affiliation | Academia | Fabrizio Detassis (1), Michele Lombardi (1), Michela Milano (1, 2); (1) DISI, University of Bologna; (2) Alma Mater Research Institute for Human-Centered Artificial Intelligence |
| Pseudocode | Yes | Algorithm 1 MOVING TARGETS |
| Open Source Code | Yes | "Our code and results are publicly available." Code available at: github.com/fabdet/moving-targets |
| Open Datasets | Yes | We test our method on seven datasets from the UCI Machine Learning repository (Dua and Graff 2017) |
| Dataset Splits | Yes | For each experiment, we perform a 5-fold cross validation (with a fixed seed). Hence, the training set for each fold will include 80% of the data. |
| Hardware Specification | Yes | All our experiments are run on an Intel Core i7 laptop with 16GB RAM and no GPU acceleration |
| Software Dependencies | Yes | we use Cplex 12.8 to solve the master problems. The network is trained with 100 epochs of RMSProp in Keras/Tensorflow 2.0 (default parameters, batch size 64). We train this approach to convergence using the CVXPY 1.1 library (with default configuration). |
| Experiment Setup | Yes | The network is trained with 100 epochs of RMSProp in Keras/Tensorflow 2.0 (default parameters, batch size 64). Empirically, α = 1, β = 0.1 seems to work well and is used for all subsequent experiments. As a ML model, we use a fully-connected, feed-forward Neural Network (NN) with two hidden layers of 32 Rectified Linear Units each. |
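The split protocol reported above (5-fold cross validation with a fixed seed, so each training fold holds 80% of the data) can be sketched as below. This is a minimal illustration of that protocol, not the authors' code; the helper name and the seed value are assumptions for the example.

```python
import numpy as np

def five_fold_indices(n_samples, seed=42):
    """Split sample indices into 5 folds using a fixed random seed.

    Returns a list of (train_idx, test_idx) pairs; each train_idx
    covers 80% of the samples and each test_idx the remaining 20%.
    """
    rng = np.random.default_rng(seed)          # fixed seed -> reproducible folds
    idx = rng.permutation(n_samples)           # shuffle once, then slice
    folds = np.array_split(idx, 5)
    splits = []
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        splits.append((train, test))
    return splits

# Example: with 1000 samples, every training fold has 800 indices (80%).
splits = five_fold_indices(1000)
train, test = splits[0]
print(len(train), len(test))  # 800 200
```

In practice the same behavior is available via `sklearn.model_selection.KFold(n_splits=5, shuffle=True, random_state=seed)`; the point of the sketch is only that fixing the seed makes every fold reproducible across runs.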