Monotonic Kronecker-Factored Lattice
Authors: William Taylor Bakst, Nobuyuki Morioka, Erez Louidor
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results demonstrate that KFL trains faster with fewer parameters while still achieving accuracy and evaluation speeds comparable to or better than the baseline methods and preserving monotonicity guarantees on the learned model. |
| Researcher Affiliation | Industry | Nobuyuki Morioka, Erez Louidor, William Bakst (Google Research) {nmorioka,erez,wbakst}@google.com |
| Pseudocode | Yes | We use Algorithm 1 to map a vector w ∈ ℝᴰ to a close (in L₂ norm) vector w′ that satisfies w′[1] ≤ … ≤ w′[D]. (A sketch of such a projection follows the table.) |
| Open Source Code | Yes | Open-source code for KFL has been pushed to the TensorFlow Lattice 2.0 library and can be downloaded at github.com/tensorflow/lattice. |
| Open Datasets | Yes | The first dataset is the same public Adult Income dataset (Dheeru & Karra Taniskidou, 2017) with the same monotonicity constraint setup described in Canini et al. (2016). |
| Dataset Splits | Yes | For each experiment, we train for 100 epochs with a batch size of 256 using the Adam optimizer and validate the learning rate from {0.001, 0.01, 0.1, 1.0} with five-fold cross-validation. |
| Hardware Specification | Yes | The train and evaluation times were measured on a workstation with 6 Intel Xeon W-2135 CPUs. |
| Software Dependencies | No | The paper mentions using "TensorFlow" and "TensorFlow Lattice" but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | For each experiment, we train for 100 epochs with a batch size of 256 using the Adam optimizer and validate the learning rate from {0.001, 0.01, 0.1, 1.0} with five-fold cross-validation. We use a lattice size with the same entry V for each direction and tune V from {2, 4, 8} and different settings of M from [1, 100] for KFL. |
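
The projection quoted in the Pseudocode row is, in effect, an L₂ projection onto the cone of non-decreasing vectors. A minimal sketch of such a projection via pool-adjacent-violators is below; the paper's Algorithm 1 may differ in its exact formulation, so treat this as an illustration of the stated property (closest sorted vector in L₂ norm) rather than the authors' implementation.

```python
import numpy as np

def project_monotone(w):
    """Project w onto {v : v[0] <= ... <= v[D-1]} in L2 norm.

    Pool-adjacent-violators: merge adjacent blocks whose means
    violate the ordering, then expand each block to its mean.
    """
    # Each block is (mean, count); block means stay non-decreasing.
    blocks = []
    for x in w:
        mean, count = float(x), 1
        # Merge with earlier blocks while the ordering is violated.
        while blocks and blocks[-1][0] > mean:
            prev_mean, prev_count = blocks.pop()
            mean = (prev_mean * prev_count + mean * count) / (prev_count + count)
            count += prev_count
        blocks.append((mean, count))
    return np.concatenate([np.full(c, m) for m, c in blocks])

# Example: [3, 1, 2] projects to [2, 2, 2].
print(project_monotone(np.array([3.0, 1.0, 2.0])))
```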
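For the Experiment Setup row, a minimal sketch of the described tuning loop follows, assuming a hypothetical Keras-style `build_model` factory and in-memory `X`, `y` arrays (none of these names appear in the paper). It grids the learning rate over {0.001, 0.01, 0.1, 1.0} with five-fold cross-validation, 100 epochs, and batch size 256, as the quoted setup describes.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_validate_lr(build_model, X, y,
                      learning_rates=(0.001, 0.01, 0.1, 1.0)):
    """Return the learning rate with the best mean validation loss
    over five folds, matching the paper's stated tuning grid."""
    best_lr, best_loss = None, np.inf
    for lr in learning_rates:
        fold_losses = []
        for train_idx, val_idx in KFold(n_splits=5, shuffle=True).split(X):
            # build_model is a hypothetical factory returning a model
            # compiled with the Adam optimizer at the given rate.
            model = build_model(learning_rate=lr)
            model.fit(X[train_idx], y[train_idx],
                      epochs=100, batch_size=256, verbose=0)
            fold_losses.append(model.evaluate(X[val_idx], y[val_idx],
                                              verbose=0))
        mean_loss = float(np.mean(fold_losses))
        if mean_loss < best_loss:
            best_lr, best_loss = lr, mean_loss
    return best_lr
```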