Learning Energy Networks with Generalized Fenchel-Young Losses
Authors: Mathieu Blondel, Felipe Llinares-López, Robert Dadashi, Léonard Hussenot, Matthieu Geist
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate our losses on multilabel classification and imitation learning tasks. |
| Researcher Affiliation | Industry | Mathieu Blondel, Felipe Llinares-López, Robert Dadashi, Léonard Hussenot, Matthieu Geist Google Research, Brain team {mblondel,fllinares,dadashi,hussenot,mfgeist}@google.com |
| Pseudocode | No | The paper describes algorithms and procedures, such as solving by coordinate ascent or projected gradient ascent, but it does not include formally labeled "Pseudocode" or "Algorithm" blocks. |
| Open Source Code | No | The paper does not contain an explicit statement about releasing source code for the described methodology, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | We perform experiments on 6 publicly-available datasets, see Appendix C. We use the train-test split from the dataset when provided. When not, we use 80% for training data and 20% for test data. For the MuJoCo environments, we do not tune the hyperparameters on the evaluation performance during training, but on the mean squared error on actions since the ground truth actions are available. We used a fixed learning rate of 1e-4, batch size of 256, and trained for 1000 epochs. We evaluate the learned policy every 50 epochs and save the model with the lowest evaluation mean squared error. |
| Dataset Splits | Yes | We hold out 25% of the training data for hyperparameter validation purposes. |
| Hardware Specification | No | The paper mentions using "MuJoCo Gym locomotion environments" which implies simulation, but it does not provide any specific details about the hardware (e.g., GPU models, CPU types, or memory) used to run these experiments. |
| Software Dependencies | No | The paper mentions using "ADAM" as an optimizer and implies PyTorch for implementation (Appendix C.1), but it does not specify version numbers for any software dependencies, such as PyTorch, Python, or other libraries, which are necessary for reproducibility. |
| Experiment Setup | Yes | For all models, we solve the outer problem (11) using ADAM. We set Ω(p) to the Gini negentropy. We hold out 25% of the training data for hyperparameter validation purposes. We set R(θ) in (11) to (λ/2)‖θ‖₂². For the regularization hyperparameter λ, we search 5 log-spaced values between 10^-4 and 10^1. For the learning rate parameter of ADAM, we search 10 log-spaced values between 10^-5 and 10^-1. Once we have selected the best hyperparameters, we refit the model on the entire training set. We average results over 3 runs with different seeds. |
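
Although the paper provides no formal pseudocode, the Pseudocode and Experiment Setup rows mention solving an inner problem by projected gradient ascent with Ω set to the Gini negentropy. Below is a minimal, hypothetical NumPy sketch of that idea in the simplest setting, maximizing ⟨θ, p⟩ − Ω(p) over the probability simplex with Ω(p) = 0.5·(‖p‖² − 1); the paper's energy networks are more general, and all names here are illustrative rather than taken from the paper.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex (sorting-based)."""
    u = np.sort(v)[::-1]
    cssv = np.cumsum(u) - 1.0
    ind = np.arange(1, len(v) + 1)
    cond = u - cssv / ind > 0
    rho = ind[cond][-1]
    tau = cssv[cond][-1] / rho
    return np.maximum(v - tau, 0.0)

def inner_max_pga(theta, num_steps=200, step_size=0.1):
    """Approximately solve max_p <theta, p> - Omega(p) over the simplex,
    where Omega(p) = 0.5 * (||p||^2 - 1) is the Gini negentropy."""
    d = len(theta)
    p = np.full(d, 1.0 / d)          # start from the uniform distribution
    for _ in range(num_steps):
        grad = theta - p             # gradient of the concave objective
        p = project_simplex(p + step_size * grad)
    return p

# Example: for a score vector theta, this recovers a sparse probability vector.
print(inner_max_pga(np.array([1.0, 0.5, -2.0])))
```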
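The Experiment Setup row also describes a small log-spaced hyperparameter grid validated on a 25% hold-out of the training data, followed by refitting on the full training set. The sketch below illustrates that protocol under stated assumptions; `fit_model` and `validation_loss` are hypothetical placeholders for the model-specific training and evaluation routines, not functions from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# 5 log-spaced regularization strengths in [1e-4, 1e1] and
# 10 log-spaced ADAM learning rates in [1e-5, 1e-1], as in the setup row.
lambdas = np.logspace(-4, 1, num=5)
learning_rates = np.logspace(-5, -1, num=10)

def split_train_valid(X, y, valid_fraction=0.25):
    """Hold out a fraction of the training data for hyperparameter validation."""
    n = len(X)
    perm = rng.permutation(n)
    n_valid = int(valid_fraction * n)
    valid_idx, train_idx = perm[:n_valid], perm[n_valid:]
    return X[train_idx], y[train_idx], X[valid_idx], y[valid_idx]

def grid_search(X, y, fit_model, validation_loss):
    """Pick (lambda, learning_rate) by validation loss on the held-out split."""
    X_tr, y_tr, X_va, y_va = split_train_valid(X, y)
    best_score, best_params = np.inf, None
    for lam in lambdas:
        for lr in learning_rates:
            model = fit_model(X_tr, y_tr, reg=lam, learning_rate=lr)
            score = validation_loss(model, X_va, y_va)
            if score < best_score:
                best_score, best_params = score, (lam, lr)
    # The selected values would then be used to refit on the entire training set.
    return best_params
```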