Variance Regularized Counterfactual Risk Minimization via Variational Divergence Minimization
Authors: Hang Wu, May Wang
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental evaluation on benchmark datasets shows significant improvements over conventional baselines and is consistent with our theoretical results. For the empirical evaluation of our proposed algorithms, we follow the supervised-learning-to-bandit-feedback conversion method (Agarwal et al., 2014) (a sketch of this conversion follows the table). |
| Researcher Affiliation | Academia | Georgia Institute of Technology, Atlanta, GA, USA. Correspondence to: Hang Wu <hangwu@gatech.edu>. |
| Pseudocode | Yes | Algorithm 1: Variational Minimization of D_f(h ∥ h0; P(X)) and Algorithm 2: Minimizing the Variance-Regularized Risk (Separate Training) (a sketch of the variational bound follows the table). |
| Open Source Code | Yes | Code for reproducing the results can be found at https://github.com/hang-wu/VRCRM. |
| Open Datasets | Yes | We use four multi-label classification datasets collected in the UCI machine learning repository (Asuncion & Newman, 2007). |
| Dataset Splits | Yes | The hyperparameters are selected by performance on the validation set, and more details of their methods can be found in the original paper (Swaminathan & Joachims, 2015a). |
| Hardware Specification | Yes | We used PyTorch to implement the pipelines and trained the networks on Nvidia K80 GPU cards. |
| Software Dependencies | No | The paper mentions 'PyTorch' and 'Adam' but does not specify version numbers for these or any other software dependencies needed to replicate the experiments. |
| Experiment Setup | Yes | The networks are trained with Adam (Kingma & Ba, 2014) with learning rates of 0.001 and 0.01 for the reweighted loss and the divergence-minimization part, respectively (a sketch of this optimizer setup follows the table). |
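Below is a minimal sketch of the supervised-to-bandit conversion the paper follows (Agarwal et al., 2014): a logging policy is replayed over a supervised dataset, and only the loss of the sampled action and its propensity are kept. The function name `to_bandit_feedback` and the NumPy setup are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of the supervised-to-bandit-feedback conversion.
import numpy as np

def to_bandit_feedback(X, y_true, logging_policy, rng=None):
    """Replay a logging policy over a supervised dataset.

    For each example x, sample a label from the logging policy's
    distribution, then record the observed 0/1 loss against the true
    label and the propensity of the sampled action.
    """
    rng = rng or np.random.default_rng(0)
    probs = logging_policy(X)                    # (n, k) action probabilities
    n, k = probs.shape
    actions = np.array([rng.choice(k, p=p) for p in probs])
    losses = (actions != y_true).astype(float)   # loss of the sampled action
    propensities = probs[np.arange(n), actions]  # logger's prob. of that action
    return actions, losses, propensities
```

The resulting tuples (x, action, loss, propensity) form the logged bandit dataset on which the counterfactual risk is estimated.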
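Algorithm 1 minimizes an f-divergence through its variational (Fenchel-dual) lower bound, D_f(h ∥ h0) ≥ sup_T E_h[T(x)] − E_{h0}[f*(T(x))]. The PyTorch sketch below instantiates the bound for the KL divergence, whose conjugate is f*(t) = exp(t − 1); the critic architecture and sample placeholders are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of the variational f-divergence bound (KL case).
import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))

def kl_variational_bound(policy_samples, logger_samples):
    # E_h[T(x)] - E_{h0}[exp(T(x) - 1)] is a lower bound on KL(h || h0).
    t_policy = critic(policy_samples)
    t_logger = critic(logger_samples)
    return t_policy.mean() - torch.exp(t_logger - 1).mean()

x_h = torch.randn(32, 10)    # stand-in samples drawn under the policy h
x_h0 = torch.randn(32, 10)   # stand-in samples drawn under the logger h0
bound = kl_variational_bound(x_h, x_h0)
```

Training alternates GAN-style: the critic ascends the bound to tighten it, while the policy descends it to shrink the divergence.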
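Finally, a minimal sketch of the reported optimizer setup: Adam with learning rate 0.001 for the reweighted-loss part and 0.01 for the divergence-minimization part. The module names `policy` and `critic` are placeholders, not the authors' identifiers.

```python
# Hypothetical sketch of the two-optimizer training setup quoted above.
import torch

policy = torch.nn.Linear(10, 5)   # stand-in for the policy network
critic = torch.nn.Linear(10, 1)   # stand-in for the variational critic

opt_loss = torch.optim.Adam(policy.parameters(), lr=1e-3)  # reweighted loss
opt_div = torch.optim.Adam(critic.parameters(), lr=1e-2)   # divergence part
```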