From Intervention to Domain Transportation: A Novel Perspective to Optimize Recommendation
Authors: Da Xu, Yuting Ye, Chuanwei Ruan, Evren Korpeoglu, Sushant Kumar, Kannan Achan
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we carry out extensive real-data and semi-synthetic experiments to demonstrate the advantage of our approach, and launch online testing with a real-world IR system. |
| Researcher Affiliation | Collaboration | Da Xu, Walmart Labs, Sunnyvale, CA 94086, USA, daxu5180@gmail.com; Yuting Ye, Division of Biostatistics, University of California, Berkeley, Berkeley, CA 94720, USA, yeyt@berkeley.edu; Chuanwei Ruan, Instacart, San Francisco, CA 94107, USA, ruanchuanwei@gmail.com |
| Pseudocode | Yes | Algorithm 1: Batch-wise GDA for Minimax Optimization (a hedged GDA sketch follows the table) |
| Open Source Code | Yes | All the implementation codes are provided in the supplement material. |
| Open Datasets | Yes | Benchmark datasets. We refer to the MovieLens-1M, LastFM, and Goodreads datasets that are extensively employed by the IR literature. ... MovieLens-1M: http://files.grouplens.org/datasets/movielens/ml-1m.zip ... LastFM: http://files.grouplens.org/datasets/hetrec2011/hetrec2011-lastfm-2k.zip ... Goodreads: the book recommendation dataset is obtained from the users' public shelves on Goodreads.com. |
| Dataset Splits | Yes | We adopt the widely-acknowledged approach for splitting the dataset to training, validation and testing as in He et al. (2017) and Rendle et al. (2020). In particular, we leverage the ordering information and use the last interaction of each user for testing, the second-to-last interaction for validation, and the previous interactions for training. (A split sketch follows the table.) |
| Hardware Specification | Yes | The computations are conducted on a Linux cluster with 2 NVIDIA V100 GPUs (each with 32 GB memory) and 32 CPUs with a total memory of 100 GB. |
| Software Dependencies | No | The paper states: 'All the models are implemented with the auto-differentiation framework of PyTorch.' While PyTorch is named, no version number is provided, nor are versions given for other components such as the SparseAdam and Adam optimizers. (An optimizer sketch follows the table.) |
| Experiment Setup | Yes | For all the baseline models, we select the initial learning rate from {0.001, 0.005, 0.01, 0.05, 0.1}, and the ℓ2 regularization parameter from {0, 0.01, 0.05, 0.1, 0.2, 0.3}. ... The hyperparameters of the base models persist to our setting, in addition to the penalization parameter λ (in eq.(4)), which we select from {0.005, 0.1, 0.3, 0.5}. ... For all the experiments, we use the binary cross-entropy loss function for classification, and apply early stopping during training when the Rel@K metric does not improve for more than five epochs. ... In practice, we select the ratio among {1 : 1 : 10, 1 : 5 : 10, 1 : 10 : 10, 1 : 10 : 5} for our experiments. (An early-stopping sketch follows the table.) |
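
The Algorithm 1 referenced in the pseudocode row is a batch-wise gradient descent-ascent (GDA) routine for minimax optimization. Below is a minimal sketch of the generic batch-wise GDA pattern in PyTorch; `model`, `critic`, `loader`, and `loss_fn` are illustrative placeholders, not the paper's actual transportation objective.

```python
# A minimal sketch of batch-wise gradient descent-ascent (GDA) for a minimax
# problem min_theta max_phi L(theta, phi). `model`, `critic`, `loader`, and
# `loss_fn` are hypothetical placeholders, not the paper's actual objective.
import torch

def batchwise_gda(model, critic, loader, loss_fn,
                  lr_min=1e-3, lr_max=1e-3, epochs=10):
    opt_min = torch.optim.Adam(model.parameters(), lr=lr_min)
    opt_max = torch.optim.Adam(critic.parameters(), lr=lr_max)
    for _ in range(epochs):
        for batch in loader:
            # Ascent step: maximize the loss over the critic's parameters
            # by descending on its negation.
            opt_max.zero_grad()
            (-loss_fn(model, critic, batch)).backward()
            opt_max.step()
            # Descent step: minimize the (recomputed) loss over the model.
            opt_min.zero_grad()
            loss_fn(model, critic, batch).backward()
            opt_min.step()
    return model
```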
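
The dataset-splits row describes the standard leave-one-out protocol of He et al. (2017). A minimal sketch, assuming the interactions sit in a pandas DataFrame with hypothetical `user_id` and `timestamp` columns:

```python
# A minimal sketch of the leave-one-out split quoted above: per user, the last
# interaction is held out for testing, the second-to-last for validation, and
# the rest form the training set. Column names are hypothetical.
import pandas as pd

def leave_one_out_split(df: pd.DataFrame, user_col="user_id", time_col="timestamp"):
    df = df.sort_values([user_col, time_col])
    # Position of each row counting back from the user's latest interaction.
    pos_from_end = df.groupby(user_col).cumcount(ascending=False)
    test = df[pos_from_end == 0]
    val = df[pos_from_end == 1]
    train = df[pos_from_end >= 2]
    return train, val, test
```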
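
On the software-dependency gap: PyTorch ships both `torch.optim.Adam` and `torch.optim.SparseAdam`. A common arrangement, assumed here rather than taken from the paper's code, routes sparse embedding gradients to SparseAdam and dense parameters to Adam:

```python
# PyTorch provides both torch.optim.Adam and torch.optim.SparseAdam. A common
# pattern (an assumption here, not taken from the paper's code) is to route
# sparse embedding gradients to SparseAdam and dense parameters to Adam.
import torch

emb = torch.nn.Embedding(num_embeddings=10_000, embedding_dim=64, sparse=True)
head = torch.nn.Linear(64, 1)

opt_sparse = torch.optim.SparseAdam(list(emb.parameters()), lr=1e-3)
opt_dense = torch.optim.Adam(head.parameters(), lr=1e-3)
```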
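
The experiment-setup row pairs a grid search over the learning rate and ℓ2 penalty with early stopping on the validation Rel@K metric. A minimal sketch of the stopping rule alone, with hypothetical training and evaluation helpers:

```python
# A minimal sketch of the early-stopping rule quoted above: training halts when
# validation Rel@K fails to improve for more than five consecutive epochs.
# `train_one_epoch` and `eval_rel_at_k` are hypothetical stand-ins.
def train_with_early_stopping(model, train_one_epoch, eval_rel_at_k,
                              patience=5, max_epochs=200):
    best_rel_k, epochs_without_improvement = float("-inf"), 0
    for epoch in range(max_epochs):
        train_one_epoch(model)
        rel_k = eval_rel_at_k(model)
        if rel_k > best_rel_k:
            best_rel_k, epochs_without_improvement = rel_k, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement > patience:
                break
    return model
```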