Stochastic Multiple Target Sampling Gradient Descent

Authors: Hoang Phan, Ngoc Tran, Trung Le, Toan Tran, Nhat Ho, Dinh Phung

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we conduct comprehensive experiments to demonstrate the merit of our approach to multi-task learning."
Researcher Affiliation | Collaboration | VinAI Research, Vietnam; Monash University, Australia; University of Texas at Austin
Pseudocode | Yes | "Algorithm 1: Pseudocode for MT-SGD. Input: Multiple unnormalized target densities p_{1:K}. Output: The optimal particles θ_1, θ_2, ..., θ_M." ... "Algorithm 2: Pseudocode for multi-task learning MT-SGD. Input: A training set D = {(x_i, y_i^1, ..., y_i^K)}_{i=1}^N. Output: The models θ_m = {θ_m^j}_{j=1}^K with m = 1, ..., M, where θ_m^j = [α_m, β_m^j]." (A hedged sketch of the MT-SGD update follows the table.)
Open Source Code | Yes | "Our codes are available at https://github.com/VietHoang1512/MT-SGD."
Open Datasets | Yes | "Our method is validated on different benchmark datasets: (i) Multi-Fashion+MNIST [23], (ii) Multi-MNIST, and (iii) Multi-Fashion. Each of them consists of 120,000 training and 20,000 testing images generated from MNIST [12] and Fashion MNIST [27] by overlaying an image on top of another: one in the top-left corner and one in the bottom-right corner." (A sketch of this overlay construction follows the table.)
Dataset Splits | No | The paper specifies training and testing image counts but does not provide an explicit validation dataset split or count in the main text.
Hardware Specification | No | "The detailed training and configuration are given in the supplementary material." ... From the NeurIPS checklist: "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] The detailed training environment will be attached in the supplementary material."
Software Dependencies | No | "The tools used in our implementation are presented in our supplementary material."
Experiment Setup | No | "The detailed training and configuration are given in the supplementary material."
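
The pseudocode row above describes MT-SGD at a high level: per-target SVGD directions are combined into a single update that moves every particle toward all K target distributions at once. Below is a minimal NumPy sketch of that idea on a toy two-Gaussian problem. The RBF bandwidth, the Frank-Wolfe min-norm solver, and the names rbf_kernel, svgd_direction, min_norm_weights, and mt_sgd_step are illustrative assumptions, not the authors' implementation; the official code at https://github.com/VietHoang1512/MT-SGD is the reference.

```python
import numpy as np

def rbf_kernel(X, h=1.0):
    """RBF kernel matrix and its gradient w.r.t. the first argument.

    X: (M, d) array of particles. Returns K with K[i, j] = k(x_i, x_j)
    and grad_K with grad_K[i, j] = d k(x_i, x_j) / d x_i.
    """
    diff = X[:, None, :] - X[None, :, :]          # (M, M, d): x_i - x_j
    K = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * h ** 2))
    grad_K = -K[:, :, None] * diff / h ** 2       # (M, M, d)
    return K, grad_K

def svgd_direction(X, score_fn, h=1.0):
    """Standard SVGD direction for one target: kernel-smoothed score
    plus a repulsive term that keeps particles spread out."""
    M = X.shape[0]
    K, grad_K = rbf_kernel(X, h)
    scores = score_fn(X)                          # (M, d): grad log p at each particle
    return (K.T @ scores + grad_K.sum(axis=0)) / M

def min_norm_weights(phis, n_iters=100):
    """Frank-Wolfe solve of min_{w in simplex} ||sum_k w_k phi_k||^2.

    A simplified stand-in for the quadratic program MT-SGD solves to
    pick a direction that improves all targets; the paper's exact
    solver may differ.
    """
    G = np.array([[np.sum(a * b) for b in phis] for a in phis])  # Gram matrix
    w = np.full(len(phis), 1.0 / len(phis))
    for t in range(n_iters):
        k_star = int(np.argmin(G @ w))   # simplex vertex minimizing the linearization
        gamma = 2.0 / (t + 2.0)          # standard Frank-Wolfe step size
        w = (1.0 - gamma) * w
        w[k_star] += gamma
    return w

def mt_sgd_step(X, score_fns, eps=0.1, h=1.0):
    """One MT-SGD step: per-target SVGD directions, combined with
    min-norm weights, move every particle toward all targets at once."""
    phis = [svgd_direction(X, s, h) for s in score_fns]
    w = min_norm_weights(phis)
    return X + eps * sum(wk * phi for wk, phi in zip(w, phis))

# Toy usage: two unit-variance Gaussian targets with means +1 and -1.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 2))
    score_fns = [lambda X: -(X - 1.0),   # grad log N(+1, I)
                 lambda X: -(X + 1.0)]   # grad log N(-1, I)
    for _ in range(300):
        X = mt_sgd_step(X, score_fns)
    print("particle mean:", X.mean(axis=0))  # lands between the two modes
```

The min-norm weighting mirrors MGDA-style multi-objective descent: the weighted direction is, locally, a non-increasing direction for the KL divergence to every target at once, which is what lets one set of particles serve all K tasks.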
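
The dataset row quotes the overlay construction behind the Multi-MNIST-style benchmarks. The sketch below shows one plausible implementation, assuming 28x28 source images, a 36x36 canvas, and pixel-wise maximum blending in the overlap; the exact canvas size and blending rule in the authors' data generation may differ.

```python
import numpy as np

def overlay_pair(img_a, img_b, canvas=36):
    """Composite two 28x28 grayscale images into one multi-task sample:
    img_a goes in the top-left corner, img_b in the bottom-right corner.
    Overlapping pixels take the element-wise maximum (assumed blending rule)."""
    h, w = img_a.shape
    out = np.zeros((canvas, canvas), dtype=img_a.dtype)
    out[:h, :w] = img_a                                  # top-left corner
    out[canvas - h:, canvas - w:] = np.maximum(          # bottom-right corner
        out[canvas - h:, canvas - w:], img_b)
    return out

# Each generated sample keeps both source labels, one per task, e.g.
# x = overlay_pair(mnist_img, fashion_img); y = (mnist_label, fashion_label)
# (hypothetical variable names for illustration).
```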