Variational On-the-Fly Personalization

Authors: Jangho Kim, Jun-Tae Lee, Simyung Chang, Nojun Kwak

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To verify the proposed VoP, we apply it to three tasks: keyword spotting, speaker verification, and few-shot classification. For each task, we apply VoP employing baseline networks, where VoP and the baseline model share the same architecture except for the hyper-personalizer consisting of MLPs.
Researcher Affiliation | Collaboration | 1Qualcomm AI Research, an initiative of Qualcomm Technologies, Inc.; 2Seoul National University, South Korea. Jangho Kim completed the research in part during an internship at Qualcomm Technologies, Inc.
Pseudocode | Yes | Algorithm 1: Variational On-the-Fly Personalization (VoP)
Open Source Code | No | For the keyword spotting and speaker verification tasks, the authors used the released official implementations (Castorini, 2017; Clova AI, 2020) under an MIT License. The paper does not state that the authors' implementation of VoP is open source, nor does it provide a link to one.
Open Datasets | Yes | The Qualcomm Keyword Speech dataset (Kim et al., 2019); VoxCeleb1 (Nagrani et al., 2017); and miniImageNet, which consists of 60,000 color images of size 84×84 across 100 classes, each with 600 images. We followed the split introduced by (Vinyals et al., 2016): 64, 16, and 20 classes for training, validation, and testing, respectively.
Dataset Splits | Yes | We followed the split introduced by (Vinyals et al., 2016): 64, 16, and 20 classes for training, validation, and testing, respectively. The 16 validation classes are used for monitoring generalization performance.
Hardware Specification | Yes | For all experiments, we implement VoP using the PyTorch libraries with a single 1080 Ti GPU.
Software Dependencies | No | For all experiments, we implement VoP using the PyTorch libraries. While PyTorch is mentioned, specific version numbers for it or other software dependencies are not provided.
Experiment Setup | Yes | In training, following the most basic setting (the optimizer and total training epochs) of (Tang & Lin, 2017), we use a larger minibatch size (512) to compute the proto-means and the proto-variances of different personalities together. Also, for the stable learning of VoP, we experimentally set both the learning rate and α in (12) to 0.0005, and use no learning rate decay.
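The reported experiment setup can be sketched in PyTorch. Only the minibatch size of 512, the learning rate of 0.0005, the matching value of α from Eq. (12), and the absence of learning-rate decay come from the paper; the stand-in linear model, the feature dimension, the choice of SGD, and the placeholder losses are assumptions for illustration, since the VoP implementation is not released.

```python
import torch

# Hedged sketch of the training setup described above; the real VoP network
# and objective are not public, so the model and losses are placeholders.
BATCH_SIZE = 512   # large minibatch so proto-means and proto-variances of
                   # many personalities are computed together
LR = 0.0005        # learning rate; the paper sets alpha in Eq. (12) to the same value
ALPHA = 0.0005

model = torch.nn.Linear(40, 12)  # placeholder for the VoP backbone (40-dim features assumed)
# Optimizer and epoch budget follow Tang & Lin (2017); SGD is an assumption here.
optimizer = torch.optim.SGD(model.parameters(), lr=LR)
# "No learning rate decay" simply means no scheduler is attached.

x = torch.randn(BATCH_SIZE, 40)      # dummy input batch
task_loss = model(x).pow(2).mean()   # placeholder task loss
kl_loss = torch.tensor(0.0)          # placeholder for the variational (KL) term
loss = task_loss + ALPHA * kl_loss   # alpha-weighted total objective
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The point of the sketch is the hyperparameter wiring (batch size, constant learning rate, shared α), not the model itself.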