Dynamic Flows on Curved Space Generated by Labeled Data
Authors: Xinru Hua, Truyen Nguyen, Tam Le, Jose Blanchet, Viet Anh Nguyen
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate the results of our proposed gradient flow method on several real-world datasets and show our method can improve the accuracy of classification models in transfer learning settings. The numerical experiments in Section 5 demonstrate that our gradient flows can effectively augment the target data, and thus can significantly boost the accuracy in the classification task in the few-shot learning setting. |
| Researcher Affiliation | Academia | Xinru Hua¹, Truyen Nguyen², Tam Le³, Jose Blanchet¹ and Viet Anh Nguyen⁴. ¹Stanford University, United States; ²The University of Akron, United States; ³The Institute of Statistical Mathematics / RIKEN AIP, Japan; ⁴Chinese University of Hong Kong, China |
| Pseudocode | Yes | Algorithm 1 Discretized Gradient Flow Algorithm for Scheme (4.2) ... Algorithm 2 in the Supplementary describes (4.3) in detail. (A generic, hypothetical sketch of one discretized gradient-flow step is given after this table.) |
| Open Source Code | Yes | Our code and supplementary are available at https://github.com/Lucy_XH/Dynamic_Flows_Curved_Space/ |
| Open Datasets | Yes | We consider three datasets: the MNIST (M) [LeCun and Cortes, 2010], Fashion-MNIST (F) [Xiao et al., 2017], Kuzushiji-MNIST (K) [Clanuwat et al., 2018]. To demonstrate the scalability of our algorithm to higher-dimensional images, we run experiments on Tiny ImageNet (TIN) [Russakovsky et al., 2015] and upscaled SVHN [Netzer et al., 2011] datasets. |
| Dataset Splits | No | The paper describes the setup for few-shot learning (e.g., 1-shot, 5-shot) and the number of source/target samples used, but does not provide specific train/validation/test dataset splits (e.g., percentages or counts for a validation set). |
| Hardware Specification | Yes | The experiments are run on a C5.4xlarge AWS instance (a CPU instance) and all finish in about one hour. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the implementation or experiments. |
| Experiment Setup | No | The paper mentions some aspects of the setup, such as the mapping function phi and the type of tensor kernel used. However, it does not provide common machine learning hyperparameters like learning rates, batch sizes, optimizers, or training epochs. |
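
For readers unfamiliar with what a "discretized gradient flow" step looks like, the sketch below shows a generic forward-Euler particle update. It is *not* the authors' Scheme (4.2), which flows labeled data on a curved space; the function names (`discretized_gradient_flow`, `grad_potential`), the step size `tau`, and the quadratic toy potential are hypothetical placeholders used only to illustrate the discretization idea.

```python
import numpy as np

def discretized_gradient_flow(particles, grad_potential, tau=0.1, n_steps=50):
    """Generic forward-Euler discretization of a gradient flow on a particle cloud.

    particles      : (n, d) array of points representing the evolving measure.
    grad_potential : callable mapping an (n, d) array to the (n, d) array of
                     gradients of the driving functional at each particle.
    tau            : step size of the explicit Euler scheme (hypothetical value).
    n_steps        : number of discrete flow steps.
    """
    x = np.asarray(particles, dtype=float).copy()
    for _ in range(n_steps):
        # Explicit Euler update: move each particle along the negative gradient.
        x -= tau * grad_potential(x)
    return x

if __name__ == "__main__":
    # Toy example: flow a Gaussian blob toward the origin under a quadratic potential.
    rng = np.random.default_rng(0)
    cloud = rng.normal(loc=3.0, scale=0.5, size=(100, 2))
    flowed = discretized_gradient_flow(cloud, grad_potential=lambda x: x,
                                       tau=0.05, n_steps=100)
    print("mean before:", cloud.mean(axis=0), "mean after:", flowed.mean(axis=0))
```

With the toy quadratic potential the cloud simply contracts toward the origin; the paper's scheme instead evolves labeled target samples on a curved space under the functional it defines, which is what allows the flow to augment scarce target data in the few-shot setting.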