Few-shot Learning via Dirichlet Tessellation Ensemble
Authors: Chunwei Ma, Ziyun Huang, Mingchen Gao, Jinhui Xu
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our CIVD-based workflow enables us to achieve new state-of-the-art results on mini-ImageNet, CUB, and tiered-ImageNet datasets, with 2%-5% improvements upon the next best. |
| Researcher Affiliation | Academia | Chunwei Ma^1, Ziyun Huang^2, Mingchen Gao^1, Jinhui Xu^1; ^1Department of Computer Science and Engineering, University at Buffalo; ^2Computer Science and Software Engineering, Penn State Erie; ^1{chunweim,mgao8,jinhui}@buffalo.edu; ^2{zxh201}@psu.edu |
| Pseudocode | Yes | Algorithm 1: Voronoi Diagram-based Logistic Regression. (A minimal sketch of the Voronoi-cell assignment step follows this table.) |
| Open Source Code | Yes | Our code as well as data split, random seeds, hyperparameters, scripts for reproducing the results in the paper are available at https://github.com/horsepurve/DeepVoro. |
| Open Datasets | Yes | Table 2: Summarization of the datasets used in the paper. ... mini-ImageNet (Vinyals et al., 2016) is a shrunk subset of ILSVRC-12 (Russakovsky et al., 2015)... CUB (Welinder et al., 2010) is another benchmark dataset for FSL... tiered-ImageNet (Ren et al., 2018) is another subset of ILSVRC-12 (Russakovsky et al., 2015)... |
| Dataset Splits | Yes | For a fair evaluation of the learning performance on a few samples, the meta-testing stage is typically formulated as a series of K-way N-shot tasks (episodes) {T}. ... mini-ImageNet (Vinyals et al., 2016)... consists of 100 classes, of which 64 are for training, 20 for testing, and 16 for validation. (An episode-sampling sketch follows this table.) |
| Hardware Specification | Yes | Table 5: Running time comparison. ... benchmarked on a 20-core Intel Core i7 CPU with NumPy (v1.20.3)... |
| Software Dependencies | Yes | Table 5: Running time comparison. ... benchmarked on a 20-core Intel Core i7 CPU with NumPy (v1.20.3)... For Power-LR, we train it directly on the transformed K-way N-shot support samples using the PyTorch library... |
| Experiment Setup | Yes | For Power-LR, we train it directly on the transformed K-way N-shot support samples using the PyTorch library with an Adam optimizer, batch size 64, and learning rate 0.01. ... In our compositional transformation, the function (h_λ ∘ g_{w,b} ∘ f)(z) is parameterized by w, b, λ. ... For each R, we select the β that gives rise to the best result on the validation set... (A training sketch follows this table.) |
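
The Pseudocode row refers to Algorithm 1, Voronoi Diagram-based Logistic Regression. As a minimal sketch of the underlying geometric idea, the snippet below labels each query embedding by the Voronoi cell (nearest class centroid) it falls into; the function name and the plain Euclidean nearest-centroid rule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def voronoi_classify(support_emb, support_lbl, query_emb):
    """Hypothetical sketch: assign each query embedding to the Voronoi
    cell of the nearest class centroid (one Voronoi site per class)."""
    classes = np.unique(support_lbl)
    # Class centroids of the support embeddings act as the Voronoi sites.
    centroids = np.stack([support_emb[support_lbl == c].mean(axis=0)
                          for c in classes])
    # Squared Euclidean distance from every query to every site.
    d2 = ((query_emb[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    # A point belongs to the Voronoi cell whose site is closest.
    return classes[d2.argmin(axis=1)]
```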
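The K-way N-shot protocol quoted in the Dataset Splits row can be made concrete with a short episode sampler. The dataset layout (a dict mapping class names to sample indices) and the query-set size Q are assumptions for illustration.

```python
import random

def sample_episode(class_to_indices, K=5, N=1, Q=15, rng=random.Random(0)):
    """Draw one K-way N-shot episode: N support and Q query samples
    from each of K randomly chosen classes (data layout is assumed)."""
    classes = rng.sample(sorted(class_to_indices), K)
    support, query = [], []
    for label, cls in enumerate(classes):
        picks = rng.sample(class_to_indices[cls], N + Q)
        support += [(idx, label) for idx in picks[:N]]
        query += [(idx, label) for idx in picks[N:]]
    return support, query
```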
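Finally, the Experiment Setup row describes training Power-LR with Adam (batch size 64, learning rate 0.01) on features passed through the compositional transformation (h_λ ∘ g_{w,b} ∘ f)(z). The sketch below assumes g_{w,b} is an element-wise affine map and h_λ is a power (Tukey-style) transform; only the optimizer settings come from the quoted setup, and the feature dimension and dummy support set are placeholders.

```python
import torch
import torch.nn as nn

class PowerLR(nn.Module):
    """Logistic regression on (h_lambda . g_{w,b} . f)(z), where f is a
    frozen backbone producing feature z; forms of g and h are assumed."""
    def __init__(self, dim, num_classes, lam=0.5, eps=1e-6):
        super().__init__()
        self.w = nn.Parameter(torch.ones(dim))      # g_{w,b}: element-wise scale
        self.b = nn.Parameter(torch.zeros(dim))     # g_{w,b}: element-wise shift
        self.lam = nn.Parameter(torch.tensor(lam))  # power-transform exponent
        self.eps = eps
        self.fc = nn.Linear(dim, num_classes)       # logistic-regression head

    def forward(self, z):
        g = self.w * z + self.b
        # h_lambda: sign-preserving power transform (assumed form).
        h = torch.sign(g) * (g.abs() + self.eps) ** self.lam
        return self.fc(h)

# Dummy 5-way 5-shot support features stand in for backbone outputs f(z).
feats, labels = torch.randn(25, 640), torch.arange(5).repeat(5)
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(feats, labels), batch_size=64, shuffle=True)

model = PowerLR(dim=640, num_classes=5)
opt = torch.optim.Adam(model.parameters(), lr=0.01)  # settings from the quoted setup
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
```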