Few-shot Learning for Feature Selection with Hilbert-Schmidt Independence Criterion

Authors: Atsutoshi Kumagai, Tomoharu Iwata, Yasutoshi Ida, Yasuhiro Fujiwara

NeurIPS 2022
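For context on the criterion named in the title: HSIC measures the statistical dependence between two variables through kernel matrices, and a standard biased empirical estimate is tr(KHLH)/(n-1)^2, where H is the centering matrix. The sketch below only illustrates plain HSIC-based feature scoring, not the authors' few-shot method; the RBF kernel, the bandwidth, and the helper names are assumptions.

```python
# Minimal NumPy sketch of the biased empirical HSIC estimator,
# HSIC(X, Y) ~= tr(K H L H) / (n - 1)^2, with H the centering matrix.
import numpy as np

def rbf_kernel(A, sigma=1.0):
    """Gaussian kernel matrix over the rows of A (bandwidth sigma is an assumption)."""
    sq = np.sum(A ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * A @ A.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(K, L):
    """Biased empirical HSIC from precomputed kernel matrices K and L."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def hsic_feature_scores(X, y, sigma=1.0):
    """Score each feature by its HSIC dependence with the labels y."""
    Ly = rbf_kernel(y.reshape(-1, 1), sigma)
    return np.array([hsic(rbf_kernel(X[:, [j]], sigma), Ly) for j in range(X.shape[1])])

# Example usage: keep the 5 highest-scoring features.
# X, y = ...  # data matrix (n x d) and labels (n,)
# top5 = np.argsort(hsic_feature_scores(X, y))[::-1][:5]
```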

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We experimentally demonstrate that the proposed method outperforms existing feature selection methods." "In this section, we demonstrate the effectiveness of the proposed method using one synthetic and three real-world datasets."
Researcher Affiliation | Industry | Atsutoshi Kumagai, NTT Computer and Data Science Laboratories (atsutoshi.kumagai.ht@hco.ntt.co.jp); Tomoharu Iwata, NTT Communication Science Laboratories (tomoharu.iwata.gy@hco.ntt.co.jp); Yasutoshi Ida, NTT Computer and Data Science Laboratories (yasutoshi.ida@ieee.org); Yasuhiro Fujiwara, NTT Communication Science Laboratories (yasuhiro.fujiwara.kh@hco.ntt.co.jp)
Pseudocode | Yes | "Algorithm 1 shows the pseudocode for our training procedure." "Algorithm 1 Training procedure of our model."
Open Source Code | No | "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] The code is proprietary."
Open Datasets | Yes | "We used three real-world datasets: Mnistr (1), Isolet (2), and IoT (3), which have been widely used in previous studies [3, 25, 54]." (1) https://github.com/ghif/mtae (2) http://archive.ics.uci.edu/ml/datasets/ISOLET (3) https://archive.ics.uci.edu/ml/datasets/detection_of_IoT_botnet_attacks_N_BaIoT
Dataset Splits | Yes | "We evaluated the average fraction of correctly selected features on each target task with different numbers of target support instances within {10, 20, 30}." "For the proposed method, we selected the hyperparameter on the basis of mean validation loss." "We randomly select one task for the target task, one task for the validation task, and the rest for the source tasks. For each dataset, we created 10 different target/validation/source task splits." (A sketch of this protocol follows the table.)
Hardware Specification | Yes | "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [Yes] Please see Section C."
Software Dependencies | No | The paper does not provide specific version numbers for software components (e.g., Python 3.x, PyTorch 1.x) or any self-contained solvers with their versions. It only references a supplemental section (Section C) for experimental settings, but that section is not provided in the paper text.
Experiment Setup | Yes | "For the proposed method, we selected the hyperparameter on the basis of mean validation loss. For comparison methods, the best test results are reported from their hyperparameter candidates. The details of the experimental settings such as hyperparameter candidates are described in the supplemental material (Section C)."
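The task-split and evaluation protocol quoted under Dataset Splits and Experiment Setup can be sketched as follows. This is a minimal sketch under stated assumptions: make_task_splits and fraction_correct are hypothetical helper names, the task count is illustrative, and the metric is one plausible reading of "the average fraction of correctly selected features".

```python
# Minimal sketch of the quoted protocol: for each of 10 random splits, one task is
# the target, one is the validation task, and the rest are source tasks; results are
# reported for target support-set sizes {10, 20, 30}.
import numpy as np

def make_task_splits(task_ids, n_splits=10, seed=0):
    """Randomly assign one target task, one validation task, and the rest as sources."""
    rng = np.random.default_rng(seed)
    splits = []
    for _ in range(n_splits):
        perm = list(rng.permutation(task_ids))
        splits.append({"target": perm[0], "validation": perm[1], "source": perm[2:]})
    return splits

def fraction_correct(selected, relevant):
    """Fraction of the truly relevant features that appear among the selected ones
    (one plausible reading of 'fraction of correctly selected features')."""
    return len(set(selected) & set(relevant)) / len(relevant)

support_sizes = [10, 20, 30]  # numbers of target support instances quoted above
splits = make_task_splits(task_ids=list(range(6)))  # task count here is illustrative
# Hyperparameters for the proposed method would then be chosen by the mean validation
# loss over splits, as quoted under Experiment Setup.
```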