Completely Heterogeneous Transfer Learning with Attention - What and What Not to Transfer

Authors: Seungwhan Moon, Jaime Carbonell

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We validate the effectiveness of the proposed approaches via extensive simulations as well as a real-world application." and "We show the efficacy of the proposed approaches on extensive simulation studies as well as a novel real-world transfer learning task."
Researcher Affiliation | Academia | Seungwhan Moon, Jaime Carbonell, Language Technologies Institute, School of Computer Science, Carnegie Mellon University, [seungwhm | jgc]@cs.cmu.edu
Pseudocode | No | No structured pseudocode or algorithm blocks are present; the paper describes its methods with mathematical equations and textual explanations.
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository.
Open Datasets | Yes | Datasets: "we use the RCV-1 dataset (English: 804,414 documents; 116 classes) [Lewis et al., 2004], the 20 Newsgroups (English: 18,846 documents; 20 classes), the Reuters Multilingual [Amini et al., 2009] (French (FR): 26,648, Spanish (SP): 12,342, German (GR): 24,039, Italian (IT): 12,342 documents; 6 classes), and the R8 (English: 7,674 documents; 8 classes) datasets."
Dataset Splits | Yes | "We obtain 5-fold results for each dataset generation, and report the overall average accuracy in Figure 4." and "averaged over 10-fold runs."
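The k-fold protocol quoted above (per-fold accuracies averaged into one score) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, the NumPy-based shuffling and splitting, and the pluggable `train_eval_fn` are all assumptions.

```python
import numpy as np

def kfold_average_accuracy(X, y, train_eval_fn, k=5, seed=0):
    """Average accuracy over k folds, mirroring the paper's 5-/10-fold protocol.

    train_eval_fn(X_train, y_train, X_test) -> predicted labels for X_test.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))          # shuffle once, then partition
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        y_pred = train_eval_fn(X[train_idx], y[train_idx], X[test_idx])
        accs.append(np.mean(y_pred == y[test_idx]))
    return float(np.mean(accs))
```

For example, plugging in a trivial threshold rule on one-dimensional data (a stand-in for the paper's transfer-learning models) yields one averaged accuracy per run, which the paper then reports per (σlabel, %LT) setting.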
Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., CPU, GPU models, or memory specifications).
Software Dependencies | No | The paper mentions general techniques and models such as word embeddings, knowledge graphs, and DNNs, but does not provide version numbers for any software dependencies or libraries used in the implementation.
Experiment Setup | Yes | "For the following experiments, we set NS = NT = 4000 (number of samples), M = 4 (number of source and target dataset classes), MS = MT = 20 (original feature dimension), ME = 15 (embedded label space dimension), K = 12 (number of attention clusters), σdiff = 0.5, σlabel ∈ {0.05, 0.1, 0.2, 0.3}, and %LT ∈ {0.005, 0.01, 0.02, 0.05}." and "ϵ is a fixed margin which we set as 0.1" and "MC = 320, ME = 300, label: word embeddings" and "K = 40".
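The simulation settings quoted above can be gathered into a single configuration, with the sweep over label-noise levels and labeled-target fractions enumerated as a cross product. All variable names here are hypothetical conveniences, since the paper releases no code; only the numeric values come from the quoted setup.

```python
from itertools import product

# Hypothetical config collecting the paper's reported simulation settings.
sim_config = {
    "N_S": 4000, "N_T": 4000,   # source / target sample counts
    "M": 4,                     # classes in both source and target domains
    "M_S": 20, "M_T": 20,       # original feature dimensions
    "M_E": 15,                  # embedded label-space dimension
    "K": 12,                    # number of attention clusters
    "sigma_diff": 0.5,          # domain-difference noise level
    "margin_eps": 0.1,          # fixed margin epsilon
}

# Swept settings: label-noise level and fraction of labeled target samples.
sigma_label_grid = [0.05, 0.1, 0.2, 0.3]
pct_labeled_target_grid = [0.005, 0.01, 0.02, 0.05]

# Full grid of simulation conditions (4 x 4 = 16 runs).
settings = list(product(sigma_label_grid, pct_labeled_target_grid))
```

The real-world task uses different values (MC = 320, ME = 300, K = 40, word-embedding labels), so it would need its own config rather than a reuse of this one.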