Extracting Relationships by Multi-Domain Matching
Authors: Yitong Li, Michael Murias, Geraldine Dawson, David E. Carlson
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, we show that the proposed methodology handles higher numbers of source domains (up to 21 empirically), and provides state-of-the-art performance on image, text, and multi-channel time series classification, including clinical outcome data in an open label trial evaluating a novel treatment for Autism Spectrum Disorder. We tested MDMN by applying it to three classification problems: image recognition, natural-language sentiment detection, and multi-channel time series analysis. |
| Researcher Affiliation | Academia | Department of Electrical and Computer Engineering, Duke University; Duke Institute for Brain Sciences, Duke University; Departments of Psychiatry and Behavioral Sciences, Duke University; Department of Civil and Environmental Engineering, Duke University; Department of Biostatistics and Bioinformatics, Duke University |
| Pseudocode | Yes | The pseudocode for training is given in Algorithm 1, "Multiple Source Domain Adaptation via WDA". |
| Open Source Code | No | The paper does not provide a direct link to open-source code or an explicit statement that the code for the described methodology is publicly available. |
| Open Datasets | Yes | We first test the performance of the proposed MDMN model on the MNIST, MNIST-M, SVHN, and USPS datasets. Two datasets are used to evaluate performance on electroencephalography (EEG) data: the SEED dataset [46] and an Autism Spectrum Disorder (ASD) dataset [11]. |
| Dataset Splits | Yes | In each fold, one subject is held out for testing and the remaining 14 subjects are used for training and validation. The same leave-one-out cross-validation scheme is adopted for the second EEG dataset, where one subject is left out for testing and the remaining 21 subjects are split into training and validation (see the split sketch after the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'Adam [20] is used as the optimization method' but does not specify software dependencies with version numbers (e.g., Python, PyTorch/TensorFlow versions). |
| Experiment Setup | Yes | The feature extractor E consists of two convolutional layers plus two fully connected layers. Both the label predictor and the domain adapter are two-layer MLPs, with ReLU nonlinearities between layers. Adam [20] is used as the optimization method. During training, the target-domain weight βS in eq. (1) is always set to one, while the source-domain weights are normalized to sum to one. As in [23], the filter number is set to 10 for both datasets. (A hedged architecture sketch follows the table.) |
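
The leave-one-subject-out protocol quoted in the Dataset Splits row can be sketched as below. This is a minimal illustration, not the authors' released code (none is linked): the function name `make_loso_splits`, the validation-set size, and the random seed are assumptions made only for this example.

```python
# Hypothetical sketch of a leave-one-subject-out split: one subject is held
# out as the test set per fold, and the remaining subjects are divided into
# training and validation sets.
import random

def make_loso_splits(n_subjects, n_val=2, seed=0):
    """Yield (train, val, test) subject-index lists, one fold per subject."""
    rng = random.Random(seed)
    for test_subject in range(n_subjects):
        remaining = [s for s in range(n_subjects) if s != test_subject]
        rng.shuffle(remaining)
        val, train = remaining[:n_val], remaining[n_val:]
        yield sorted(train), sorted(val), [test_subject]

# One EEG dataset leaves 14 subjects for train/val per fold (15 total),
# the other leaves 21 (22 total).
for train_ids, val_ids, test_ids in make_loso_splits(n_subjects=15):
    pass  # build per-subject datasets and train one model per fold
```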
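
The components named in the Experiment Setup row can be assembled roughly as follows. This is a hedged PyTorch sketch under stated assumptions: the input shape (1×28×28), channel widths, hidden sizes, number of classes, number of domains, and learning rate are not reported in that row and are chosen here purely for illustration.

```python
# Illustrative wiring of the described components: feature extractor E
# (two conv + two FC layers), a two-layer MLP label predictor, a two-layer
# MLP domain adapter, ReLU nonlinearities, and Adam as the optimizer.
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Feature extractor E: two convolutional layers plus two fully connected layers."""
    def __init__(self, in_channels=1, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 256), nn.ReLU(),  # assumes 28x28 inputs
            nn.Linear(256, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.fc(self.conv(x))

def two_layer_mlp(in_dim, hidden_dim, out_dim):
    """Two-layer MLP with a ReLU between layers (label predictor / domain adapter)."""
    return nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU(),
                         nn.Linear(hidden_dim, out_dim))

extractor = FeatureExtractor()
label_predictor = two_layer_mlp(128, 64, 10)   # number of classes assumed
domain_adapter = two_layer_mlp(128, 64, 22)    # one output per domain assumed

params = (list(extractor.parameters()) + list(label_predictor.parameters())
          + list(domain_adapter.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)  # Adam, as stated; learning rate assumed

# Domain weighting as described: the target-domain weight is fixed to one and
# the source-domain weights are normalized to sum to one.
source_scores = torch.rand(21)                 # placeholder per-source scores
beta_source = source_scores / source_scores.sum()
beta_target = torch.tensor(1.0)
```

With these pieces, each training step would combine the weighted classification loss with the domain-matching term before an Adam update, following the paper's Algorithm 1.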