Towards Interpretation of Pairwise Learning
Authors: Mengdi Huai, Di Wang, Chenglin Miao, Aidong Zhang (pp. 4166-4173)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Theoretical analysis and extensive experiments demonstrate the effectiveness of the proposed methods." ... "We conduct experiments on both real-world and synthetic datasets to evaluate the performance of the proposed interpretation methods." |
| Researcher Affiliation | Academia | (1) Department of Computer Science, University of Virginia; (2) Department of Computer Science and Engineering, State University of New York at Buffalo |
| Pseudocode | Yes | Algorithm 1 The robust approximation interpretation method for pairwise models |
| Open Source Code | No | The paper does not provide any explicit statement about making its source code available or include a link to a code repository. |
| Open Datasets | Yes | For real-world datasets, we adopt four UCI datasets (i.e., Heart, Diabetes, Parkinson and Ionosphere), and the MNIST 1V9 dataset (LeCun et al. 1998) that is a subset of the 784-dimensional MNIST set. |
| Dataset Splits | No | The paper states: 'For each dataset, we randomly select 80% of the instances as the training set to train the pairwise model, and take the rest instances as the test set.' It does not mention a separate validation set or its split. |
| Hardware Specification | No | The paper discusses running time and computational complexity but does not specify any hardware details such as CPU, GPU models, or memory used for the experiments. |
| Software Dependencies | No | The paper does not provide specific software names with version numbers (e.g., programming languages, libraries, frameworks, or solvers with their versions) that were used in the experiments. |
| Experiment Setup | Yes | We conduct experiments on both real-world and synthetic datasets to evaluate the performance of the proposed interpretation methods. ... (the value is set as 0.2 in our experiment). ... we vary the percentage of masked features over the total number of features from 0.001 to 0.25. |
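The split and masking procedure quoted above can be sketched as follows. This is a minimal illustration, not the authors' code: the data, the feature ranking, and the masking value (zero) are all assumptions, since the paper specifies only the 80% random train/test split and the masked-feature fractions ranging from 0.001 to 0.25.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in data; the paper uses four UCI datasets and MNIST 1V9.
X = rng.normal(size=(100, 20))

# 80/20 random train/test split, as described in the paper
# (no separate validation set is mentioned).
n = X.shape[0]
perm = rng.permutation(n)
n_train = int(0.8 * n)
train_idx, test_idx = perm[:n_train], perm[n_train:]

# Sweep the fraction of masked features from 0.001 to 0.25.
# The features to mask would come from the proposed interpretation
# method's importance ranking; here a placeholder ranking is used,
# and masked features are zeroed out (an assumed masking scheme).
for frac in [0.001, 0.05, 0.1, 0.25]:
    k = max(1, int(frac * X.shape[1]))      # number of features to mask
    top_features = np.arange(k)             # placeholder importance ranking
    masked = X[test_idx].copy()
    masked[:, top_features] = 0.0
    # ...evaluate the pairwise model on `masked` here...
```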