Practical Markov Boundary Learning without Strong Assumptions

Authors: Xingyu Wu, Bingbing Jiang, Tianhao Wu, Huanhuan Chen

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate the efficacy of the paper's contributions. The paper includes sections such as "Experiments", "MB Discovery on Mixed Data", and "Feature Selection on Real-world Data", along with tables and figures presenting empirical results.
Researcher Affiliation | Academia | (1) School of Computer Science and Technology, University of Science and Technology of China; (2) School of Information Science and Engineering, Hangzhou Normal University; (3) School of Data Science, University of Science and Technology of China
Pseudocode | Yes | The paper presents "Algorithm 1: The KMB Algorithm" and a second algorithm, each given as structured pseudocode.
Open Source Code | No | The paper does not provide an explicit statement or a link to open-source code for the methodology described. The only link provided points to an appendix PDF.
Open Datasets | Yes | The paper uses ten datasets from diverse application domains and scales. Table 3 provides their domains and standard statistics, including the number of features, training samples, and test samples. The listed datasets (Connect4, Spambase, Splice, Sonar, Bankruptcy, Madelon, Coil20, Basehock, Gisette, Arcene) are common public benchmarks.
Dataset Splits | No | The paper notes that "K-fold cross-validation could be used here" as a general strategy, but does not specify the train/validation/test splits (percentages or counts) actually used in its experiments. Table 3 lists only training and test sample counts.
Hardware Specification | No | The paper does not report the hardware (e.g., exact GPU/CPU models, processor types, or memory amounts) used to run its experiments.
Software Dependencies | No | The paper does not list the ancillary software, such as library names with version numbers, needed to replicate the experiments.
Experiment Setup | No | The paper describes some general aspects of the experimental setup, such as the characteristics of the synthetic data and the use of an SVM classifier, but it does not report specific hyperparameter values (e.g., SVM kernel parameters) or detailed system-level training settings.
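To make the "Dataset Splits" finding concrete: a reproducible split specification pins down the fold count, the random seed, and the exact index sets. A minimal stdlib-only sketch of deterministic K-fold index generation, with a hypothetical sample count, fold count, and seed (none taken from the paper):

```python
# Deterministic K-fold index generator (stdlib only). Illustrates the kind
# of split specification the paper omits: fold count, seed, and the exact
# train/test index sets. All parameter values below are illustrative.
import random

def kfold_indices(n_samples, k=5, seed=0):
    """Return a list of (train_idx, test_idx) pairs for k folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)  # fixed seed => reproducible shuffle
    fold_size, rem = divmod(n_samples, k)
    folds, start = [], 0
    for i in range(k):
        # distribute the remainder over the first `rem` folds
        stop = start + fold_size + (1 if i < rem else 0)
        test = idx[start:stop]
        train = idx[:start] + idx[stop:]
        folds.append((train, test))
        start = stop
    return folds

folds = kfold_indices(100, k=5, seed=42)
```

Recording (k, seed) alongside results is enough for anyone to regenerate identical folds.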
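Similarly, for the "Experiment Setup" finding: a replicable report would record the classifier settings explicitly. A hypothetical experiment configuration showing the fields such a record would need; every value here (kernel, C, gamma, fold count, seed) is an assumed placeholder, not the authors' actual setting:

```python
# Hypothetical experiment configuration record. None of these values come
# from the paper; they only illustrate what a complete setup report
# (classifier hyperparameters plus CV and seeding details) would contain.
import json

config = {
    "classifier": "SVM",
    "svm": {"kernel": "rbf", "C": 1.0, "gamma": "scale"},  # assumed values
    "cv_folds": 5,        # assumed
    "random_seed": 42,    # assumed
}
serialized = json.dumps(config, indent=2)  # e.g., archived next to results
```

Serializing such a record alongside the results is a lightweight way to close the gap the assessment identifies.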