Column Networks for Collective Classification

Authors: Trang Pham, Truyen Tran, Dinh Phung, Svetha Venkatesh

AAAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate CLN on multiple real-world applications: (a) delay prediction in software projects, (b) PubMed Diabetes publication classification and (c) film genre classification. In all of these applications, CLN demonstrates higher accuracy than state-of-the-art rivals.
Researcher Affiliation | Academia | Trang Pham, Truyen Tran, Dinh Phung, Svetha Venkatesh; Deakin University, Australia ({phtra, truyen.tran, dinh.phung, svetha.venkatesh}@deakin.edu.au)
Pseudocode | No | The paper describes the mathematical formulations of the model but does not include a formal pseudocode block or algorithm. (A hedged reconstruction of the layer update appears after this table.)
Open Source Code | Yes | Code for our model can be found on GitHub: https://github.com/trangptm/Column_networks
Open Datasets | Yes | We use the largest dataset reported in (Choetkiertikul et al. 2015), the JBoss, which contains 8,206 issues; We used the Pubmed Diabetes dataset consisting of 19,717 scientific publications and 44,338 citation links among them. Download: http://linqs.umiacs.umd.edu/projects//projects/lbc/; We used the MovieLens Latest Dataset (Harper and Konstan 2016) which consists of 33,000 movies.
Dataset Splits | Yes | Each dataset is divided into 3 separate sets: training, validation and test sets. For hyper-parameter tuning, we search for (i) number of hidden layers: 2, 6, 10, ..., 30, (ii) hidden dimensions, and (iii) optimizers: Adam or RMSprop. The best training setting is chosen on the validation set and the results on the test set are reported.
Hardware Specification | No | The paper does not provide specific hardware details such as CPU/GPU models or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions using ReLU in hidden layers and Adam or RMSprop as optimizers, but does not specify version numbers for any software or libraries (e.g., Python, TensorFlow, PyTorch, scikit-learn).
Experiment Setup | Yes | For hyper-parameter tuning, we search for (i) number of hidden layers: 2, 6, 10, ..., 30, (ii) hidden dimensions, and (iii) optimizers: Adam or RMSprop. CLN-FNN has 2 hidden layers and the same hidden dimension as CLN-HWN, so that the two models have an equal number of parameters. (A sketch of this search loop follows the table.)
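
Since the paper presents the model only as equations, the following is a minimal NumPy sketch of one highway-gated CLN layer, reconstructed from the paper's update formulas. It is not the authors' code: the parameter names (W, V, b for the candidate activation; Wg, Vg, bg for the gate) and the mean-pooling form of the 1/z neighbour normalisation are assumptions; the reference implementation is at https://github.com/trangptm/Column_networks.

    import numpy as np

    def relu(x):
        return np.maximum(0.0, x)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # One highway-gated CLN layer (a sketch, not the authors' code).
    # H : (n, k) hidden states of the n entities from the previous layer
    # A : (n, n) binary adjacency matrix of the relation graph
    # W, V, b    : assumed parameter names for the candidate activation
    # Wg, Vg, bg : assumed parameter names for the highway gate
    def cln_layer(H, A, W, V, b, Wg, Vg, bg):
        deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)   # guard isolated nodes
        C = (A @ H) / deg                      # pooled neighbour context (1/z normalisation)
        H_cand = relu(H @ W + C @ V + b)       # candidate hidden state
        alpha = sigmoid(H @ Wg + C @ Vg + bg)  # highway gate in (0, 1)
        return alpha * H_cand + (1.0 - alpha) * H  # gated mix with previous state

Stacking several such layers and attaching a per-entity softmax readout yields the full network; whether the layer parameters are tied across depth is left to the caller, since the sketch takes them as arguments.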
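
The hyper-parameter search described in the rows above amounts to a small grid, sketched below. The train_and_eval helper is hypothetical (a stand-in for training a CLN with a given configuration and returning its validation accuracy), and the hidden-dimension candidates are illustrative only, since the paper does not list them.

    from itertools import product

    def train_and_eval(depth, hidden_dim, optimizer):
        # Hypothetical stand-in: train a CLN with this configuration and
        # return its validation accuracy. Replace with a real training loop.
        return 0.0

    depths = range(2, 31, 4)          # number of hidden layers: 2, 6, 10, ..., 30
    hidden_dims = [50, 100, 200]      # illustrative only; not specified in the paper
    optimizers = ["adam", "rmsprop"]  # the two optimizers the paper considers

    best = None
    for depth, dim, opt in product(depths, hidden_dims, optimizers):
        val_acc = train_and_eval(depth, dim, opt)
        if best is None or val_acc > best[0]:
            best = (val_acc, depth, dim, opt)

    # The best setting on the validation set is then evaluated once on the test set.
    print("best (val_acc, depth, hidden_dim, optimizer):", best)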