Neural-based classification rule learning for sequential data

Authors: Marine Collery, Philippe Bonnard, François Fages, Remy Kusters

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate the validity and usefulness of the approach on synthetic datasets and on an open-source peptides dataset." and, from Section 5 (Experiments), "In order to evaluate the validity and usefulness of this method, we apply it to both synthetic datasets and UCI membranolytic anticancer peptides dataset (Grisoni et al., 2019; Dua & Graff, 2017)."
Researcher Affiliation | Collaboration | Marine Collery (1,2), Philippe Bonnard (1), François Fages (2) & Remy Kusters (1,3); 1: IBM France Lab, 2: Inria Saclay, 3: IBM Research
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "The code is publicly available at https://github.com/IBM/cr2n."
Open Datasets | Yes | "We apply it to both synthetic datasets and UCI membranolytic anticancer peptides dataset (Grisoni et al., 2019; Dua & Graff, 2017)."
Dataset Splits | Yes | "All datasets are partitioned in a stratified fashion with 60% for training, 20% for validation and 20% for testing datasets and we use a batch size of 100 sequences."
Hardware Specification | Yes | "Experiments were run on CPU on a MacBook Pro 18,2 (2021) with Apple M1 Max chip, 10 cores, 32 GB of RAM and running macOS Monterey Version 12.4."
Software Dependencies | No | The paper mentions PyTorch and the Adam optimizer but does not provide version numbers for these or any other software libraries.
Experiment Setup | Yes | "All datasets are partitioned in a stratified fashion with 60% for training, 20% for validation and 20% for testing datasets and we use a batch size of 100 sequences." "The hidden size in the base rule model is set to double the input size of the AND layer (which is the window size of the convolution)." "The loss function is described in Eq. 7 and depends on the MSE loss and regularization coefficient λ = 10^-5. The Adam optimizer is used with a fixed learning rate set to 0.1 and a run consists of 200 epochs."
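
To make the reported protocol concrete, the sketch below reconstructs it from the quoted setup only: a 60/20/20 stratified split, batch size 100, Adam with a fixed learning rate of 0.1, an MSE loss with regularization coefficient λ = 10^-5, 200 epochs, and a hidden size equal to twice the convolution window size. It is not the authors' CR2N implementation (that code is at https://github.com/IBM/cr2n); the `PlaceholderRuleModel`, the `stratified_splits` helper, and the L1 penalty standing in for the Eq. 7 regularizer are illustrative assumptions.

```python
# Minimal sketch of the reported training protocol, NOT the authors' CR2N code.
# Only the hyperparameters come from the paper's stated setup; the model and the
# regularization term below are placeholders.
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from sklearn.model_selection import train_test_split

LAMBDA = 1e-5      # regularization coefficient (Eq. 7 of the paper)
LR = 0.1           # fixed learning rate for Adam
EPOCHS = 200       # epochs per run
BATCH_SIZE = 100   # sequences per batch


def stratified_splits(X, y, seed=0):
    """Split arrays into 60% train / 20% validation / 20% test, stratified on y."""
    X_tr, X_tmp, y_tr, y_tmp = train_test_split(
        X, y, test_size=0.4, stratify=y, random_state=seed)
    X_val, X_te, y_val, y_te = train_test_split(
        X_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=seed)
    return (X_tr, y_tr), (X_val, y_val), (X_te, y_te)


class PlaceholderRuleModel(nn.Module):
    """Stand-in for the base rule model: the hidden size is set to twice the
    window size, i.e. twice the input size of the AND layer, as stated above."""

    def __init__(self, window_size):
        super().__init__()
        hidden = 2 * window_size
        self.net = nn.Sequential(
            nn.Linear(window_size, hidden),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)


def train(model, X_tr, y_tr):
    loader = DataLoader(
        TensorDataset(torch.as_tensor(X_tr, dtype=torch.float32),
                      torch.as_tensor(y_tr, dtype=torch.float32)),
        batch_size=BATCH_SIZE, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=LR)
    mse = nn.MSELoss()
    for _ in range(EPOCHS):
        for xb, yb in loader:
            optimizer.zero_grad()
            # MSE loss plus a lambda-weighted penalty; the paper's Eq. 7 defines its
            # own regularizer over the rule weights, approximated here by an L1 term.
            reg = sum(w.abs().sum() for w in model.parameters())
            loss = mse(model(xb), yb) + LAMBDA * reg
            loss.backward()
            optimizer.step()
    return model


if __name__ == "__main__":
    # Toy binary-classification data in place of the synthetic/peptides datasets.
    X = np.random.rand(500, 5).astype(np.float32)   # window_size = 5
    y = np.random.randint(0, 2, size=500)
    (X_tr, y_tr), (X_val, y_val), (X_te, y_te) = stratified_splits(X, y)
    train(PlaceholderRuleModel(window_size=5), X_tr, y_tr)
```

The validation and test splits returned by `stratified_splits` are left unused in this sketch, since the excerpt above does not specify how they are consumed during training.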