Building Concise Logical Patterns by Constraining Tsetlin Machine Clause Size
Authors: K. Darshana Abeyrathna, Ahmed A. O. Abouzeid, Bimal Bhattarai, Charul Giri, Sondre Glimsdal, Ole-Christoffer Granmo, Lei Jiao, Rupsa Saha, Jivitesh Sharma, Svein A. Tunheim, Xuan Zhang
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To evaluate CSC-TM, we conduct classification, clustering, and regression experiments on tabular data, natural language text, images, and board games. Our results show that CSC-TM maintains accuracy with up to 80 times fewer literals. |
| Researcher Affiliation | Collaboration | 1Centre for Artificial Intelligence Research (CAIR), University of Agder, Grimstad, Norway 2Norwegian Research Centre (NORCE), Grimstad, Norway 3DNV, Oslo, Norway |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found. Table 1 presents feedback rules, but it is not pseudocode. |
| Open Source Code | No | The paper refers to a general TM learning resource (https://cair.github.io/ijcai2023_clause_rationing.html) but does not explicitly state that the source code for the CSC-TM methodology described in this paper is released or provided. |
| Open Datasets | Yes | We evaluate CSC-TM on five NLP datasets: BBC sports [Greene and Cunningham, 2006], R8 [Debole and Sebastiani, 2005], TREC-6 [Chang et al., 2002], SemEval-2010 Semantic Relations [Hendrickx et al., 2009], and ACL Internet Movie Database (IMDb) [Maas et al., 2011]. ... We evaluate our approach on two image datasets: MNIST and CIFAR-2... We use the Energy Performance dataset to evaluate regression performance based on [Abeyrathna et al., 2020c]. |
| Dataset Splits | No | The paper mentions using well-known datasets like MNIST but does not explicitly provide the training/test/validation dataset splits (e.g., percentages or sample counts) needed for reproduction. |
| Hardware Specification | Yes | The experiments use a CUDA implementation of CSC-TM and run on an Intel Xeon Platinum 8168 CPU at 2.70 GHz and an Nvidia DGX-2 with Tesla V100 GPUs. |
| Software Dependencies | No | The paper mentions 'CUDA implementation' but does not provide specific version numbers for software dependencies or libraries. |
| Experiment Setup | Yes | For TM hyperparameters in BBC Sports, TREC, and R8, we use 8000 clauses, a voting margin T of 100, and specificity s of 10.0. For the MNIST experiments, we adopt 8000 clauses per class, a voting margin T of 10000, and specificity s of 5.0. For CIFAR-2, the number of clauses is 8000, T is 6000, and s is 10.0. |
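The paper's central idea is constraining the number of literals a Tsetlin Machine clause may include, so that the learned conjunctive patterns stay concise. As a rough illustration only (not the authors' CUDA implementation, which steers clause growth through learning feedback rather than truncation), the sketch below shows what evaluating a conjunctive clause over binarized input looks like, and what capping its literal count means; `clause_output` and `constrain_clause` are hypothetical helper names chosen for this example.

```python
def clause_output(clause_literals, x):
    """Evaluate a conjunctive clause over a boolean input vector x.

    clause_literals is a list of (feature_index, polarity) pairs; a clause
    fires only if every included literal holds (polarity False = negated).
    """
    return all(x[i] if pos else not x[i] for i, pos in clause_literals)


def constrain_clause(clause_literals, budget):
    """Cap the clause at `budget` literals.

    This crude truncation stands in for CSC-TM's clause-size constraint,
    which in the paper acts on the learning feedback, not on a learned clause.
    """
    return clause_literals[:budget]


# A clause requiring feature 0 to be True and feature 2 to be False:
clause = [(0, True), (2, False)]
sample = [True, False, False]
fires = clause_output(clause, sample)        # True: both literals hold

# With a budget of 1 literal, only the first literal survives:
short_clause = constrain_clause(clause, 1)   # [(0, True)]
```

The interpretability payoff reported in the table above (accuracy maintained with up to 80 times fewer literals) comes precisely from keeping clauses like `clause` short enough to read as logical rules.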