A Novel Approach To Implementing Knowledge Distillation In Tsetlin Machines

Abstract

The Tsetlin Machine (TM) is a propositional-logic-based model that uses conjunctive clauses to learn patterns from data. As with typical neural networks, the performance of a Tsetlin Machine depends largely on its parameter count, with a larger number of parameters producing higher accuracy but slower execution. Knowledge distillation in neural networks transfers information from an already-trained teacher model to a smaller student model to increase accuracy in the student without increasing execution time. We propose a novel approach to implementing knowledge distillation in Tsetlin Machines by utilizing the probability distributions of each output sample in the teacher to provide additional context to the student. Additionally, we propose a novel clause-transfer algorithm that weighs the importance of each clause in the teacher and initializes the student with only the most essential data. We find that our algorithm can significantly improve performance in the student model without negatively impacting latency in the tested domains of image recognition and text classification.
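The abstract does not give the distillation objective itself, but the idea of training a student against the teacher's per-sample probability distributions can be illustrated with a standard soft-target distillation loss. The sketch below is a generic, hypothetical illustration (function names, the temperature/alpha parameters, and the blending scheme are assumptions, not the paper's method):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw scores to a probability distribution, optionally softened
    by a temperature > 1 so that smaller differences become visible."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, hard_label,
                      temperature=2.0, alpha=0.5):
    """Hypothetical soft-target loss: blend cross-entropy on the true label
    with KL divergence from the softened teacher distribution, so the
    student also learns the teacher's relative confidence across classes."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = np.sum(p_teacher * (np.log(p_teacher + 1e-12)
                             - np.log(p_student + 1e-12)))
    hard = -np.log(softmax(student_logits)[hard_label] + 1e-12)
    # T^2 rescaling keeps gradient magnitudes comparable across temperatures
    return alpha * hard + (1 - alpha) * (temperature ** 2) * kl
```

In a TM setting, the "logits" would plausibly be the clause vote sums per class rather than neural-network activations; the point of the sketch is only that the teacher's full distribution carries more information per sample than the hard label alone.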

@article{kinateder2025_2504.01798,
  title={A Novel Approach To Implementing Knowledge Distillation In Tsetlin Machines},
  author={Calvin Kinateder},
  journal={arXiv preprint arXiv:2504.01798},
  year={2025}
}