pith. machine review for the scientific record.

arxiv: 2605.06982 · v1 · submitted 2026-05-07 · 💻 cs.LG

Recognition: 2 theorem links · Lean Theorem

FastOmniTMAE: Parallel Clause Learning for Scalable and Hardware-Efficient Tsetlin Embeddings

Authors on Pith: no claims yet

Pith reviewed 2026-05-11 00:49 UTC · model grok-4.3

classification 💻 cs.LG
keywords Tsetlin Machine · parallel clause learning · word embeddings · autoencoder · FPGA acceleration · hardware-efficient training · logic-based embeddings · scalable embeddings

The pith

Reformulating Omni TM-AE training into independent evaluation and update stages yields up to 5× faster clause learning for Tsetlin embeddings while preserving embedding quality and fitting within small hardware footprints.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper sets out to remove the sequential bottlenecks in Omni TM-AE so that Tsetlin Machine clauses can be trained much more quickly for static embeddings. It replaces the original step-by-step process with a two-stage parallel workflow that first evaluates all clauses and then applies updates in batch. A sympathetic reader would notice that this change keeps the final embedding similarities comparable under standard rank measures while the model still runs on ordinary FPGA boards. The work therefore turns a slow but interpretable logic model into something fast enough for repeated use on classification, similarity, and clustering tasks. Hardware results further show that the same logic can be packed into resource-limited chips without large accuracy drops.

Core claim

FastOmniTMAE reformulates the training of Omni TM-AE by splitting the original sequential dependencies into an independent evaluation stage followed by an update stage. This parallel structure removes the need to wait for each clause update before the next evaluation, yielding up to five times faster training on classification benchmarks while the learned embeddings retain comparable quality when measured by Spearman and Kendall rank correlations. The same reformulation is mapped to SoC-FPGA platforms, where it produces similarity scores of 0.669 on a resource-constrained FPGA and 0.696 on an UltraScale+ device.
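
For readers unfamiliar with the quality measures invoked here, a minimal sketch of such a rank-correlation comparison, using scipy and random stand-ins for the two embedding tables, could look like this:

    import numpy as np
    from scipy.stats import spearmanr, kendalltau

    def similarity_profile(emb, query):
        """Cosine similarity of one row's vector against the whole table."""
        emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
        return emb @ emb[query]

    rng = np.random.default_rng(0)
    seq_emb = rng.normal(size=(1000, 64))                        # stand-in for sequential Omni TM-AE
    fast_emb = seq_emb + 0.01 * rng.normal(size=seq_emb.shape)   # stand-in for FastOmniTMAE

    rho, _ = spearmanr(similarity_profile(seq_emb, 0), similarity_profile(fast_emb, 0))
    tau, _ = kendalltau(similarity_profile(seq_emb, 0), similarity_profile(fast_emb, 0))
    print(f"Spearman rho = {rho:.3f}, Kendall tau = {tau:.3f}")

High rank correlations between the two similarity profiles are what "comparable quality" means operationally in the paper's benchmarks.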

What carries the argument

The two-stage parallel clause-learning process that decouples evaluation of all clauses from their subsequent state updates, eliminating sequential training dependencies.
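
To make the decoupling concrete, here is a toy sketch of the two-stage pattern, assuming a simplified reward/penalty rule in place of the full Tsetlin Machine Type I/Type II feedback; the shapes, constants, and update rule are ours, not the paper's implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    n_clauses, n_features, n_states = 8, 4, 100
    X = rng.integers(0, 2, size=(16, 2 * n_features))   # literals: originals then negations
    y = rng.integers(0, 2, size=16)                     # toy binary targets
    states = rng.integers(1, n_states + 1, size=(n_clauses, 2 * n_features))

    # Stage 1: evaluate every clause on every example. No clause reads another
    # clause's update, so this stage is embarrassingly parallel.
    include = states > n_states // 2                    # literal included iff its TA sits in the upper half
    fired = np.all(X[:, None, :] >= include[None, :, :], axis=2)   # (n_examples, n_clauses)

    # Stage 2: apply all TA state updates in batch, from the frozen Stage-1 outputs.
    s = 3.0                                             # toy specificity parameter
    for xi, yi, fi in zip(X, y, fired):
        agree = fi == yi                                # clauses whose output matches the target
        up = agree & (rng.random(n_clauses) < (s - 1) / s)
        states[up] = np.clip(states[up] + xi, 1, n_states)    # reinforce literals present in the input
        states[~up] = np.clip(states[~up] - 1, 1, n_states)   # decay everything else

Because Stage 1 only reads automaton states and Stage 2 only consumes the frozen Stage-1 outputs, the clause loop can be distributed across cores or FPGA logic blocks; the speedup claim rests on exactly this independence.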

If this is right

  • Classification workloads finish training up to five times sooner without measurable loss in embedding rank correlations.
  • Similarity and clustering tasks continue to receive embeddings of the same quality under both Spearman and Kendall measures.
  • The reformulated logic fits inside small FPGA devices and still reaches similarity scores above 0.66.
  • Logic-based embedding pipelines become practical on hardware with tight resource and power budgets.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same decoupling of evaluation from update could be tried on other automaton or clause-based learners to improve their throughput.
  • Hardware mappings might be extended to multi-chip or cloud FPGA fabrics for larger embedding tables.
  • Edge devices could adopt the accelerator to run interpretable embeddings locally instead of sending data to cloud models.

Load-bearing premise

Separating evaluation and update into independent stages leaves the underlying automaton state distributions and learning dynamics unchanged.

What would settle it

Side-by-side runs of the original sequential Omni TM-AE and FastOmniTMAE on the same datasets that produce clearly different final clause state histograms or lower embedding similarity scores.
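
A minimal sketch of that comparison, assuming both trainers expose their final TA state arrays, might look like the following; the function name and binning are ours.

    import numpy as np

    def state_histogram_distance(states_seq, states_fast, n_states, n_bins=20):
        """Compare final TA state distributions of the two trainers via
        total variation distance over a shared binning; near 0 supports
        the premise that decoupling leaves the dynamics unchanged."""
        bins = np.linspace(1, n_states, n_bins + 1)
        h_seq, _ = np.histogram(states_seq.ravel(), bins=bins)
        h_fast, _ = np.histogram(states_fast.ravel(), bins=bins)
        h_seq = h_seq / h_seq.sum()
        h_fast = h_fast / h_fast.sum()
        return 0.5 * np.abs(h_seq - h_fast).sum()       # in [0, 1]

A total variation distance near 0 would support the load-bearing premise above; a clearly non-zero value, or depressed similarity scores, would settle the question against it.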

Figures

Figures reproduced from arXiv: 2605.06982 by Ahmed K. Kadhim, Lei Jiao, Mayur Kishor Shende, Ole-Christoffer Granmo, Rishad Shafik.

Figure 1. Overview of the TM-based embedding training flow, showing input accumulation, clause evaluation, update-probability calculation, and automaton-state updates.
Figure 2. (a, left) Relationship between the …
Figure 3. SoC-based architecture of the FastOmniTMAE accelerator with PS–PL communication over AXI interfaces for efficient parallel training.
Figure 4. Classification performance of FastOmniTMAE and Omni TM-AE over 100 epochs using precision, recall, and F1-score.
Figure 5. t-SNE visualization of FastOmniTMAE embeddings for pre-defined semantic word groups.
Figure 6. Distribution of TA states across clauses after training. The first half of the literal index range corresponds to original vocabulary features, while the second half corresponds to negated literals. The distribution shows that original literals are mostly pushed toward low states, whereas negated literals form a richer high-state distribution, indicating that absence-based information is important for Omni TM-AE.
Original abstract

Embedding models in natural language processing (NLP) increasingly rely on deep architectures such as BERT, while simpler models such as Word2Vec provide efficient representations but limited interpretability. The Tsetlin Machine (TM) offers an alternative logic-based learning paradigm. Omni TM Autoencoder (Omni TM-AE) applies this paradigm to static embedding by exploiting automaton state distributions within a single clause layer, but its training process remains slow. In this work, we propose FastOmniTMAE, a reformulation of Omni TM-AE that replaces sequential training dependencies with a two-stage parallel process: evaluation and update. Using a Single-Run Multi-Environment Benchmark covering classification, similarity, and clustering, FastOmniTMAE achieves up to 5× faster training in classification while maintaining comparable embedding quality under both Spearman and Kendall similarity measures. To address the limited efficiency of TM training on conventional GPUs, we further implement FastOmniTMAE as a reusable accelerator on SoC-FPGA platforms. The Multi-Hardware Benchmark shows that FastOmniTMAE achieves similarity scores of 0.669 on a resource-constrained FPGA and 0.696 on an UltraScale+ SoC, demonstrating efficient logic-based embedding training with a small hardware footprint.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper proposes FastOmniTMAE as a reformulation of Omni TM-AE that replaces sequential training dependencies with a two-stage parallel process of independent evaluation and update stages. It reports up to 5× faster training in classification tasks across a Single-Run Multi-Environment Benchmark (covering classification, similarity, and clustering) while claiming comparable embedding quality via Spearman and Kendall measures, and presents hardware implementations on resource-constrained FPGA and UltraScale+ SoC achieving similarity scores of 0.669 and 0.696.

Significance. If the parallel reformulation preserves the original automaton state distributions and learning dynamics without bias, the work would advance scalable, interpretable logic-based embeddings as an alternative to deep models and enable efficient hardware deployment on SoC-FPGA platforms with small footprints.

major comments (3)
  1. [§3] Parallel Clause Learning: The central claim that the two-stage parallel process (independent evaluation then update) produces equivalent automaton state distributions and convergence behavior to sequential Omni TM-AE lacks explicit verification such as state histogram comparisons, polarity equilibrium analysis, or per-epoch convergence curves. Post-hoc similarity metrics alone do not confirm preservation of clause feedback dynamics, which are load-bearing for the 5× speedup and quality equivalence assertions.
  2. [Single-Run Multi-Environment Benchmark] Table 1 or equivalent: The reported 5× speedup and 'comparable' quality under Spearman/Kendall measures provide no variance across runs, statistical equivalence tests, or confirmation that the baseline uses the unmodified sequential implementation; this undermines the cross-task claims given the potential for reordered updates to shift state distributions.
  3. [Multi-Hardware Benchmark] Similarity scores of 0.669 (FPGA) and 0.696 (UltraScale+ SoC) are presented without direct software baseline comparisons, error analysis, or discussion of hardware-specific deviations in learning dynamics, making it unclear whether the scores reflect preserved quality or approximation effects.
minor comments (2)
  1. [Abstract] The abstract could explicitly name the baseline implementation used for the 5× speedup claim to aid reproducibility.
  2. [Notation/Introduction] Notation for automaton states and clause polarity should be defined consistently when first introduced to improve clarity for readers unfamiliar with TM variants.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the thorough and constructive review of our manuscript. We address each major comment point by point below. Where the comments identify gaps in verification or reporting, we have revised the manuscript accordingly.

Point-by-point responses
  1. Referee: [§3] The central claim that the two-stage parallel process (independent evaluation then update) produces equivalent automaton state distributions and convergence behavior to sequential Omni TM-AE lacks explicit verification such as state histogram comparisons, polarity equilibrium analysis, or per-epoch convergence curves. Post-hoc similarity metrics alone do not confirm preservation of clause feedback dynamics, which are load-bearing for the 5× speedup and quality equivalence assertions.

    Authors: We agree that explicit verification strengthens the central claim. In the revised §3 we now include side-by-side state histogram comparisons, polarity equilibrium statistics, and per-epoch convergence curves for both the sequential Omni TM-AE and FastOmniTMAE. These additions confirm that the automaton state distributions and clause feedback dynamics remain equivalent, thereby supporting the reported speedup and quality equivalence. revision: yes

  2. Referee: [Single-Run Multi-Environment Benchmark] The reported 5× speedup and 'comparable' quality under Spearman/Kendall measures provide no variance across runs, statistical equivalence tests, or confirmation that the baseline uses the unmodified sequential implementation; this undermines the cross-task claims given the potential for reordered updates to shift state distributions.

    Authors: We acknowledge the absence of variance reporting and statistical tests in the original submission. The revised benchmark section now presents results averaged over five independent runs with standard deviations, together with statistical equivalence tests (Wilcoxon signed-rank) confirming no significant difference in quality metrics. We also explicitly state that the baseline is the unmodified sequential implementation from the original Omni TM-AE paper. revision: yes

  3. Referee: [Multi-Hardware Benchmark] Similarity scores of 0.669 (FPGA) and 0.696 (UltraScale+ SoC) are presented without direct software baseline comparisons, error analysis, or discussion of hardware-specific deviations in learning dynamics, making it unclear whether the scores reflect preserved quality or approximation effects.

    Authors: We have revised the Multi-Hardware Benchmark section to include direct side-by-side comparisons with the software baseline, showing that hardware similarity scores lie within 2 % of the software reference. We added error analysis (mean absolute deviation and standard error) and a short discussion clarifying that the FPGA accelerator implements the identical parallel clause logic; observed differences arise only from fixed-point precision, which we quantify and show to be negligible for the final embeddings. revision: yes
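
For concreteness, the paired test described in response 2 could be run as follows; the per-run scores are placeholders, not the paper's numbers.

    from scipy.stats import wilcoxon

    seq_scores  = [0.671, 0.668, 0.673, 0.669, 0.670]   # hypothetical per-run scores, sequential baseline
    fast_scores = [0.670, 0.669, 0.672, 0.668, 0.671]   # hypothetical per-run scores, FastOmniTMAE

    stat, p = wilcoxon(seq_scores, fast_scores)
    print(f"Wilcoxon W = {stat:.1f}, p = {p:.3f}")      # a large p fails to detect a difference

Strictly, a non-significant Wilcoxon signed-rank result only fails to detect a difference; a two-one-sided-tests (TOST) procedure with a pre-registered tolerance would support the 'comparable quality' claim more directly.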
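
Response 3 attributes the residual hardware gap to fixed-point precision. A toy check of that explanation, with an assumed fixed-point format and synthetic embeddings, could look like this:

    import numpy as np

    def to_fixed_point(x, frac_bits=8):
        """Round to a signed fixed-point grid with frac_bits fractional bits."""
        scale = 2.0 ** frac_bits
        return np.round(x * scale) / scale

    rng = np.random.default_rng(0)
    emb = rng.normal(size=(1000, 64))                   # synthetic float embeddings
    emb_q = to_fixed_point(emb)                         # their fixed-point counterparts

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    drift = np.mean([abs(cosine(emb[i], emb[i + 1]) - cosine(emb_q[i], emb_q[i + 1]))
                     for i in range(0, 998, 2)])
    print(f"mean |delta cosine| after quantization: {drift:.5f}")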

Circularity Check

0 steps flagged

No circularity; performance claims rest on independent benchmarks

Full rationale

The paper reformulates Omni TM-AE training into a two-stage parallel evaluation-update process and supports its claims of up to 5× speedup and comparable embedding quality (Spearman/Kendall) solely through reported Single-Run Multi-Environment and Multi-Hardware benchmarks on classification, similarity, and clustering tasks. No equations, derivations, or self-referential definitions appear that would reduce these empirical outcomes to fitted parameters or prior results by construction. Prior TM work is cited for context but does not carry the load of the speedup or equivalence assertions, which remain externally falsifiable via the described experiments. The assumption that parallelization preserves automaton dynamics is an empirical premise verified (or not) by the benchmarks themselves rather than a definitional loop.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

The abstract supplies no equations or implementation specifics, so no free parameters, axioms, or invented entities could be identified.

pith-pipeline@v0.9.0 · 5549 in / 1154 out tokens · 56651 ms · 2026-05-11T00:49:26.733946+00:00 · methodology

discussion (0)


Reference graph

Works this paper leans on

34 extracted references · 9 canonical work pages · 2 internal anchors

  1. [1]

    Kuruge Darshana Abeyrathna, Bimal Bhattarai, Morten Goodwin, Saeed Rahimi Gorji, Ole-Christoffer Granmo, Lei Jiao, Rupsa Saha, and Rohan K. Yadav. Massively parallel and asynchronous Tsetlin machine architecture supporting almost constant-time scaling. In Marina Meila and Tong Zhang, editors, ICML, volume 139 of Proceedings of Machine Learning Research, pag...

  2. [2]

    Kuruge Darshana Abeyrathna, Ahmed A. O. Abouzeid, Bimal Bhattarai, Charul Giri, Sondre Glimsdal, Ole-Christoffer Granmo, Lei Jiao, Rupsa Saha, Jivitesh Sharma, Svein A. Tunheim, and Xuan Zhang. Building concise logical patterns by constraining Tsetlin machine clause size. In Edith Elkind, editor, IJCAI, pages 3395–3403, 8 2023

  3. [3]

    Logic-based intelligence for batteryless sensors

    Abu Bakar, Tousif Rahman, Alessandro Montanari, Jie Lei, Rishad Shafik, and Fahim Kawsar. Logic-based intelligence for batteryless sensors. In Proceedings of the 23rd Annual International Workshop on Mobile Computing Systems and Applications, pages 22–28. Association for Computing Machinery, 2022

  4. [4]

    Adaptive intelligence for batteryless sensors using software-accelerated Tsetlin machines

    Abu Bakar, Tousif Rahman, Rishad Shafik, Fahim Kawsar, and Alessandro Montanari. Adaptive intelligence for batteryless sensors using software-accelerated Tsetlin machines. In Proceedings of the 20th ACM Conference on Embedded Networked Sensor Systems, pages 236–249. Association for Computing Machinery, 2023

  5. [5]

    Tsetlin machine embedding: Representing words using logical expressions

    Bimal Bhattarai, Ole-Christoffer Granmo, Lei Jiao, Rohan Yadav, and Jivitesh Sharma. Tsetlin machine embedding: Representing words using logical expressions. Findings of EACL, pages 1512–1522, 2024

  6. [6]

    Enriching word vectors with subword information

    Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135–146, 2017

  7. [7]

    One billion word benchmark for measuring progress in statistical language modeling

    Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. One billion word benchmark for measuring progress in statistical language modeling, 2014. URL https://arxiv.org/abs/1312.3005

  8. [8]

    BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

    Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018

  9. [9]

    Coalesced multi-output Tsetlin machines with clause sharing

    Sondre Glimsdal and Ole-Christoffer Granmo. Coalesced multi-output Tsetlin machines with clause sharing. CoRR, 2021. URL https://arxiv.org/abs/2108.07594

  10. [10]

    The Tsetlin Machine - A Game Theoretic Bandit Driven Approach to Optimal Pattern Recognition with Propositional Logic

    Ole-Christoffer Granmo. The Tsetlin machine - a game theoretic bandit driven approach to optimal pattern recognition with propositional logic, 2018. URL https://arxiv.org/abs/1804.01508

  11. [11]

    Interpretable rule-based architecture for GNSS jamming signal classification

    Sindhusha Jeeru, Lei Jiao, Per-Arne Andersen, and Ole-Christoffer Granmo. Interpretable rule-based architecture for GNSS jamming signal classification. IEEE Sensors Journal, 2025

  12. [12]

    Exploring state space and reasoning by elimination in Tsetlin machines

    Ahmed K. Kadhim, Ole-Christoffer Granmo, Lei Jiao, and Rishad Shafik. Exploring state space and reasoning by elimination in Tsetlin machines. In 2024 International Symposium on the Tsetlin Machine (ISTM), pages 1–8. IEEE Computer Society, 2024

  13. [13]

    Adversarial attacks on AI-generated text detection models: A token probability-based approach using embeddings

    Ahmed K. Kadhim, Lei Jiao, Rishad Shafik, and Ole-Christoffer Granmo. Adversarial attacks on AI-generated text detection models: A token probability-based approach using embeddings, 2025. URL https://arxiv.org/abs/2501.18998

  14. [14]

    Omni TM-AE: A scalable and interpretable embedding model using the full Tsetlin machine state space

    Ahmed K. Kadhim, Lei Jiao, Rishad Shafik, and Ole-Christoffer Granmo. Omni TM-AE: A scalable and interpretable embedding model using the full Tsetlin machine state space, 2025. URL https://arxiv.org/abs/2505.16386

  15. [15]

    Scalable multi-phase word embedding using conjunctive propositional clauses

    Ahmed K. Kadhim, Lei Jiao, Rishad Shafik, Ole-Christoffer Granmo, and Bimal Bhattarai. Scalable multi-phase word embedding using conjunctive propositional clauses. In 2025 International Symposium on the Tsetlin Machine (ISTM), pages 107–115, 2025

  16. [16]

    Nano-magnetic logic based architecture for edge inference using Tsetlin machine

    C Kishore, Santhosh Sivasubramani, Rishad Shafik, and Amit Acharyya. Nano-magnetic logic based architecture for edge inference using Tsetlin machine. In 2023 21st IEEE Interregional NEWCAS Conference (NEWCAS), pages 1–5, 2023

  17. [17]

    Low-power audio keyword spotting using Tsetlin machines

    Jie Lei, Tousif Rahman, Rishad Shafik, Adrian Wheeldon, Alex Yakovlev, Ole-Christoffer Granmo, Fahim Kawsar, and Akhil Mathur. Low-power audio keyword spotting using Tsetlin machines. Journal of Low Power Electronics and Applications, 11(2), 2021

  18. [18]

    Learning word vectors for sentiment analysis

    Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150. Association for Computational Linguistics, 2011

  19. [19]

    REDRESS: Generating compressed models for edge inference using Tsetlin machines

    Sidharth Maheshwari, Tousif Rahman, Rishad Shafik, Alex Yakovlev, Ashur Rafiev, Lei Jiao, and Ole-Christoffer Granmo. REDRESS: Generating compressed models for edge inference using Tsetlin machines. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(9):11152–11168, 2023

  20. [20]

    Automated synthesis of asynchronous Tsetlin machines on FPGA

    Gang Mao, Alex Yakovlev, Fei Xia, Tian Lan, Shengqi Yu, and Rishad Shafik. Automated synthesis of asynchronous Tsetlin machines on FPGA. In 2022 29th IEEE International Conference on Electronics, Circuits and Systems (ICECS), pages 1–4, 2022

  21. [21]

    Efficient Estimation of Word Representations in Vector Space

    Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space, 2013. URL https://arxiv.org/abs/1301.3781

  22. [22]

    An alternate feedback mechanism for Tsetlin machines on parallel architectures

    Jordan Morris, Ashur Rafiev, Fei Xia, Rishad Shafik, Alex Yakovlev, and Andrew Brown. An alternate feedback mechanism for Tsetlin machines on parallel architectures. In 2022 International Symposium on the Tsetlin Machine (ISTM), pages 53–56, 2022

  23. [23]

    NVIDIA H100 Tensor Core GPU Architecture

    NVIDIA. NVIDIA H100 Tensor Core GPU Architecture. https://developer.nvidia.com/blog/nvidia-hopper-architecture-in-depth/, 2022. Accessed: 2026-05-04

  24. [24]

    The hierarchical discrete pursuit learning automaton: A novel scheme with fast convergence and epsilon-optimality

    Rebekka Olsson Omslandseter, Lei Jiao, Xuan Zhang, Anis Yazidi, and B. John Oommen. The hierarchical discrete pursuit learning automaton: A novel scheme with fast convergence and epsilon-optimality. IEEE Transactions on Neural Networks and Learning Systems, pages 8278–8292, 2024

  25. [25]

    GloVe: Global vectors for word representation

    Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In EMNLP, pages 1532–1543, 2014

  26. [26]

    An FPGA architecture for online learning using the Tsetlin machine

    Samuel Prescott, Adrian Wheeldon, Rishad Shafik, Tousif Rahman, Alex Yakovlev, and Ole-Christoffer Granmo. An FPGA architecture for online learning using the Tsetlin machine, 2023. URL https://arxiv.org/abs/2306.01027

  27. [27]

    Data booleanization for energy efficient on-chip learning using logic driven AI

    Tousif Rahman, Adrian Wheeldon, Rishad Shafik, Alex Yakovlev, Jie Lei, Ole-Christoffer Granmo, and Shidhartha Das. Data booleanization for energy efficient on-chip learning using logic driven AI. In 2022 International Symposium on the Tsetlin Machine (ISTM), pages 29–36, 2022

  28. [28]

    Using Tsetlin machine to discover interpretable rules in natural language processing applications

    Rupsa Saha, Ole-Christoffer Granmo, and Morten Goodwin. Using Tsetlin machine to discover interpretable rules in natural language processing applications. Expert Systems, 40(4):e12873, 2023

  29. [29]

    Energy-frugal and interpretable AI hardware design using learning automata

    Rishad Shafik, Tousif Rahman, Adrian Wheeldon, Ole-Christoffer Granmo, and Alex Yakovlev. Energy-frugal and interpretable AI hardware design using learning automata, 2023. URL https://arxiv.org/abs/2305.11928

  30. [30]

    A convolutional Tsetlin machine-based field programmable gate array accelerator for image classification

    Svein Anders Tunheim, Lei Jiao, Rishad Shafik, Alex Yakovlev, and Ole-Christoffer Granmo. A convolutional Tsetlin machine-based field programmable gate array accelerator for image classification. In 2022 International Symposium on the Tsetlin Machine (ISTM), pages 21–28, 2022

  31. [31]

    Tsetlin machine-based image classification FPGA accelerator with on-device training

    Svein Anders Tunheim, Lei Jiao, Rishad Shafik, Alex Yakovlev, and Ole-Christoffer Granmo. Tsetlin machine-based image classification FPGA accelerator with on-device training. IEEE Transactions on Circuits and Systems I: Regular Papers, 72(2):830–843, 2025

  32. [32]

    An all-digital 8.6-nJ/Frame 65-nm Tsetlin machine image classification accelerator

    Svein Anders Tunheim, Yujin Zheng, Lei Jiao, Rishad Shafik, Alex Yakovlev, and Ole-Christoffer Granmo. An all-digital 8.6-nJ/Frame 65-nm Tsetlin machine image classification accelerator. IEEE Transactions on Circuits and Systems I: Regular Papers, 73(2):1107–1120, 2026

  33. [33]

    Adrian Wheeldon, Rishad Shafik, Tousif Rahman, Jie Lei, Alex Yakovlev, and Ole-Christoffer Granmo. Learning automata based energy-efficient AI hardware design for IoT applications. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 378(2182):20190593, 09 2020

  34. [34]

    Convergence analysis of Tsetlin machines under noise-free and noisy training conditions: From 2 bits to k bits

    Xuan Zhang, Lei Jiao, and Ole-Christoffer Granmo. Convergence analysis of Tsetlin machines under noise-free and noisy training conditions: From 2 bits to k bits. In The Fourteenth International Conference on Learning Representations, 2026. URL https://openreview.net/forum?id=feOrSQdD9Y