pith · machine review for the scientific record

arxiv: 2604.03345 · v1 · submitted 2026-04-03 · 💻 cs.LG

Recognition: no theorem link

Hardware-Oriented Inference Complexity of Kolmogorov-Arnold Networks

Authors on Pith: no claims yet

Pith reviewed 2026-05-13 20:26 UTC · model grok-4.3

classification 💻 cs.LG
keywords Kolmogorov-Arnold Networks · KAN · inference complexity · hardware metrics · real multiplications · bit operations · B-spline · Fourier KAN

The pith

Kolmogorov-Arnold Networks now have platform-independent formulas that count the real multiplications, bit operations, and additions and bit-shifts needed for hardware inference.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper derives formulas that compute the hardware inference complexity of KANs directly from network structure, expressed as counts of real multiplications (RM), bit operations (BOP), and additions and bit-shifts (NABS). The formulas cover the B-spline, Gaussian radial basis function, Chebyshev, and Fourier variants of KANs. The metrics replace GPU floating-point operation counts with hardware-oriented measures suited to dedicated accelerators in latency-sensitive settings such as optical communications, and they enable early-stage comparisons between KAN architectures and other networks without requiring full hardware synthesis.
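To make the structure-only counting concrete, here is a minimal sketch in Python. The per-edge cost models and default grid/order values are placeholder assumptions for illustration, not the paper's derived formulae (which this review does not reproduce); only the idea of tallying RM, BOP, and NABS from layer widths, basis type, grid size, and spline order is carried over.

```python
# Minimal sketch: structure-only operation counting for one KAN layer.
# The per-edge cost models below are illustrative placeholders, NOT the
# paper's derived formulae; only counting-from-structure is faithful.

def kan_layer_rm(n_in: int, n_out: int, variant: str,
                 grid_size: int = 5, order: int = 3) -> int:
    """Real multiplications (RM) for one KAN layer with n_in * n_out edges."""
    edges = n_in * n_out  # one learnable univariate function per edge
    if variant == "bspline":
        n_basis = grid_size + order  # B-spline basis functions per edge
        per_edge = order * (order + 1) + n_basis  # assumed: recursion + coefficient mults
    elif variant == "chebyshev":
        per_edge = 2 * order  # assumed: three-term recurrence + coefficient mults
    elif variant == "fourier":
        per_edge = 4 * grid_size  # assumed: sin/cos harmonics + coefficient mults
    else:
        raise ValueError(f"unknown variant: {variant}")
    return edges * per_edge

def bop_from_rm(rm: int, b_w: int, b_a: int) -> int:
    """Bit operations (BOP): a b_w x b_a fixed-point multiply costs
    roughly b_w * b_a bit operations."""
    return rm * b_w * b_a

def nabs_from_rm(rm: int, b_w: int) -> int:
    """Additions and bit-shifts (NABS) if each multiply is replaced by
    shift-and-add (assumed: about one shift-add pair per weight bit)."""
    return rm * b_w
```

For example, kan_layer_rm(16, 16, "bspline") returns 256 edges · 20 = 5120 multiplications under these placeholder constants; the point is that the count follows from structure alone, before any synthesis.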

Core claim

We derive generalized, platform-independent formulae for evaluating the hardware inference complexity of KANs in terms of Real Multiplications (RM), Bit Operations (BOP), and Number of Additions and Bit-Shifts (NABS). We extend our analysis across multiple KAN variants, including B-spline, Gaussian Radial Basis Function (GRBF), Chebyshev, and Fourier KANs. The proposed metrics can be computed directly from the network structure and enable a fair and straightforward inference complexity comparison between KAN and other neural network architectures.

What carries the argument

Generalized formulae for Real Multiplications (RM), Bit Operations (BOP), and Number of Additions and Bit-Shifts (NABS) that evaluate inference cost directly from network structure.

If this is right

  • The formulas support direct computation of complexity from network architecture alone.
  • They allow comparison across B-spline, GRBF, Chebyshev, and Fourier KAN variants without synthesis.
  • The metrics enable early architectural decisions for power-constrained accelerators.
  • They provide a common basis for comparing KANs against other neural network types.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • If the counts prove accurate on real chips, they could shorten design cycles for edge-deployed basis-function networks.
  • The same counting approach might apply to other spline or radial-basis architectures beyond the four variants examined.
  • Designers could combine these metrics with memory-access estimates to refine total power predictions (a toy energy roll-up is sketched below).
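A toy version of that last combination: a back-of-envelope energy roll-up with placeholder per-operation and per-access energies. Real values depend on process node and memory hierarchy; this is our extrapolation, not the paper's model.

```python
# Hypothetical energy roll-up combining arithmetic counts with memory traffic.
# All constants are placeholders; real values vary by process node and design.

E_MULT_PJ = 3.0   # assumed energy per real multiplication, picojoules
E_ADD_PJ = 0.1    # assumed energy per addition or bit-shift
E_SRAM_PJ = 5.0   # assumed energy per on-chip coefficient fetch

def layer_energy_pj(rm: int, nabs: int, coeff_fetches: int) -> float:
    """Rough per-inference energy estimate for one layer, in picojoules."""
    return rm * E_MULT_PJ + nabs * E_ADD_PJ + coeff_fetches * E_SRAM_PJ
```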

Load-bearing premise

Counts of real multiplications, bit operations, and additions derived only from network structure accurately predict real hardware resource use and latency.

What would settle it

A measured hardware latency or resource count on a specific accelerator for a KAN network that deviates substantially from the RM, BOP, and NABS values predicted by the formulas.
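As a sketch, the test reduces to a ratio check once prediction and measurement are expressed in the same unit (e.g., multiplier count or DSP-slice equivalents); the factor-of-two tolerance below is our arbitrary stand-in for "deviates substantially".

```python
def prediction_holds(predicted: float, measured: float,
                     tolerance: float = 2.0) -> bool:
    """True if the measured hardware cost stays within an assumed factor
    of the structure-derived prediction (tolerance is a placeholder)."""
    ratio = measured / predicted
    return 1.0 / tolerance <= ratio <= tolerance
```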

Figures

Figures reproduced from arXiv: 2604.03345 by Bilal Khalid, Jaroslaw E. Prilepsky, Pedro Freire, Sergei K. Turitsyn.

Figure 1. A KAN layer with two input and three output nodes. Unlike MLPs
Figure 2. Basis functions commonly used in KANs: (a) B-splines; (b) Gaussian radial basis functions (GRBF); (c) Chebyshev polynomials; (d) Fourier basis.
Figure 3. Comparison of hardware inference complexity for MLP and KAN variants using architecture
Figure 4. Complexity scaling with network width for architecture
Figure 5. Iso-complexity analysis showing the required hidden layer width
Original abstract

Kolmogorov-Arnold Networks (KANs) have recently emerged as a powerful architecture for various machine learning applications. However, their unique structure raises significant concerns regarding their computational overhead. Existing studies primarily evaluate KAN complexity in terms of Floating-Point Operations (FLOPs) required for GPU-based training and inference. However, in many latency-sensitive and power-constrained deployment scenarios, such as neural network-driven non-linearity mitigation in optical communications or channel state estimation in wireless communications, training is performed offline and dedicated hardware accelerators are preferred over GPUs for inference. Recent hardware implementation studies report KAN complexity using platform-specific resource consumption metrics, such as Look-Up Tables, Flip-Flops, and Block RAMs. However, these metrics require a full hardware design and synthesis stage that limits their utility for early-stage architectural decisions and cross-platform comparisons. To address this, we derive generalized, platform-independent formulae for evaluating the hardware inference complexity of KANs in terms of Real Multiplications (RM), Bit Operations (BOP), and Number of Additions and Bit-Shifts (NABS). We extend our analysis across multiple KAN variants, including B-spline, Gaussian Radial Basis Function (GRBF), Chebyshev, and Fourier KANs. The proposed metrics can be computed directly from the network structure and enable a fair and straightforward inference complexity comparison between KAN and other neural network architectures.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

1 major / 0 minor

Summary. The paper claims to derive generalized, platform-independent formulae for the hardware inference complexity of Kolmogorov-Arnold Networks (KANs) and variants (B-spline, GRBF, Chebyshev, Fourier) in terms of Real Multiplications (RM), Bit Operations (BOP), and Number of Additions and Bit-Shifts (NABS). These are computed directly from network structure parameters such as layer widths, spline order, and basis type to support early-stage architectural decisions and cross-platform comparisons without requiring full hardware synthesis.

Significance. If the formulae prove complete and accurate, they would offer a lightweight, reproducible tool for comparing KAN inference costs against other architectures in power-constrained settings such as optical nonlinearity mitigation and wireless channel estimation, where offline training and dedicated accelerators are used.

major comments (1)
  1. Abstract: the claim that RM/BOP/NABS counts derived solely from network structure accurately predict hardware resource use and latency is load-bearing for the central contribution, yet the derivations treat each basis evaluation as a fixed sequence of arithmetic operations while omitting memory access patterns, BRAM/ROM coefficient storage costs, and routing overhead for variable grid sizes; these factors are not shown to be negligible and directly affect the hardware metrics the paper seeks to estimate.

Simulated Author's Rebuttal

1 response · 0 unresolved

We thank the referee for the constructive feedback. We address the single major comment below and have revised the manuscript to clarify the intended scope of the proposed metrics.

Point-by-point responses
  1. Referee: Abstract: the claim that RM/BOP/NABS counts derived solely from network structure accurately predict hardware resource use and latency is load-bearing for the central contribution, yet the derivations treat each basis evaluation as a fixed sequence of arithmetic operations while omitting memory access patterns, BRAM/ROM coefficient storage costs, and routing overhead for variable grid sizes; these factors are not shown to be negligible and directly affect the hardware metrics the paper seeks to estimate.

    Authors: We agree that the original abstract wording could be read as implying that RM/BOP/NABS counts alone fully predict hardware resource consumption and latency. Our derivations intentionally count only the arithmetic operations (real multiplications, bit operations, additions, and shifts) required by each basis-function evaluation, treating these as fixed sequences derived from network structure parameters. Memory access patterns, BRAM/ROM storage for coefficients, and routing overhead for variable grid sizes are omitted because they are platform-dependent and cannot be expressed in a general, structure-only formula. We do not claim these arithmetic counts are sufficient to predict total resource use or latency; they are presented as a lightweight, reproducible proxy for early-stage architectural comparison, analogous to the use of FLOPs in software-oriented complexity analysis. To correct the overstatement, we have revised the abstract to state that the formulae estimate arithmetic-operation complexity for inference. We have also added a new limitations paragraph in the discussion section that explicitly lists the omitted factors, notes that they are not shown to be negligible, and recommends full hardware synthesis for precise resource and latency figures. These changes preserve the core contribution while setting appropriate expectations. revision: yes

Circularity Check

0 steps flagged

No circularity: RM/BOP/NABS counts derived directly from explicit architecture parameters

full rationale

The paper presents generalized formulae for Real Multiplications (RM), Bit Operations (BOP), and Number of Additions and Bit-Shifts (NABS) computed directly from network structure parameters such as layer widths, spline order, and basis type (B-spline, GRBF, Chebyshev, Fourier). These are explicit arithmetic operation counts extended across KAN variants, with no evidence of self-definitional loops, fitted inputs renamed as predictions, or load-bearing self-citations that reduce the central claims to their own inputs. The derivation remains self-contained against the stated network parameters and does not invoke uniqueness theorems or ansatzes from prior author work to force the result.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

Based on abstract only; no free parameters, invented entities, or non-standard axioms are described. The work assumes standard definitions of hardware operation costs that can be tallied from network topology.

axioms (1)
  • standard math Hardware operation costs (real multiplications, bit operations, additions, bit-shifts) are countable directly from network width, depth, and basis order.
    The paper treats these counts as the primary complexity measure without additional calibration constants; a contrasting MLP tally is sketched below.
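For contrast, applying the same axiom to a dense MLP layer gives the textbook tally, which is what makes the cross-architecture comparison straightforward; a one-line sketch:

```python
def mlp_layer_rm(n_in: int, n_out: int) -> int:
    """Real multiplications for a dense MLP layer: one per weight.
    Bias adds count toward additions, not multiplications."""
    return n_in * n_out
```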

pith-pipeline@v0.9.0 · 5554 in / 1167 out tokens · 36362 ms · 2026-05-13T20:26:04.678464+00:00 · methodology

