pith. machine review for the scientific record

arxiv: 2601.13041 · v2 · submitted 2026-01-19 · 💻 cs.CR

Recognition: 2 theorem links · Lean Theorem

High-Throughput and Scalable Secure Inference Protocols for Deep Learning with Packed Secret Sharing

Authors on Pith · no claims yet

Pith reviewed 2026-05-16 13:41 UTC · model grok-4.3

classification 💻 cs.CR
keywords: packed secret sharing · secure inference · multi-party computation · neural network inference · Shamir secret sharing · communication efficiency · deep learning · MPC protocols

The pith

Packed Shamir secret sharing enables parallel secure inference for deep neural networks with major cuts in communication.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper presents MPC protocols for neural network inference that use packed Shamir secret sharing to compute many values at once inside each share. This targets the honest-majority semi-honest model and directly tackles the high communication costs that slow down deeper networks over wide-area links. The authors introduce vector-matrix multiplication-friendly random share tuples for efficient linear layers and a filter packing method for convolutions, then lift all non-linear operations to the packed setting. If the constructions hold, inference on models such as VGG16 becomes feasible with far less data transfer and shorter runtimes than earlier Shamir-based schemes.
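To make the packing concrete, below is a minimal plaintext sketch of packed Shamir secret sharing (PSS) over a prime field: k secrets sit at public positions on one degree-d polynomial, d = t + k - 1, so a single share per party carries k values at once. The field, party count, and positions are illustrative choices for exposition, not the authors' implementation.

    # Packed Shamir secret sharing, plaintext sketch (illustrative parameters).
    import random

    P = 2**61 - 1  # a Mersenne prime field, matching the setting of Liu et al.

    def interpolate(points, x):
        """Lagrange-interpolate through `points` and evaluate at x, mod P."""
        total = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * ((x - xj) % P) % P
                    den = den * ((xi - xj) % P) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
        return total

    def pss_share(secrets, n, t):
        """Pack len(secrets) values into one degree-d sharing, d = t + k - 1.
        Secret j sits at public position P - (j + 1); party i holds f(i + 1)."""
        k = len(secrets)
        d = t + k - 1
        points = [(P - (j + 1), s) for j, s in enumerate(secrets)]
        # d + 1 - k extra random points fix the rest of f and hide the secrets
        points += [(n + 1 + j, random.randrange(P)) for j in range(d + 1 - k)]
        return [interpolate(points, i + 1) for i in range(n)]

    def pss_open(shares, k, d):
        """Reconstruct all k packed secrets from any d + 1 shares."""
        pts = [(i + 1, y) for i, y in enumerate(shares[: d + 1])]
        return [interpolate(pts, P - (j + 1)) for j in range(k)]

    n, t, k = 9, 2, 3          # 9 parties, 2 corruptions, 3 secrets per share
    secrets = [10, 20, 30]
    shares = pss_share(secrets, n, t)
    assert pss_open(shares, k, d=t + k - 1) == secrets

One share per party thus carries k secrets at once, which is the lever every protocol in the paper pulls.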

Core claim

By defining vector-matrix multiplication-friendly random share tuples and applying filter packing inside packed Shamir secret sharing, the protocols perform parallel linear and non-linear operations across neural-network layers while preserving correctness and security, yielding up to 5.85x less offline communication, 11.17x less online communication, and 1.75x faster total runtime than non-packed approaches.

What carries the argument

Packed Shamir secret sharing equipped with vector-matrix multiplication-friendly random share tuples and filter packing, which packs multiple independent secrets into single shares so that matrix-vector products and convolutions execute in parallel.
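Continuing the toy setup above, the parallelism is visible in one line: multiplying packed shares pointwise yields a degree-2d packed sharing of the element-wise product, so k secret multiplications cost a single local multiplication. The degree reduction back to d that must follow (the job of the paper's random share tuples and resharing) is omitted from this sketch.

    # Continuing the PSS sketch: one local multiply computes k products at once.
    xs, ys = [2, 3, 4], [5, 6, 7]
    shx = pss_share(xs, n, t)
    shy = pss_share(ys, n, t)
    shz = [a * b % P for a, b in zip(shx, shy)]  # local, zero communication
    d2 = 2 * (t + k - 1)                         # product polynomial degree
    assert pss_open(shz, k, d=d2) == [a * b % P for a, b in zip(xs, ys)]

Note the cost of the higher degree: opening now needs 2d + 1 shares, which is why n = 9 parties were chosen above and why degree reduction is load-bearing.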

If this is right

  • Offline communication volume falls by up to 5.85x.
  • Online communication volume falls by up to 11.17x.
  • Total end-to-end runtime improves by up to 1.75x.
  • Deeper networks remain practical under wide-area network latency.
  • The participant limit rises beyond the four-party bound common in prior protocols.

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the authors make directly.

  • The same packing primitives could be ported to other linear secret-sharing schemes to raise throughput in additional MPC tasks.
  • Reduced per-layer communication may allow secure inference on battery-powered or bandwidth-limited devices.
  • Parallel non-linear layers suggest a route to efficient secure training protocols built on the same foundation.
  • Lower overhead could accelerate deployment of privacy-preserving inference in regulated domains.

Load-bearing premise

The newly defined random share tuples and filter packing must keep every linear and non-linear operation both correct and secure against semi-honest adversaries.

What would settle it

Direct measurement of communication volume and wall-clock time for VGG16 inference using the packed protocol versus a standard Shamir protocol, run with the same number of parties over a wide-area network emulation.
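As a hedged illustration of what that measurement would quantify, the toy model below counts bytes per party under the simplifying assumption that each secure multiplication costs one field element per party and that packing amortizes this perfectly by the factor k; the per-layer multiplication counts are hypothetical stand-ins, not figures from the paper.

    # Toy communication accounting: packed vs. standard Shamir (assumed costs).
    FIELD_BYTES = 8        # one element of a 61-bit Mersenne prime field
    k = 3                  # assumed packing factor

    layer_mults = [86_704_128, 462_422_016, 924_844_032]  # hypothetical layers

    def traffic_bytes(mults, pack):
        """Bytes sent per party, at an assumed 1 element per (packed) mult."""
        return sum((m + pack - 1) // pack for m in mults) * FIELD_BYTES

    plain = traffic_bytes(layer_mults, pack=1)
    packed = traffic_bytes(layer_mults, pack=k)
    print(f"standard: {plain / 1e9:.2f} GB/party, packed: {packed / 1e9:.2f} GB/party")

A real experiment would replace the assumed per-multiplication cost with measured protocol traffic and add wall-clock timing under WAN emulation.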

Figures

Figures reproduced from arXiv: 2601.13041 by Qinghui Zhang, Xiaojun Chen, Xudong Chen, Yansong Zhang.

Figure 1. A toy example. Boxes with the same color represent a packed secret sharing. view at source ↗
Figure 2. Convolution process. view at source ↗
Figure 3. A toy example. Boxes with the same color represent a packed secret sharing. view at source ↗
Figure 4. A sample example: q_i / 2^(ℓ_x) = r'_i for i ∈ [3], with q_0 = r_0 + r_1 + r_2, q_1 = r_3 + r_4 + r_5, q_2 = r_6 + r_7 + r_8. view at source ↗
Figure 5. A toy example where k = 3. Each pixel of the input tensor is multiplied by each filter; y_1, y_2, y_3 are packed into one packed secret share, and three copies of x_1 into another. view at source ↗
Figure 6. An 8-input prefix multiplication; each gate is a multiplication that can be securely implemented by F_Pmult-DN (see the sketch after this list). view at source ↗
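Figure 6's network can be read as a standard log-depth scan: in round r, position i picks up the product of the block ending at i - 2^(r-1), so 8 inputs finish in 3 rounds. The plaintext sketch below shows that wiring; whether the paper's gate layout matches it exactly is not determinable from the caption alone, and each multiplication here stands in for one secure multiplication (F_Pmult-DN in the paper's notation).

    # Log-depth prefix multiplication, plaintext sketch of Figure 6's structure.
    def prefix_products(xs, p):
        ys = list(xs)
        step = 1
        while step < len(ys):
            # One parallel round: all gates at this depth multiply concurrently.
            # Descend so every read sees the previous round's value, not this round's.
            for i in range(len(ys) - 1, step - 1, -1):
                ys[i] = ys[i] * ys[i - step] % p
            step *= 2
        return ys

    p = 2**61 - 1
    xs = [3, 1, 4, 1, 5, 9, 2, 6]
    out = prefix_products(xs, p)
    acc = 1
    for x, y in zip(xs, out):
        acc = acc * x % p
        assert y == acc  # out[i] is the prefix product x0 * ... * xi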
read the original abstract

Most existing secure neural network inference protocols based on secure multi-party computation (MPC) typically support at most four participants, demonstrating severely limited scalability. Liu et al. (USENIX Security'24) presented the first relatively practical approach by utilizing Shamir secret sharing with Mersenne prime fields. However, when processing deeper neural networks such as VGG16, their protocols incur substantial communication overhead, resulting in particularly significant latency in wide-area network (WAN) environments. In this paper, we propose a high-throughput and scalable MPC protocol for neural network inference against semi-honest adversaries in the honest-majority setting. The core of our approach lies in leveraging packed Shamir secret sharing (PSS) to enable parallel computation and reduce communication complexity. The main contributions are three-fold: i) We present a communication-efficient protocol for vector-matrix multiplication, based on our newly defined notion of vector-matrix multiplication-friendly random share tuples. ii) We design the filter packing approach that enables parallel convolution. iii) We further extend all non-linear protocols based on Shamir secret sharing to the PSS-based protocols for achieving parallel non-linear operations. Extensive experiments across various datasets and neural networks demonstrate the superiority of our approach in WAN. Compared to Liu et al. (USENIX Security'24), our scheme reduces the communication upto 5.85x, 11.17x, and 6.83x in offline, online and total communication overhead, respectively. In addition, our scheme is upto 1.59x, 2.61x, and 1.75x faster in offline, online and total running time, respectively.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, and this is the friction.

Referee Report

2 major / 1 minor

Summary. The paper presents a high-throughput MPC protocol for secure neural network inference in the honest-majority semi-honest model, based on packed Shamir secret sharing. It introduces vector-matrix multiplication-friendly random share tuples for efficient linear operations, a filter packing technique for parallel convolutions, and extensions of non-linear protocols to the packed setting. Experiments on datasets and networks including VGG16 report communication reductions of up to 5.85x (offline), 11.17x (online), and 6.83x (total) and runtime speedups of up to 1.59x (offline), 2.61x (online), and 1.75x (total) relative to Liu et al. (USENIX Security'24).

Significance. If the correctness and security of the new primitives hold, the work could meaningfully advance practical secure inference for deeper networks in WAN environments by exploiting parallelism in packed secret sharing. The concrete experimental gains are a strength, as is the focus on communication efficiency for larger models; however, the absence of full proofs and raw data limits immediate impact assessment.

major comments (2)
  1. [§4–§6] Security proofs: the manuscript provides protocol sketches and claims standard simulation-based security for the vector-matrix multiplication-friendly random share tuples and filter packing, but does not include complete proofs or hybrid arguments; this is load-bearing because the claimed communication and runtime improvements rest directly on these primitives preserving both correctness and security under packing.
  2. [Experimental Evaluation] Experimental section: concrete speed-up numbers (e.g., 5.85x offline communication) are reported without raw data, implementation source, or parameter settings, preventing independent verification of the link between the new primitives and measured gains.
minor comments (1)
  1. [Abstract] Abstract: 'upto' should be 'up to' (occurs three times).

Simulated Author's Rebuttal

2 responses · 0 unresolved

We thank the referee for the constructive feedback. The comments correctly identify areas where additional rigor and transparency will strengthen the manuscript. We address each major comment below and will revise accordingly.

read point-by-point responses
  1. Referee: [§4–§6] Security proofs: the manuscript provides protocol sketches and claims standard simulation-based security for the vector-matrix multiplication-friendly random share tuples and filter packing, but does not include complete proofs or hybrid arguments; this is load-bearing because the claimed communication and runtime improvements rest directly on these primitives preserving both correctness and security under packing.

    Authors: We agree that the current version contains protocol descriptions and high-level security arguments rather than complete formal proofs. This is a substantive point, as the security of the packed primitives underpins the efficiency claims. In the revised manuscript we will supply full simulation-based security proofs, including explicit hybrid arguments, for the vector-matrix multiplication-friendly random share tuples and the filter packing technique, establishing both correctness and security in the honest-majority semi-honest model. revision: yes

  2. Referee: [Experimental Evaluation] Experimental section: concrete speed-up numbers (e.g., 5.85x offline communication) are reported without raw data, implementation source, or parameter settings, preventing independent verification of the link between the new primitives and measured gains.

    Authors: We concur that greater experimental transparency is needed for independent verification. In the revision we will (i) release the implementation source code, (ii) document all parameter settings (field size, packing parameters, network topology, hardware), and (iii) include an appendix with raw communication and runtime measurements for the VGG16 and other reported benchmarks, allowing direct reproduction of the stated speed-ups. revision: yes

Circularity Check

0 steps flagged

No significant circularity in derivation chain

full rationale

The paper defines new primitives (vector-matrix multiplication-friendly random share tuples, filter packing) and extends Shamir-based protocols to packed secret sharing, with security arguments following standard simulation-based reasoning in the honest-majority semi-honest model. Performance claims are presented as experimental measurements against an external prior work (Liu et al., USENIX Security'24), not as quantities derived from fitted parameters or self-referential equations. No load-bearing self-citations, self-definitional steps, or reductions of predictions to inputs by construction appear in the provided text.

Axiom & Free-Parameter Ledger

0 free parameters · 1 axiom · 0 invented entities

The central claim rests on standard MPC assumptions and newly introduced protocol primitives; no free parameters or invented entities are introduced.

axioms (1)
  • domain assumption: semi-honest adversaries in the honest-majority setting
    Explicitly stated in the abstract as the threat model for the protocol.

pith-pipeline@v0.9.0 · 5605 in / 1197 out tokens · 44495 ms · 2026-05-16T13:41:29.262325+00:00 · methodology

discussion (0)


Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

48 extracted references · 48 canonical work pages · 1 internal anchor

  1. [1] M. Franklin and M. Yung, "Communication complexity of secure computation," in Proceedings of the Twenty-Fourth Annual ACM Symposium on Theory of Computing, 1992, pp. 699–710.
  2. [2] A. Shamir, "How to share a secret," Communications of the ACM, vol. 22, no. 11, pp. 612–613, 1979.
  3. [3] V. Goyal, A. Polychroniadou, and Y. Song, "Sharing transformation and dishonest majority MPC with packed secret sharing," in Annual International Cryptology Conference. Springer, 2022, pp. 3–32.
  4. [4] I. Damgård and J. B. Nielsen, "Scalable and unconditionally secure multiparty computation," in Proceedings of the 27th Annual International Cryptology Conference on Advances in Cryptology (CRYPTO '07). Berlin, Heidelberg: Springer-Verlag, 2007, pp. 572–590.
  5. [5] F. Liu, X. Xie, and Y. Yu, "Scalable multi-party computation protocols for machine learning in the honest-majority setting," in Proceedings of the 33rd USENIX Security Symposium (USENIX Security 24), 2024, pp. 1939–1956.
  6. [6] C. Juvekar, V. Vaikuntanathan, and A. Chandrakasan, "GAZELLE: A low latency framework for secure neural network inference," in 27th USENIX Security Symposium (USENIX Security 18), 2018, pp. 1651–1669.
  7. [7] Q. Pang, J. Zhu, H. Möllering, W. Zheng, and T. Schneider, "Bolt: Privacy-preserving, accurate and efficient inference for transformers," in 2024 IEEE Symposium on Security and Privacy (SP). IEEE, 2024, pp. 4753–4771.
  8. [8] V. Goyal, A. Polychroniadou, and Y. Song, "Unconditional communication-efficient MPC via Hall's marriage theorem," in Annual International Cryptology Conference. Springer, 2021, pp. 275–304.
  9. [9] P. Mohassel and P. Rindal, "ABY3: A mixed protocol framework for machine learning," in Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, 2018, pp. 35–52.
  10. [10] H. Zhou, "Private neural network training with packed secret sharing," in International Computing and Combinatorics Conference. Springer, 2024, pp. 66–77.
  11. [11] Y. Zhang, X. Chen, Q. Zhang, Y. Dong, and X. Chen, "Helix: Scalable multi-party machine learning inference against malicious adversaries," Cryptology ePrint Archive, 2025.
  12. [12] S. Wagh, D. Gupta, and N. Chandran, "SecureNN: 3-party secure computation for neural network training," Proceedings on Privacy Enhancing Technologies, 2019.
  13. [13] A. C. Yao, "Protocols for secure computations," in 23rd Annual Symposium on Foundations of Computer Science (SFCS 1982). IEEE, 1982, pp. 160–164.
  14. [14] S. Micali, O. Goldreich, and A. Wigderson, "How to play any mental game," in Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing (STOC). ACM, 1987, pp. 218–229.
  15. [15] D. Chaum, C. Crépeau, and I. Damgård, "Multiparty unconditionally secure protocols," in Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing, 1988, pp. 11–19.
  16. [16] M. Ben-Or, S. Goldwasser, and A. Wigderson, "Completeness theorems for non-cryptographic fault-tolerant distributed computation," in Providing Sound Foundations for Cryptography: On the Work of Shafi Goldwasser and Silvio Micali, 2019, pp. 351–371.
  17. [17] P. Mohassel and Y. Zhang, "SecureML: A system for scalable privacy-preserving machine learning," in 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017, pp. 19–38.
  18. [18] J. Liu, M. Juuti, Y. Lu, and N. Asokan, "Oblivious neural network predictions via MiniONN transformations," in Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017, pp. 619–631.
  19. [19] P. Mishra, R. Lehmkuhl, A. Srinivasan, W. Zheng, and R. A. Popa, "Delphi: A cryptographic inference system for neural networks," in Proceedings of the 2020 Workshop on Privacy-Preserving Machine Learning in Practice, 2020, pp. 27–30.
  20. [20] D. Demmler, T. Schneider, and M. Zohner, "ABY: A framework for efficient mixed-protocol secure two-party computation," in Network and Distributed System Security Symposium (NDSS), 2015.
  21. [21] A. Patra, T. Schneider, A. Suresh, and H. Yalame, "ABY2.0: Improved mixed-protocol secure two-party computation," in 30th USENIX Security Symposium (USENIX Security 21), 2021, pp. 2165–2182.
  22. [22] M. S. Riazi, C. Weinert, O. Tkachenko, E. M. Songhori, T. Schneider, and F. Koushanfar, "Chameleon: A hybrid secure computation framework for machine learning applications," in Proceedings of the 2018 Asia Conference on Computer and Communications Security, 2018, pp. 707–721.
  23. [23] N. Chandran, D. Gupta, A. Rastogi, R. Sharma, and S. Tripathi, "EzPC: Programmable and efficient secure two-party computation for machine learning," in 2019 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2019, pp. 496–511.
  24. [24] D. Rathee, M. Rathee, N. Kumar, N. Chandran, D. Gupta, A. Rastogi, and R. Sharma, "CrypTFlow2: Practical 2-party secure inference," in Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, 2020, pp. 325–342.
  25. [25] R. Lehmkuhl, P. Mishra, A. Srinivasan, and R. A. Popa, "Muse: Secure inference resilient to malicious clients," in 30th USENIX Security Symposium (USENIX Security 21), 2021, pp. 2201–2218.
  26. [26] G. Xu, X. Han, T. Zhang, S. Xu, J. Ning, X. Huang, H. Li, and R. H. Deng, "SIMC 2.0: Improved secure ML inference against malicious clients," IEEE Transactions on Dependable and Secure Computing, vol. 21, no. 4, pp. 1708–1723, 2023.
  27. [27] S. Wagh, S. Tople, F. Benhamouda, E. Kushilevitz, P. Mittal, and T. Rabin, "Falcon: Honest-majority maliciously secure framework for private deep learning," Proceedings on Privacy Enhancing Technologies, vol. 2021, pp. 188–208, 2020.
  28. [28] A. Patra and A. Suresh, "BLAZE: Blazing fast privacy-preserving machine learning," arXiv preprint arXiv:2005.09042, 2020.
  29. [29] Y. Li, Y. Duan, Z. Huang, C. Hong, C. Zhang, and Y. Song, "Efficient 3PC for binary circuits with application to maliciously-secure DNN inference," in 32nd USENIX Security Symposium (USENIX Security 23), 2023, pp. 5377–5394.
  30. [30] N. Koti, M. Pancholi, A. Patra, and A. Suresh, "SWIFT: Super-fast and robust privacy-preserving machine learning," in 30th USENIX Security Symposium (USENIX Security 21), 2021, pp. 2651–2668.
  31. [31] M. Byali, H. Chaudhari, A. Patra, and A. Suresh, "FLASH: Fast and robust framework for privacy-preserving machine learning," Proceedings on Privacy Enhancing Technologies, vol. 2020, pp. 459–480, 2020.
  32. [32] N. Koti, A. Patra, R. Rachuri, and A. Suresh, "Tetrad: Actively secure 4PC for secure training and inference," arXiv preprint arXiv:2106.02850, 2021.
  33. [33] H. Chaudhari, R. Rachuri, and A. Suresh, "Trident: Efficient 4PC framework for privacy preserving machine learning," arXiv preprint arXiv:1912.02631, 2019.
  34. [34] A. Dalskov, D. Escudero, and M. Keller, "Fantastic four: Honest-majority four-party secure computation with malicious security," in 30th USENIX Security Symposium (USENIX Security 21), 2021, pp. 2183–2200.
  35. [35] I. Damgård, M. Fitzi, E. Kiltz, J. B. Nielsen, and T. Toft, "Unconditionally secure constant-rounds multi-party computation for equality, comparison, bits and exponentiation," in Theory of Cryptography Conference. Springer, 2006, pp. 285–304.
  36. [36] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
  37. [37] A. Krizhevsky, G. Hinton et al., "Learning multiple layers of features from tiny images," 2009.
  38. [38] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, vol. 25, 2012.
  39. [39] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
  40. [40] S. Balla, "SecONNds: Secure outsourced neural network inference on ImageNet," arXiv preprint arXiv:2506.11586, 2025.
  41. [41] Z. Huang, W.-j. Lu, C. Hong, and J. Ding, "Cheetah: Lean and fast secure two-party deep neural network inference," in 31st USENIX Security Symposium (USENIX Security 22), 2022, pp. 809–826.
  42. [42] J. Feng, Y. Wu, H. Sun, S. Zhang, and D. Liu, "Panther: Practical secure 2-party neural network inference," IEEE Transactions on Information Forensics and Security, 2025.
  43. [43] J. Zhang, X. Yang, L. He, K. Chen, W.-j. Lu, Y. Wang, X. Hou, J. Liu, K. Ren, and X. Yang, "Secure transformer inference made non-interactive," Cryptology ePrint Archive, 2024.
  44. [44] W.-j. Lu, Z. Huang, Z. Gu, J. Li, J. Liu, C. Hong, K. Ren, T. Wei, and W. Chen, "BumbleBee: Secure two-party inference framework for large transformers," Cryptology ePrint Archive, 2023.
  45. [45] H. Chaudhari, A. Choudhury, A. Patra, and A. Suresh, "ASTRA: High throughput 3PC over rings with application to secure prediction," in Proceedings of the 2019 ACM SIGSAC Conference on Cloud Computing Security Workshop, 2019, pp. 81–92.
  46. [46] A. Baccarini, M. Blanton, and C. Yuan, "Multi-party replicated secret sharing over a ring with applications to privacy-preserving machine learning," Proceedings on Privacy Enhancing Technologies, 2023.
  47. [47] L. Braun, D. Demmler, T. Schneider, and O. Tkachenko, "MOTION: A framework for mixed-protocol multi-party computation," ACM Transactions on Privacy and Security, vol. 25, no. 2, pp. 1–35, 2022.
  48. [48] D. Escudero, V. Goyal, A. Polychroniadou, and Y. Song, "TurboPack: Honest majority MPC with constant online communication," in Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022, pp. 951–964.