High-Throughput and Scalable Secure Inference Protocols for Deep Learning with Packed Secret Sharing
Recognition: 2 theorem links · Lean theorem
Pith reviewed 2026-05-16 13:41 UTC · model grok-4.3
The pith
Packed Shamir secret sharing enables parallel secure inference for deep neural networks with major cuts in communication.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
By defining vector-matrix multiplication-friendly random share tuples and applying filter packing inside packed Shamir secret sharing, the protocols perform parallel linear and non-linear operations across neural-network layers while preserving correctness and security. The result is up to 5.85x less offline communication, up to 11.17x less online communication, and up to 1.75x faster total runtime than non-packed approaches.
What carries the argument
Packed Shamir secret sharing, equipped with vector-matrix multiplication-friendly random share tuples and filter packing. Packing multiple independent secrets into a single set of shares lets matrix-vector products and convolutions execute in parallel.
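The packing mechanism can be sketched concretely. The toy Python below is an illustrative sketch over a Mersenne prime field, not the paper's implementation; the field choice, evaluation points, and parameters are assumptions. It packs k secrets into one polynomial of degree k+t-1, so a single set of shares carries k values at once:

```python
import random

P = 2**61 - 1  # Mersenne prime field (field choice is illustrative)

def _interp_eval(points, x):
    """Lagrange-interpolate the polynomial through `points`, evaluate at x mod P."""
    acc = 0
    for xi, yi in points:
        num, den = 1, 1
        for xj, _ in points:
            if xj != xi:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        acc = (acc + yi * num * pow(den, P - 2, P)) % P  # pow(., P-2, P) = modular inverse
    return acc

def pack_share(secrets, n, t):
    """Pack len(secrets) values into one polynomial of degree k+t-1.
    Secrets sit at evaluation points -1..-k; party i holds the value at point i."""
    k = len(secrets)
    fix = [(-(i + 1) % P, s % P) for i, s in enumerate(secrets)]
    fix += [(n + 1 + j, random.randrange(P)) for j in range(t)]  # t random points hide the secrets
    return [(x, _interp_eval(fix, x)) for x in range(1, n + 1)]

def reconstruct(shares, k):
    """Recover the k packed secrets from at least k+t shares."""
    return [_interp_eval(shares, -(i + 1) % P) for i in range(k)]
```

Because share addition acts coordinate-wise on the packed secrets, adding two share vectors yields shares of the element-wise sum of the two packed vectors, which is the kind of single-operation parallelism the linear and convolution protocols exploit.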
If this is right
- Offline communication volume falls by up to 5.85 times.
- Online communication volume falls by up to 11.17 times.
- Total end-to-end runtime improves by up to 1.75 times.
- Deeper networks remain practical under wide-area network latency.
- The participant limit rises beyond the four-party bound common in prior protocols.
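The communication cuts follow the usual packing arithmetic. A back-of-envelope sketch (the secret count, party count, and packing factor below are hypothetical, not the paper's measured configuration): distributing m secrets with standard Shamir costs one share per secret per party, while packed Shamir amortizes k secrets per polynomial.

```python
import math

def sharing_cost(m_secrets, n_parties, k_pack):
    """Field elements sent to distribute m secrets among n parties."""
    standard = m_secrets * n_parties                     # one polynomial per secret
    packed = math.ceil(m_secrets / k_pack) * n_parties   # k secrets per polynomial
    return standard, packed

std, pkd = sharing_cost(m_secrets=4096, n_parties=7, k_pack=4)
print(std, pkd)  # a packing factor of k gives roughly a k-fold cut in this step
```

The measured end-to-end ratios in the paper differ from this idealized k-fold figure because non-linear layers, preprocessing, and repacking add their own costs.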
Where Pith is reading between the lines
- The same packing primitives could be ported to other linear secret-sharing schemes to raise throughput in additional MPC tasks.
- Reduced per-layer communication may allow secure inference on battery-powered or bandwidth-limited devices.
- Parallel non-linear layers suggest a route to efficient secure training protocols built on the same foundation.
- Lower overhead could accelerate deployment of privacy-preserving inference in regulated domains.
Load-bearing premise
The newly defined random share tuples and filter packing must keep every linear and non-linear operation both correct and secure against semi-honest adversaries.
What would settle it
Direct measurement of communication volume and wall-clock time for VGG16 inference using the packed protocol versus a standard Shamir protocol, run with the same number of parties over a wide-area network emulation.
Figures
Original abstract
Most existing secure neural network inference protocols based on secure multi-party computation (MPC) typically support at most four participants, demonstrating severely limited scalability. Liu et al. (USENIX Security'24) presented the first relatively practical approach by utilizing Shamir secret sharing with Mersenne prime fields. However, when processing deeper neural networks such as VGG16, their protocols incur substantial communication overhead, resulting in particularly significant latency in wide-area network (WAN) environments. In this paper, we propose a high-throughput and scalable MPC protocol for neural network inference against semi-honest adversaries in the honest-majority setting. The core of our approach lies in leveraging packed Shamir secret sharing (PSS) to enable parallel computation and reduce communication complexity. The main contributions are three-fold: i) We present a communication-efficient protocol for vector-matrix multiplication, based on our newly defined notion of vector-matrix multiplication-friendly random share tuples. ii) We design the filter packing approach that enables parallel convolution. iii) We further extend all non-linear protocols based on Shamir secret sharing to the PSS-based protocols for achieving parallel non-linear operations. Extensive experiments across various datasets and neural networks demonstrate the superiority of our approach in WAN. Compared to Liu et al. (USENIX Security'24), our scheme reduces the communication upto 5.85x, 11.17x, and 6.83x in offline, online and total communication overhead, respectively. In addition, our scheme is upto 1.59x, 2.61x, and 1.75x faster in offline, online and total running time, respectively.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper presents a high-throughput MPC protocol for secure neural network inference in the honest-majority semi-honest model, based on packed Shamir secret sharing. It introduces vector-matrix multiplication-friendly random share tuples for efficient linear operations, a filter packing technique for parallel convolutions, and extensions of non-linear protocols to the packed setting. Experiments on datasets and networks including VGG16 report communication reductions of up to 5.85x (offline), 11.17x (online), and 6.83x (total) and runtime speedups of up to 1.59x (offline), 2.61x (online), and 1.75x (total) relative to Liu et al. (USENIX Security'24).
Significance. If the correctness and security of the new primitives hold, the work could meaningfully advance practical secure inference for deeper networks in WAN environments by exploiting parallelism in packed secret sharing. The concrete experimental gains are a strength, as is the focus on communication efficiency for larger models; however, the absence of full proofs and raw data limits immediate impact assessment.
Major comments (2)
- [§4–§6] Security proofs (throughout §4–§6): the manuscript provides protocol sketches and claims standard simulation-based security for the vector-matrix multiplication-friendly random share tuples and filter packing, but does not include complete proofs or hybrid arguments; this is load-bearing because the claimed communication and runtime improvements rest directly on these primitives preserving both correctness and security under packing.
- [Experimental Evaluation] Experimental section: concrete speed-up numbers (e.g., 5.85x offline communication) are reported without raw data, implementation source, or parameter settings, preventing independent verification of the link between the new primitives and measured gains.
Minor comments (1)
- [Abstract] Abstract: 'upto' should be 'up to' (occurs three times).
Simulated Author's Rebuttal
We thank the referee for the constructive feedback. The comments correctly identify areas where additional rigor and transparency will strengthen the manuscript. We address each major comment below and will revise accordingly.
Point-by-point responses
Referee: [§4–§6] Security proofs (throughout §4–§6): the manuscript provides protocol sketches and claims standard simulation-based security for the vector-matrix multiplication-friendly random share tuples and filter packing, but does not include complete proofs or hybrid arguments; this is load-bearing because the claimed communication and runtime improvements rest directly on these primitives preserving both correctness and security under packing.
Authors: We agree that the current version contains protocol descriptions and high-level security arguments rather than complete formal proofs. This is a substantive point, as the security of the packed primitives underpins the efficiency claims. In the revised manuscript we will supply full simulation-based security proofs, including explicit hybrid arguments, for the vector-matrix multiplication-friendly random share tuples and the filter packing technique, establishing both correctness and security in the honest-majority semi-honest model.
Revision: yes
Referee: [Experimental Evaluation] Experimental section: concrete speed-up numbers (e.g., 5.85x offline communication) are reported without raw data, implementation source, or parameter settings, preventing independent verification of the link between the new primitives and measured gains.
Authors: We concur that greater experimental transparency is needed for independent verification. In the revision we will (i) release the implementation source code, (ii) document all parameter settings (field size, packing parameters, network topology, hardware), and (iii) include an appendix with raw communication and runtime measurements for the VGG16 and other reported benchmarks, allowing direct reproduction of the stated speed-ups.
Revision: yes
Circularity Check
No significant circularity in derivation chain
Full rationale
The paper defines new primitives (vector-matrix multiplication-friendly random share tuples, filter packing) and extends Shamir-based protocols to packed secret sharing, with security arguments following standard simulation-based reasoning in the honest-majority semi-honest model. Performance claims are presented as experimental measurements against an external prior work (Liu et al., USENIX Security'24), not as quantities derived from fitted parameters or self-referential equations. No load-bearing self-citations, self-definitional steps, or reductions of predictions to inputs by construction appear in the provided text.
Axiom & Free-Parameter Ledger
Axioms (1)
- Domain assumption: semi-honest adversaries in the honest-majority setting.
Lean theorems connected to this paper
- IndisputableMonolith/Cost/FunctionalEquation.lean : washburn_uniqueness_aczel (unclear)
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "We present a communication-efficient protocol for vector-matrix multiplication, based on our newly defined notion of vector-matrix multiplication-friendly random share tuples... filter packing approach that enables parallel convolution... extend all non-linear protocols based on Shamir secret sharing to the PSS-based protocols"
- IndisputableMonolith/Foundation/AbsoluteFloorClosure.lean : reality_from_one_distinction (unclear)
  Relation between the paper passage and the cited Recognition theorem is unclear.
  Passage: "leveraging packed Shamir secret sharing (PSS) to enable parallel computation and reduce communication complexity"
What do these tags mean?
- matches: The paper's claim is directly supported by a theorem in the formal canon.
- supports: The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
- extends: The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
- uses: The paper appears to rely on the theorem as machinery.
- contradicts: The paper's claim conflicts with a theorem or certificate in the canon.
- unclear: Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.
Reference graph
Works this paper leans on
[1] M. Franklin and M. Yung, "Communication complexity of secure computation," in Proceedings of the 24th Annual ACM Symposium on Theory of Computing, 1992, pp. 699-710.
[2] A. Shamir, "How to share a secret," Communications of the ACM, vol. 22, no. 11, pp. 612-613, 1979.
[3] V. Goyal, A. Polychroniadou, and Y. Song, "Sharing transformation and dishonest majority MPC with packed secret sharing," in Annual International Cryptology Conference. Springer, 2022, pp. 3-32.
[4] I. Damgård and J. B. Nielsen, "Scalable and unconditionally secure multiparty computation," in Advances in Cryptology (CRYPTO '07). Springer-Verlag, 2007, pp. 572-590.
[5] F. Liu, X. Xie, and Y. Yu, "Scalable multi-party computation protocols for machine learning in the honest-majority setting," in 33rd USENIX Security Symposium (USENIX Security 24), 2024, pp. 1939-1956.
[6] C. Juvekar, V. Vaikuntanathan, and A. Chandrakasan, "GAZELLE: A low latency framework for secure neural network inference," in 27th USENIX Security Symposium (USENIX Security 18), 2018, pp. 1651-1669.
[7] Q. Pang, J. Zhu, H. Möllering, W. Zheng, and T. Schneider, "BOLT: Privacy-preserving, accurate and efficient inference for transformers," in 2024 IEEE Symposium on Security and Privacy (SP). IEEE, 2024, pp. 4753-4771.
[8] V. Goyal, A. Polychroniadou, and Y. Song, "Unconditional communication-efficient MPC via Hall's marriage theorem," in Annual International Cryptology Conference. Springer, 2021, pp. 275-304.
[9] P. Mohassel and P. Rindal, "ABY3: A mixed protocol framework for machine learning," in Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, 2018, pp. 35-52.
[10] H. Zhou, "Private neural network training with packed secret sharing," in International Computing and Combinatorics Conference. Springer, 2024, pp. 66-77.
[11] Y. Zhang, X. Chen, Q. Zhang, Y. Dong, and X. Chen, "Helix: Scalable multi-party machine learning inference against malicious adversaries," Cryptology ePrint Archive, 2025.
[12] S. Wagh, D. Gupta, and N. Chandran, "SecureNN: 3-party secure computation for neural network training," Proceedings on Privacy Enhancing Technologies, 2019.
[13] A. C. Yao, "Protocols for secure computations," in 23rd Annual Symposium on Foundations of Computer Science (SFCS 1982). IEEE, 1982, pp. 160-164.
[14] S. Micali, O. Goldreich, and A. Wigderson, "How to play any mental game," in Proceedings of the 19th ACM Symposium on Theory of Computing (STOC). ACM, 1987, pp. 218-229.
[15] D. Chaum, C. Crépeau, and I. Damgård, "Multiparty unconditionally secure protocols," in Proceedings of the 20th Annual ACM Symposium on Theory of Computing, 1988, pp. 11-19.
[16] M. Ben-Or, S. Goldwasser, and A. Wigderson, "Completeness theorems for non-cryptographic fault-tolerant distributed computation," in Providing Sound Foundations for Cryptography: On the Work of Shafi Goldwasser and Silvio Micali, 2019, pp. 351-371.
[17] P. Mohassel and Y. Zhang, "SecureML: A system for scalable privacy-preserving machine learning," in 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 2017, pp. 19-38.
[18] J. Liu, M. Juuti, Y. Lu, and N. Asokan, "Oblivious neural network predictions via MiniONN transformations," in Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017, pp. 619-631.
[19] P. Mishra, R. Lehmkuhl, A. Srinivasan, W. Zheng, and R. A. Popa, "Delphi: A cryptographic inference system for neural networks," in Proceedings of the 2020 Workshop on Privacy-Preserving Machine Learning in Practice, 2020, pp. 27-30.
[20] D. Demmler, T. Schneider, and M. Zohner, "ABY: A framework for efficient mixed-protocol secure two-party computation," in Network and Distributed System Security Symposium, 2015.
[21] A. Patra, T. Schneider, A. Suresh, and H. Yalame, "ABY2.0: Improved mixed-protocol secure two-party computation," in 30th USENIX Security Symposium (USENIX Security 21), 2021, pp. 2165-2182.
[22] M. S. Riazi, C. Weinert, O. Tkachenko, E. M. Songhori, T. Schneider, and F. Koushanfar, "Chameleon: A hybrid secure computation framework for machine learning applications," in Proceedings of the 2018 Asia Conference on Computer and Communications Security, 2018, pp. 707-721.
[23] N. Chandran, D. Gupta, A. Rastogi, R. Sharma, and S. Tripathi, "EzPC: Programmable and efficient secure two-party computation for machine learning," in 2019 IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2019, pp. 496-511.
[24] D. Rathee, M. Rathee, N. Kumar, N. Chandran, D. Gupta, A. Rastogi, and R. Sharma, "CrypTFlow2: Practical 2-party secure inference," in Proceedings of the 2020 ACM SIGSAC Conference on Computer and Communications Security, 2020, pp. 325-342.
[25] R. Lehmkuhl, P. Mishra, A. Srinivasan, and R. A. Popa, "Muse: Secure inference resilient to malicious clients," in 30th USENIX Security Symposium (USENIX Security 21), 2021, pp. 2201-2218.
[26] G. Xu, X. Han, T. Zhang, S. Xu, J. Ning, X. Huang, H. Li, and R. H. Deng, "SIMC 2.0: Improved secure ML inference against malicious clients," IEEE Transactions on Dependable and Secure Computing, vol. 21, no. 4, pp. 1708-1723, 2023.
[27] S. Wagh, S. Tople, F. Benhamouda, E. Kushilevitz, P. Mittal, and T. Rabin, "Falcon: Honest-majority maliciously secure framework for private deep learning," Proceedings on Privacy Enhancing Technologies, vol. 2021, pp. 188-208, 2020.
[28] A. Patra and A. Suresh, "BLAZE: Blazing fast privacy-preserving machine learning," arXiv preprint arXiv:2005.09042, 2020.
[29] Y. Li, Y. Duan, Z. Huang, C. Hong, C. Zhang, and Y. Song, "Efficient 3PC for binary circuits with application to maliciously-secure DNN inference," in 32nd USENIX Security Symposium (USENIX Security 23), 2023, pp. 5377-5394.
[30] N. Koti, M. Pancholi, A. Patra, and A. Suresh, "SWIFT: Super-fast and robust privacy-preserving machine learning," in 30th USENIX Security Symposium (USENIX Security 21), 2021, pp. 2651-2668.
[31] M. Byali, H. Chaudhari, A. Patra, and A. Suresh, "FLASH: Fast and robust framework for privacy-preserving machine learning," Proceedings on Privacy Enhancing Technologies, vol. 2020, pp. 459-480, 2020.
[32] N. Koti, A. Patra, R. Rachuri, and A. Suresh, "Tetrad: Actively secure 4PC for secure training and inference," arXiv preprint arXiv:2106.02850, 2021.
[33] H. Chaudhari, R. Rachuri, and A. Suresh, "Trident: Efficient 4PC framework for privacy preserving machine learning," arXiv preprint arXiv:1912.02631, 2019.
[34] A. Dalskov, D. Escudero, and M. Keller, "Fantastic four: Honest-majority four-party secure computation with malicious security," in 30th USENIX Security Symposium (USENIX Security 21), 2021, pp. 2183-2200.
[35] I. Damgård, M. Fitzi, E. Kiltz, J. B. Nielsen, and T. Toft, "Unconditionally secure constant-rounds multi-party computation for equality, comparison, bits and exponentiation," in Theory of Cryptography Conference. Springer, 2006, pp. 285-304.
[36] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.
[37] A. Krizhevsky and G. Hinton, "Learning multiple layers of features from tiny images," 2009.
[38] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Advances in Neural Information Processing Systems, vol. 25, 2012.
[39] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[40] S. Balla, "SecONNds: Secure outsourced neural network inference on ImageNet," arXiv preprint arXiv:2506.11586, 2025.
[41] Z. Huang, W.-j. Lu, C. Hong, and J. Ding, "Cheetah: Lean and fast secure two-party deep neural network inference," in 31st USENIX Security Symposium (USENIX Security 22), 2022, pp. 809-826.
[42] J. Feng, Y. Wu, H. Sun, S. Zhang, and D. Liu, "Panther: Practical secure 2-party neural network inference," IEEE Transactions on Information Forensics and Security, 2025.
[43] J. Zhang, X. Yang, L. He, K. Chen, W.-j. Lu, Y. Wang, X. Hou, J. Liu, K. Ren, and X. Yang, "Secure transformer inference made non-interactive," Cryptology ePrint Archive, 2024.
[44] W.-j. Lu, Z. Huang, Z. Gu, J. Li, J. Liu, C. Hong, K. Ren, T. Wei, and W. Chen, "BumbleBee: Secure two-party inference framework for large transformers," Cryptology ePrint Archive, 2023.
[45] H. Chaudhari, A. Choudhury, A. Patra, and A. Suresh, "ASTRA: High throughput 3PC over rings with application to secure prediction," in Proceedings of the 2019 ACM SIGSAC Conference on Cloud Computing Security Workshop, 2019, pp. 81-92.
[46] A. Baccarini, M. Blanton, and C. Yuan, "Multi-party replicated secret sharing over a ring with applications to privacy-preserving machine learning," Proceedings on Privacy Enhancing Technologies, 2023.
[47] L. Braun, D. Demmler, T. Schneider, and O. Tkachenko, "MOTION: A framework for mixed-protocol multi-party computation," ACM Transactions on Privacy and Security, vol. 25, no. 2, pp. 1-35, 2022.
[48] D. Escudero, V. Goyal, A. Polychroniadou, and Y. Song, "TurboPack: Honest majority MPC with constant online communication," in Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022, pp. 951-964.