Assistants, Not Architects: The Role of LLMs in Networked Systems Design
Pith reviewed 2026-05-07 14:47 UTC · model grok-4.3
The pith
LLMs cannot reliably design networked system architectures because they miss constraints and stick to familiar patterns.
A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.
Core claim
Large language models are not architects for networked systems. They generate plausible configurations yet routinely miss critical constraints, encode wrong assumptions about hardware or workloads, and show stickiness to familiar design patterns. Kepler addresses this by encoding architecturally significant properties—requirements, incompatibilities, and qualitative trade-offs—as constraints, then using SMT-based optimization to produce feasible designs that optimize user goals while remaining at an abstract level that avoids the cost of full behavioral simulation.
What carries the argument
Kepler, a framework that encodes requirements, incompatibilities, and trade-offs as constraints and applies SMT optimization to synthesize and explain feasible system designs at an abstract level.
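The workflow this describes (encode rules-of-thumb as constraints, filter out infeasible designs, optimize an objective, explain rejections) can be sketched in miniature. Kepler uses an SMT solver; the exhaustive search below is only a toy stand-in, and every device name, number, and rule in it is illustrative rather than taken from the paper.

```python
from itertools import product

# Hypothetical design space with toy attributes, not Kepler's actual model.
NICS = {"cx5_100g": {"gbps": 100, "cost": 900},
        "x710_10g": {"gbps": 10, "cost": 150}}
TRANSPORTS = {"rdma": {"needs_gbps": 25},
              "tcp": {"needs_gbps": 0}}

def violations(nic, transport, workload_gbps):
    """Rule-of-thumb checks: cheap to evaluate, no behavioral simulation."""
    reasons = []
    if NICS[nic]["gbps"] < workload_gbps:
        reasons.append(f"{nic} cannot carry {workload_gbps} Gbps")
    if NICS[nic]["gbps"] < TRANSPORTS[transport]["needs_gbps"]:
        reasons.append(f"{transport} needs a faster NIC than {nic}")
    return reasons

def synthesize(workload_gbps):
    """Return the cheapest feasible design plus explanations for rejections."""
    best, rejected = None, []
    for nic, tp in product(NICS, TRANSPORTS):
        reasons = violations(nic, tp, workload_gbps)
        if reasons:
            rejected.append((nic, tp, reasons))
        elif best is None or NICS[nic]["cost"] < NICS[best[0]]["cost"]:
            best = (nic, tp)
    return best, rejected
```

Feasibility and explanation fall out of the same constraint checks; an SMT solver replaces the exhaustive loop once the space becomes combinatorial.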
If this is right
- System designs produced by Kepler are guaranteed to satisfy all encoded constraints, unlike typical LLM outputs.
- Architects can explore hardware and configuration alternatives without running expensive simulations for each candidate.
- The optimizer supplies human-readable explanations for each design choice.
- The same constraint set can be reused across multiple workloads or objectives by changing only the optimization target.
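The last point can be made concrete: hold the constraint set fixed and swap only the scoring function. A minimal sketch, with two hypothetical designs assumed to have already passed the constraint filter:

```python
# Two hypothetical designs already past the (unchanged) constraint filter.
DESIGNS = [
    {"name": "budget", "gbps": 25, "cost": 300},
    {"name": "fast",   "gbps": 100, "cost": 900},
]

def pick(designs, objective):
    # Constraints stay fixed; only the optimization target varies.
    return max(designs, key=objective)["name"]

minimize_cost = lambda d: -d["cost"]
maximize_throughput = lambda d: d["gbps"]
```

`pick(DESIGNS, minimize_cost)` selects the budget design and `pick(DESIGNS, maximize_throughput)` selects the fast one, with no change on the constraint side.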
Where Pith is reading between the lines
- The method could be applied to other design domains where rules-of-thumb dominate, such as storage systems or distributed databases.
- LLMs might still be useful for drafting the initial constraint specifications that Kepler then verifies and optimizes.
- If real deployments repeatedly violate Kepler designs, it would indicate that the abstract constraint level is insufficient and more detailed models are required.
Load-bearing premise
Architecturally important properties and cross-layer interactions can be captured well enough by abstract rules-of-thumb expressed as constraints.
What would settle it
Deploy an LLM-proposed network configuration and a Kepler-generated one on the same testbed workload; if the LLM version violates a documented incompatibility or performance bound that Kepler avoided, the paper's claim is supported.
original abstract
Designing the architecture of modern networked systems requires navigating a large, combinatorial space of hardware, systems, and configuration choices with complex cross-layer interactions. Architects must balance competing objectives such as performance, cost, and deployability while satisfying compatibility and resource constraints, often relying on scattered rules-of-thumb drawn from benchmarks, papers, documentation, and expert experience. This raises a natural question: can large language models (LLMs) reliably perform this kind of architectural reasoning? We find that they cannot. While LLMs produce plausible configurations, they frequently miss critical constraints, encode incorrect assumptions, and exhibit "stickiness" to familiar patterns. A natural workaround, iterative validation via simulation or experimentation, is often prohibitively expensive at scale and, in many cases, infeasible, particularly when comparing hardware-dependent alternatives. Motivated by this gap, we present Kepler, a lightweight reasoning framework for architecture design that combines structured, expert-driven specifications with SMT-based optimization. Kepler encodes architecturally significant properties (requirements, incompatibilities, and qualitative trade-offs) about systems, hardware, and workloads as constraints, and synthesizes feasible designs that optimize user-defined objectives. It operates at an abstract level, capturing "rules-of-thumb" rather than detailed system behavior, enabling tractable reasoning while preserving key interactions, and provides explanations for its decisions. Through experiments and case studies, we show that Kepler uncovers interactions missed by LLMs and supports systematic, explainable design exploration.
Editorial analysis
A structured set of objections, weighed in public.
Referee Report
Summary. The paper claims that LLMs cannot reliably perform architectural reasoning for modern networked systems, as they produce plausible but flawed configurations that miss critical constraints, encode incorrect assumptions, and exhibit stickiness to familiar patterns. Motivated by the expense of iterative validation, it introduces Kepler, a lightweight SMT-based framework that encodes expert-driven abstract specifications of requirements, incompatibilities, and qualitative trade-offs as constraints to synthesize feasible, optimized designs while providing explanations.
Significance. If the experimental evidence and case studies hold, the work usefully demonstrates concrete limitations of current LLMs on constraint-rich, cross-layer design tasks in networking and supplies a practical, explainable alternative that leverages expert rules-of-thumb for systematic exploration without requiring full simulation at every step.
major comments (2)
- [Experiments and case studies (referenced in abstract and §4)] The central claim that LLMs 'frequently miss critical constraints' and 'exhibit stickiness' rests on experiments and case studies, yet the manuscript provides no details on LLM prompts, specific test cases, workloads, or quantitative metrics for missed constraints (e.g., how many constraints were violated per trial). This absence is load-bearing for the soundness of the negative result on LLMs.
- [Kepler framework description (§3)] Kepler is presented as operating 'at an abstract level, capturing rules-of-thumb rather than detailed system behavior' while still preserving key interactions. However, the manuscript does not demonstrate or discuss whether this abstraction level suffices to detect incompatibilities that only appear under detailed simulation or measurement (e.g., NIC offload interactions or queueing under realistic workloads), which directly affects whether Kepler closes the validation gap attributed to LLMs.
minor comments (2)
- [Abstract] The abstract states high-level claims about experiments without any numerical results or methodology summary, reducing immediate verifiability.
- [Kepler framework description] Notation for the SMT encoding (variables, objectives, and constraint types) should be introduced with a small example early in the Kepler section to aid readability.
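One possible shape for such a small example (Boolean selection variables, an implication-style incompatibility, a resource bound, and an objective) is sketched below. The system names, core counts, and the constraint itself are invented for illustration, and an exhaustive search stands in for the SMT solver.

```python
from itertools import product

# Illustrative encoding: one Boolean selection variable per candidate system.
#   sel[sonata] -> sel[tofino_tor]   (telemetry assumed to need a programmable ToR)
#   sum(cores of selected systems) <= CORE_BUDGET
#   objective: maximize total value of selected systems
SYSTEMS = {"sonata": {"cores": 8, "value": 5},
           "pingmesh": {"cores": 2, "value": 3},
           "tofino_tor": {"cores": 0, "value": 0}}
CORE_BUDGET = 8

def solve():
    best = None
    for bits in product([False, True], repeat=len(SYSTEMS)):
        sel = dict(zip(SYSTEMS, bits))
        if sel["sonata"] and not sel["tofino_tor"]:
            continue  # incompatibility constraint violated
        cores = sum(SYSTEMS[s]["cores"] for s, on in sel.items() if on)
        if cores > CORE_BUDGET:
            continue  # resource constraint violated
        value = sum(SYSTEMS[s]["value"] for s, on in sel.items() if on)
        if best is None or value > best[0]:
            best = (value, sel)
    return best
```

In an actual SMT encoding these would become solver-level Boolean variables with implication and arithmetic constraints plus an optimization objective; the semantics are the same.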
Simulated Author's Rebuttal
We thank the referee for the constructive and detailed feedback. We address each major comment point by point below, clarifying our approach and indicating where revisions have been made to strengthen the manuscript.
point-by-point responses
- Referee: [Experiments and case studies (referenced in abstract and §4)] The central claim that LLMs 'frequently miss critical constraints' and 'exhibit stickiness' rests on experiments and case studies, yet the manuscript provides no details on LLM prompts, specific test cases, workloads, or quantitative metrics for missed constraints (e.g., how many constraints were violated per trial). This absence is load-bearing for the soundness of the negative result on LLMs.
Authors: We agree that the original manuscript lacked sufficient detail on the LLM evaluation methodology, which is necessary to substantiate the negative results. In the revised version, we have added a new subsection (4.1) and Appendix A that specify the exact prompt templates (including system and user prompts for each model), the complete set of test cases (e.g., 5G core, data-center fabric, and edge-computing scenarios with explicit hardware and workload parameters), the workloads used (synthetic and trace-driven), and quantitative metrics (e.g., average and per-trial counts of missed constraints across 100 runs per LLM, with breakdowns by constraint category). These additions make the experimental claims reproducible and directly address the load-bearing concern. revision: yes
- Referee: [Kepler framework description (§3)] Kepler is presented as operating 'at an abstract level, capturing rules-of-thumb rather than detailed system behavior' while still preserving key interactions. However, the manuscript does not demonstrate or discuss whether this abstraction level suffices to detect incompatibilities that only appear under detailed simulation or measurement (e.g., NIC offload interactions or queueing under realistic workloads), which directly affects whether Kepler closes the validation gap attributed to LLMs.
Authors: We acknowledge that the manuscript did not explicitly discuss the boundaries of the abstraction with respect to simulation-only effects. Kepler is intentionally scoped to architecturally significant, expert-encoded constraints that can be evaluated without full simulation, precisely because exhaustive detailed validation is often infeasible at design time (as stated in the introduction). In the revision, we have added a dedicated paragraph in §3.3 that clarifies this scope: Kepler eliminates designs violating high-level rules-of-thumb (e.g., hardware incompatibilities and qualitative trade-offs), while low-level phenomena such as specific NIC offload behaviors or workload-dependent queueing require subsequent simulation or measurement. We also note that the case studies in §4 show Kepler surfacing interactions missed by LLMs that align with known rules-of-thumb, thereby narrowing the space for any later detailed validation. revision: partial
Circularity Check
No circularity: empirical claims rest on external SMT solvers and expert specifications
full rationale
The paper contains no equations, derivations, fitted parameters, or self-referential steps. Its evaluation of LLMs and the Kepler framework relies on expert-driven abstract constraints fed to standard SMT solvers, with no load-bearing self-citations, ansatzes, or reductions of predictions to inputs by construction. The central argument is supported by direct comparison against external benchmarks rather than internal fitting or renaming.
Axiom & Free-Parameter Ledger
axioms (1)
- domain assumption: Architecturally significant properties about systems, hardware, and workloads can be encoded as constraints that preserve key interactions at an abstract level.
invented entities (1)
- Kepler framework: no independent evidence