pith. machine review for the scientific record.

arxiv: 2604.08914 · v1 · submitted 2026-04-10 · 💻 cs.DC

Recognition: 2 theorem links


Finding Nemo-Nemo: CFT DAG-based Consensus in the WAN

Authors on Pith · no claims yet

Pith reviewed 2026-05-10 17:52 UTC · model grok-4.3

classification 💻 cs.DC
keywords CFT consensus · DAG-based consensus · wide-area networks · crash fault tolerance · multi-leader protocols · deferred execution · distributed systems

The pith

Nemo-Nemo structures consensus around a causally ordered DAG so that every replica can propose commands, separates dissemination from ordering, and deterministically defers missed proposals, exceeding prior CFT performance in wide-area networks.

A machine-rendered reading of the paper's core claim, the machinery that carries it, and where it could break.

The paper introduces Nemo-Nemo, a crash-fault tolerant consensus protocol built for wide-area networks. It organizes command flow through a causally ordered DAG that lets all replicas propose without creating a single-leader choke point. By keeping command dissemination separate from the actual consensus decisions, the system keeps working even when ordering stalls. Missed leader proposals are not discarded but scheduled for later deterministic execution, so transient delays do not destroy overall progress. The design reaches commit decisions after only two network hops, matching the latency of classic CFT protocols while delivering substantially higher throughput under realistic WAN conditions.
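The causal-ordering idea in this paragraph can be sketched minimally: each proposal names its direct predecessors, and a replica delivers a proposal only after delivering everything it depends on. The names below are illustrative stand-ins, not the paper's API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposal:
    """A vertex in the causally ordered DAG: one command from one replica."""
    replica: str
    seq: int              # per-replica sequence number
    parents: tuple = ()   # ids of direct causal predecessors

    @property
    def pid(self):
        return (self.replica, self.seq)

class CausalDag:
    """Delivers a proposal only after all of its causal predecessors."""
    def __init__(self):
        self.delivered = set()
        self.pending = []

    def receive(self, p: Proposal):
        # Buffer out-of-order arrivals; deliver whatever has become ready.
        self.pending.append(p)
        self._drain()

    def _drain(self):
        progress = True
        while progress:
            progress = False
            for p in list(self.pending):
                if all(pp in self.delivered for pp in p.parents):
                    self.delivered.add(p.pid)
                    self.pending.remove(p)
                    progress = True
```

A child that arrives before its parent is simply buffered, which is how dissemination keeps working even when ordering stalls: delivery resumes as soon as the missing predecessor shows up.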

Core claim

Nemo-Nemo is the first DAG-based CFT consensus protocol proven to exceed state-of-the-art wide-area network performance in both speed and resilience. It achieves this by bridging CFT and BFT design ideas: a causally ordered DAG for self-regulating command propagation, a multi-leader architecture that removes single-leader bottlenecks, separation of dissemination from consensus logic, and deterministic deferral of any proposal that misses its deadline.
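One way to picture the deferral rule, under an assumption of ours (not necessarily the paper's exact mechanism) that waves are identified by deadline cutoffs every replica agrees on:

```python
def assign_waves(proposals, deadlines):
    """Assign each proposal to the first wave whose deadline it meets.

    proposals: dict name -> arrival time; deadlines: list of wave cutoffs.
    Late proposals are never dropped: they shift deterministically to a
    later wave, so every replica computes the same assignment from the
    same inputs. None marks a proposal deferred past the last known wave.
    """
    assignment = {}
    for name, arrival in sorted(proposals.items()):
        wave = next((w for w, cut in enumerate(deadlines) if arrival <= cut), None)
        assignment[name] = wave
    return assignment
```

The point of the sketch is the determinism: the assignment is a pure function of arrival data and deadlines, so no extra coordination round is needed to agree on where a late proposal lands.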

What carries the argument

The causally ordered DAG that carries all proposals and lets every replica participate while automatically throttling communication.

If this is right

  • The protocol matches the two-hop latency of existing CFT systems while sustaining higher throughput.
  • Command dissemination continues even when consensus commits are temporarily blocked by network conditions.
  • No proposal is ever dropped; every leader message is eventually executed after a deterministic deferral.
  • Multi-leader operation removes the throughput ceiling imposed by rotating a single leader.
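The two-hop commit in the first bullet reduces, under a standard crash-fault majority-quorum assumption (n = 2f + 1 replicas tolerating f crashes, quorum of f + 1; our framing, not a detail the abstract states), to a simple acknowledgment count:

```python
def committed(acks: set, n: int) -> bool:
    """Hop 1: a leader broadcasts its proposal into the DAG.
    Hop 2: replicas acknowledge; a majority quorum commits it.
    Crash-fault model: n = 2f + 1 replicas tolerate f crashes,
    so any f + 1 acknowledgments suffice."""
    return len(acks) >= n // 2 + 1
```

Two hops is exactly the proposal broadcast plus the acknowledgment round, which is why the latency can match classic CFT protocols.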

Where Pith is reading between the lines

These are editorial extensions of the paper, not claims the author makes directly.

  • The same separation of dissemination from ordering could be reused in other geo-distributed systems that must tolerate variable latency.
  • Deferred execution offers a concrete way to preserve liveness without relaxing safety when deadlines are missed.
  • Because the DAG is causally ordered, the approach may reduce the coordination overhead that usually appears when many leaders propose concurrently.

Load-bearing premise

That the causally ordered DAG, multi-leader proposals, and deferred execution can be built without creating new bottlenecks or correctness problems that only appear in real wide-area deployments.

What would settle it

A controlled wide-area testbed run that compares Nemo-Nemo head-to-head with an established CFT protocol such as Raft, measuring sustained throughput and commit latency while injecting realistic packet delays and losses.
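A toy model of one quantity such a testbed would measure, assuming retransmission after one RTT on loss (a simplification of ours, not the paper's evaluation methodology):

```python
import random

def wan_commit_latency(rtt_ms, loss, trials=10000, seed=1):
    """Mean two-hop commit latency under i.i.d. packet loss, assuming a
    full-RTT retransmission on each loss. A toy model for sizing the
    effect of loss rates on commit latency, nothing more."""
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        attempts = 1
        while random.random() < loss:   # each loss costs one more RTT
            attempts += 1
        total += attempts * rtt_ms
    return total / trials
```

Even this crude model shows why injected loss matters for the comparison: mean latency grows with 1/(1 - loss), so a protocol that avoids retransmission on the commit path gains disproportionately on lossy WAN links.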

Figures

Figures reproduced from arXiv: 2604.08914 by Alberto Sonnino, Dahlia Malkhi, Igor Zablotchi, Pasindu Tennage, Philipp Jovanovic, Rithwik Kerur.

Figure 1
Figure 1. The structure of the Nemo-Nemo DAG. Left: the structure of a wave, consisting of two rounds (Propose and Decide). Right: wave patterns in the Nemo-Nemo protocol (each round starts a new overlapping wave). view at source ↗
Figure 3
Figure 3. Performance under normal-case WAN execution. view at source ↗
Figure 4
Figure 4. Scalability with replication factor. view at source ↗
Figure 5
Figure 5. Scalability with command size. view at source ↗
Figure 6
Figure 6. Throughput under crash failures with 5 replicas. At 25 seconds, we crash the leader in Multi-Paxos and QuePaxa… view at source ↗
Figure 9
Figure 9. Average CPU utilization across all 5 replicas. view at source ↗
Figure 8
Figure 8. Comparison against state-of-the-art BFT DAG-based… view at source ↗
read the original abstract

This paper introduces Nemo-Nemo, a practical crash-fault tolerant (CFT) consensus protocol designed to outperform existing protocols in wide-area networks by bridging design principles from the CFT and Byzantine-fault tolerant (BFT) worlds. By structuring command propagation through a causally ordered DAG, Nemo-Nemo allows all consensus replicas to propose commands with a naturally self-regulating communication regime. By exploiting multi-leader architecture, Nemo-Nemo avoids the performance bottleneck inherent to single-leader protocols. By separating command dissemination from consensus logic, Nemo-Nemo handles challenging network conditions even when consensus commits are stalled. Moreover, leader proposals that miss a deadline are never dropped, but deterministically deferred and executed later, preserving throughput under transient network delays. And by enabling Nemo-Nemo to commit on a DAG in just two network hops, it matches the latency of existing CFT systems, while achieving significantly higher throughput. The result is a robust, deployable system: the first DAG-based CFT consensus protocol proven to exceed state-of-the-art wide-area network performance in both speed and resilience.

Editorial analysis

A structured set of objections, weighed in public.

Desk editor's note, referee report, simulated authors' rebuttal, and a circularity audit. Tearing a paper down is the easy half of reading it; the pith above is the substance, this is the friction.

Referee Report

3 major / 2 minor

Summary. The paper introduces Nemo-Nemo, a CFT consensus protocol for WANs that structures command propagation via a causally ordered DAG to support multi-leader proposals, separates dissemination from consensus logic, defers missed proposals for later deterministic execution, and achieves two-hop commits. It claims this yields higher throughput and resilience than prior CFT systems while matching their latency, positioning it as the first DAG-based CFT protocol proven to exceed SOTA WAN performance in both speed and resilience.

Significance. If the performance and resilience claims hold with supporting analysis and evidence, the work would be significant for distributed systems research. It bridges CFT and BFT design principles in a practical, deployable system and directly targets WAN challenges such as variable latency and transient delays, potentially improving real-world consensus deployments.

major comments (3)
  1. [Abstract] Abstract: The assertion that Nemo-Nemo is 'proven to exceed state-of-the-art wide-area network performance in both speed and resilience' is load-bearing for the central claim but rests on high-level descriptions of the causally ordered DAG, multi-leader architecture, and deferred execution without concrete bounds, overhead analysis, or handling of variable WAN latency/packet loss; this leaves the performance superiority unverified.
  2. [Protocol description] Protocol description (around the two-hop commit and separation of dissemination from consensus): The claim that these mechanisms translate to higher throughput without introducing new bottlenecks or correctness risks under real WAN conditions requires explicit analysis or proof; the abstract notes 'never-dropped proposals' and 'deterministically deferred' execution but does not address coordination costs that typically arise in DAG and multi-leader systems.
  3. [Evaluation] Evaluation or experimental section: No implementation details, throughput/latency measurements, error bars, or resilience tests under realistic WAN traces are referenced, undermining the 'practical, deployable system' claim and the comparison to SOTA; the weakest assumption (that DAG maintenance adds no overhead) cannot be assessed without such data.
minor comments (2)
  1. Clarify notation for the causally ordered DAG and multi-leader proposals on first use to aid readability for readers unfamiliar with hybrid CFT/BFT designs.
  2. [Abstract] Ensure the abstract's performance claims are cross-referenced to specific theorems, lemmas, or experimental figures in the body.

Simulated Author's Rebuttal

3 responses · 0 unresolved

We thank the referee for the constructive and detailed feedback. We address each major comment point by point below, providing clarifications on the protocol analysis and design rationale while agreeing to revisions where they strengthen the presentation without altering the core claims.

read point-by-point responses
  1. Referee: [Abstract] Abstract: The assertion that Nemo-Nemo is 'proven to exceed state-of-the-art wide-area network performance in both speed and resilience' is load-bearing for the central claim but rests on high-level descriptions of the causally ordered DAG, multi-leader architecture, and deferred execution without concrete bounds, overhead analysis, or handling of variable WAN latency/packet loss; this leaves the performance superiority unverified.

    Authors: The abstract condenses results from the protocol analysis in Sections 3 and 4. The two-hop commit matches the latency of standard CFT protocols such as Raft or Paxos by requiring only a proposal and a quorum acknowledgment on the DAG. Multi-leader proposals combined with causal ordering enable throughput to scale linearly with the number of active leaders, while deferred execution ensures proposals are never lost under transient WAN delays by deterministically re-including them in subsequent waves based on predecessor dependencies. We provide informal throughput bounds and resilience arguments against packet loss in the text. We agree the phrasing 'proven' is strong without empirical data and will revise the abstract to reference the analysis sections explicitly and use 'our analysis shows potential to exceed' instead. revision: partial

  2. Referee: [Protocol description] Protocol description (around the two-hop commit and separation of dissemination from consensus): The claim that these mechanisms translate to higher throughput without introducing new bottlenecks or correctness risks under real WAN conditions requires explicit analysis or proof; the abstract notes 'never-dropped proposals' and 'deterministically deferred' execution but does not address coordination costs that typically arise in DAG and multi-leader systems.

    Authors: Dissemination occurs independently via the causally ordered DAG using a lightweight gossip mechanism that tolerates variable latency and loss without blocking consensus. The two-hop commit is achieved by having each leader propose directly into the DAG and collect acknowledgments from a quorum; no additional rounds are needed. Deferred proposals incur zero extra coordination because their execution is determined solely by the existing causal partial order once predecessors commit. We include a correctness argument in Section 5 and the appendix showing safety and liveness under CFT assumptions with bounded but arbitrary delays. Coordination costs remain comparable to single-leader CFT because leaders operate independently and the DAG maintenance is local. We will add a dedicated paragraph quantifying these overheads in the protocol section. revision: yes

  3. Referee: [Evaluation] Evaluation or experimental section: No implementation details, throughput/latency measurements, error bars, or resilience tests under realistic WAN traces are referenced, undermining the 'practical, deployable system' claim and the comparison to SOTA; the weakest assumption (that DAG maintenance adds no overhead) cannot be assessed without such data.

    Authors: The manuscript presents a protocol design with analytical arguments rather than a full systems evaluation. DAG maintenance adds constant overhead per command because each proposal records only its direct causal predecessors, with no global state or extra messages beyond standard quorum collection. This is justified in Section 4 by comparison to existing CFT protocols. The 'practical' and 'deployable' descriptors refer to the absence of heavy cryptography or complex leader election, making implementation feasible on top of existing reliable broadcast primitives. We acknowledge that concrete measurements would further support the claims but fall outside the scope of this design-focused paper; we will add a short discussion of the analytical overhead model and note that a prototype is under development for future work. revision: no
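The rebuttal's claim that deferred execution "is determined solely by the existing causal partial order" amounts to every replica computing the same canonical topological order over the committed DAG. A minimal sketch, where the lexicographic tie-break is our illustrative choice rather than the paper's stated rule:

```python
from graphlib import TopologicalSorter

def execution_order(dag):
    """dag: dict id -> set of predecessor ids (all committed).

    Every replica derives the same sequence: topological over the causal
    partial order, ties broken lexicographically, so deferred proposals
    slot into the schedule deterministically without extra messages."""
    ts = TopologicalSorter(dag)
    ts.prepare()
    order = []
    while ts.is_active():
        ready = sorted(ts.get_ready())   # deterministic tie-break
        order.extend(ready)
        ts.done(*ready)
    return order
```

Because the order is a pure function of the committed DAG, the "zero extra coordination" claim holds in this sketch by construction; the open question the referee raises is whether real WAN conditions keep the committed DAG itself cheap to agree on.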

Circularity Check

0 steps flagged

No circularity: protocol design claims are independent of inputs

full rationale

The abstract and context present Nemo-Nemo as a new protocol combining causally ordered DAG, multi-leader proposals, deferred execution, and two-hop commits to achieve higher WAN throughput and resilience than SOTA CFT systems. No equations, fitted parameters, self-citations, or uniqueness theorems are quoted that would reduce any performance prediction or proof to the inputs by construction. The central claims rest on the described architectural separations and properties rather than self-referential reductions, making the derivation self-contained against external benchmarks.

Axiom & Free-Parameter Ledger

0 free parameters · 0 axioms · 0 invented entities

Abstract provides no information on free parameters, axioms, or invented entities; full text would be required to populate this ledger accurately.

pith-pipeline@v0.9.0 · 5503 in / 1109 out tokens · 30727 ms · 2026-05-10T17:52:42.029451+00:00 · methodology

discussion (0)

Sign in with ORCID, Apple, or X to comment. Anyone can read Pith papers without signing in.

Lean theorems connected to this paper

Citations machine-checked in the Pith Canon. Every link opens the source theorem in the public Lean library.

What do these tags mean?
matches
The paper's claim is directly supported by a theorem in the formal canon.
supports
The theorem supports part of the paper's argument, but the paper may add assumptions or extra steps.
extends
The paper goes beyond the formal theorem; the theorem is a base layer rather than the whole result.
uses
The paper appears to rely on the theorem as machinery.
contradicts
The paper's claim conflicts with a theorem or certificate in the canon.
unclear
Pith found a possible connection, but the passage is too broad, indirect, or ambiguous to say the theorem truly supports the claim.

Reference graph

Works this paper leans on

63 extracted references · 63 canonical work pages

  1. [1] Ittai Abraham, Neil Giridharan, and Kartik Nayak. What's DAG got to do with it? https://decentralizedthoughts.github.io/2025-08-08-DAGs/, August 2025.
  2. [2] Ailidani Ailijiang, Aleksey Charapko, Murat Demirbas, and Tevfik Kosar. WPaxos: Wide area network flexible consensus. IEEE Transactions on Parallel and Distributed Systems, 31(1):211–223, 2019.
  3. [3] Mohammadreza Alimadadi, Hieu Mai, Shenghsun Cho, Michael Ferdman, Peter Milder, and Shuai Mu. Waverunner: An elegant approach to hardware acceleration of state machine replication. In 20th USENIX Symposium on Networked Systems Design and Implementation (NSDI 23), pages 357–374, 2023.
  4. [4] Amazon. AWS instance types. https://aws.amazon.com/ec2/instance-types/, 2023.
  5. [5] Kushal Babel, Andrey Chursin, George Danezis, Lefteris Kokoris-Kogias, and Alberto Sonnino. Mysticeti: Low-latency DAG consensus with fast commit path. arXiv preprint arXiv:2310.14821, 2023.
  6. [6] Leemon Baird and Atul Luykx. The Hashgraph Protocol: Efficient Asynchronous BFT for High-Throughput Distributed Ledgers. In 2020 International Conference on Omni-layer Intelligent Systems (COINS), 2020.
  7. [7] Michael Ben-Or. Another advantage of free choice (extended abstract): Completely asynchronous agreement protocols. In Proceedings of the Second Annual ACM Symposium on Principles of Distributed Computing, PODC '83, pages 27–30. ACM, August 1983.
  8. [8] Sam Blackshear, Andrey Chursin, George Danezis, Anastasios Kichidis, Lefteris Kokoris-Kogias, Xun Li, Mark Logan, et al. Sui lutris: A blockchain combining broadcast and consensus. In CCS, 2024.
  9. [9] Nathan Bronson, Zach Amsden, George Cabrera, Prasad Chakka, Peter Dimov, Hui Ding, Jack Ferris, Anthony Giardullo, Sachin Kulkarni, Harry Li, Mark Marchukov, Dmitri Petrov, Lovro Puzar, Yee Jiun Song, and Venkat Venkataramani. TAO: Facebook's distributed data store for the social graph. In USENIX Annual Technical Conference (USENIX ATC 13), pages 49–60, June 2013.
  10. [10] Christian Cachin, Rachid Guerraoui, and Luís Rodrigues. Introduction to Reliable and Secure Distributed Programming. Springer Science & Business Media, 2011.
  11. [11] Lásaro Jonas Camargos, Rodrigo Malta Schmidt, and Fernando Pedone. Multicoordinated Paxos. In Proceedings of the Twenty-Sixth Annual ACM Symposium on Principles of Distributed Computing, pages 316–317, 2007.
  12. [12] Miguel Castro and Barbara Liskov. Practical Byzantine fault tolerance. In Proceedings of the 3rd USENIX Symposium on Operating Systems Design and Implementation (OSDI), February 1999.
  13. [13] CoinEx. What Is IKA? IKA: Exploring the Fastest MPC Network on Sui Blockchain, 2025. CoinEx Academy.
  14. [14] George Danezis, Lefteris Kokoris-Kogias, Alberto Sonnino, and Alexander Spiegelman. Narwhal and Tusk: A DAG-based mempool and efficient BFT consensus. In ACM EuroSys, 2022.
  15. [15] George Danezis, Jovan Komatovic, Lefteris Kokoris-Kogias, Alberto Sonnino, and Igor Zablotchi. Byzantine consensus in the random asynchronous model. arXiv preprint arXiv:2502.09116, 2025.
  16. [16] DataDog. gopsutil. https://github.com/DataDog/gopsutil, 2025.
  17. [17] Cynthia Dwork, Nancy Lynch, and Larry Stockmeyer. Consensus in the presence of partial synchrony. Journal of the ACM (JACM), 35(2):288–323, 1988.
  18. [18] Vitor Enes, Carlos Baquero, Tuanir França Rezende, Alexey Gotsman, Matthieu Perrin, and Pierre Sutra. State-machine replication for Planet-Scale systems. In Proceedings of the Fifteenth European Conference on Computer Systems (EuroSys '20), April 2020.
  19. [19] Yingzi Gao, Yuan Lu, Zhenliang Lu, Qiang Tang, Jing Xu, and Zhenfeng Zhang. Dumbo-NG: Fast asynchronous BFT consensus with throughput-oblivious latency. In ACM CCS, 2022.
  20. [20] Neil Giridharan, Florian Suri-Payer, Ittai Abraham, Lorenzo Alvisi, and Natacha Crooks. Autobahn: Seamless high speed BFT. In Proceedings of the ACM SIGOPS 30th Symposium on Operating Systems Principles, pages 1–23, 2024.
  21. [21] Heidi Howard, Dahlia Malkhi, and Alexander Spiegelman. Flexible Paxos: Quorum intersection revisited. In Proceedings of the 20th International Conference on Principles of Distributed Systems (OPODIS 2016), December 2016.
  22. [22] IOTA Stiftung. Consensus on IOTA. https://docs.iota.org/about-iota/iota-architecture/consensus, 2025. IOTA Documentation.
  23. [23] Philipp Jovanovic, Lefteris Kokoris-Kogias, Bryan Kumara, Alberto Sonnino, Pasindu Tennage, and Igor Zablotchi. Mahi-Mahi: Low-latency asynchronous BFT DAG-based consensus. 45th IEEE International Conference on Distributed Computing Systems, 2025.
  24. [24] Idit Keidar, Eleftherios Kokoris-Kogias, Oded Naor, and Alexander Spiegelman. All You Need is DAG. In ACM PODC, 2021.
  25. [25] Idit Keidar, Oded Naor, Ouri Poupko, and Ehud Shapiro. Cordial Miners: Fast and Efficient Consensus for Every Eventuality. In DISC, 2023.
  26. [26] Mysten Labs. Mysticeti: Low-latency DAG consensus with fast commit path. https://github.com/asonnino/mysticeti, 2024.
  27. [27] Leslie Lamport. Paxos made simple. ACM SIGACT News (Distributed Computing Column), 32(4):51–58, December 2001.
  28. [28] Leslie Lamport. Generalized consensus and Paxos. Technical Report MSR-TR-2005-33, Microsoft Research, March 2005.
  29. [29] Leslie Lamport. Fast Paxos. Distributed Computing, 19(2):79–103, 2006.
  30. [30] Nick Lord. Binomial averages when the mean is an integer. The Mathematical Gazette, 94:331–332, 2010.
  31. [31] Dahlia Malkhi and Pawel Szalachowski. Maximal extractable value (MEV) protection on a DAG. In Tokenomics, 2022.
  32. [32] Yanhua Mao, Flavio Junqueira, and Keith Marzullo. Mencius: Building efficient replicated state machines for WANs. In 8th USENIX Symposium on Operating Systems Design and Implementation (OSDI 08), December 2008.
  33. [33] Venkata Swaroop Matte, Aleksey Charapko, and Abutalib Aghayev. Scalable but wasteful: Current state of replication in the cloud. In Proceedings of the 13th ACM Workshop on Hot Topics in Storage and File Systems, pages 42–49, July 2021.
  34. [34] Michael J. Fischer, Nancy A. Lynch, and Michael S. Paterson. Impossibility of distributed consensus with one faulty process. Journal of the ACM, 1985.
  35. [35] Iulian Moraru, David G. Andersen, and Michael Kaminsky. EPaxos go-lang. https://github.com/efficient/epaxos/, 2013.
  36. [36] Iulian Moraru, David G. Andersen, and Michael Kaminsky. There is more consensus in egalitarian parliaments. In Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles, pages 358–372, November 2013.
  37. [37] Iulian Moraru, David G. Andersen, Michael Kaminsky, and Pasindu Tennage. EPaxos go-lang, modified for QuePaxa experiments. https://github.com/dedis/quepaxa-ePaxos-open-loop, September 2023.
  38. [38] Faisal Nawab, Divyakant Agrawal, and Amr El Abbadi. DPaxos: Managing data closer to users for low-latency and mobile applications. In ACM SIGMOD/PODS Conference on Management of Data, June 2018.
  39. [39] Brian M. Oki and Barbara H. Liskov. Viewstamped replication: A new primary copy method to support highly-available distributed systems. In Proceedings of the Seventh Annual ACM Symposium on Principles of Distributed Computing, pages 8–17, January 1988.
  40. [40] Diego Ongaro and John Ousterhout. In search of an understandable consensus algorithm. In 2014 USENIX Annual Technical Conference (ATC 14), pages 305–319, June 2014.
  41. [41] Haochen Pan, Jesse Tuglu, Neo Zhou, Tianshu Wang, Yicheng Shen, Xiong Zheng, Joseph Tassarotti, Lewis Tseng, and Roberto Palmieri. Rabia. https://github.com/haochenpan/rabia, 2021. Rabia implementation in the Go language (GitHub repository).
  42. [42] Haochen Pan, Jesse Tuglu, Neo Zhou, Tianshu Wang, Yicheng Shen, Xiong Zheng, Joseph Tassarotti, Lewis Tseng, and Roberto Palmieri. Rabia: Simplifying state-machine replication through randomization. In Proceedings of the ACM SIGOPS 28th Symposium on Operating Systems Principles, pages 472–487, October 2021.
  43. [43] Sebastiano Peluso, Alexandru Turcu, Roberto Palmieri, Giuliano Losa, and Binoy Ravindran. Making fast consensus generally faster. In Proceedings of the 46th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), June 2016.
  44. [44] Mayank Raikwar, Nikita Polyanskii, and Sebastian Müller. SoK: DAG-based Consensus Protocols. In IEEE ICBC, 2024.
  45. [45] Bianca Schroeder, Adam Wierman, and Mor Harchol-Balter. Open versus closed: A cautionary tale. In Proceedings of the 3rd USENIX Symposium on Networked Systems Design and Implementation (NSDI 06). USENIX, May 2006.
  46. [46] Alexander Spiegelman, Neil Giridharan, Alberto Sonnino, and Lefteris Kokoris-Kogias. Bullshark: DAG BFT Protocols Made Practical. In ACM CCS, 2022.
  47. [47] Alexander Spiegelman, Neil Giridharan, Alberto Sonnino, and Lefteris Kokoris-Kogias. Bullshark: The partially synchronous version. arXiv preprint arXiv:2209.05633, 2022.
  48. [48] The Sui team. Sui. https://github.com/mystenLabs/sui, 2024.
  49. [49] The Tokio Team. Tokio. https://tokio.rs, 2024.
  50. [50] Pasindu Tennage. Paxos and Raft, September.
  51. [51] GitHub repository https://github.com/dedis/paxos-and-raft.
  52. [52] Pasindu Tennage. QuePaxa, September 2023. GitHub repository https://github.com/dedis/quepaxa.
  53. [53] Pasindu Tennage, Cristina Basescu, Lefteris Kokoris-Kogias, Ewa Syta, Philipp Jovanovic, Vero Estrada-Galinanes, and Bryan Ford. QuePaxa: Escaping the tyranny of timeouts in consensus. In Proceedings of the 29th Symposium on Operating Systems Principles, pages 281–297, 2023.
  54. [54] Pasindu Tennage, Antoine Desjardins, and Lefteris Kokoris-Kogias. RACS-SADL: Robust and understandable randomized consensus in the cloud. In 2025 IEEE 18th International Conference on Cloud Computing (CLOUD), pages 362–373, 2025.
  55. [55] Sarah Tollman, Seo Jin Park, and John K. Ousterhout. EPaxos revisited. In USENIX Symposium on Networked Systems Design and Implementation (NSDI 21), pages 613–632, April 2021.
  56. [56] Giorgos Tsimos, Anastasios Kichidis, Alberto Sonnino, and Lefteris Kokoris-Kogias. Hammerhead: Leader reputation for dynamic scheduling. In 2024 IEEE 44th International Conference on Distributed Computing Systems (ICDCS), pages 1377–1387, 2024.
  57. [57] Ubuntu. Ubuntu Linux. https://releases.ubuntu.com/focal/, 2023.
  58. [58] Muhammed Uluyol, Anthony Huang, Ayush Goel, Mosharaf Chowdhury, and Harsha V. Madhyastha. Near-optimal latency versus cost tradeoffs in geo-distributed storage. In Proceedings of the 17th USENIX Symposium on Networked Systems Design and Implementation (NSDI '20), February 2020.
  59. [59] Preston Vander Vos, Alberto Sonnino, Giorgos Tsimos, Philipp Jovanovic, and Lefteris Kokoris-Kogias. BlueBottle: Fast and Robust Blockchains through Subsystem Specialization. arXiv preprint arXiv:2511.15361, 2025.
  60. [60] Zizhong Wang, Tongliang Li, Haixia Wang, Airan Shao, Yunren Bai, Shangming Cai, Zihan Xu, and Dongsheng Wang. CRaft: An Erasure-coding-supported version of Raft for reducing storage cost and network cost. In Proceedings of the 18th USENIX Conference on File and Storage Technologies (FAST '20), February 2020.
  61. [61] Shaokang Xie, Dakai Kang, Hanzheng Lyu, Jianyu Niu, and Mohammad Sadoghi. Fides: Scalable censorship-resistant DAG consensus via trusted components. arXiv preprint arXiv:2501.01062, 2025.
  62. [62] Zichen Xu, Christopher Stewart, and Jiacheng Huang. Elastic, geo-distributed RAFT. In Proceedings of the International Symposium on Quality of Service. Association for Computing Machinery, 2019.
  63. [63] Maofan Yin, Dahlia Malkhi, Michael K. Reiter, Guy Golan Gueta, and Ittai Abraham. HotStuff: BFT consensus with linearity and responsiveness. In ACM PODC, 2019.